#1910 Issue closed: ReaR Include Exclude configuration with the SAN disks

Labels: support / question, fixed / solved / done

manums1983 opened issue at 2018-09-12 04:27:

Relax-and-Recover (ReaR) Issue Template

Fill in the following items before submitting a new issue
(quick response is not guaranteed with free support):

  • ReaR version ("/usr/sbin/rear -V"):
    2.3 /Git

  • OS version ("cat /etc/rear/os.conf" or "lsb_release -a" or "cat /etc/os-release"):
    SUSE 12.2

  • ReaR configuration files ("cat /etc/rear/site.conf" and/or "cat /etc/rear/local.conf"):

# Begin example setup for SLE12-SP2 with default btrfs subvolumes.
# Since SLE12-SP1 what is mounted at '/' is a btrfs snapshot subvolume
# see https://github.com/rear/rear/issues/556
# and since SLE12-SP2 btrfs quota via "snapper setup-quota" is needed
# see https://github.com/rear/rear/issues/999
# You must adapt "your.NFS.server.IP/path/to/your/rear/backup" at BACKUP_URL.
# You must decide whether or not you want to have /home/* in the backup.
# It depends on the size of your harddisk whether or not /home is by default
# a btrfs subvolume or a separated xfs filesystem on a separated partition.
# You may activate SSH_ROOT_PASSWORD and adapt the "password_on_the_rear_recovery_system".
# For basic information see the SLE12-SP2 manuals.
# Also see the support database article "SDB:Disaster Recovery"
# at http://en.opensuse.org/SDB:Disaster_Recovery
# In particular note:
# There is no such thing as a disaster recovery solution that "just works".
# Regarding btrfs snapshots:
# Recovery of btrfs snapshot subvolumes is not possible.
# Only recovery of "normal" btrfs subvolumes is possible.
# On SLE12-SP1 and SP2 the only exception is the btrfs snapshot subvolume
# that is mounted at '/' but that one is not recreated but instead
# it is created anew from scratch during the recovery installation with the
# default first btrfs snapper snapshot subvolume path "@/.snapshots/1/snapshot"
# by the SUSE tool "installation-helper --step 1" (cf. below).
# Other snapshots like "@/.snapshots/234/snapshot" are not recreated.
# Create the rear recovery system on a USB device (ISO output is commented out):
OUTPUT=USB
#OUTPUT=ISO
# Store the backup file with the NETFS method (here on a USB device,
# see BACKUP_URL below; the NFS hints apply when an NFS server is used):
BACKUP=NETFS
#BACKUP=DP
# BACKUP_OPTIONS variable contains the NFS mount options and
# with 'mount -o nolock' no rpc.statd (plus rpcbind) are needed:
#BACKUP_OPTIONS="nfsvers=3,nolock"
# If the NFS server is not an IP address but a hostname,
# DNS must work in the rear recovery system when the backup is restored.
BACKUP_URL=usb:///dev/disk/by-label/REAR-000
# Keep an older copy of the backup in a HOSTNAME.old directory
# provided there is no '.lockfile' in the HOSTNAME directory:
NETFS_KEEP_OLD_BACKUP_COPY=yes
# Have all modules of the original system in the recovery system with the
# same module loading ordering as in the original system by using the output of
#   lsmod | tail -n +2 | cut -d ' ' -f 1 | tac | tr -s '[:space:]' ' '
# as value for MODULES_LOAD (cf. https://github.com/rear/rear/issues/626):
#MODULES_LOAD=( )
# On SLE12-SP1 and SP2 with default btrfs subvolumes what is mounted at '/' is a btrfs snapshot subvolume
# that is controlled by snapper so that snapper is needed in the recovery system.
# In SLE12-SP1 and SP2 some btrfs subvolume directories (/var/lib/pgsql /var/lib/libvirt/images /var/lib/mariadb)
# have the "no copy on write (C)" file attribute set so that chattr is required in the recovery system
# and accordingly also lsattr is useful to have in the recovery system (but not strictly required):
REQUIRED_PROGS=( "${REQUIRED_PROGS[@]}" snapper chattr lsattr )
# Snapper setup by the recovery system uses /usr/lib/snapper/installation-helper
# that is linked to all libraries where snapper is linked to
# (except libdbus that is only needed by snapper).
# "installation-helper --step 1" creates a snapper config based on /etc/snapper/config-templates/default
COPY_AS_IS=( "${COPY_AS_IS[@]}" /usr/lib/snapper/installation-helper /etc/snapper/config-templates/default )
# Files in btrfs subvolumes are excluded by 'tar --one-file-system'
# so that such files must be explicitly included to be in the backup.
# Files in the following SLE12-SP2 default btrfs subvolumes are
# in the below example not included to be in the backup
#   /.snapshots  /var/crash
# but files in /home are included to be in the backup.
# You may use a command like
#   findmnt -n -r -o TARGET -t btrfs | grep -v '^/$' | egrep -v 'snapshots|crash'
# to generate the values:
BACKUP_PROG_INCLUDE=( /var/cache /var/lib/mailman /var/tmp /var/lib/pgsql /usr/local /opt /var/lib/libvirt/images /boot/grub2/i386-pc /var/opt /srv /boot/grub2/x86_64-efi /var/lib/mariadb /var/spool /var/lib/mysql /tmp /home /var/log /var/lib/named /var/lib/machines )
BACKUP_PROG_EXCLUDE=( "${BACKUP_PROG_EXCLUDE[@]}" "/var/crash" "/export/Doc" "/export/archive" "/usr/sap/hostctrl" "/home/oracle" "/home/oraprd" "/oracle" "/oracle/PRD" "/oracle/PRD/12102" "/oracle/PRD/mirrlogA"  "/oracle/PRD/oraarch" "/oracle/PRD/origlogB" "/oracle/client" "/oracle/oraprd" "/oracle/stage" "/oracle/PRD/mirrlogB" "/oracle/PRD/origlogA" "/oracle/PRD/sapreorg" "/sapmnt/PRD" "/sapmnt/PRD/exe" "/usr/sap/PRD" "/usr/sap/SMD" "/usr/sap/tmp" "/oracle/PRD/sapdata1" "/oracle/PRD/sapdata2" "/oracle/PRD/sapdata3" "/oracle/PRD/sapdata4" "/oracle/PRD/sapdata5" "/oracle/PRD/sapdata6")
EXCLUDE_RECREATE=( "${EXCLUDE_RECREATE[@]}" "fs:/var/crash" "fs:/export/Doc" "fs:/export/archive" "fs:/usr/sap/hostctrl" "fs:/home/oracle" "fs:/home/oraprd" "fs:/oracle" "fs:/oracle/PRD" "fs:/oracle/PRD/12102" "fs:/oracle/PRD/mirrlogA" "fs:/oracle/PRD/oraarch" "fs:/oracle/PRD/origlogB" "fs:/oracle/client" "fs:/oracle/oraprd" "fs:/oracle/stage" "fs:/oracle/PRD/mirrlogB" "fs:/oracle/PRD/origlogA" "fs:/oracle/PRD/sapreorg" "fs:/sapmnt/PRD" "fs:/sapmnt/PRD/exe" "fs:/usr/sap/PRD" "fs:/usr/sap/SMD" "fs:/usr/sap/tmp" "fs:/oracle/PRD/sapdata1" "fs:/oracle/PRD/sapdata2" "fs:/oracle/PRD/sapdata3" "fs:/oracle/PRD/sapdata4" "fs:/oracle/PRD/sapdata5" "fs:/oracle/PRD/sapdata6" )
# The following POST_RECOVERY_SCRIPT implements during "rear recover"
# btrfs quota setup for snapper if that is used in the original system:
POST_RECOVERY_SCRIPT=( 'if snapper --no-dbus -r $TARGET_FS_ROOT get-config | grep -q "^QGROUP.*[0-9]/[0-9]" ; then snapper --no-dbus -r $TARGET_FS_ROOT set-config QGROUP= ; snapper --no-dbus -r $TARGET_FS_ROOT setup-quota && echo snapper setup-quota done || echo snapper setup-quota failed ; else echo snapper setup-quota not used ; fi' )
# This option defines a root password to allow SSH connections
# without a public/private key pair:
SSH_ROOT_PASSWORD="password"
# Let the rear recovery system run dhclient to get an IP address
# instead of using the same IP address as the original system:
#USE_DHCLIENT="yes"
# End example setup for SLE12-SP2 with default btrfs subvolumes.
  • Hardware (PC or PowerNV BareMetal or ARM) or virtual machine (KVM guest or PowerVM LPAR):
    HPE GEN9 DL380

  • System architecture (x86 compatible or PPC64/PPC64LE or what exact ARM device):
    64 Bit

  • Firmware (BIOS or UEFI or Open Firmware) and bootloader (GRUB or ELILO or Petitboot):
    UEFI

  • Storage (local disk or SSD) and/or SAN (FC or iSCSI or FCoE) and/or multipath (DM or NVMe):
    OS installed on a local disk; SAN disks (3PAR) are also attached.

  • Description of the issue (ideally so that others can reproduce it):
    I faced an issue during recovery: one directory (/oracle/client) was not restored as part of the recovery procedure. We need this directory to be backed up on the second node too. Details below.
    Configuration:
    /oracle/client ----> On the node2 server this directory is on a local disk (/dev/sda3); it is required by the SAP application. We use a Serviceguard cluster.
    On node 1 (the active server) this directory (/oracle/client) is on a SAN disk (3PAR).

/oracle/ -----> This directory is also in ReaR's exclude list: because the SAN disks are presented to both nodes, we do not want to delete or restore any data on the SAN disks (3PAR).

Could you suggest the correct include and exclude lists for this configuration? My requirement is to back up /oracle/client only on the second node server.

  • Work-around, if any:

gdha commented at 2018-09-12 11:49:

See https://github.com/rear/rear/blob/master/usr/share/rear/conf/default.conf#L2162
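
The linked section of default.conf documents ReaR's backup
include and exclude variables. As a hedged sketch (untested;
the path comes from the reporter's configuration) of how the
node2 requirement could be expressed with them:

    # On node2 /oracle/client is on a local disk and must be in the backup,
    # so add it to the include list and make sure no BACKUP_PROG_EXCLUDE
    # entry (such as "/oracle/client" in the config above) still matches it:
    BACKUP_PROG_INCLUDE=( "${BACKUP_PROG_INCLUDE[@]}" /oracle/client )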

jsmeix commented at 2018-09-20 13:58:

@manums1983
I have no personal experience with SAN storage
which is probably the reason why I do not understand
your description of your disk layout on your two systems.

Nevertheless FYI some general basic info
(which might be totally unrelated to your actual issue here):

In ReaR the word "backup" usually refers to what the backup tool
(usually tar) does when it creates a backup of the files.

You can include/exclude things related to the backup tool
and you can include/exclude things related to the disk layout
where the latter is about "components" (e.g. disks, partitions,
filesystems and things like that) i.e. things that are kept
in ReaR's disklayout.conf file.
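
Both levels appear in the reporter's configuration above.
As a side-by-side illustration (the path is only an example):

    # Backup tool level: tar skips these files when creating the backup:
    BACKUP_PROG_EXCLUDE=( "${BACKUP_PROG_EXCLUDE[@]}" "/oracle/PRD" )
    # Disk layout level: "rear recover" does not recreate this
    # filesystem component that is kept in disklayout.conf:
    EXCLUDE_RECREATE=( "${EXCLUDE_RECREATE[@]}" "fs:/oracle/PRD" )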

When you have different systems with different disk layouts
you usually need to use different ReaR config files
when you need to include or exclude different things.
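
Because ReaR config files are sourced as bash scripts,
an alternative to maintaining two separate files is a
hostname conditional in one shared config file
(the node name below is hypothetical):

    # Only node2 has /oracle/client on a local disk,
    # so only there it belongs in the backup:
    if test "$( hostname )" = "node2" ; then
        BACKUP_PROG_INCLUDE=( "${BACKUP_PROG_INCLUDE[@]}" /oracle/client )
    fi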

There are some automatisms in ReaR regarding
including or excluding things.
To avoid those automatisms regarding what is included
in the backup you can use the config variables
BACKUP_ONLY_INCLUDE and BACKUP_ONLY_EXCLUDE
when you use tar for the backup, see the default.conf file.
Other automatisms cannot be changed, e.g. when you
exclude one partition all other partitions on its disk
also get automatically excluded, see
https://github.com/rear/rear/issues/1771
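
For illustration, a hedged sketch of those two switches
(their exact semantics are described in default.conf;
the include list below is only an example):

    # Back up exactly what BACKUP_PROG_INCLUDE lists,
    # without ReaR's automatic additions:
    BACKUP_ONLY_INCLUDE="yes"
    BACKUP_PROG_INCLUDE=( / /home /oracle/client )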

Because you use ReaR 2.3 I would like to mention
https://github.com/rear/rear/issues/1767
which is about bugs in ReaR 2.3 related to excluding things.
To avoid being hit by those issues I would recommend
trying out whether ReaR version 2.4 works better for you, cf.
http://relax-and-recover.org/download/

jsmeix commented at 2018-10-19 08:31:

Because "no news is good news"
I assume the issue was sufficiently answered.

manums1983 commented at 2018-10-19 08:37:

Sorry, I forgot to update: my issue is corrected. Thanks for the tech notes.


[Export of Github issue for rear/rear.]