#1796 Issue closed: No code has been generated to recreate pv:/dev/mapper/mpathc_part2 (lvmdev)

Labels: support / question, fixed / solved / done, external tool

bern66 opened issue at 2018-05-04 12:32:

I am in the process of testing ReaR. To test the restore, I use the same machine where "rear mkrescue" was run. In the output of "rear -D recover" below you can see the following messages:

No code has been generated to recreate pv:/dev/mapper/mpathc_part2 (lvmdev).
    To recreate it manually add code to /var/lib/rear/layout/diskrestore.sh or abort.

UserInput -I ADD_CODE_TO_RECREATE_MISSING_PVDEVMAPPERMPATHCPART2LVMDEV needed in /usr/share/rear/layout/prepare/default/600_show_unprocessed.sh line 33

Manually add code that recreates pv:/dev/mapper/mpathc_part2 (lvmdev)

Do I really have to add code to complete a restore? Or did I miss something while configuring the site.conf for my environment?

I really hope I will not have to add code at restore time, because we have hundreds of systems. In a DR situation it would be one more problem on the pile.

RESCUE tstinf01:~ # rear -D recover
Relax-and-Recover 2.3 / 2018-04-20
Using log file: /var/log/rear/rear-tstinf01.log
Running workflow recover within the ReaR rescue/recovery system
Will do driver migration (recreating initramfs/initrd)
IBM Spectrum Protect
Command Line Backup-Archive Client Interface
  Client Version 8, Release 1, Level 2.0
  Client date/time: 05/04/18   11:07:05
(c) Copyright by IBM Corporation and other(s) 1990, 2017. All Rights Reserved.

Node Name: STSTINF01
Session established with server GISPA: Linux/x86_64
  Server Version 8, Release 1, Level 4.000
  Server date/time: 05/04/18   07:07:12  Last access: 05/04/18   07:06:39

Domain Name               : QAASBA
Activated Policy Set Name : QAASBA
Activation date/time      : 03/26/18   10:14:56
Default Mgmt Class Name   : QAASBA
Grace Period Backup Retn. : 30 day(s)
Grace Period Archive Retn.: 365 day(s)


MgmtClass Name                  : HDB
Description                     : Management Class for HANA Database


MgmtClass Name                  : HDBNL
Description                     : Management Class for HANA DB No Limit Retention


MgmtClass Name                  : HLOG
Description                     : Management Class for HANA Logs


MgmtClass Name                  : QAASBA
Description                     : MGMT Class default pour QAAS

TSM restores by default the latest backup data. Alternatively you can specify
a different date and time to enable Point-In-Time Restore. Press ENTER to
use the most recent available backup
Enter date/time (YYYY-MM-DD HH:mm:ss) or press ENTER [30 secs]:
Skipping Point-In-Time Restore, will restore most recent data.

The TSM Server reports the following for this node:
                  #     Last Incr Date          Type    Replication       File Space Name
                --------------------------------------------------------------------------------
                  1     01-05-2018 22:16:40     BTRFS   Current           /
                  2     01-05-2018 22:10:59     BTRFS   Current           /.snapshots
                  3     01-05-2018 22:11:12     BTRFS   Current           /boot/grub2/powerpc-ieee1275
                  4     01-05-2018 22:11:26     XFS     Current           /home
                  5     01-05-2018 22:11:38     BTRFS   Current           /opt
                  6     01-05-2018 22:11:12     BTRFS   Current           /srv
                  7     01-05-2018 22:11:20     BTRFS   Current           /usr/local
                  8     01-05-2018 22:11:14     BTRFS   Current           /var/cache
                  9     01-05-2018 22:11:24     BTRFS   Current           /var/crash
                 10     01-05-2018 22:11:03     BTRFS   Current           /var/lib/libvirt/images
                 11     01-05-2018 22:11:12     BTRFS   Current           /var/lib/machines
                 12     01-05-2018 22:11:03     BTRFS   Current           /var/lib/mailman
                 13     01-05-2018 22:11:12     BTRFS   Current           /var/lib/mariadb
                 14     01-05-2018 22:11:24     BTRFS   Current           /var/lib/mysql
                 15     01-05-2018 22:11:03     BTRFS   Current           /var/lib/named
                 16     01-05-2018 22:11:12     BTRFS   Current           /var/lib/pgsql
                 17     01-05-2018 22:11:12     BTRFS   Current           /var/log
                 18     01-05-2018 22:11:24     BTRFS   Current           /var/opt
                 19     01-05-2018 22:11:26     BTRFS   Current           /var/spool
                 20     01-05-2018 22:11:26     BTRFS   Current           /var/tmp
Please enter the numbers of the filespaces we should restore.
Pay attention to enter the filesystems in the correct order
(like restore / before /var/log)
(default: 1 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20): [30 secs]
The following filesystems will be restored:
/
/boot/grub2/powerpc-ieee1275
/home
/opt
/srv
/usr/local
/var/cache
/var/crash
/var/lib/libvirt/images
/var/lib/machines
/var/lib/mailman
/var/lib/mariadb
/var/lib/mysql
/var/lib/named
/var/lib/pgsql
/var/log
/var/opt
/var/spool
/var/tmp
Is this selection correct ? (Y|n) [30 secs]
Setting up multipathing
Activating multipath
multipath activated
Listing multipath device found
mpathc  (254, 0)
Comparing disks
Device dm-0 has expected (same) size 107374182400 (will be used for recovery)
Disk configuration looks identical
UserInput -I DISK_LAYOUT_PROCEED_RECOVERY needed in /usr/share/rear/layout/prepare/default/250_compare_disks.sh line 146
Proceed with recovery (yes) otherwise manual disk layout configuration is enforced
(default 'yes' timeout 30 seconds)
yes
UserInput: No choices - result is 'yes'
User confirmed to proceed with recovery
No code has been generated to recreate pv:/dev/mapper/mpathc_part2 (lvmdev).
    To recreate it manually add code to /var/lib/rear/layout/diskrestore.sh or abort.
UserInput -I ADD_CODE_TO_RECREATE_MISSING_PVDEVMAPPERMPATHCPART2LVMDEV needed in /usr/share/rear/layout/prepare/default/600_show_unprocessed.sh line 33
Manually add code that recreates pv:/dev/mapper/mpathc_part2 (lvmdev)
1) View /var/lib/rear/layout/diskrestore.sh
2) Edit /var/lib/rear/layout/diskrestore.sh
3) Go to Relax-and-Recover shell
4) Continue 'rear recover'
5) Abort 'rear recover'
(default '4' timeout 300 seconds)
1
UserInput: Valid choice number result 'View /var/lib/rear/layout/diskrestore.sh'
#!/bin/bash

LogPrint "Start system layout restoration."

mkdir -p /mnt/local
if create_component "vgchange" "rear" ; then
    lvm vgchange -a n >/dev/null
    component_created "vgchange" "rear"
fi

set -e
set -x

if create_component "/dev/mapper/mpathc" "multipath" ; then
# Create /dev/mapper/mpathc (multipath)
LogPrint "Creating partitions for disk /dev/mapper/mpathc (msdos)"
my_udevsettle
parted -s /dev/mapper/mpathc mklabel msdos >&2
my_udevsettle
my_udevsettle
parted -s /dev/mapper/mpathc mkpart 'primary' 1048576B 8225279B >&2
my_udevsettle
my_udevsettle
parted -s /dev/mapper/mpathc set 1 boot on >&2
my_udevsettle
my_udevsettle
parted -s /dev/mapper/mpathc set 1 prep on >&2
my_udevsettle
my_udevsettle
parted -s /dev/mapper/mpathc mkpart 'primary' 8225280B 107002667519B >&2
my_udevsettle
my_udevsettle
parted -s /dev/mapper/mpathc set 2 lvm on >&2
my_udevsettle
sleep 1
if ! partprobe -s /dev/mapper/mpathc >&2 ; then
    LogPrint 'retrying partprobe /dev/mapper/mpathc after 10 seconds'
    sleep 10
    if ! partprobe -s /dev/mapper/mpathc >&2 ; then
        LogPrint 'retrying partprobe /dev/mapper/mpathc after 1 minute'
        sleep 60
        if ! partprobe -s /dev/mapper/mpathc >&2 ; then
            LogPrint 'partprobe /dev/mapper/mpathc failed, proceeding bona fide'
        fi
    fi
fi
component_created "/dev/mapper/mpathc" "multipath"
else
    LogPrint "Skipping /dev/mapper/mpathc (multipath) as it has already been created."
fi

if create_component "/dev/mapper/mpathc1" "part" ; then
# Create /dev/mapper/mpathc1 (part)
component_created "/dev/mapper/mpathc1" "part"
else
    LogPrint "Skipping /dev/mapper/mpathc1 (part) as it has already been created."
fi

if create_component "/dev/mapper/mpathc2" "part" ; then
# Create /dev/mapper/mpathc2 (part)
component_created "/dev/mapper/mpathc2" "part"
else
    LogPrint "Skipping /dev/mapper/mpathc2 (part) as it has already been created."
fi


set +x
set +e

LogPrint "Disk layout created."

UserInput -I ADD_CODE_TO_RECREATE_MISSING_PVDEVMAPPERMPATHCPART2LVMDEV needed in /usr/share/rear/layout/prepare/default/600_show_unprocessed.sh line 33
Manually add code that recreates pv:/dev/mapper/mpathc_part2 (lvmdev)
1) View /var/lib/rear/layout/diskrestore.sh
2) Edit /var/lib/rear/layout/diskrestore.sh
3) Go to Relax-and-Recover shell
4) Continue 'rear recover'
5) Abort 'rear recover'
(default '4' timeout 300 seconds)
5
UserInput: Valid choice number result 'Abort 'rear recover''
ERROR: User chose to abort 'rear recover' in /usr/share/rear/layout/prepare/default/600_show_unprocessed.sh
Aborting due to an error, check /var/log/rear/rear-tstinf01.log for details
You should also rm -Rf /tmp/rear.nVsRkyuhN0xgWT6
Terminated

RESCUE tstinf01:~ # rear -V
Relax-and-Recover 2.3 / 2018-04-20

tstinf01:~ # arch
ppc64le

Booting via SMS

tstinf01:~ # lsb_release -a
LSB Version: n/a
Distributor ID: SUSE
Description: SUSE Linux Enterprise Server for SAP Applications 12 SP2
Release: 12.2
Codename: n/a

tstinf01:~ # cat site.conf
OUTPUT=ISO
OUTPUT_URL=nfs://tstinf02/exports/rear/iso
ISO_PREFIX="$HOSTNAME-rear-$( date "+%y%m%d" )"
ISO_VOLID=$HOSTNAME
REAR_INITRD_COMPRESSION=lzma
AUTOEXCLUDE_MULTIPATH=n
BOOT_OVER_SAN=y
BACKUP=TSM
COPY_AS_IS_TSM=( /etc/$HOSTNAME /opt/tivoli/tsm/client/ba/bin/dsmc /opt/tivoli/tsm/client/ba/bin/tsmbench_inclexcl /opt/tivoli/tsm/client/ba/bin/dsm.sys /opt/tivoli/tsm/client/ba/bin/dsm.opt /opt/tivoli/tsm/client/api/bin64/libgpfs.so /opt/tivoli/tsm/client/api/bin64/libdmapi.so /opt/tivoli/tsm/client/ba/bin/EN_US/dsmclientV3.cat /usr/local/ibm/gsk8* )
COPY_AS_IS_EXCLUDE_TSM=( )
PROGS_TSM=(dsmc)
TSM_LD_LIBRARY_PATH="/opt/tivoli/tsm/client/ba/bin:/opt/tivoli/tsm/client/api/bin64:/opt/tivoli/tsm/client/api/bin:/opt/tivoli/tsm/client/api/bin64/cit/bin"
TSM_RESULT_FILE_PATH=/opt/tivoli/tsm/rear
TSM_RESULT_SAVE=n
TSM_ARCHIVE_MGMT_CLASS=qaasba
TSM_RM_ISOFILE=y

Thanks,

jsmeix commented at 2018-05-04 12:46:

PPC and MULTIPATH (plus TSM and a special way to boot SMS)
looks very much as if only @schabrolles might actually help here...

jsmeix commented at 2018-05-04 12:53:

@bern66
it seems you use SLES12-SP2 with its default btrfs structure
but I do not see the usual config variables in your etc/rear/local.conf
(or etc/rear/site.conf) that are needed for the SLE12 btrfs structure,
cf. the example config files in usr/share/rear/conf/examples/
(there is also one for SLE12 with SAP HANA).

bern66 commented at 2018-05-04 13:52:

Thanks jsmeix! I didn't know about the details for btrfs on SLES12. I'll add those configuration elements to my environment.

schabrolles commented at 2018-05-04 14:00:

@bern66, can you also show your /var/lib/rear/layout/disklayout.conf?

Just for reference, here is an example of a configuration file that works for me (Power8 LPAR, SLES12, TSM, PXE boot server instead of ISO):

# Default is to create Relax-and-Recover rescue media as ISO image
# set OUTPUT to change that
# set BACKUP to activate an automated (backup and) restore of your data
# Possible configuration values can be found in /usr/share/rear/conf/default.conf
#
# This file (local.conf) is intended for manual configuration. For configuration
# through packages and other automated means we recommend creating a new
# file named site.conf next to this file and to leave the local.conf as it is.
# Our packages will never ship with a site.conf.

AUTOEXCLUDE_MULTIPATH=n
BOOT_OVER_SAN=y
REAR_INITRD_COMPRESSION=lzma

OUTPUT=PXE
OUTPUT_PREFIX_PXE=rear/$HOSTNAME
PXE_CONFIG_GRUB_STYLE=y
PXE_CONFIG_URL="nfs://{{ PXE_SERVER_IP }}/var/lib/tftpboot/boot/grub2/powerpc-ieee1275"
PXE_CREATE_LINKS=IP
PXE_REMOVE_OLD_LINKS=y
PXE_TFTP_URL="nfs://{{ PXE_SERVER_IP }}/var/lib/tftpboot"
OUTPUT_OPTIONS="nfsvers=4,nolock"

BACKUP=TSM
COPY_AS_IS_TSM=( /etc/adsm/TSM.PWD /opt/tivoli/tsm/client/ba/bin/dsmc /opt/tivoli/tsm/client/ba/bin/tsmbench_inclexcl /opt/tivoli/tsm/client/ba/bin/dsm.sys /opt/tivoli/tsm/client/ba/bin/dsm.opt /opt/tivoli/tsm/client/api/bin64/libgpfs.so /opt/tivoli/tsm/client/api/bin64/libdmapi.so /opt/tivoli/tsm/client/ba/bin/EN_US/dsmclientV3.cat /usr/local/ibm/gsk8* )
TSM_RESULT_SAVE=n

## SLES12
BACKUP_OPTIONS="nfsvers=4,nolock"
REQUIRED_PROGS=( "${REQUIRED_PROGS[@]}" snapper chattr lsattr )
COPY_AS_IS=( "${COPY_AS_IS[@]}" /usr/lib/snapper/installation-helper /etc/snapper/config-templates/default )

for subvol in $(findmnt -n -r -t btrfs | cut -d ' ' -f 1 | grep -v '^/$' | egrep -v 'snapshots|crash') ; do
    BACKUP_PROG_INCLUDE=( "${BACKUP_PROG_INCLUDE[@]}" "$subvol" )
done

POST_RECOVERY_SCRIPT=( 'if snapper --no-dbus -r $TARGET_FS_ROOT get-config | grep -q "^QGROUP.*[0-9]/[0-9]" ; then snapper --no-dbus -r $TARGET_FS_ROOT set-config QGROUP= ; snapper --no-dbus -r $TARGET_FS_ROOT setup-quota && echo snapper setup-quota done || echo snapper setup-quota failed ; else echo snapper setup-quota not used ; fi' )

bern66 commented at 2018-05-04 14:07:

Of course, anything that helps you help me. The disklayout.conf below is from a test server. Our production servers will have more LUNs, just in case that matters somehow. Here it is:

tstinf01:~ # cat /var/lib/rear/layout/disklayout.conf
lvmdev /dev/system /dev/mapper/mpathc_part2 RWHsoG-C5a3-78M5-FmaZ-xMvD-dN5N-jSsKXJ 208973520
lvmgrp /dev/system 4096 25509 104484864
lvmvol /dev/system home 5120 41943040
lvmvol /dev/system root 7589 62169088
lvmvol /dev/system swap 12800 104857600
# Filesystems (only ext2,ext3,ext4,vfat,xfs,reiserfs,btrfs are supported).
# Format: fs <device> <mountpoint> <fstype> [uuid=<uuid>] [label=<label>] [<attributes>]
fs /dev/mapper/system-home /home xfs uuid=71d6e654-92a0-4bc2-b152-e3a5bab13f9f label=/home  options=rw,relatime,attr2,inode64,noquota
fs /dev/mapper/system-root / btrfs uuid=97590a87-5390-44f0-826f-a9425d42e396 label= options=rw,relatime,space_cache,subvolid=5,subvol=/
# Btrfs default subvolume for /dev/mapper/system-root at /
# Format: btrfsdefaultsubvol <device> <mountpoint> <btrfs_subvolume_ID> <btrfs_subvolume_path>
btrfsdefaultsubvol /dev/mapper/system-root / 5 /
# Btrfs snapshot subvolumes for /dev/mapper/system-root at /
# Btrfs snapshot subvolumes are listed here only as documentation.
# There is no recovery of btrfs snapshot subvolumes.
# Format: btrfssnapshotsubvol <device> <mountpoint> <btrfs_subvolume_ID> <btrfs_subvolume_path>
#btrfssnapshotsubvol /dev/mapper/system-root / 669 @/.snapshots/255/snapshot
#btrfssnapshotsubvol /dev/mapper/system-root / 670 @/.snapshots/256/snapshot
#btrfssnapshotsubvol /dev/mapper/system-root / 700 @/.snapshots/279/snapshot
#btrfssnapshotsubvol /dev/mapper/system-root / 701 @/.snapshots/280/snapshot
#btrfssnapshotsubvol /dev/mapper/system-root / 702 @/.snapshots/281/snapshot
#btrfssnapshotsubvol /dev/mapper/system-root / 703 @/.snapshots/282/snapshot
#btrfssnapshotsubvol /dev/mapper/system-root / 704 @/.snapshots/283/snapshot
#btrfssnapshotsubvol /dev/mapper/system-root / 705 @/.snapshots/284/snapshot
# Btrfs normal subvolumes for /dev/mapper/system-root at /
# Format: btrfsnormalsubvol <device> <mountpoint> <btrfs_subvolume_ID> <btrfs_subvolume_path>
# Btrfs subvolumes that belong to snapper are listed here only as documentation.
# Snapper's base subvolume '/@/.snapshots' is deactivated here because during 'rear recover'
# it is created by 'snapper/installation-helper --step 1' (which fails if it already exists).
# Furthermore any normal btrfs subvolume under snapper's base subvolume would be wrong.
# See https://github.com/rear/rear/issues/944#issuecomment-238239926
# and https://github.com/rear/rear/issues/963#issuecomment-240061392
# how to create a btrfs subvolume in compliance with the SLES12 default brtfs structure.
# In short: Normal btrfs subvolumes on SLES12 must be created directly below '/@/'
# e.g. '/@/var/lib/mystuff' (which requires that the btrfs root subvolume is mounted)
# and then the subvolume is mounted at '/var/lib/mystuff' to be accessible from '/'
# plus usually an entry in /etc/fstab to get it mounted automatically when booting.
# Because any '@/.snapshots' subvolume would let 'snapper/installation-helper --step 1' fail
# such subvolumes are deactivated here to not let 'rear recover' fail:
#btrfsnormalsubvol /dev/mapper/system-root / 258 @/.snapshots
btrfsnormalsubvol /dev/mapper/system-root / 257 @
btrfsnormalsubvol /dev/mapper/system-root / 259 @/boot/grub2/powerpc-ieee1275
btrfsnormalsubvol /dev/mapper/system-root / 260 @/opt
btrfsnormalsubvol /dev/mapper/system-root / 261 @/srv
btrfsnormalsubvol /dev/mapper/system-root / 262 @/tmp
btrfsnormalsubvol /dev/mapper/system-root / 263 @/usr/local
btrfsnormalsubvol /dev/mapper/system-root / 264 @/var/cache
btrfsnormalsubvol /dev/mapper/system-root / 265 @/var/crash
btrfsnormalsubvol /dev/mapper/system-root / 266 @/var/lib/libvirt/images
btrfsnormalsubvol /dev/mapper/system-root / 267 @/var/lib/machines
btrfsnormalsubvol /dev/mapper/system-root / 268 @/var/lib/mailman
btrfsnormalsubvol /dev/mapper/system-root / 269 @/var/lib/mariadb
btrfsnormalsubvol /dev/mapper/system-root / 270 @/var/lib/mysql
btrfsnormalsubvol /dev/mapper/system-root / 271 @/var/lib/named
btrfsnormalsubvol /dev/mapper/system-root / 272 @/var/lib/pgsql
btrfsnormalsubvol /dev/mapper/system-root / 273 @/var/log
btrfsnormalsubvol /dev/mapper/system-root / 274 @/var/opt
btrfsnormalsubvol /dev/mapper/system-root / 275 @/var/spool
btrfsnormalsubvol /dev/mapper/system-root / 276 @/var/tmp
# All mounted btrfs subvolumes (including mounted btrfs default subvolumes and mounted btrfs snapshot subvolumes).
# Determined by the findmnt command that shows the mounted btrfs_subvolume_path.
# Format: btrfsmountedsubvol <device> <subvolume_mountpoint> <mount_options> <btrfs_subvolume_path>
btrfsmountedsubvol /dev/mapper/system-root / rw,relatime,space_cache,subvolid=5,subvol=/ /
btrfsmountedsubvol /dev/mapper/system-root /var/log rw,relatime,space_cache,subvolid=273,subvol=/@/var/log @/var/log
btrfsmountedsubvol /dev/mapper/system-root /var/lib/mysql rw,relatime,space_cache,subvolid=270,subvol=/@/var/lib/mysql @/var/lib/mysql
btrfsmountedsubvol /dev/mapper/system-root /var/lib/pgsql rw,relatime,space_cache,subvolid=272,subvol=/@/var/lib/pgsql @/var/lib/pgsql
btrfsmountedsubvol /dev/mapper/system-root /var/lib/mariadb rw,relatime,space_cache,subvolid=269,subvol=/@/var/lib/mariadb @/var/lib/mariadb
btrfsmountedsubvol /dev/mapper/system-root /var/lib/libvirt/images rw,relatime,space_cache,subvolid=266,subvol=/@/var/lib/libvirt/images @/var/lib/libvirt/images
btrfsmountedsubvol /dev/mapper/system-root /var/lib/named rw,relatime,space_cache,subvolid=271,subvol=/@/var/lib/named @/var/lib/named
btrfsmountedsubvol /dev/mapper/system-root /var/crash rw,relatime,space_cache,subvolid=265,subvol=/@/var/crash @/var/crash
btrfsmountedsubvol /dev/mapper/system-root /var/lib/machines rw,relatime,space_cache,subvolid=267,subvol=/@/var/lib/machines @/var/lib/machines
btrfsmountedsubvol /dev/mapper/system-root /.snapshots rw,relatime,space_cache,subvolid=258,subvol=/@/.snapshots @/.snapshots
btrfsmountedsubvol /dev/mapper/system-root /opt rw,relatime,space_cache,subvolid=260,subvol=/@/opt @/opt
btrfsmountedsubvol /dev/mapper/system-root /usr/local rw,relatime,space_cache,subvolid=263,subvol=/@/usr/local @/usr/local
btrfsmountedsubvol /dev/mapper/system-root /tmp rw,relatime,space_cache,subvolid=262,subvol=/@/tmp @/tmp
btrfsmountedsubvol /dev/mapper/system-root /var/cache rw,relatime,space_cache,subvolid=264,subvol=/@/var/cache @/var/cache
btrfsmountedsubvol /dev/mapper/system-root /var/tmp rw,relatime,space_cache,subvolid=276,subvol=/@/var/tmp @/var/tmp
btrfsmountedsubvol /dev/mapper/system-root /var/lib/mailman rw,relatime,space_cache,subvolid=268,subvol=/@/var/lib/mailman @/var/lib/mailman
btrfsmountedsubvol /dev/mapper/system-root /var/spool rw,relatime,space_cache,subvolid=275,subvol=/@/var/spool @/var/spool
btrfsmountedsubvol /dev/mapper/system-root /var/opt rw,relatime,space_cache,subvolid=274,subvol=/@/var/opt @/var/opt
btrfsmountedsubvol /dev/mapper/system-root /srv rw,relatime,space_cache,subvolid=261,subvol=/@/srv @/srv
btrfsmountedsubvol /dev/mapper/system-root /boot/grub2/powerpc-ieee1275 rw,relatime,space_cache,subvolid=259,subvol=/@/boot/grub2/powerpc-ieee1275 @/boot/grub2/powerpc-ieee1275
# Mounted btrfs subvolumes that have the 'no copy on write' attribute set.
# Format: btrfsnocopyonwrite <btrfs_subvolume_path>
# Swap partitions or swap files
# Format: swap <filename> uuid=<uuid> label=<label>
swap /dev/mapper/system-swap uuid=10bd73bf-48b3-46d6-8608-7ce9972ea4ab label=
multipath /dev/mapper/mpathc 107374182400 /dev/sda,/dev/sdb,/dev/sdc,/dev/sdd,/dev/sde,/dev/sdf,/dev/sdg,/dev/sdh
part /dev/mapper/mpathc 7176704 1048576 primary boot,prep /dev/mapper/mpathc1
part /dev/mapper/mpathc 106994442240 8225280 primary lvm /dev/mapper/mpathc2

Thanks,

schabrolles commented at 2018-05-04 14:18:

@bern66
I don't see why pv:/dev/mapper/mpathc_part2 is not recreated. I would need to have a look at the log file in debug mode.

Try again with rear -d recover and send me the generated log file.

schabrolles commented at 2018-05-04 15:13:

@bern66,

I think I am beginning to understand what happens here... but I don't have the root cause yet.
It seems that your SLES12 created the partitions on the multipath device in an unusual way (for a SLES12):
the partitions are named /dev/mapper/mpathc2 while usually it is /dev/mapper/mpathc_part2.
(@jsmeix, what do you think... Do you know why the partitions are named Red Hat style here? This multipath partition naming convention will kill me...)

bern66 commented at 2018-05-04 15:14:

@schabrolles
The requested log file is attached.

Thanks
rear-tstinf01-partial-2018-05-04T09_57_36-04_00.log.gz

bern66 commented at 2018-05-04 15:21:

@schabrolles
Regarding the naming convention, we have actually been advised by SuSE not to use friendly names. As a test at some point I changed it to user_friendly_names yes to try to fix my problem, but that did not help. I'll have to switch it back to no.

For example:

tstinf02:~ # head /etc/multipath.conf
# Default multipath.conf file created for install boot

# Used mpathN names
defaults {
user_friendly_names no
}
devices {
    device {
        vendor "IBM"

bern66 commented at 2018-05-04 15:28:

Switching to user_friendly_names no changes the names from this:

tstinf01:~ # ls -l /dev/mapper/
total 0
crw------- 1 root root  10, 236 May  4 11:22 control
brw-r----- 1 root disk 254,   0 May  4 11:22 mpathc
brw-r----- 1 root disk 254,   1 May  4 11:22 mpathc1
brw-r----- 1 root disk 254,   2 May  4 11:22 mpathc2
lrwxrwxrwx 1 root root        7 May  4 11:22 mpathc_part1 -> ../dm-1
lrwxrwxrwx 1 root root        7 May  4 11:22 mpathc_part2 -> ../dm-2
lrwxrwxrwx 1 root root        7 May  4 11:22 system-home -> ../dm-5
lrwxrwxrwx 1 root root        7 May  4 11:22 system-root -> ../dm-3
lrwxrwxrwx 1 root root        7 May  4 11:22 system-swap -> ../dm-4

to:

tstinf01:~ # ls -l /dev/mapper/
total 0
brw-r----- 1 root disk 254,   0 May  4 11:24 3600507680c800450b80000000000093e
brw-r----- 1 root disk 254,   1 May  4 11:24 3600507680c800450b80000000000093e1
brw-r----- 1 root disk 254,   2 May  4 11:24 3600507680c800450b80000000000093e2
lrwxrwxrwx 1 root root        7 May  4 11:24 3600507680c800450b80000000000093e_part1 -> ../dm-1
lrwxrwxrwx 1 root root        7 May  4 11:24 3600507680c800450b80000000000093e_part2 -> ../dm-2
crw------- 1 root root  10, 236 May  4 11:24 control
lrwxrwxrwx 1 root root        7 May  4 11:24 system-home -> ../dm-5
lrwxrwxrwx 1 root root        7 May  4 11:24 system-root -> ../dm-3
lrwxrwxrwx 1 root root        7 May  4 11:24 system-swap -> ../dm-4

schabrolles commented at 2018-05-04 15:36:

@bern66
From what I see, it looks good with or without friendly names for a SLES12... (XXXXX_part2)
What I don't understand is why your disklayout.conf reports /dev/mapper/mpathc2 and not mpathc_part2.

If that is the case, it means the problem is during the "mkrescue" part and not during the restore.
Could you also run rear -d mkrescue and send me the output?

bern66 commented at 2018-05-04 16:01:

Attached is the log of rear -d mkrescue.

Thanks a lot for your assistance!

rear-tstinf01.log

schabrolles commented at 2018-05-04 16:07:

My mistake... it was rear -D mkrescue, sorry for that... I need the debug log.

bern66 commented at 2018-05-04 16:18:

No problem... here it is!

Thanks again for your assistance!

rear-tstinf01.log.gz

schabrolles commented at 2018-05-04 20:20:

@bern66,
I made a quick change for another problem; maybe it could also help you.
Could you try this version:

git clone https://github.com/schabrolles/rear -b issue_1766

jsmeix commented at 2018-05-07 08:14:

@schabrolles
regarding your https://github.com/rear/rear/issues/1796#issuecomment-386632589

Since SLES12 it should usually be /dev/mapper/mpathc-part2
(the /dev/mapper/mpathc_part2 form is usually for SLE11).

I am not a multipath user so that all what I know about
SUSE multipath partition names is what I got via mail in
https://github.com/rear/rear/pull/1765#issuecomment-378229855
and
https://github.com/rear/rear/pull/1765#issuecomment-378246918
which is basically that on SLES12 it is always

/dev/mapper/foo => /dev/mapper/foo-part1

@bern66
accordingly @schabrolles had recently documented
the known SUSE multipath partition names
in usr/share/rear/lib/layout-functions.sh
in the get_part_device_name_format() function
cf. https://github.com/rear/rear/pull/1765/files
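To make the distinction concrete: this is not ReaR's actual code, but the three naming styles that get_part_device_name_format() has to tell apart can be sketched with a made-up helper (part_name is a hypothetical name):

```shell
# Hypothetical helper illustrating the multipath partition naming styles
# (not ReaR's actual implementation):
#   SLES12:  <disk>-part<N>
#   SLES11:  <disk>_part<N>
#   Red Hat: <disk><N>
part_name() {
    local disk="$1" num="$2" style="$3"
    case "$style" in
        sles12) echo "${disk}-part${num}" ;;
        sles11) echo "${disk}_part${num}" ;;
        redhat) echo "${disk}${num}" ;;
    esac
}

part_name /dev/mapper/mpathc 2 sles12   # prints /dev/mapper/mpathc-part2
```

The confusion in this issue is exactly that the system uses the Red Hat style (mpathc2) where the SLES conventions (mpathc_part2 or mpathc-part2) are expected.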

In your initial comment here you wrote

Relax-and-Recover 2.3 / 2018-04-20

but https://github.com/rear/rear/pull/1765 was committed
https://github.com/rear/rear/commit/160e3263ed50d7f6bed4977197d1e0014e475aa3
on April 23 2018
so that your ReaR from 2018-04-20 is a bit too old.

In general, regardless of whether your particular issue (unexpected SUSE multipath partition name)
is already fixed in our current ReaR upstream GitHub master code,
I recommend trying out our current ReaR upstream GitHub master code
because that is the only place where we at ReaR upstream fix bugs.

To use our current ReaR upstream GitHub master code
do the following:

Basically "git clone" it into a separate directory and then
configure and run ReaR from within that directory like:

# git clone https://github.com/rear/rear.git

# mv rear rear.github.master

# cd rear.github.master

# vi etc/rear/local.conf

# usr/sbin/rear -D mkbackup

Note the relative paths "etc/rear/" and "usr/sbin/".

If the issue also happens with current ReaR upstream GitHub master code
please provide us a complete ReaR debug log file of "rear -D mkrescue/mkbackup"
and the resulting disklayout.conf file from your original system
plus a complete ReaR debug log file of "rear -D recover"
so that we can have a look how it behaves in your particular environment
cf. "Debugging issues with Relax-and-Recover" at
https://en.opensuse.org/SDB:Disaster_Recovery

If it perhaps "just works" with current ReaR upstream GitHub master code
we would really appreciate an explicit positive feedback.

jsmeix commented at 2018-05-07 08:31:

@bern66
if on your particular SLES12 system
your multipath devices are named

/dev/mapper/mpathc2

and not in the usual SLES12 form which is

/dev/mapper/mpathc-part2

https://github.com/rear/rear/pull/1765#issuecomment-378229855
and
https://github.com/rear/rear/pull/1765#issuecomment-378246918
seem to indicate that on your particular SLES12 system
there is no udev rule file /usr/lib/udev/rules.d/66-kpartx.rules
(it is provided by the kpartx RPM)
or it is there but the actual rule therein

RUN+="/sbin/kpartx -u -p -part /dev/$name"

does not work as it should on your particular SLES12 system.
But because I am not a multipath user this is only a blind guess.
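One quick way to test that blind guess is to look for the RUN command in the rule file (a sketch; check_kpartx_rule is a made-up helper name, and the default path below is the SLES12 location mentioned above):

```shell
# Sketch: check whether the kpartx udev rule file exists and contains the
# RUN command that creates the <name>_part<N> / <name>-part<N> symlinks.
# check_kpartx_rule is a hypothetical helper, not part of ReaR.
check_kpartx_rule() {
    local rule="${1:-/usr/lib/udev/rules.d/66-kpartx.rules}"
    grep -q 'kpartx -u' "$rule" 2>/dev/null
}

if check_kpartx_rule; then
    echo "kpartx udev rule looks present"
else
    echo "kpartx udev rule missing or unexpected"
fi
```

If the rule is present but the symlinks still look wrong, the problem is more likely in how the devices were created than in the rule file itself.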

schabrolles commented at 2018-05-07 08:36:

@jsmeix

I think the problem here is the presence of the device /dev/mapper/mpathc1, which is unusual.
It seems to be not a link but a real device created with the mknod command... I don't know why... (I don't have such a device on my multipathed SLES12.)

Because of that, the multipathed partitions are recorded as /dev/mapper/mpathc1 in disklayout.conf.
The problem could be related to issue #1766, where I propose a new way to discover the multipathed partition names based on /sys and device-mapper.

I think it should also solve the issue here because it will find the dm-X device as the partition and then find its real name (instead of trying to guess from the device name plus [1-9]* or -part[1-9]*...)
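The idea can be sketched roughly as follows (a sketch under assumptions, not the actual ReaR code: list_mp_partitions is a made-up name, the parent is identified by its kernel dm-N name as listed under each partition's slaves/ directory, and the sysfs root is a parameter only so the logic can be exercised against a fake tree):

```shell
# Sketch of the /sys + device-mapper discovery idea: a multipath partition
# is a dm device whose slaves/ directory contains the parent's kernel name
# (e.g. dm-0), and its real name is read from <dm>/dm/name instead of being
# guessed from the parent name plus a [1-9]* or -part[1-9]* suffix.
list_mp_partitions() {
    local sysfs="${1:-/sys}" parent="$2" dm
    for dm in "$sysfs"/block/dm-*; do
        [ -e "$dm/slaves/$parent" ] && cat "$dm/dm/name"
    done
}

# Usage (on a real system): list_mp_partitions /sys dm-0
```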

https://github.com/schabrolles/rear/commit/c635fd9da08307980f3c07f977731fb60d4e0cfe

If @bern66 or @badarmontassar confirm it solves their issue, I will make a PR.

jsmeix commented at 2018-05-07 08:40:

@bern66
how did you install your particular SLES12 system?

On my SLES12-SP3 installed form an original SLES12 installation medium I get

# rpm -qf /usr/lib/udev/rules.d/66-kpartx.rules
kpartx-0.7.1+7+suse.3edc5f7d-1.26.x86_64

# rpm -e --test kpartx
error: Failed dependencies:
        kpartx is needed by (installed) dmraid-1.0.0.rc16-34.3.x86_64
        kpartx is needed by (installed) multipath-tools-0.7.1+7+suse.3edc5f7d-1.26.x86_64

# rpm -e --test dmraid
error: Failed dependencies:
        dmraid is needed by (installed) os-prober-1.61-29.1.x86_64

# rpm -e --test multipath-tools
error: Failed dependencies:
        multipath-tools is needed by (installed) patterns-sles-base-12-77.8.x86_64

i.e. one cannot uninstall kpartx without breaking
RPM dependencies of several other packages that are usually installed.

jsmeix commented at 2018-05-07 08:51:

@schabrolles

I think the presence of a multipath device /dev/mapper/mpathc1
instead of the usually expected /dev/mapper/mpathc-part1 on SLES12
indicates that this particular SLES12 system is not as it should be.
This might lead to an endless sequence of other problems for the user
because I think nobody expects a SLES12 system with such multipath names,
so the user may run into endless further trouble, e.g. when asking
our official SUSE support or someone else about whatever issues
are somehow related to his particular SLES12 multipath system.
Furthermore I fear whatever other stuff in SUSE (e.g. YaST or whatever)
may "do strange things" if the multipath device names are not the expected ones.

Nevertheless if you can make the ReaR multipath code to even "just work"
for any kind of multipath device names it would be of course absolutely great,
in particular when users then could report "all fails - except ReaR" ;-)

bern66 commented at 2018-05-07 12:19:

Hello everyone,

There are so many things in your previous comments that I'll need some time to reply to all your questions and suggestions.

But a first few points:

  • kpartx is installed on our systems;
  • I did not install our SLES12 system myself. We are in an IBM Power Systems environment where an image was created initially, from which all other systems are built. I am fairly new in this environment;
  • I'll give the latest version of ReaR a try.

Also, a SuSE consultant came to help us with many aspects of our environment and didn't mention anything wrong with our multipath settings. But he did mention that we should not use friendly names, specifically to support DR configurations.

Regarding the naming of the multipath, here is what I have when using friendly names:

tstinf01:~ # ls -l /dev/mapper/
total 0
crw------- 1 root root  10, 236 May  7 07:17 control
brw-r----- 1 root disk 254,   0 May  7 07:17 mpathc
brw-r----- 1 root disk 254,   1 May  7 07:17 mpathc1
brw-r----- 1 root disk 254,   2 May  7 07:17 mpathc2
lrwxrwxrwx 1 root root        7 May  7 07:17 mpathc_part1 -> ../dm-1
lrwxrwxrwx 1 root root        7 May  7 07:17 mpathc_part2 -> ../dm-2
lrwxrwxrwx 1 root root        7 May  7 07:17 system-home -> ../dm-5
lrwxrwxrwx 1 root root        7 May  7 07:17 system-root -> ../dm-3
lrwxrwxrwx 1 root root        7 May  7 07:17 system-swap -> ../dm-4

And here is what I have when not using friendly names:

tstinf01:~ # ls -l /dev/mapper/
total 0
brw-r----- 1 root disk 254,   0 May  7 08:17 3600507680c800450b80000000000093e
brw-r----- 1 root disk 254,   1 May  7 08:17 3600507680c800450b80000000000093e1
brw-r----- 1 root disk 254,   2 May  7 08:17 3600507680c800450b80000000000093e2
lrwxrwxrwx 1 root root        7 May  7 08:17 3600507680c800450b80000000000093e_part1 -> ../dm-1
lrwxrwxrwx 1 root root        7 May  7 08:17 3600507680c800450b80000000000093e_part2 -> ../dm-2
crw------- 1 root root  10, 236 May  7 08:17 control
lrwxrwxrwx 1 root root        7 May  7 08:17 system-home -> ../dm-5
lrwxrwxrwx 1 root root        7 May  7 08:17 system-root -> ../dm-3
lrwxrwxrwx 1 root root        7 May  7 08:17 system-swap -> ../dm-4

The version of my system is 12.2:

tstinf01:~ # lsb_release -a
LSB Version:    n/a
Distributor ID: SUSE
Description:    SUSE Linux Enterprise Server for SAP Applications 12 SP2
Release:        12.2
Codename:       n/a

Thanks for your assistance, you are amazing!

bern66 commented at 2018-05-07 18:49:

I compiled the latest version of ReaR and ended up with the same problem as initially described, except that this time I am not using friendly names:

RESCUE tstinf01:~ # rear -D recover
Relax-and-Recover 2.3. / 2018-05-07
Using log file: /var/log/rear/rear-tstinf01.log
Running workflow recover within the ReaR rescue/recovery system

[snip]

UserInput: No choices - result is 'yes'
User confirmed to proceed with recovery
No code has been generated to recreate pv:/dev/mapper/3600507680c800450b80000000000093e_part2 (lvmdev).
    To recreate it manually add code to /var/lib/rear/layout/diskrestore.sh or abort.
UserInput -I ADD_CODE_TO_RECREATE_MISSING_PVDEVMAPPER3600507680C800450B80000000000093EPART2LVMDEV needed in /usr/share/rear/layout/prepare/default/600_show_unprocessed.sh line 33
Manually add code that recreates pv:/dev/mapper/3600507680c800450b80000000000093e_part2 (lvmdev)
1) View /var/lib/rear/layout/diskrestore.sh
2) Edit /var/lib/rear/layout/diskrestore.sh
3) Go to Relax-and-Recover shell
4) Continue 'rear recover'
5) Abort 'rear recover'
(default '4' timeout 300 seconds)

One thing I do not understand, @schabrolles you said in a comment that XXXX_part2 was not there but it is. See below:

tstinf02:~ # grep part2 disklayout.conf
lvmdev /dev/system /dev/mapper/3600507680c800450b80000000000093e_part2 RWHsoG-C5a3-78M5-FmaZ-xMvD-dN5N-jSsKXJ 208973520

I attach the disklayout.conf and the log file of rear -D recover if it can help.

log_disklayout.tar.gz

schabrolles commented at 2018-05-07 19:16:

@bern66,

The issue is you have lvmdev /dev/system /dev/mapper/3600507680c800450b80000000000093e_part2 but the partition is named part /dev/mapper/3600507680c800450b80000000000093e2 instead of part /dev/mapper/3600507680c800450b80000000000093e_part2.

As explained in a previous comment, could you give the patch I prepared for issue #1766 a try? I think it could help here.

git clone https://github.com/schabrolles/rear -b issue_1766
mv rear rear.github.issue_1766
cd rear.github.issue_1766
vi etc/rear/local.conf
usr/sbin/rear -D mkbackup

bern66 commented at 2018-05-08 14:00:

@schabrolles
I tried the fix for issue 1766 and got the same problem.

RESCUE tstinf01:~ # rear -D recover
Relax-and-Recover 2.3-git_1766. / 2018-05-08
Using log file: /var/log/rear/rear-tstinf01.log
Running workflow recover within the ReaR rescue/recovery system
Testing connection to TSM server
. . .
UserInput -I DISK_LAYOUT_PROCEED_RECOVERY needed in /usr/share/rear/layout/prepare/default/250_compare_disks.sh line 146
Proceed with recovery (yes) otherwise manual disk layout configuration is enforced
(default 'yes' timeout 30 seconds)
UserInput: No real user input (empty or only spaces) - using default input
UserInput: No choices - result is 'yes'
Proceeding with recovery by default
No code has been generated to recreate pv:/dev/mapper/3600507680c800450b80000000000093e_part2 (lvmdev).
    To recreate it manually add code to /var/lib/rear/layout/diskrestore.sh or abort.
UserInput -I ADD_CODE_TO_RECREATE_MISSING_PVDEVMAPPER3600507680C800450B80000000000093EPART2LVMDEV needed in /usr/share/rear/layout/prepare/default/600_show_unprocessed.sh line 33
Manually add code that recreates pv:/dev/mapper/3600507680c800450b80000000000093e_part2 (lvmdev)
1) View /var/lib/rear/layout/diskrestore.sh
2) Edit /var/lib/rear/layout/diskrestore.sh

I attached the disklayout.conf and the log of rear -D recover.

rear-20180508.zip

I'll now take a look at your message regarding 1802.

Thanks for your help

schabrolles commented at 2018-05-08 14:06:

@bern66,
send me the logfile generated during "rear -D mkbackup"

bern66 commented at 2018-05-08 14:12:

@schabrolles
Here is the log!

rear-tstinf01-rear-D-mkrescue.log.zip

schabrolles commented at 2018-05-08 14:54:

@bern66,

Is it possible to have a look at your /etc/multipath.conf?
I really don't know why your system is generating block devices in /dev/mapper... You should only have symlinks which point to /dev/dm-X (except control).

bern66 commented at 2018-05-08 15:05:

@schabrolles
Why not! There are no secrets in a multipath.conf file. Here it is:

# Default multipath.conf file created for install boot
# Used mpathN names
defaults {
user_friendly_names no
}
devices {
    device {
        vendor "IBM"
        product "^2145"
        path_grouping_policy "group_by_prio"
        features "1 queue_if_no_path"
        prio "alua"
        failback "immediate"
    }
}

I hope it can help you.

Thanks for your great assistance!

schabrolles commented at 2018-05-08 15:35:

@bern66,
I have exactly the same config: a PowerVM LPAR with SLES12 and multipath disks on SAN over an IBM SVC (2145).

But .... I can't reproduce what you have on your side. I use the same multipath.conf file and have NO block devices in /dev/mapper, only links:

rear-sles12-143:~ # ls -l /dev/mapper/
total 0
lrwxrwxrwx 1 root root       7 May  8 17:26 3600507680c82004cf8000000000000d8 -> ../dm-0
lrwxrwxrwx 1 root root       7 May  8 17:26 3600507680c82004cf8000000000000d8_part1 -> ../dm-5
lrwxrwxrwx 1 root root       7 May  8 17:26 3600507680c82004cf8000000000000d8-part1 -> ../dm-5
lrwxrwxrwx 1 root root       7 May  8 17:26 3600507680c82004cf8000000000000d8_part2 -> ../dm-6
lrwxrwxrwx 1 root root       7 May  8 17:26 3600507680c82004cf8000000000000d8-part2 -> ../dm-6
lrwxrwxrwx 1 root root       7 May  8 17:26 3600507680c82004cf8000000000008d4 -> ../dm-1
lrwxrwxrwx 1 root root       7 May  8 17:26 3600507680c82004cf8000000000008d4_part1 -> ../dm-3
lrwxrwxrwx 1 root root       7 May  8 17:26 3600507680c82004cf8000000000008d4-part1 -> ../dm-3
lrwxrwxrwx 1 root root       7 May  8 17:26 3600507680c82004cf8000000000008d5 -> ../dm-2
lrwxrwxrwx 1 root root       7 May  8 17:26 3600507680c82004cf8000000000008d5_part1 -> ../dm-4
lrwxrwxrwx 1 root root       7 May  8 17:26 3600507680c82004cf8000000000008d5-part1 -> ../dm-4
crw------- 1 root root 10, 236 May  8 17:26 control
lrwxrwxrwx 1 root root       7 May  8 17:26 system-root -> ../dm-7
lrwxrwxrwx 1 root root       7 May  8 17:26 system-swap -> ../dm-8
lrwxrwxrwx 1 root root       7 May  8 17:26 vgdata-lv_data -> ../dm-9

Getting a multipath partition name gives me this (which is OK):

rear-sles12-143:~ # cat /sys/block/dm-6/dm/name 
3600507680c82004cf8000000000000d8-part2

You may have some custom udev rules that create these block devices. Could you please list the content of your /etc/udev/rules.d directory? Here is mine:

rear-sles12-143:~ # ls -l /etc/udev/rules.d/
total 128
-rw-r--r-- 1 root root  594 Dec 21 13:52 50-iscsi-firmware-login.rules
-rw-r--r-- 1 root root 1062 May  5 18:00 70-persistent-net.rules
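A quick way to tell real block-device nodes from symlinks in such a listing: only the mode character at the start of each line matters (`b` for a block node, `l` for a symlink). The sample lines below stand in for a live /dev/mapper so the sketch runs anywhere.

```shell
# Sample ls -l lines standing in for a real /dev/mapper directory;
# on a live system you could instead run: find /dev/mapper -type b ! -name control
listing='brw-r----- 1 root disk 254,   1 May  7 07:17 mpathc1
lrwxrwxrwx 1 root root        7 May  7 07:17 mpathc_part1 -> ../dm-1'

# A leading "b" means a real block-device node, a leading "l" a symlink.
printf '%s\n' "$listing" | awk '/^b/ {print "block node:", $NF}
                                /^l/ {print "symlink:   ", $(NF-2)}'
```

On a healthy multipath setup the only entries that are not symlinks should be `control` (a character device) and nothing else.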

bern66 commented at 2018-05-08 16:27:

I think I might have found something.

Look what I had in /etc/udev/rules.d/:

tstinf01:~ # ll /etc/udev/rules.d/
total 128
-rw-r--r-- 1 root root  66 Apr 24 11:18 70-persistent-net.rules
-rw-r--r-- 1 root root 998 Jul 10  2017 99-storixmpath.rules

So with those rules, we had:

tstinf01:/etc/udev/rules.d # ll /dev/mapper/
total 0
brw-r----- 1 root disk 254,   0 May  8 10:05 3600507680c800450b80000000000093e
brw-r----- 1 root disk 254,   1 May  8 10:05 3600507680c800450b80000000000093e1
brw-r----- 1 root disk 254,   2 May  8 10:05 3600507680c800450b80000000000093e2
lrwxrwxrwx 1 root root        7 May  8 10:05 3600507680c800450b80000000000093e_part1 -> ../dm-1
lrwxrwxrwx 1 root root        7 May  8 10:05 3600507680c800450b80000000000093e_part2 -> ../dm-2

I disabled 99-storixmpath.rules:

tstinf01:~ # ll /etc/udev/rules.d/
total 128
-rw-r--r-- 1 root root  66 Apr 24 11:18 70-persistent-net.rules
-rw-r--r-- 1 root root 998 Jul 10  2017 99-storixmpath.rules.WHATISTHIS

I rebooted and got links for everything.

tstinf01:~ # ls -l /dev/mapper/
total 0
lrwxrwxrwx 1 root root       7 May  8 11:44 3600507680c800450b80000000000093e -> ../dm-0
lrwxrwxrwx 1 root root       7 May  8 11:44 3600507680c800450b80000000000093e1 -> ../dm-1
lrwxrwxrwx 1 root root       7 May  8 11:44 3600507680c800450b80000000000093e2 -> ../dm-2
lrwxrwxrwx 1 root root       7 May  8 11:44 3600507680c800450b80000000000093e_part1 -> ../dm-1
lrwxrwxrwx 1 root root       7 May  8 11:44 3600507680c800450b80000000000093e_part2 -> ../dm-2

But in /sys we still see 3600507680c800450b80000000000093e1 while we need 3600507680c800450b80000000000093e_part1, if I understand you correctly.

tstinf01:~ # cat /sys/block/dm-1/dm/name
3600507680c800450b80000000000093e1
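To compare all kernel-side dm names at once (not just dm-1), a small loop over /sys/block works. The `DM_SYS_DIR` override below is not a real ReaR or kernel variable; it is only there so the sketch can be exercised outside a live system.

```shell
# List every device-mapper name the kernel knows, so the kernel-side names
# (e.g. ...93e1) can be compared against the /dev/mapper links.
sysdir=${DM_SYS_DIR:-/sys/block}   # override is purely for testing the sketch
for f in "$sysdir"/dm-*/dm/name; do
    [ -e "$f" ] || continue        # no dm devices present: print nothing
    printf '%s: %s\n' "$(basename "${f%/dm/name}")" "$(cat "$f")"
done
```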

schabrolles commented at 2018-05-08 16:32:

@bern66 ,
Did you run mkinitrd to regenerate the ramdisk before rebooting?

bern66 commented at 2018-05-08 16:40:

Yes, I ran mkinitrd.... but you are scaring me a bit.... Don't tell me I'll have to regenerate a new initrd??

bern66 commented at 2018-05-08 16:53:

As this is a test lpar, I ran mkinitrd.

tstinf01:~> ls -l /dev/mapper/
total 0
lrwxrwxrwx 1 root root       7 May  8 12:51 3600507680c800450b80000000000093e -> ../dm-0
lrwxrwxrwx 1 root root       7 May  8 12:51 3600507680c800450b80000000000093e_part1 -> ../dm-1
lrwxrwxrwx 1 root root       7 May  8 12:51 3600507680c800450b80000000000093e-part1 -> ../dm-1
lrwxrwxrwx 1 root root       7 May  8 12:51 3600507680c800450b80000000000093e_part2 -> ../dm-2
lrwxrwxrwx 1 root root       7 May  8 12:51 3600507680c800450b80000000000093e-part2 -> ../dm-2

schabrolles commented at 2018-05-08 16:54:

Try lsinitrd | grep storix ... if you see the old file, this means you have to regenerate the ramdisk to remove it; otherwise the storix script will keep running from the initrd at boot time.
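That check can be wrapped so it degrades gracefully. lsinitrd is the dracut tool used on SLES12; whether mkinitrd or `dracut -f` is the right regeneration command depends on the distribution, so the message below only suggests both.

```shell
# Guarded sketch of the lsinitrd check described above.
if ! command -v lsinitrd >/dev/null 2>&1; then
    echo "lsinitrd not available on this system"
elif lsinitrd 2>/dev/null | grep -q storix; then
    echo "storix rule still inside the initrd - regenerate it (mkinitrd or dracut -f) and reboot"
else
    echo "initrd looks clean of storix files"
fi
```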

schabrolles commented at 2018-05-08 16:57:

@bern66 ,

Much better !!!! Then try rear mkbackup and rear recover again and give us good news !!!

bern66 commented at 2018-05-08 18:12:

@schabrolles
No good news.....

UserInput -I LAYOUT_CODE_RUN needed in /usr/share/rear/layout/recreate/default/200_run_layout_code.sh line 127
The disk layout recreation script failed
RESCUE tstinf01:~ # rear -vD recover
Relax-and-Recover 2.3-git_1766. / 2018-05-08
Using log file: /var/log/rear/rear-tstinf01.log
Running workflow recover within the ReaR rescue/recovery system
Testing connection to TSM server
IBM Spectrum Protect
Command Line Backup-Archive Client Interface
  Client Version 8, Release 1, Level 2.0
  Client date/time: 05/08/18   18:04:07
(c) Copyright by IBM Corporation and other(s) 1990, 2017. All Rights Reserved.

Node Name: STSTINF01
Session established with server GISPA: Linux/x86_64
  Server Version 8, Release 1, Level 4.000
  Server date/time: 05/08/18   14:04:07  Last access: 05/08/18   13:58:32

IBM Spectrum Protect Server Connection Information

Home Server Name........: GISPA
Server Type.............: Linux/x86_64
Archive Retain Protect..: "No"
Server Version..........: Ver. 8, Rel. 1, Lev. 4.0
Last Access Date........: 05/08/18   13:58:32
Delete Backup Files.....: "No"
Delete Archive Files....: "Yes"
Deduplication...........: "Client Or Server"

Node Name...............: STSTINF01
User Name...............: root

SSL Information.........: TLSv1.2 AES-256-GCM

Secondary Server Information
Configured for failover to server GISPB

Testing connection to TSM server completed successfully

TSM restores by default the latest backup data. Alternatively you can specify
a different date and time to enable Point-In-Time Restore. Press ENTER to
use the most recent available backup
Enter date/time (YYYY-MM-DD HH:mm:ss) or press ENTER [30 secs]:
Skipping Point-In-Time Restore, will restore most recent data.

The TSM Server reports the following for this node:
                  #     Last Incr Date          Type    Replication       File Space Name
                --------------------------------------------------------------------------------
                  1     07-05-2018 22:11:31     BTRFS   Current           /
                  2     07-05-2018 22:11:04     BTRFS   Current           /.snapshots
                  3     07-05-2018 22:11:04     BTRFS   Current           /boot/grub2/powerpc-ieee1275
                  4     07-05-2018 22:11:18     XFS     Current           /home
                  5     07-05-2018 22:11:04     BTRFS   Current           /opt
                  6     07-05-2018 22:11:07     BTRFS   Current           /srv
                  7     07-05-2018 22:11:04     BTRFS   Current           /usr/local
                  8     07-05-2018 22:11:04     BTRFS   Current           /var/cache
                  9     07-05-2018 22:11:04     BTRFS   Current           /var/crash
                 10     07-05-2018 22:11:08     BTRFS   Current           /var/lib/libvirt/images
                 11     07-05-2018 22:11:08     BTRFS   Current           /var/lib/machines
                 12     07-05-2018 22:11:04     BTRFS   Current           /var/lib/mailman
                 13     07-05-2018 22:11:04     BTRFS   Current           /var/lib/mariadb
                 14     07-05-2018 22:10:54     BTRFS   Current           /var/lib/mysql
                 15     07-05-2018 22:11:13     BTRFS   Current           /var/lib/named
                 16     07-05-2018 22:11:07     BTRFS   Current           /var/lib/pgsql
                 17     07-05-2018 22:11:13     BTRFS   Current           /var/log
                 18     07-05-2018 22:11:06     BTRFS   Current           /var/opt
                 19     07-05-2018 22:11:13     BTRFS   Current           /var/spool
                 20     07-05-2018 22:11:07     BTRFS   Current           /var/tmp
Please enter the numbers of the filespaces we should restore.
Pay attention to enter the filesystems in the correct order
(like restore / before /var/log)
(default: 1 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20): [30 secs]
The following filesystems will be restored:
/
/boot/grub2/powerpc-ieee1275
/home
/opt
/srv
/usr/local
/var/cache
/var/crash
/var/lib/libvirt/images
/var/lib/machines
/var/lib/mailman
/var/lib/mariadb
/var/lib/mysql
/var/lib/named
/var/lib/pgsql
/var/log
/var/opt
/var/spool
/var/tmp
Is this selection correct ? (Y|n) [30 secs] y
Setting up multipathing
Activating multipath
multipath activated
Listing multipath device found
3600507680c800450b80000000000093e       (254, 0)
Comparing disks
Device dm-0 has expected (same) size 107374182400 (will be used for recovery)
Disk configuration looks identical
UserInput -I DISK_LAYOUT_PROCEED_RECOVERY needed in /usr/share/rear/layout/prepare/default/250_compare_disks.sh line 146
Proceed with recovery (yes) otherwise manual disk layout configuration is enforced
(default 'yes' timeout 30 seconds)
yes
UserInput: No choices - result is 'yes'
User confirmed to proceed with recovery
Start system layout restoration.
Creating partitions for disk /dev/mapper/3600507680c800450b80000000000093e (msdos)
Creating LVM PV /dev/mapper/3600507680c800450b80000000000093e-part2
Restoring LVM VG system
Sleeping 3 seconds to let udev or systemd-udevd create their devices...
Creating filesystem of type btrfs with mount point / on /dev/mapper/system-root.
Mounting filesystem /
UserInput -I LAYOUT_CODE_RUN needed in /usr/share/rear/layout/recreate/default/200_run_layout_code.sh line 127
The disk layout recreation script failed
1) Rerun disk recreation script (/var/lib/rear/layout/diskrestore.sh)
2) View 'rear recover' log file (/var/log/rear/rear-tstinf01.log)
3) Edit disk recreation script (/var/lib/rear/layout/diskrestore.sh)
4) View original disk space usage (/var/lib/rear/layout/config/df.txt)
5) Use Relax-and-Recover shell and return back to here
6) Abort 'rear recover'
(default '1' timeout 300 seconds)
6
UserInput: Valid choice number result 'Abort 'rear recover''
ERROR: User chose to abort 'rear recover' in /usr/share/rear/layout/recreate/default/200_run_layout_code.sh
Aborting due to an error, check /var/log/rear/rear-tstinf01.log for details
Exiting rear recover (PID 4193) and its descendant processes
Running exit tasks
You should also rm -Rf /tmp/rear.F25RIt3XyC7og40
Terminated
RESCUE tstinf01:~ #

schabrolles commented at 2018-05-08 18:37:

send /var/log/rear/rear-tstinf01.log

bern66 commented at 2018-05-08 18:49:

@schabrolles
Here is the log.

rear-tstinf01.log

But I don't know what I will do if I have to run mkinitrd on 200+ LPARs???

Thanks again for you assistance!

schabrolles commented at 2018-05-08 19:06:

@bern66,

if you have installed storix on your 200+ LPARS, then you will have to run mkinitrd after having removed 99-storixmpath.rules

For the error, it seems that the device /dev/mapper/3600507680c800450b80000000000093e-part2 is in use .... I don't know why.

lvm pvcreate -ff --yes -v --uuid RWHsoG-C5a3-78M5-FmaZ-xMvD-dN5N-jSsKXJ --restorefile /var/lib/rear/layout/lvm/system.cfg /dev/mapper/3600507680c800450b80000000000093e-part2
Can't open /dev/mapper/3600507680c800450b80000000000093e-part2 exclusively.  Mounted filesystem?
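When pvcreate reports that it cannot open a device-mapper node exclusively, standard tools can usually show what still holds it open. This is a hedged diagnostic sketch (the device path is the one from this thread); it does nothing destructive and skips itself where the device is absent.

```shell
dev=/dev/mapper/3600507680c800450b80000000000093e-part2
if [ -b "$dev" ]; then
    # Which dm table the node belongs to, and whether it has open references:
    command -v dmsetup >/dev/null 2>&1 && dmsetup info "$dev"
    # Anything (a mount, LVM itself, a stale process) keeping it open:
    command -v fuser >/dev/null 2>&1 && fuser -v "$dev" 2>&1
    command -v lsblk >/dev/null 2>&1 && lsblk "$dev"
    true
else
    echo "device $dev not present on this system - nothing to inspect"
fi
```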

Did you try rear recover several times without rebooting from the DVD media?

bern66 commented at 2018-05-08 19:16:

Well, storix was not installed on all our LPARs directly, but it was on the image from which all other LPARs have been created, so effectively it is on all of them.

Yes, I tried a few times until I realized I had forgotten to tell the TSM server to open up the security for my test machine on first login for a recover. This is a security feature in TSM; I have to do it every time I run a test.

schabrolles commented at 2018-05-08 19:46:

Then, I'm afraid you will have to regenerate initrd on all your LPARs to completely clean and remove storix stuff.

I think the fact that you retried the recovery several times without rebooting can explain the error you got. ReaR recreates the disk layout (partitions/LVM/filesystems) and mounts the new filesystem on /mnt/local. Then it starts TSM to restore the data into /mnt/local.

If you have to retry a recovery without rebooting on the ISO image, don't forget to:

  • umount everything from /mnt/local;
  • go to /var/lib/rear/layout and restore the disklayout.conf (rear creates a backup of the file each time a recover is started). This is important if you are doing a migration (restoring to different hardware).
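Those two preparation steps can be sketched as a short guarded script. The paths are the ones ReaR uses in the rescue system; the backup filename of disklayout.conf is deliberately not guessed, the candidates are just listed.

```shell
mnt=/mnt/local
layout=/var/lib/rear/layout

# 1. Unmount everything below /mnt/local (recursively, deepest first).
if mountpoint -q "$mnt" 2>/dev/null; then
    umount -R "$mnt" && echo "unmounted everything below $mnt"
else
    echo "$mnt is not mounted - nothing to unmount"
fi

# 2. Restore the pristine disklayout.conf from the backup ReaR made.
#    The exact backup name varies, so inspect the candidates first:
ls -l "$layout"/disklayout.conf* 2>/dev/null \
    || echo "no $layout/disklayout.conf* here (not a rescue system?)"
```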

Could you try again by rebooting on the recovery media?

jsmeix commented at 2018-05-09 10:05:

@schabrolles
only FYI regarding your https://github.com/rear/rear/issues/1796#issuecomment-387445056

Nowadays the system's default udev rules are in /usr/lib/udev/rules.d/
e.g. the kpartx RPM provides /usr/lib/udev/rules.d/66-kpartx.rules
and the udev rules in /etc/udev/rules.d/ are primarily meant for
user-specific udev rules; see "man udev", which reads (excerpt):

RULES FILES
The udev rules are read from the files located in the
system rules directory /usr/lib/udev/rules.d,
the volatile runtime directory /run/udev/rules.d
and the local administration directory /etc/udev/rules.d.
All rules files are collectively sorted and processed in
lexical order, regardless of the directories in which they live.
However, files with identical filenames replace each other.
Files in /etc have the highest priority, files in /run take
precedence over files with the same name in /usr/lib.
This can be used to override a system-supplied rules file
with a local file if needed; a symlink in /etc with the same
name as a rules file in /usr/lib, pointing to /dev/null,
disables the rules file entirely.
Rule files must have the extension .rules;
other extensions are ignored.
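The /dev/null-symlink mechanism from the excerpt, demonstrated in a scratch directory rather than the real /etc/udev/rules.d, so the sketch is safe to run anywhere:

```shell
# Masking a rules file: a same-named symlink to /dev/null in a
# higher-priority directory disables the rules file entirely.
rulesdir=$(mktemp -d)                       # stand-in for /etc/udev/rules.d
ln -s /dev/null "$rulesdir/99-storixmpath.rules"
readlink "$rulesdir/99-storixmpath.rules"   # prints: /dev/null
rm -r "$rulesdir"
```

Compared with deleting the file, masking survives package updates that would reinstall the rule, which is why the man page recommends it.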

jsmeix commented at 2018-05-09 10:26:

@bern66
I don't know about Storix.
Could you provide some basic background information
why you use it in addition to ReaR?

I only see at
https://www.storix.com/
that it is another "Full-system backup and disaster recovery" tool
and in
https://www.storix.com/download/sbaDM-Multipath.pdf
their special requirements regarding multipath are described
in particular they need user_friendly_names yes plus their
particular udev rules file /etc/udev/rules.d/99-storixmpath.rules

It seems using Storix conflicts with using ReaR, at least in the multipath case,
and it is good for us at ReaR upstream to know about that possible conflict
so that we can describe it in the ReaR documentation
and spare other users from learning the hard way that a prior usage
or setup of Storix on a system may cause issues when ReaR is used later.

jsmeix commented at 2018-05-09 10:37:

In general regarding retrying "rear recover":

It depends on whether umounting everything from /mnt/local/
plus using the original disklayout.conf is sufficient,
because once the diskrestore.sh script has (partially) run,
the disks are (partially) changed in some unwanted
and possibly broken way.

Unfortunately ReaR still has no "cleanupdisk" script
cf. https://github.com/rear/rear/issues/799
so that a subsequent "rear recover" may fail because
a prior "rear recover" had somehow messed up the disk.

This means one may have to manually clean up the disks
before retrying "rear recover".
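One common way to do that manual cleanup is wipefs, which removes filesystem, LVM and partition-table signatures from a disk. This is destructive on the wrong device, so the sketch below targets a hypothetical device name and refuses to run where that device does not exist.

```shell
dev=/dev/mapper/mpathc          # hypothetical device - adjust before use!
if [ -b "$dev" ]; then
    wipefs --all "$dev"         # erase all detected signatures on the disk
else
    echo "skipping: $dev is not a block device here"
fi
```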

bern66 commented at 2018-05-09 12:29:

We thought about using Storix at some point but it has been removed from our environment. Removed, but unfortunately some traces of it are still lying around.

Now, after making sure our TSM server was ready for a recover, I rebooted my server and tried another rear -D recover, only once. ;-) I am blocked on the following failure.

Mounting filesystem /
UserInput -I LAYOUT_CODE_RUN needed in /usr/share/rear/layout/recreate/default/200_run_layout_code.sh line 127
The disk layout recreation script failed

The recover log is attached as well as the disklayout.conf if it can help.

Thanks for your help or devotion should I say. :-)

recover-20180509.zip

schabrolles commented at 2018-05-09 14:06:

@bern66

Did you add the following lines in your rear configuration file? (see https://github.com/rear/rear/issues/1796#issuecomment-386592239)

## SLES12
BACKUP_OPTIONS="nfsvers=4,nolock"
REQUIRED_PROGS=( "${REQUIRED_PROGS[@]}" snapper chattr lsattr )
COPY_AS_IS=( "${COPY_AS_IS[@]}" /usr/lib/snapper/installation-helper /etc/snapper/config-templates/default )

for subvol in $(findmnt -n -r -t btrfs | cut -d ' ' -f 1 | grep -v '^/$' | egrep -v 'snapshots|crash') ; do
    BACKUP_PROG_INCLUDE=( "${BACKUP_PROG_INCLUDE[@]}" "$subvol" )
done

POST_RECOVERY_SCRIPT=( 'if snapper --no-dbus -r $TARGET_FS_ROOT get-config | grep -q "^QGROUP.*[0-9]/[0-9]" ; then snapper --no-dbus -r $TARGET_FS_ROOT set-config QGROUP= ; snapper --no-dbus -r $TARGET_FS_ROOT setup-quota && echo snapper setup-quota done || echo snapper setup-quota failed ; else echo snapper setup-quota not used ; fi' )

bern66 commented at 2018-05-09 15:14:

Yes, those entries are in my rear configuration file site.conf. Here is the content of my site.conf.
I like your idea of a for loop for BACKUP_PROG_INCLUDE. I'll use your suggestion.

OUTPUT=ISO
OUTPUT_URL=nfs://tstinf02/exports/rear/iso
ISO_PREFIX="$HOSTNAME-rear-$( date "+%y%m%d" )"
ISO_VOLID=$HOSTNAME

REAR_INITRD_COMPRESSION=lzma

MODULES_LOAD=( autofs4 scsi_mod scsi_dh_alua scsi_dh_emc scsi_dh_rdac dm_mod dm_multipath sg dm_log dm_region_hash dm_mirror scsi_transport_srp scsi_transport_fc ibmvscsi ibmvfc cdrom sr_mod sd_mod dm_service_time raid6_pq xor btrfs sunrpc rtc_generic ibmveth libcrc32c xfs af_packet isofs nls_utf8 netlink_diag af_packet_diag unix_diag inet_diag udp_diag tcp_diag rpaphp rpadlpar_io )

REQUIRED_PROGS=( "${REQUIRED_PROGS[@]}" snapper chattr lsattr )

COPY_AS_IS=( "${COPY_AS_IS[@]}" /usr/lib/snapper/installation-helper /etc/snapper/config-templates/default )

BACKUP_PROG_INCLUDE=( '/var/log/*' '/var/lib/mysql/*' '/var/lib/pgsql/*' '/var/lib/mariadb/*' '/var/lib/libvirt/images/*' '/var/lib/named/*' '/var/crash/*' '/var/lib/machines/*' '/.snapshots/*' '/opt/*' '/usr/local/*' '/tmp/*' '/var/cache/*' '/var/tmp/*' '/var/lib/mailman/*' '/var/spool/*' '/var/opt/*' '/srv/*' '/boot/grub2/powerpc-ieee1275/*' )

AUTOEXCLUDE_MULTIPATH=n

BOOT_OVER_SAN=y

BACKUP=TSM

COPY_AS_IS_TSM=( /etc/$HOSTNAME /opt/tivoli/tsm/client/ba/bin/dsmc /opt/tivoli/tsm/client/ba/bin/tsmbench_inclexcl /opt/tivoli/tsm/client/ba/bin/dsm.sys /opt/tivoli/tsm/client/ba/bin/dsm.opt /opt/tivoli/tsm/client/api/bin64/libgpfs.so /opt/tivoli/tsm/client/api/bin64/libdmapi.so /opt/tivoli/tsm/client/ba/bin/EN_US/dsmclientV3.cat /usr/local/ibm/gsk8/* )

COPY_AS_IS_EXCLUDE_TSM=( )

PROGS_TSM=(dsmc)

TSM_LD_LIBRARY_PATH="/opt/tivoli/tsm/client/ba/bin:/opt/tivoli/tsm/client/api/bin64:/opt/tivoli/tsm/client/api/bin:/opt/tivoli/tsm/client/api/bin64/cit/bin"

TSM_RESULT_FILE_PATH=/opt/tivoli/tsm/rear

TSM_RESULT_SAVE=n

TSM_ARCHIVE_MGMT_CLASS=qaasba

TSM_RM_ISOFILE=y

schabrolles commented at 2018-05-09 16:10:

@jsmeix, I think you could help here.

There is no btrfsdefaultsubvol defined with @/.snapshots in the @bern66 disklayout.conf.
I think the system was configured not to use snapshots... (@bern66 could you confirm?)
So the snapper installation-helper code is not triggered.

disklayout extract

# Btrfs default subvolume for /dev/mapper/system-root at /
# Format: btrfsdefaultsubvol <device> <mountpoint> <btrfs_subvolume_ID> <btrfs_subvolume_path>
btrfsdefaultsubvol /dev/mapper/system-root / 5 /
# Btrfs snapshot subvolumes for /dev/mapper/system-root at /
# Btrfs snapshot subvolumes are listed here only as documentation.
# There is no recovery of btrfs snapshot subvolumes.
# Format: btrfssnapshotsubvol <device> <mountpoint> <btrfs_subvolume_ID> <btrfs_subvolume_path>
#btrfssnapshotsubvol /dev/mapper/system-root / 669 @/.snapshots/255/snapshot
#btrfssnapshotsubvol /dev/mapper/system-root / 670 @/.snapshots/256/snapshot
#btrfssnapshotsubvol /dev/mapper/system-root / 700 @/.snapshots/279/snapshot
#btrfssnapshotsubvol /dev/mapper/system-root / 701 @/.snapshots/280/snapshot
#btrfssnapshotsubvol /dev/mapper/system-root / 702 @/.snapshots/281/snapshot
#btrfssnapshotsubvol /dev/mapper/system-root / 703 @/.snapshots/282/snapshot
#btrfssnapshotsubvol /dev/mapper/system-root / 704 @/.snapshots/283/snapshot
#btrfssnapshotsubvol /dev/mapper/system-root / 705 @/.snapshots/284/snapshot
# Btrfs normal subvolumes for /dev/mapper/system-root at /
# Format: btrfsnormalsubvol <device> <mountpoint> <btrfs_subvolume_ID> <btrfs_subvolume_path>
# Btrfs subvolumes that belong to snapper are listed here only as documentation.
# Snapper's base subvolume '/@/.snapshots' is deactivated here because during 'rear recover'
# it is created by 'snapper/installation-helper --step 1' (which fails if it already exists).
# Furthermore any normal btrfs subvolume under snapper's base subvolume would be wrong.
# See https://github.com/rear/rear/issues/944#issuecomment-238239926
# and https://github.com/rear/rear/issues/963#issuecomment-240061392
# how to create a btrfs subvolume in compliance with the SLES12 default brtfs structure.
# In short: Normal btrfs subvolumes on SLES12 must be created directly below '/@/'
# e.g. '/@/var/lib/mystuff' (which requires that the btrfs root subvolume is mounted)
# and then the subvolume is mounted at '/var/lib/mystuff' to be accessible from '/'
# plus usually an entry in /etc/fstab to get it mounted automatically when booting.
# Because any '@/.snapshots' subvolume would let 'snapper/installation-helper --step 1' fail
# such subvolumes are deactivated here to not let 'rear recover' fail:
#btrfsnormalsubvol /dev/mapper/system-root / 258 @/.snapshots
btrfsnormalsubvol /dev/mapper/system-root / 257 @

The layout creation failed with the following message:
the subvolume .snapshots cannot be found. (I assume it should be recreated by snapper's installation-helper.)

+++ grep -q ' on /mnt/local/.snapshots '
+++ test -d /mnt/local/.snapshots
+++ mkdir -p /mnt/local/.snapshots
+++ mount -t btrfs -o rw,relatime,space_cache -o subvol=@/.snapshots /dev/mapper/system-root /mnt/local/.snapshots
mount: mount(2) failed: No such file or directory
++ ((  1 == 0  ))
++ true
+++ UserInput -I LAYOUT_CODE_RUN -p 'The disk layout recreation script failed' -D 'Rerun disk recreation script (/var/lib/rear/layout/diskrestore.sh)' 'Rerun disk recreation script (/var/lib/rear/layout/diskrestore.sh)' 'View '\''rear recover'\'' log file (/var/log/rear/rear-tstinf01.log)' 'Edit disk recreation script (/var/lib/rear/layout/diskrestore.sh)' 'View original disk space usage (/var/lib/rear/layout/config/df.txt)' 'Use Relax-and-Recover shell and return back to here' 'Abort '\''rear recover'\'''
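A hedged diagnostic for exactly this failure: the `subvol=@/.snapshots` mount above can only fail with "No such file or directory" if that subvolume does not exist on the restored root, so listing the subvolumes from the rescue shell confirms the diagnosis. The sketch skips itself where the device or mount is absent.

```shell
dev=/dev/mapper/system-root
if [ -b "$dev" ] && mountpoint -q /mnt/local 2>/dev/null; then
    # Show whether snapper's base subvolume was actually recreated:
    btrfs subvolume list /mnt/local | grep -F '.snapshots' \
        || echo "no .snapshots subvolume - matches the deactivated btrfsnormalsubvol line"
else
    echo "restored root not mounted - sketch not executed"
fi
```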

jsmeix commented at 2018-05-09 16:27:

@bern66
I will have a look but please be patient - I am not in the office
until next Monday, and next week I am only rarely in the office.

What does on your original system the command

findmnt -a -o SOURCE,TARGET,FSTYPE

show?

bern66 commented at 2018-05-09 17:12:

Yes, snapshots are enabled in our environment.

Could the problem that @schabrolles is talking about be related to the fact that I didn't select the snapshots to be restored? When running rear recover, it eventually asked which filesystems we would like to restore. I went with the suggested default, which didn't include snapshots. Here is the extract I am talking about.

. . . 
Skipping Point-In-Time Restore, will restore most recent data.

The TSM Server reports the following for this node:
                  #     Last Incr Date          Type    Replication       File Space Name
                --------------------------------------------------------------------------------
                  1     07-05-2018 22:11:31     BTRFS   Current           /
                  2     07-05-2018 22:11:04     BTRFS   Current           /.snapshots
                  3     07-05-2018 22:11:04     BTRFS   Current           /boot/grub2/powerpc-ieee1275
                  4     07-05-2018 22:11:18     XFS     Current           /home
                  5     07-05-2018 22:11:04     BTRFS   Current           /opt
                  6     07-05-2018 22:11:07     BTRFS   Current           /srv
                  7     07-05-2018 22:11:04     BTRFS   Current           /usr/local
                  8     07-05-2018 22:11:04     BTRFS   Current           /var/cache
                  9     07-05-2018 22:11:04     BTRFS   Current           /var/crash
                 10     07-05-2018 22:11:08     BTRFS   Current           /var/lib/libvirt/images
                 11     07-05-2018 22:11:08     BTRFS   Current           /var/lib/machines
                 12     07-05-2018 22:11:04     BTRFS   Current           /var/lib/mailman
                 13     07-05-2018 22:11:04     BTRFS   Current           /var/lib/mariadb
                 14     07-05-2018 22:10:54     BTRFS   Current           /var/lib/mysql
                 15     07-05-2018 22:11:13     BTRFS   Current           /var/lib/named
                 16     07-05-2018 22:11:07     BTRFS   Current           /var/lib/pgsql
                 17     07-05-2018 22:11:13     BTRFS   Current           /var/log
                 18     07-05-2018 22:11:06     BTRFS   Current           /var/opt
                 19     07-05-2018 22:11:13     BTRFS   Current           /var/spool
                 20     07-05-2018 22:11:07     BTRFS   Current           /var/tmp
Please enter the numbers of the filespaces we should restore.
Pay attention to enter the filesystems in the correct order
(like restore / before /var/log)
(default: 1 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20): [30 secs]
The following filesystems will be restored:
/
/boot/grub2/powerpc-ieee1275
/home
/opt
/srv
/usr/local
/var/cache
/var/crash
/var/lib/libvirt/images
/var/lib/machines
/var/lib/mailman
/var/lib/mariadb
/var/lib/mysql
/var/lib/named
/var/lib/pgsql
/var/log
/var/opt
/var/spool
/var/tmp
Is this selection correct ? (Y|n) [30 secs] y
. . .

@jsmeix your required output is shown below.

tstinf01:~ # findmnt -a -o SOURCE,TARGET,FSTYPE
SOURCE                                                  TARGET                                FSTYPE
/dev/mapper/system-root                                 /                                     btrfs
sysfs                                                   |-/sys                                sysfs
securityfs                                              | |-/sys/kernel/security              securityfs
tmpfs                                                   | |-/sys/fs/cgroup                    tmpfs
cgroup                                                  | | |-/sys/fs/cgroup/systemd          cgroup
cgroup                                                  | | |-/sys/fs/cgroup/memory           cgroup
cgroup                                                  | | |-/sys/fs/cgroup/cpu,cpuacct      cgroup
cgroup                                                  | | |-/sys/fs/cgroup/devices          cgroup
cgroup                                                  | | |-/sys/fs/cgroup/pids             cgroup
cgroup                                                  | | |-/sys/fs/cgroup/hugetlb          cgroup
cgroup                                                  | | |-/sys/fs/cgroup/blkio            cgroup
cgroup                                                  | | |-/sys/fs/cgroup/cpuset           cgroup
cgroup                                                  | | |-/sys/fs/cgroup/freezer          cgroup
cgroup                                                  | | |-/sys/fs/cgroup/net_cls,net_prio cgroup
cgroup                                                  | | `-/sys/fs/cgroup/perf_event       cgroup
pstore                                                  | |-/sys/fs/pstore                    pstore
debugfs                                                 | `-/sys/kernel/debug                 debugfs
tracefs                                                 |   `-/sys/kernel/debug/tracing       tracefs
proc                                                    |-/proc                               proc
systemd-1                                               | `-/proc/sys/fs/binfmt_misc          autofs
binfmt_misc                                             |   `-/proc/sys/fs/binfmt_misc        binfmt_misc
devtmpfs                                                |-/dev                                devtmpfs
tmpfs                                                   | |-/dev/shm                          tmpfs
devpts                                                  | |-/dev/pts                          devpts
mqueue                                                  | |-/dev/mqueue                       mqueue
hugetlbfs                                               | `-/dev/hugepages                    hugetlbfs
tmpfs                                                   |-/run                                tmpfs
tmpfs                                                   | |-/run/user/186                     tmpfs
tmpfs                                                   | |-/run/user/122                     tmpfs
tmpfs                                                   | `-/run/user/110                     tmpfs
sunrpc                                                  |-/var/lib/nfs/rpc_pipefs             rpc_pipefs
/dev/mapper/system-root[/@/opt]                         |-/opt                                btrfs
/dev/mapper/system-root[/@/var/lib/mariadb]             |-/var/lib/mariadb                    btrfs
/dev/mapper/system-root[/@/var/opt]                     |-/var/opt                            btrfs
/dev/mapper/system-root[/@/usr/local]                   |-/usr/local                          btrfs
/dev/mapper/system-root[/@/var/lib/libvirt/images]      |-/var/lib/libvirt/images             btrfs
/dev/mapper/system-root[/@/srv]                         |-/srv                                btrfs
/dev/mapper/system-root[/@/var/lib/pgsql]               |-/var/lib/pgsql                      btrfs
/dev/mapper/system-root[/@/tmp]                         |-/tmp                                btrfs
/dev/mapper/system-root[/@/boot/grub2/powerpc-ieee1275] |-/boot/grub2/powerpc-ieee1275        btrfs
/dev/mapper/system-root[/@/var/cache]                   |-/var/cache                          btrfs
/dev/mapper/system-root[/@/var/lib/machines]            |-/var/lib/machines                   btrfs
/dev/mapper/system-root[/@/var/lib/mailman]             |-/var/lib/mailman                    btrfs
/dev/mapper/system-root[/@/var/lib/mysql]               |-/var/lib/mysql                      btrfs
/dev/mapper/system-root[/@/var/crash]                   |-/var/crash                          btrfs
/dev/mapper/system-root[/@/var/log]                     |-/var/log                            btrfs
/dev/mapper/system-root[/@/.snapshots]                  |-/.snapshots                         btrfs
/dev/mapper/system-root[/@/var/spool]                   |-/var/spool                          btrfs
/dev/mapper/system-root[/@/var/lib/named]               |-/var/lib/named                      btrfs
/dev/mapper/system-root[/@/var/tmp]                     |-/var/tmp                            btrfs
/dev/mapper/system-home                                 |-/home                               xfs
/etc/auto.direct                                        `-/install                            autofs
nfs02:/install                                            `-/install                            nfs4

schabrolles commented at 2018-05-09 17:25:

@bern66,

I would suggest excluding /.snapshots from your TSM backup. Even if TSM uses dedup, it just slows down the backup process.

What is strange in your system is that you mount the btrfs root directly at / instead of a subvolume (snapshot).

Here is the output of findmnt -a -o SOURCE,TARGET,FSTYPE -t btrfs on my system

SOURCE                                                  TARGET                         FSTYPE
/dev/mapper/system-root[/@/.snapshots/1/snapshot]       /                              btrfs
/dev/mapper/system-root[/@/var/cache]                   |-/var/cache                   btrfs
/dev/mapper/system-root[/@/srv]                         |-/srv                         btrfs
/dev/mapper/system-root[/@/var/opt]                     |-/var/opt                     btrfs
/dev/mapper/system-root[/@/var/lib/mariadb]             |-/var/lib/mariadb             btrfs
/dev/mapper/system-root[/@/var/lib/libvirt/images]      |-/var/lib/libvirt/images      btrfs
/dev/mapper/system-root[/@/usr/local]                   |-/usr/local                   btrfs
/dev/mapper/system-root[/@/var/lib/pgsql]               |-/var/lib/pgsql               btrfs
/dev/mapper/system-root[/@/.snapshots]                  |-/.snapshots                  btrfs
/dev/mapper/system-root[/@/var/crash]                   |-/var/crash                   btrfs
/dev/mapper/system-root[/@/boot/grub2/powerpc-ieee1275] |-/boot/grub2/powerpc-ieee1275 btrfs
/dev/mapper/system-root[/@/tmp]                         |-/tmp                         btrfs
/dev/mapper/system-root[/@/var/lib/machines]            |-/var/lib/machines            btrfs
/dev/mapper/system-root[/@/var/log]                     |-/var/log                     btrfs
/dev/mapper/system-root[/@/var/lib/named]               |-/var/lib/named               btrfs
/dev/mapper/system-root[/@/var/lib/mysql]               |-/var/lib/mysql               btrfs
/dev/mapper/system-root[/@/opt]                         |-/opt                         btrfs
/dev/mapper/system-root[/@/home]                        |-/home                        btrfs
/dev/mapper/system-root[/@/var/lib/mailman]             |-/var/lib/mailman             btrfs
/dev/mapper/system-root[/@/var/spool]                   |-/var/spool                   btrfs
/dev/mapper/system-root[/@/var/tmp]                     `-/var/tmp                     btrfs

bern66 commented at 2018-05-09 18:03:

Yes, snapshots are already excluded.

And for the rest, I have no clue why it is the way it is, as I was not involved in the creation of the original system.

jsmeix commented at 2018-05-14 09:23:

Damn!
With the current ReaR GitHub master code it no longer works
to recreate a SLES12-GA/SP0 system which has this default btrfs structure:

# findmnt -a -o SOURCE,TARGET,FSTYPE -t btrfs
SOURCE                              TARGET                   FSTYPE
/dev/sda2[/@]                       /                        btrfs
/dev/sda2[/@/.snapshots]            |-/.snapshots            btrfs
/dev/sda2[/@/var/opt]               |-/var/opt               btrfs
/dev/sda2[/@/var/tmp]               |-/var/tmp               btrfs
/dev/sda2[/@/srv]                   |-/srv                   btrfs
/dev/sda2[/@/var/spool]             |-/var/spool             btrfs
/dev/sda2[/@/home]                  |-/home                  btrfs
/dev/sda2[/@/var/log]               |-/var/log               btrfs
/dev/sda2[/@/var/lib/pgsql]         |-/var/lib/pgsql         btrfs
/dev/sda2[/@/var/lib/named]         |-/var/lib/named         btrfs
/dev/sda2[/@/var/lib/mailman]       |-/var/lib/mailman       btrfs
/dev/sda2[/@/var/crash]             |-/var/crash             btrfs
/dev/sda2[/@/usr/local]             |-/usr/local             btrfs
/dev/sda2[/@/tmp]                   |-/tmp                   btrfs
/dev/sda2[/@/opt]                   |-/opt                   btrfs
/dev/sda2[/@/boot/grub2/x86_64-efi] |-/boot/grub2/x86_64-efi btrfs
/dev/sda2[/@/boot/grub2/i386-pc]    `-/boot/grub2/i386-pc    btrfs

jsmeix commented at 2018-05-14 09:34:

@bern66
regardless of my https://github.com/rear/rear/issues/1796#issuecomment-388753131
it seems your btrfs structure is not any SLE12 default btrfs structure because you have

SOURCE                          TARGET        FSTYPE
/dev/mapper/system-root         /             btrfs

while SLES12-GA/SP0 has

SOURCE                      TARGET           FSTYPE
/dev/sda2[/@]               /                btrfs

As far as I know, snapper snapshotting and in particular the rollback
will not work at all, or will not work correctly, if you do not have one
of the SLE12 default btrfs structures, and I do know that only the latest
of the SLE12 default btrfs structures should work well with snapper
snapshotting and rollback.
I fear any tiny deviation from the SLE12-SP2 default btrfs structure
may lead to whatever kind of unexpected (and probably even unsupported)
issues.
Therefore I really recommend that you get your SLE12-SP2 systems
into full compliance with SUSE's SLE12-SP2 defaults, to avoid that
things behave unexpectedly or even just fail later,
cf. https://github.com/rear/rear/issues/1796#issuecomment-387001483

Regarding the different (incompatible) SLE12 default btrfs structures see
https://github.com/rear/rear/issues/1368#issuecomment-302410707

In general, regarding those oversophisticated SUSE btrfs default structures,
see https://github.com/rear/rear/pull/1435#issuecomment-319011579

schabrolles commented at 2018-05-14 09:38:

@jsmeix,

I recommend that @bern66 use the following configuration, which should be used with SLES12SP2:

## SLES12
BACKUP_OPTIONS="nfsvers=4,nolock"
# Tools needed in the rescue system to recreate the snapper setup:
REQUIRED_PROGS=( "${REQUIRED_PROGS[@]}" snapper chattr lsattr )
COPY_AS_IS=( "${COPY_AS_IS[@]}" /usr/lib/snapper/installation-helper /etc/snapper/config-templates/default )

# Include all mounted btrfs subvolumes in the backup, except the
# btrfs root itself and the snapshots/crash subvolumes:
for subvol in $(findmnt -n -r -t btrfs | cut -d ' ' -f 1 | grep -v '^/$' | egrep -v 'snapshots|crash') ; do
    BACKUP_PROG_INCLUDE=( "${BACKUP_PROG_INCLUDE[@]}" "$subvol" )
done

POST_RECOVERY_SCRIPT=( 'if snapper --no-dbus -r $TARGET_FS_ROOT get-config | grep -q "^QGROUP.*[0-9]/[0-9]" ; then snapper --no-dbus -r $TARGET_FS_ROOT set-config QGROUP= ; snapper --no-dbus -r $TARGET_FS_ROOT setup-quota && echo snapper setup-quota done || echo snapper setup-quota failed ; else echo snapper setup-quota not used ; fi' )
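For reference, the findmnt filter pipeline in the configuration above can be exercised in isolation. This is only a sketch: the sample findmnt output below is made up for illustration; on a real system you would feed the pipeline `findmnt -n -r -t btrfs` directly.

```shell
# Hypothetical sample of `findmnt -n -r -t btrfs` output
# (first column is the TARGET mountpoint):
sample_findmnt() {
cat <<'EOF'
/ /dev/mapper/system-root[/@] btrfs rw,relatime
/.snapshots /dev/mapper/system-root[/@/.snapshots] btrfs rw,relatime
/var/crash /dev/mapper/system-root[/@/var/crash] btrfs rw,relatime
/var/log /dev/mapper/system-root[/@/var/log] btrfs rw,relatime
/opt /dev/mapper/system-root[/@/opt] btrfs rw,relatime
EOF
}

# Same filter chain as in the configuration: keep the mountpoints,
# drop the btrfs root '/' and anything matching 'snapshots' or 'crash':
subvols=$(sample_findmnt | cut -d ' ' -f 1 | grep -v '^/$' | grep -Ev 'snapshots|crash')
echo "$subvols"
```

With this sample input, only /var/log and /opt survive the filter, which is exactly what ends up in BACKUP_PROG_INCLUDE.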

But it seems this SLES12 has an SP0 disk layout (maybe due to successive upgrades SP0->SP1->SP2 which, if I remember well, do not change the layout).
Could the fact that we use the SLES12SP2 btrfs additional script instead of the SLES12SP0 one explain why it fails here?

jsmeix commented at 2018-05-14 09:55:

@schabrolles
no, his SLES12 does not have an SP0 btrfs disk layout but something different,
see my https://github.com/rear/rear/issues/1796#issuecomment-388756113

It would be OK if his SLES12 had an SP0 btrfs disk layout because
that is an expected setup where ReaR should "just work", cf. the "Reason" in
https://github.com/rear/rear/issues/1368#issuecomment-302410707

jsmeix commented at 2018-05-14 10:01:

With this manual change in disklayout.conf of a SLES12-SP0 default btrfs system

 # Because any '@/.snapshots' subvolume would let 'snapper/installation-helper --step 1' fail
 # such subvolumes are deactivated here to not let 'rear recover' fail:
 #btrfsnormalsubvol /dev/sda2 / 275 @/.snapshots
 btrfsnormalsubvol /dev/sda2 / 257 @
+btrfsnormalsubvol /dev/sda2 / 275 @/.snapshots
 btrfsnormalsubvol /dev/sda2 / 258 @/boot/grub2/i386-pc
 btrfsnormalsubvol /dev/sda2 / 259 @/boot/grub2/x86_64-efi

"rear recover" works for me again for a SLES12-SP0 default btrfs system.
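Until a proper fix lands, that manual edit could also be scripted. The sketch below is only an illustration: the device name /dev/sda2 and the subvolume IDs 257/275 are the example values from this thread and will differ per system, and the real file is /var/lib/rear/layout/disklayout.conf (here a temporary copy is used).

```shell
# Sketch: re-activate the '@/.snapshots' subvolume line right after
# the plain '@' line, mirroring the manual diff shown above.
# Device name and subvolume IDs are example values and differ per system.
conf=$(mktemp)
cat > "$conf" <<'EOF'
#btrfsnormalsubvol /dev/sda2 / 275 @/.snapshots
btrfsnormalsubvol /dev/sda2 / 257 @
btrfsnormalsubvol /dev/sda2 / 258 @/boot/grub2/i386-pc
EOF

# Append the active '@/.snapshots' entry after the line that ends in '@'
# (GNU sed 'a' appends text after the matched line):
sed -i -e '/^btrfsnormalsubvol .* @$/a btrfsnormalsubvol /dev/sda2 / 275 @/.snapshots' "$conf"
cat "$conf"
```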

What I had messed up for SLES12-SP0 when implementing
support for the SLES12-SP1 btrfs structure (where things are
set up by 'snapper/installation-helper --step 1') is the conditional code
that skips the installation-helper stuff when it is not used.

I will do a pull request - hopefully today - but on Thursday or Friday at the latest.
I will not be in the office tomorrow and on Wednesday,
cf. https://github.com/rear/rear/issues/1796#issuecomment-387797072
and at home I do not have the needed various SLE12 test systems
(guess what: at home I do not use any btrfs stuff - guess why ;-)

jsmeix commented at 2018-05-14 11:05:

Only a side note FWIW regarding "fun with btrfs":
On my SLES12-SP3 test system (virtual KVM/QEMU machine) with default btrfs
that "just booted" for several weeks I get now suddenly during booting
funny longer delays with several repeating interesting messages like

... NMI watchdog: BUG: soft lockup - CPU#0 stuck for 22s! [btrfs-balance:431]

intermixed with even more funny

... rcu_sched kthread starved for 105012 jiffies! ...

and there it sits looking somehow busy with itself with its btrfs stuff...
(on the host the "Virtual Machine Manager" shows constant 100% CPU for that system)
and I can only wait and hope and pray...

jsmeix commented at 2018-05-14 11:45:

No time to hope and pray for btrfs any longer.
I just killed that SLES12-SP3 test system.
I got it recreated with ReaR in a few minutes :-)

bern66 commented at 2018-05-14 14:02:

@jsmeix regarding your comment https://github.com/rear/rear/issues/1796#issuecomment-388756113 and the default btrfs structure, I would say that a structure like:

SOURCE                          TARGET                FSTYPE
/dev/mapper/system-root         /                     btrfs

is part of a "default" SLES12 structure. At installation we simply chose the LVM-based Proposal.

The structure:

SOURCE                      TARGET                   FSTYPE
/dev/sda2[/@]               /                        btrfs

is what one gets when choosing the Partition-based Proposal.

Unless you mean that a default installation is when one does a next-next-next installation without changing anything.

As a test, I did new installations of SLES12-SP3 and -SP0 and I ended up with the same structure as above, choosing the LVM-based Proposal in both cases:

SLES12-SP0

Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/system-root  6.6G  2.8G  3.5G  45% /
/dev/mapper/system-root  6.6G  2.8G  3.5G  45% /var/tmp
/dev/mapper/system-root  6.6G  2.8G  3.5G  45% /var/opt
[snip]

SLES12-SP3

Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/system-root  6.6G  2.1G  4.4G  32% /
/dev/mapper/system-root  6.6G  2.1G  4.4G  32% /var/crash
/dev/mapper/system-root  6.6G  2.1G  4.4G  32% /opt
[snip]

I personally would consider this a "default" installation. I would consider an installation non-default when someone modifies it beyond the regular installation process.

bern66 commented at 2018-05-14 19:15:

I did a test in my own environment, Intel and VirtualBox. I easily restored btrfs filesystems inside LVM without any problem. That is the SLES12-SP3 environment from my previous comment.

Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/system-root  6.6G  2.1G  4.4G  32% /
/dev/mapper/system-root  6.6G  2.1G  4.4G  32% /var/crash
/dev/mapper/system-root  6.6G  2.1G  4.4G  32% /opt
[snip]

Could the problem be more related to our architecture, ppc64le, than to the disk layout?

schabrolles commented at 2018-05-14 19:24:

@bern66 what about the snapshots on your VirtualBox setup? I cannot see them in your "screenshots"... The issue we currently have is related to the recreation of the snapshot filesystems.

bern66 commented at 2018-05-15 14:19:

@schabrolles
Indeed I didn't have any snapshots. I'll give it another try with snapshots.

But on the other hand, unless there is something I do not understand (which is very possible), if you take a look at my comment https://github.com/rear/rear/issues/1796#issuecomment-387809919 you should see that I didn't select the .snapshots file space. Is that the filesystem you are talking about? In the filesystem selection, so far I have always kept the default selection presented.

Thanks,

schabrolles commented at 2018-05-15 14:44:

@bern66,
just run findmnt /

here is the output I got on my system (LPAR on POWER - SLES12SP2)

TARGET SOURCE                                            FSTYPE OPTIONS
/      /dev/mapper/system-root[/@/.snapshots/1/snapshot] btrfs  rw,relatime,space_cache,subvolid=279,subvol=/@/.snapshot

jsmeix commented at 2018-05-17 07:44:

I would have been really surprised if the kind of underlying block device
(e.g. a plain disk partition like /dev/sda2 versus a logical volume like /dev/dm-1)
made a difference in which btrfs structure is created on it.

The following shows - as far as I can reproduce it - that the btrfs structure
(and in particular how the btrfs parts are mounted)
does not depend on whether or not the "LVM-based Proposal" is used.

When I install a SLES12-GA/SP0 system
from an original SUSE SLES12-GA/SP0 installation medium
with its original SUSE SLES12-GA/SP0 installer
(i.e. the YaST installer on that SLES12-GA/SP0 installation medium)
on a virtual KVM/QEMU machine with a single 20GiB virtual harddisk
I get when I select the "LVM-based Proposal" in YaST
this result in the installed system:

# cat /etc/issue
Welcome to SUSE Linux Enterprise Server 12  (x86_64) - Kernel \r (\l).

# findmnt -a -o SOURCE,TARGET,FSTYPE -t btrfs
SOURCE                                            TARGET                   FSTYPE
/dev/mapper/system-root[/@]                       /                        btrfs
/dev/mapper/system-root[/@/.snapshots]            |-/.snapshots            btrfs
/dev/mapper/system-root[/@/var/lib/mailman]       |-/var/lib/mailman       btrfs
/dev/mapper/system-root[/@/var/spool]             |-/var/spool             btrfs
/dev/mapper/system-root[/@/tmp]                   |-/tmp                   btrfs
/dev/mapper/system-root[/@/var/tmp]               |-/var/tmp               btrfs
/dev/mapper/system-root[/@/home]                  |-/home                  btrfs
/dev/mapper/system-root[/@/var/opt]               |-/var/opt               btrfs
/dev/mapper/system-root[/@/var/lib/pgsql]         |-/var/lib/pgsql         btrfs
/dev/mapper/system-root[/@/var/lib/named]         |-/var/lib/named         btrfs
/dev/mapper/system-root[/@/var/log]               |-/var/log               btrfs
/dev/mapper/system-root[/@/var/crash]             |-/var/crash             btrfs
/dev/mapper/system-root[/@/usr/local]             |-/usr/local             btrfs
/dev/mapper/system-root[/@/srv]                   |-/srv                   btrfs
/dev/mapper/system-root[/@/opt]                   |-/opt                   btrfs
/dev/mapper/system-root[/@/boot/grub2/x86_64-efi] |-/boot/grub2/x86_64-efi btrfs
/dev/mapper/system-root[/@/boot/grub2/i386-pc]    `-/boot/grub2/i386-pc    btrfs

# file /dev/mapper/system-root
/dev/mapper/system-root: symbolic link to `../dm-1'

# readlink -e /dev/mapper/system-root
/dev/dm-1

# file /dev/dm-1
/dev/dm-1: block special (254/1)

# lsblk -i -p -o NAME,KNAME,PKNAME,MAJ:MIN,TYPE,FSTYPE,SIZE /dev/sda
NAME                        KNAME     PKNAME    MAJ:MIN TYPE FSTYPE       SIZE
/dev/sda                    /dev/sda              8:0   disk               20G
`-/dev/sda1                 /dev/sda1 /dev/sda    8:1   part LVM2_member   20G
  |-/dev/mapper/system-swap /dev/dm-0 /dev/sda1 254:0   lvm  swap         1.5G
  `-/dev/mapper/system-root /dev/dm-1 /dev/sda1 254:1   lvm  btrfs       18.5G

# pvdisplay 
  --- Physical volume ---
  PV Name               /dev/sda1
  VG Name               system
  PV Size               20.00 GiB / not usable 3.00 MiB
  Allocatable           yes 
  PE Size               4.00 MiB
  Total PE              5119
  Free PE               2
  Allocated PE          5117
  PV UUID               hyUS23-Gd2Y-YR70-kLlZ-22BQ-n4ee-AneoHm

# vgdisplay 
  --- Volume group ---
  VG Name               system
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               20.00 GiB
  PE Size               4.00 MiB
  Total PE              5119
  Alloc PE / Size       5117 / 19.99 GiB
  Free  PE / Size       2 / 8.00 MiB
  VG UUID               tejprz-WpZp-jLD1-JSKc-fTcK-16W9-DCyfw4

# lvdisplay 
  --- Logical volume ---
  LV Path                /dev/system/root
  LV Name                root
  VG Name                system
  LV UUID                YPFEHh-VedF-fm2Y-rtwT-dCFO-z2fk-SSfgMv
  LV Write Access        read/write
  LV Creation host, time (none), 2018-05-17 08:39:24 +0200
  LV Status              available
  # open                 1
  LV Size                18.53 GiB
  Current LE             4744
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     1024
  Block device           254:1

  --- Logical volume ---
  LV Path                /dev/system/swap
  LV Name                swap
  VG Name                system
  LV UUID                3GWaTs-1IKN-OEen-mh3s-RBEY-PQNx-mO2mMJ
  LV Write Access        read/write
  LV Creation host, time (none), 2018-05-17 08:39:25 +0200
  LV Status              available
  # open                 2
  LV Size                1.46 GiB
  Current LE             373
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     1024
  Block device           254:0

When I install a SLES12-SP3 system
from an original SUSE SLES12-SP3 installation medium
with its original SUSE SLES12-SP3 installer
(i.e. the YaST installer on that SLES12-SP3 installation medium)
on a virtual KVM/QEMU machine with a single 20GiB virtual harddisk
I get when I select the "LVM-based Proposal" in YaST
this result in the installed system:

# cat /etc/issue
Welcome to SUSE Linux Enterprise Server 12 SP3  (x86_64) - Kernel \r (\l).

# findmnt -a -o SOURCE,TARGET,FSTYPE -t btrfs
SOURCE                                             TARGET                    FSTYPE
/dev/mapper/system-root[/@/.snapshots/1/snapshot]  /                         btrfs
/dev/mapper/system-root[/@/usr/local]              |-/usr/local              btrfs
/dev/mapper/system-root[/@/var/lib/named]          |-/var/lib/named          btrfs
/dev/mapper/system-root[/@/srv]                    |-/srv                    btrfs
/dev/mapper/system-root[/@/var/lib/mariadb]        |-/var/lib/mariadb        btrfs
/dev/mapper/system-root[/@/var/lib/pgsql]          |-/var/lib/pgsql          btrfs
/dev/mapper/system-root[/@/var/cache]              |-/var/cache              btrfs
/dev/mapper/system-root[/@/boot/grub2/i386-pc]     |-/boot/grub2/i386-pc     btrfs
/dev/mapper/system-root[/@/home]                   |-/home                   btrfs
/dev/mapper/system-root[/@/var/tmp]                |-/var/tmp                btrfs
/dev/mapper/system-root[/@/var/spool]              |-/var/spool              btrfs
/dev/mapper/system-root[/@/.snapshots]             |-/.snapshots             btrfs
/dev/mapper/system-root[/@/var/lib/mailman]        |-/var/lib/mailman        btrfs
/dev/mapper/system-root[/@/opt]                    |-/opt                    btrfs
/dev/mapper/system-root[/@/var/lib/mysql]          |-/var/lib/mysql          btrfs
/dev/mapper/system-root[/@/var/lib/machines]       |-/var/lib/machines       btrfs
/dev/mapper/system-root[/@/var/log]                |-/var/log                btrfs
/dev/mapper/system-root[/@/var/crash]              |-/var/crash              btrfs
/dev/mapper/system-root[/@/var/lib/libvirt/images] |-/var/lib/libvirt/images btrfs
/dev/mapper/system-root[/@/tmp]                    |-/tmp                    btrfs
/dev/mapper/system-root[/@/boot/grub2/x86_64-efi]  |-/boot/grub2/x86_64-efi  btrfs
/dev/mapper/system-root[/@/var/opt]                `-/var/opt                btrfs

# file /dev/mapper/system-root
/dev/mapper/system-root: symbolic link to `../dm-1'

# readlink -e /dev/mapper/system-root
/dev/dm-1

# file /dev/dm-1
/dev/dm-1: block special (254/1)

# lsblk -i -p -o NAME,KNAME,PKNAME,MAJ:MIN,TYPE,FSTYPE,SIZE /dev/sda
NAME                        KNAME     PKNAME    MAJ:MIN TYPE FSTYPE       SIZE
/dev/sda                    /dev/sda              8:0   disk               20G
`-/dev/sda1                 /dev/sda1 /dev/sda    8:1   part LVM2_member   20G
  |-/dev/mapper/system-swap /dev/dm-0 /dev/sda1 254:0   lvm  swap         1.5G
  `-/dev/mapper/system-root /dev/dm-1 /dev/sda1 254:1   lvm  btrfs       18.6G

# pvdisplay 
  --- Physical volume ---
  PV Name               /dev/sda1
  VG Name               system
  PV Size               20.00 GiB / not usable 3.00 MiB
  Allocatable           yes 
  PE Size               4.00 MiB
  Total PE              5119
  Free PE               2
  Allocated PE          5117
  PV UUID               wKSGQ9-VyTZ-8vCv-t0gL-elPF-5d7u-h0nr5y

# vgdisplay 
  --- Volume group ---
  VG Name               system
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               20.00 GiB
  PE Size               4.00 MiB
  Total PE              5119
  Alloc PE / Size       5117 / 19.99 GiB
  Free  PE / Size       2 / 8.00 MiB
  VG UUID               nw0UCG-WT1G-3cv1-AA73-nMC7-i1cd-nDRZPm

# lvdisplay 
  --- Logical volume ---
  LV Path                /dev/system/root
  LV Name                root
  VG Name                system
  LV UUID                pcAtC2-nFQf-RtLx-wNFs-10r6-W6cm-97LIDR
  LV Write Access        read/write
  LV Creation host, time install, 2018-05-17 09:32:47 +0200
  LV Status              available
  # open                 1
  LV Size                18.54 GiB
  Current LE             4746
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     1024
  Block device           254:1

  --- Logical volume ---
  LV Path                /dev/system/swap
  LV Name                swap
  VG Name                system
  LV UUID                01GXgu-AxQv-TfzR-Pitr-OwHL-Pat9-nc5X2M
  LV Write Access        read/write
  LV Creation host, time install, 2018-05-17 09:32:47 +0200
  LV Status              available
  # open                 2
  LV Size                1.45 GiB
  Current LE             371
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     1024
  Block device           254:0

bern66 commented at 2018-05-17 13:03:

Should I understand that there is a fix for my problem? If there is, is it available so that I can test it in my environment?

Thanks,

jsmeix commented at 2018-05-17 14:58:

No,
it confirms https://github.com/rear/rear/issues/1796#issuecomment-388762181
also for the "LVM-based Proposal" in YaST as far as I can reproduce it.

I know of no SLES12 btrfs setup where the btrfs root subvolume
is mounted at all.

What is mounted at the root of the filesystem tree
(i.e. what is mounted at the '/' mountpoint directory) is
for SLES12-SP0 the normal /@ subvolume (which causes an issue) and
since SLES12-SP1 a snapper /@/.snapshots/1/snapshot subvolume.
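In practice, the two layouts can be told apart from the SOURCE string that findmnt reports for '/'. The sketch below is only an illustration: the sample strings are copied from outputs quoted earlier in this thread, and on a real system one would feed the function the output of `findmnt -n -o SOURCE /`.

```shell
# Classify a SLES12 btrfs root mount by the SOURCE string that
# `findmnt -n -o SOURCE /` would print. The sample strings below
# are taken from outputs quoted earlier in this thread.
classify_root() {
    case "$1" in
        *'[/@/.snapshots/1/snapshot]') echo "SLES12-SP1-or-later" ;;
        *'[/@]')                       echo "SLES12-GA/SP0" ;;
        *'['*']')                      echo "other subvolume layout" ;;
        *)                             echo "btrfs root mounted directly (non-default)" ;;
    esac
}

classify_root '/dev/mapper/system-root[/@/.snapshots/1/snapshot]'
classify_root '/dev/sda2[/@]'
classify_root '/dev/mapper/system-root'
```

The last case is the one @bern66 reported, where the logical volume is mounted at '/' without any subvolume suffix.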

jsmeix commented at 2018-05-18 11:53:

With https://github.com/rear/rear/pull/1813 merged, it should work again
to recreate a SLES12-GA/SP0 system with its default btrfs structure,
cf. https://github.com/rear/rear/issues/1796#issuecomment-388753131

bern66 commented at 2018-05-18 12:03:

Thanks @jsmeix I'll give it a try today.

jsmeix commented at 2018-05-18 12:36:

@bern66
very likely this will not help you unless you get a btrfs structure
that is one of the known SLES12 btrfs structures,
i.e. either the one of a SLES12-GA/SP0 system
where what is mounted at '/' shows up as

# findmnt -a -o SOURCE,TARGET -t btrfs
SOURCE                                            TARGET
/dev/mapper/system-root[/@]                       /
...

or the one of a SLES12-SP1-or-later system
where what is mounted at '/' shows up as

# findmnt -a -o SOURCE,TARGET -t btrfs
SOURCE                                             TARGET
/dev/mapper/system-root[/@/.snapshots/1/snapshot]  /
...

If you can, you should try to get a SLES12-SP1-or-later
btrfs structure, because the old SLES12-GA/SP0 btrfs setup
has an issue, a "disk space leak" bug,
see
https://www.suse.com/releasenotes/x86_64/SUSE-SLES/12-SP1/
that reads (excerpt):

2.2.9 Installing into a Snapper-Controlled Btrfs Subvolume

Prior to SUSE Linux Enterprise 12 SP1, after the first rollback
of the system the original root volume was no longer reachable
and would never be removed automatically.
This resulted in a disk space leak.

Starting with SP1, YaST installs the system into a subvolume
controlled by Snapper.

bern66 commented at 2018-05-22 16:04:

@jsmeix @schabrolles
I did a fresh install of SLES12-SP2:

tstinf03:/ # lsb_release -a
LSB Version:    n/a
Distributor ID: SUSE
Description:    SUSE Linux Enterprise Server for SAP Applications 12 SP2
Release:        12.2
Codename:       n/a

In a Power environment:

tstinf03:/ # arch
ppc64le

Which, as I understand it, means I have a standard btrfs structure:

tstinf03:/ # btrfs subvolume  list -a /
ID 257 gen 118 top level 5 path <FS_TREE>/@
ID 258 gen 212 top level 257 path <FS_TREE>/@/.snapshots
ID 260 gen 236 top level 258 path <FS_TREE>/@/.snapshots/1/snapshot
ID 261 gen 209 top level 257 path <FS_TREE>/@/boot/grub2/powerpc-ieee1275
ID 262 gen 231 top level 257 path <FS_TREE>/@/home
ID 263 gen 118 top level 257 path <FS_TREE>/@/opt
ID 264 gen 212 top level 257 path <FS_TREE>/@/srv
ID 265 gen 236 top level 257 path <FS_TREE>/@/tmp
ID 266 gen 237 top level 257 path <FS_TREE>/@/usr/local
ID 267 gen 237 top level 257 path <FS_TREE>/@/var/cache
ID 268 gen 219 top level 257 path <FS_TREE>/@/var/crash
ID 269 gen 219 top level 257 path <FS_TREE>/@/var/lib/libvirt/images
ID 270 gen 219 top level 257 path <FS_TREE>/@/var/lib/machines
ID 271 gen 219 top level 257 path <FS_TREE>/@/var/lib/mailman
ID 272 gen 219 top level 257 path <FS_TREE>/@/var/lib/mariadb
ID 273 gen 219 top level 257 path <FS_TREE>/@/var/lib/mysql
ID 274 gen 219 top level 257 path <FS_TREE>/@/var/lib/named
ID 275 gen 219 top level 257 path <FS_TREE>/@/var/lib/pgsql
ID 276 gen 238 top level 257 path <FS_TREE>/@/var/log
ID 277 gen 219 top level 257 path <FS_TREE>/@/var/opt
ID 278 gen 238 top level 257 path <FS_TREE>/@/var/spool
ID 279 gen 231 top level 257 path <FS_TREE>/@/var/tmp
ID 292 gen 162 top level 258 path <FS_TREE>/@/.snapshots/2/snapshot
ID 293 gen 164 top level 258 path <FS_TREE>/@/.snapshots/3/snapshot
ID 294 gen 165 top level 258 path <FS_TREE>/@/.snapshots/4/snapshot
ID 295 gen 166 top level 258 path <FS_TREE>/@/.snapshots/5/snapshot
ID 296 gen 167 top level 258 path <FS_TREE>/@/.snapshots/6/snapshot
ID 297 gen 168 top level 258 path <FS_TREE>/@/.snapshots/7/snapshot
ID 298 gen 169 top level 258 path <FS_TREE>/@/.snapshots/8/snapshot
ID 299 gen 170 top level 258 path <FS_TREE>/@/.snapshots/9/snapshot
ID 300 gen 171 top level 258 path <FS_TREE>/@/.snapshots/10/snapshot
ID 301 gen 172 top level 258 path <FS_TREE>/@/.snapshots/11/snapshot
ID 302 gen 180 top level 258 path <FS_TREE>/@/.snapshots/12/snapshot
ID 303 gen 181 top level 258 path <FS_TREE>/@/.snapshots/13/snapshot
ID 304 gen 183 top level 258 path <FS_TREE>/@/.snapshots/14/snapshot
ID 305 gen 184 top level 258 path <FS_TREE>/@/.snapshots/15/snapshot
ID 306 gen 190 top level 258 path <FS_TREE>/@/.snapshots/16/snapshot
ID 307 gen 191 top level 258 path <FS_TREE>/@/.snapshots/17/snapshot
ID 308 gen 202 top level 258 path <FS_TREE>/@/.snapshots/18/snapshot
ID 309 gen 204 top level 258 path <FS_TREE>/@/.snapshots/19/snapshot

I then did a rear backup with rear -vD mkbackup and tried a rear recover with rear -vD recover. The recover still ended in error. See below the output of the recover; the rear recover log is attached. From my understanding so far, the problem seems to be the subvol=@/.snapshots.

RESCUE pgiststinf03:~ # rear -vD recover
Relax-and-Recover 2.3-git. / 2018-05-18
Using log file: /var/log/rear/rear-pgiststinf03.log
Running workflow recover within the ReaR rescue/recovery system
Starting required daemons for NFS: RPC portmapper (portmap or rpcbind) and rpc.statd if available.
Started RPC portmapper 'rpcbind'.
RPC portmapper 'rpcbind' available.
Started rpc.statd.
RPC status rpc.statd available.
Starting rpc.idmapd failed.
Using backup archive '/tmp/rear.qeI4KlUR0ME2oC2/outputfs/pgiststinf03/backup.tar.gz'
Will do driver migration (recreating initramfs/initrd)
Calculating backup archive size
Backup archive size is 1.4G     /tmp/rear.qeI4KlUR0ME2oC2/outputfs/pgiststinf03/backup.tar.gz (compressed)
Setting up multipathing
Activating multipath
multipath activated
Starting multipath daemon
multipathd started
Listing multipath device found
 size=100G
Comparing disks
Device mapper!3600507680c800450b800000000000ba7 does not exist (manual configuration needed)
Switching to manual disk layout configuration
Using /dev/mapper/3600507680c800450b800000000000bb9 (same size) for recreating /dev/mapper/3600507680c800450b800000000000ba7
Current disk mapping table (source -> target):
    /dev/mapper/3600507680c800450b800000000000ba7 /dev/mapper/3600507680c800450b800000000000bb9
UserInput -I LAYOUT_MIGRATION_CONFIRM_MAPPINGS needed in /usr/share/rear/layout/prepare/default/300_map_disks.sh line 211
Confirm or edit the disk mapping
1) Confirm disk mapping and continue 'rear recover'
2) Edit disk mapping (/var/lib/rear/layout/disk_mappings)
3) Use Relax-and-Recover shell and return back to here
4) Abort 'rear recover'
(default '1' timeout 300 seconds)
1
UserInput: Valid choice number result 'Confirm disk mapping and continue 'rear recover''
User confirmed disk mapping
UserInput -I LAYOUT_FILE_CONFIRMATION needed in /usr/share/rear/layout/prepare/default/500_confirm_layout_file.sh line 26
Confirm or edit the disk layout file
1) Confirm disk layout and continue 'rear recover'
2) Edit disk layout (/var/lib/rear/layout/disklayout.conf)
3) View disk layout (/var/lib/rear/layout/disklayout.conf)
4) View original disk space usage (/var/lib/rear/layout/config/df.txt)
5) Use Relax-and-Recover shell and return back to here
6) Abort 'rear recover'
(default '1' timeout 300 seconds)
1
UserInput: Valid choice number result 'Confirm disk layout and continue 'rear recover''
User confirmed disk layout file
Doing SLES12-SP1 (and later) btrfs subvolumes setup because the default subvolume path contains '@/.snapshots/'
UserInput -I LAYOUT_CODE_CONFIRMATION needed in /usr/share/rear/layout/recreate/default/100_confirm_layout_code.sh line 26
Confirm or edit the disk recreation script
1) Confirm disk recreation script and continue 'rear recover'
2) Edit disk recreation script (/var/lib/rear/layout/diskrestore.sh)
3) View disk recreation script (/var/lib/rear/layout/diskrestore.sh)
4) View original disk space usage (/var/lib/rear/layout/config/df.txt)
5) Use Relax-and-Recover shell and return back to here
6) Abort 'rear recover'
(default '1' timeout 300 seconds)
1
UserInput: Valid choice number result 'Confirm disk recreation script and continue 'rear recover''
User confirmed disk recreation script
Start system layout restoration.
Creating partitions for disk /dev/mapper/3600507680c800450b800000000000bb9 (msdos)
Creating LVM PV /dev/mapper/3600507680c800450b800000000000bb9-part2
Creating LVM VG system
Creating LVM volume system/root
Creating LVM volume system/swap
Creating filesystem of type btrfs with mount point / on /dev/mapper/system-root.
Mounting filesystem /
/usr/lib/snapper/installation-helper not executable may indicate an error with btrfs default subvolume setup for @/.snapshots/1/snapshot on /dev/mapper/system-root
UserInput -I LAYOUT_CODE_RUN needed in /usr/share/rear/layout/recreate/default/200_run_layout_code.sh line 127
The disk layout recreation script failed
1) Rerun disk recreation script (/var/lib/rear/layout/diskrestore.sh)
2) View 'rear recover' log file (/var/log/rear/rear-pgiststinf03.log)
3) Edit disk recreation script (/var/lib/rear/layout/diskrestore.sh)
4) View original disk space usage (/var/lib/rear/layout/config/df.txt)
5) Use Relax-and-Recover shell and return back to here
6) Abort 'rear recover'
(default '1' timeout 300 seconds)
6
UserInput: Valid choice number result 'Abort 'rear recover''
ERROR: User chose to abort 'rear recover' in /usr/share/rear/layout/recreate/default/200_run_layout_code.sh
Aborting due to an error, check /var/log/rear/rear-pgiststinf03.log for details
Exiting rear recover (PID 2830) and its descendant processes
Running exit tasks
You should also rm -Rf /tmp/rear.qeI4KlUR0ME2oC2
Terminated
RESCUE pgiststinf03:~ #

rear-pgiststinf03.log-20180522.gz

schabrolles commented at 2018-05-22 17:39:

@bern66 could you please check the following into your "recovery system"

ls -l /usr/lib/snapper/installation-helper

Did you add the additional SLES12-SP2 configuration lines to your /etc/rear/local.conf file?

## SLES12
BACKUP_OPTIONS="nfsvers=4,nolock"
REQUIRED_PROGS=( "${REQUIRED_PROGS[@]}" snapper chattr lsattr )
COPY_AS_IS=( "${COPY_AS_IS[@]}" /usr/lib/snapper/installation-helper /etc/snapper/config-templates/default )

for subvol in $(findmnt -n -r -t btrfs | cut -d ' ' -f 1 | grep -v '^/$' | egrep -v 'snapshots|crash') ; do
    BACKUP_PROG_INCLUDE=( "${BACKUP_PROG_INCLUDE[@]}" "$subvol" )
done

POST_RECOVERY_SCRIPT=( 'if snapper --no-dbus -r $TARGET_FS_ROOT get-config | grep -q "^QGROUP.*[0-9]/[0-9]" ; then snapper --no-dbus -r $TARGET_FS_ROOT set-config QGROUP= ; snapper --no-dbus -r $TARGET_FS_ROOT setup-quota && echo snapper setup-quota done || echo snapper setup-quota failed ; else echo snapper setup-quota not used ; fi' )

bern66 commented at 2018-05-22 18:56:

Damn! I forgot this line and another one. Now it works for a normal btrfs structure under IBM Power Systems. I now have to find out if I can fix the anomaly in our actual btrfs systems.

Thanks!

jsmeix commented at 2018-05-23 09:19:

@bern66
in your
https://github.com/rear/rear/issues/1796#issuecomment-391047347
the btrfs subvolume list -a / output looks o.k.,
but that only means you have the usual SLES12-SP1-or-later btrfs subvolumes.
It does not tell how those subvolumes are mounted, in particular
which subvolume is mounted at the root of the filesystem tree
(i.e. which subvolume is mounted at the / directory);
only findmnt -a -o SOURCE,TARGET -t btrfs will tell that.

In general, regarding btrfs and how to find out what its actual structure is
(not how it looks from within the mounted tree of filesystems and subvolumes), see
https://github.com/rear/rear/issues/1496#issuecomment-329775673
in particular what I wrote there about
"In general when you have to deal with btrfs subvolumes",
items (1), (2) and (3). Item (4) should not happen for a pristine
SLES12 default installation, but it could happen when other tools
create additional btrfs subvolumes in a SLES12 system.

Regarding what subvolume is mounted at the / directory in SLES12:
It is the btrfs default subvolume that gets mounted at / in SLES12,
i.e. what btrfs subvolume get-default / shows.

For some very initial basics about btrfs on SLES12 you may also have a look at
https://en.opensuse.org/images/8/8b/Relax-and-Recover_jsmeix_presentation.pdf
therein the two pages about "Relax-and-Recover on SLE12"
which is about the old SLE12-GA/SP0 (where .../@ is the default subvolume)
that is linked as "Fundamentals about Relax-and-Recover presentation PDF"
in the "See also" section in
https://en.opensuse.org/SDB:Disaster_Recovery

jsmeix commented at 2018-05-23 09:45:

@bern66
in your older https://github.com/rear/rear/issues/1796#issuecomment-387412149
(I don't know if that is still valid here)
therein in your https://github.com/rear/rear/files/1984200/rear-20180508.zip
your disklayout.conf contains

# Btrfs default subvolume for /dev/mapper/system-root at /
# Format: btrfsdefaultsubvol <device> <mountpoint> <btrfs_subvolume_ID> <btrfs_subvolume_path>
btrfsdefaultsubvol /dev/mapper/system-root / 5 /

which does not match an original SLES12 btrfs structure.

On SLES12-GA/SP0 the btrfs default subvolume is

# btrfs subvolume get-default /
ID 257 gen 601 top level 5 path @

# grep ^btrfsdefaultsubvol var/lib/rear/layout/disklayout.conf
btrfsdefaultsubvol /dev/mapper/system-root / 257 @

On SLES12-SP1-or-later the btrfs default subvolume is

# btrfs subvolume get-default /
ID 259 gen 654 top level 258 path @/.snapshots/1/snapshot

# grep ^btrfsdefaultsubvol var/lib/rear/layout/disklayout.conf
btrfsdefaultsubvol /dev/mapper/system-root / 259 @/.snapshots/1/snapshot
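
Based on the two known-good variants above, a pre-recovery sanity check of disklayout.conf could be sketched like this (a hypothetical helper, not part of ReaR; it only inspects the fifth field of the btrfsdefaultsubvol line):

```shell
#!/bin/sh
# Hypothetical check: verify that the btrfsdefaultsubvol entry in a
# disklayout.conf matches an original SLES12 btrfs structure.
check_default_subvol() {
    # $1 = path to a disklayout.conf file
    subvol_path="$(awk '/^btrfsdefaultsubvol/ { print $5 }' "$1")"
    case "$subvol_path" in
        '@')                         echo "ok: SLES12-GA/SP0 default subvolume" ;;
        '@/.snapshots/'*'/snapshot') echo "ok: SLES12-SP1-or-later default subvolume" ;;
        '/')                         echo "warning: default subvolume is '/' (top level), not an original SLES12 structure" ;;
        *)                           echo "warning: unexpected default subvolume '$subvol_path'" ;;
    esac
}

# Demo with the three variants seen in this issue:
printf 'btrfsdefaultsubvol /dev/mapper/system-root / 5 /\n' > /tmp/dl-bad.conf
check_default_subvol /tmp/dl-bad.conf
printf 'btrfsdefaultsubvol /dev/mapper/system-root / 257 @\n' > /tmp/dl-ga.conf
check_default_subvol /tmp/dl-ga.conf
printf 'btrfsdefaultsubvol /dev/mapper/system-root / 259 @/.snapshots/1/snapshot\n' > /tmp/dl-sp1.conf
check_default_subvol /tmp/dl-sp1.conf
```

The first demo reproduces the "btrfsdefaultsubvol ... 5 /" entry quoted above, which is exactly the case that should raise a warning before attempting 'rear recover'.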

bern66 commented at 2018-05-24 17:47:

As far as I understand it, the btrfs structure is ok.

tstinf03:~> findmnt -a -o SOURCE,TARGET -t btrfs
SOURCE                                                  TARGET
/dev/mapper/system-root[/@/.snapshots/1/snapshot]       /
/dev/mapper/system-root[/@/srv]                         |-/srv
/dev/mapper/system-root[/@/var/crash]                   |-/var/crash
/dev/mapper/system-root[/@/boot/grub2/powerpc-ieee1275] |-/boot/grub2/powerpc-ieee1275
/dev/mapper/system-root[/@/var/lib/machines]            |-/var/lib/machines
/dev/mapper/system-root[/@/var/opt]                     |-/var/opt
/dev/mapper/system-root[/@/opt]                         |-/opt
/dev/mapper/system-root[/@/var/spool]                   |-/var/spool
/dev/mapper/system-root[/@/var/lib/mysql]               |-/var/lib/mysql
/dev/mapper/system-root[/@/var/lib/libvirt/images]      |-/var/lib/libvirt/images
/dev/mapper/system-root[/@/.snapshots]                  |-/.snapshots
/dev/mapper/system-root[/@/var/lib/pgsql]               |-/var/lib/pgsql
/dev/mapper/system-root[/@/var/lib/mailman]             |-/var/lib/mailman
/dev/mapper/system-root[/@/home]                        |-/home
/dev/mapper/system-root[/@/tmp]                         |-/tmp
/dev/mapper/system-root[/@/usr/local]                   |-/usr/local
/dev/mapper/system-root[/@/var/lib/mariadb]             |-/var/lib/mariadb
/dev/mapper/system-root[/@/var/cache]                   |-/var/cache
/dev/mapper/system-root[/@/var/log]                     |-/var/log
/dev/mapper/system-root[/@/var/tmp]                     |-/var/tmp
/dev/mapper/system-root[/@/var/lib/named]               `-/var/lib/named

The / seems mounted at a normal place.

tstinf03:~ # btrfs subvolume get-default /
ID 279 gen 2925 top level 277 path @/.snapshots/1/snapshot

Now that I know rear can do the job, I'll have to find a way to fix our problem systems.

Thanks for your assistance.

jsmeix commented at 2018-05-25 09:01:

@bern66
now the btrfs structure looks good!

An addendum FYI, which could also be of interest for @schabrolles:
what I found out in the meantime about how a plain SLES12-SP2 default installation
can differ from a default installation of SLES_SAP-12-SP2
"SUSE Linux Enterprise Server for SAP Applications 12 SP2":

I asked a colleague at SUSE about
what kind of btrfs setup a default installation of
"SUSE Linux Enterprise Server for SAP Applications 12 SP2"
should result in.

He told me that SLES_SAP-12-SP2 consists of a pristine SLES12-SP2
plus additional stuff (mainly SLES_HA plus some SAP specific stuff)
so that when one installs SLES_SAP-12-SP2 from scratch
one should get a pristine SLES12-SP2 btrfs setup.

But this is not fully true because actually there are differences.

It is expected that the available disk space determines
whether one gets a btrfs setup with snapshots
enabled or disabled.

What is unexpected is that even with exactly same disk space
a default SLES_SAP-12-SP2 installation still differs
from a default SLES12-SP2 installation.

When I install SLES12-SP2
on a single 20 GiB (virtual) harddisk
on a KVM/QEMU virtual machine I get
by default plain partitioning without LVM and
a btrfs setup with enabled snapshots
so that in particular what is mounted at '/'
is a snapper controlled btrfs snapshot subvolume:

# findmnt -a -o SOURCE,TARGET -t btrfs
SOURCE                                             TARGET
/dev/mapper/system-root[/@/.snapshots/1/snapshot]  /
...

I also get that btrfs setup with enabled snapshots
when I select the "LVM-based Proposal" during
SLES12-SP2 installation on the same virtual machine.

When I install SLES_SAP-12-SP2
on a single 20 GiB (virtual) harddisk
on a KVM/QEMU virtual machine I get
by default the "LVM-based Proposal"
(in contrast to what I get with plain SLES12-SP2)
and I get a btrfs setup with disabled snapshots
(in contrast to what I get with SLES12-SP2 on a 20 GiB harddisk)
as follows:

# lsblk -i -p -o NAME,KNAME,TYPE,FSTYPE,SIZE /dev/sda
NAME                        KNAME     TYPE FSTYPE       SIZE
/dev/sda                    /dev/sda  disk               20G
`-/dev/sda1                 /dev/sda1 part LVM2_member   20G
  |-/dev/mapper/system-swap /dev/dm-0 lvm  swap         636M
  `-/dev/mapper/system-root /dev/dm-1 lvm  btrfs       19.4G

# findmnt -a -o SOURCE,TARGET -t btrfs
SOURCE                                             TARGET
/dev/mapper/system-root[/@]                        /
/dev/mapper/system-root[/@/home]                   |-/home
/dev/mapper/system-root[/@/opt]                    |-/opt
/dev/mapper/system-root[/@/var/lib/machines]       |-/var/lib/machines
/dev/mapper/system-root[/@/boot/grub2/i386-pc]     |-/boot/grub2/i386-pc
/dev/mapper/system-root[/@/var/crash]              |-/var/crash
/dev/mapper/system-root[/@/boot/grub2/x86_64-efi]  |-/boot/grub2/x86_64-efi
/dev/mapper/system-root[/@/var/lib/pgsql]          |-/var/lib/pgsql
/dev/mapper/system-root[/@/var/lib/mailman]        |-/var/lib/mailman
/dev/mapper/system-root[/@/var/lib/libvirt/images] |-/var/lib/libvirt/images
/dev/mapper/system-root[/@/var/cache]              |-/var/cache
/dev/mapper/system-root[/@/var/lib/mysql]          |-/var/lib/mysql
/dev/mapper/system-root[/@/var/lib/mariadb]        |-/var/lib/mariadb
/dev/mapper/system-root[/@/var/opt]                |-/var/opt
/dev/mapper/system-root[/@/tmp]                    |-/tmp
/dev/mapper/system-root[/@/var/lib/named]          |-/var/lib/named
/dev/mapper/system-root[/@/srv]                    |-/srv
/dev/mapper/system-root[/@/var/tmp]                |-/var/tmp
/dev/mapper/system-root[/@/var/spool]              |-/var/spool
/dev/mapper/system-root[/@/var/log]                |-/var/log
/dev/mapper/system-root[/@/usr/local]              `-/usr/local

I tested "rear mkbackup" plus "rear recover" on that system
(i.e. when what is mounted at '/' is the btrfs normal subvolume '/@')
which worked for me
(using ReaR with the https://github.com/rear/rear/pull/1813 fix).

When I install SLES_SAP-12-SP2
on a single 40 GiB (virtual) harddisk
on a KVM/QEMU virtual machine I get
by default the "LVM-based Proposal" and
a btrfs setup with enabled snapshots
as follows:

# lsblk -i -p -o NAME,KNAME,TYPE,FSTYPE,SIZE /dev/sda
NAME                        KNAME     TYPE FSTYPE       SIZE
/dev/sda                    /dev/sda  disk               40G
`-/dev/sda1                 /dev/sda1 part LVM2_member   40G
  |-/dev/mapper/system-swap /dev/dm-0 lvm  swap           2G
  `-/dev/mapper/system-root /dev/dm-1 lvm  btrfs       38.1G

# findmnt -a -o SOURCE,TARGET -t btrfs
SOURCE                                             TARGET
/dev/mapper/system-root[/@/.snapshots/1/snapshot]  /
/dev/mapper/system-root[/@/var/lib/mysql]          |-/var/lib/mysql
/dev/mapper/system-root[/@/boot/grub2/x86_64-efi]  |-/boot/grub2/x86_64-efi
/dev/mapper/system-root[/@/var/lib/machines]       |-/var/lib/machines
/dev/mapper/system-root[/@/opt]                    |-/opt
/dev/mapper/system-root[/@/tmp]                    |-/tmp
/dev/mapper/system-root[/@/var/lib/mariadb]        |-/var/lib/mariadb
/dev/mapper/system-root[/@/usr/local]              |-/usr/local
/dev/mapper/system-root[/@/var/opt]                |-/var/opt
/dev/mapper/system-root[/@/var/crash]              |-/var/crash
/dev/mapper/system-root[/@/var/spool]              |-/var/spool
/dev/mapper/system-root[/@/var/lib/mailman]        |-/var/lib/mailman
/dev/mapper/system-root[/@/var/lib/named]          |-/var/lib/named
/dev/mapper/system-root[/@/var/cache]              |-/var/cache
/dev/mapper/system-root[/@/var/log]                |-/var/log
/dev/mapper/system-root[/@/home]                   |-/home
/dev/mapper/system-root[/@/var/tmp]                |-/var/tmp
/dev/mapper/system-root[/@/srv]                    |-/srv
/dev/mapper/system-root[/@/.snapshots]             |-/.snapshots
/dev/mapper/system-root[/@/var/lib/libvirt/images] |-/var/lib/libvirt/images
/dev/mapper/system-root[/@/var/lib/pgsql]          |-/var/lib/pgsql
/dev/mapper/system-root[/@/boot/grub2/i386-pc]     `-/boot/grub2/i386-pc

which is the same as when I install SLES12-SP2
on a single 20 GiB (virtual) harddisk
on a KVM/QEMU virtual machine
and select the "LVM-based Proposal", cf.
https://github.com/rear/rear/issues/1796#issuecomment-389775698
for SLES12-SP3.

bern66 commented at 2018-05-25 12:14:

@jsmeix
All your examples show / mounted from /@ or /@/.snapshots, just like my test system. All our systems show a root filesystem mounted directly from the top level, like below, and I understand this is the source of all my problems.

tstinf01:~/scripts # findmnt -a -o SOURCE,TARGET -t btrfs
SOURCE                                                  TARGET
/dev/mapper/system-root                                 /
/dev/mapper/system-root[/@/opt]                         |-/opt
/dev/mapper/system-root[/@/var/spool]                   |-/var/spool
/dev/mapper/system-root[/@/var/cache]                   |-/var/cache

jsmeix commented at 2018-05-25 12:25:

@bern66
exactly!

FYI (also @schabrolles ):
Next week I am not in the office, therefore:
Have a nice (and hopefully relaxed) weekend and a successful next week!

bern66 commented at 2018-05-28 11:50:

@jsmeix @schabrolles
The btrfs problem is easily fixed.

tstinf04:~ # findmnt  -ao SOURCE,TARGET -t btrfs
SOURCE                                                  TARGET
/dev/mapper/system-root                                 /
/dev/mapper/system-root[/@/var/opt]                     |-/var/opt
/dev/mapper/system-root[/@/var/spool]                   |-/var/spool
/dev/mapper/system-root[/@/tmp]                         |-/tmp
/dev/mapper/system-root[/@/.snapshots]                  |-/.snapshots

tstinf04:~ # snapper create
tstinf04:~ # snapper ls
Type   | #   | Pre # | Date                     | User | Cleanup | Description  | Userdata
-------+-----+-------+--------------------------+------+---------+--------------+--------------
single | 0   |       |                          | root |         | current      |
pre    | 241 |       | Thu Jan 11 10:16:25 2018 | root | number  | zypp(zypper) | important=no
post   | 242 | 241   | Thu Jan 11 10:16:26 2018 | root | number  |              | important=no
pre    | 243 |       | Thu Jan 11 10:16:37 2018 | root | number  | zypp(zypper) | important=no
post   | 244 | 243   | Thu Jan 11 10:16:38 2018 | root | number  |              | important=no
pre    | 245 |       | Mon May  7 09:33:14 2018 | root | number  | zypp(zypper) | important=yes
post   | 246 | 245   | Mon May  7 09:34:36 2018 | root | number  |              | important=yes
single | 247 |       | Mon May 28 07:44:35 2018 | root |         |              |
tstinf04:~ # snapper rollback 247
Creating read-only snapshot of current system. (Snapshot 248.)
Creating read-write snapshot of snapshot 247. (Snapshot 249.)
Setting default subvolume to snapshot 249.

tstinf04:~ # shutdown -r now

pgiststinf04:~ # findmnt  -ao SOURCE,TARGET -t btrfs
SOURCE                                                  TARGET
/dev/mapper/system-root[/@/.snapshots/249/snapshot]     /
/dev/mapper/system-root[/@/usr/local]                   |-/usr/local
/dev/mapper/system-root[/@/var/lib/pgsql]               |-/var/lib/pgsql
/dev/mapper/system-root[/@/var/crash]                   |-/var/crash

It is that easy to fix this problem. Thanks for your time. Now I am on intensive ReaR testing.

bern66 commented at 2018-05-28 13:25:

After a rear recover the system shows snapshot 1.

tstinf04:~ # findmnt -ao SOURCE,TARGET -t btrfs
SOURCE                                                  TARGET
/dev/mapper/system-root[/@/.snapshots/1/snapshot]       /
/dev/mapper/system-root[/@/var/lib/named]               |-/var/lib/named
/dev/mapper/system-root[/@/var/log]                     |-/var/log

schabrolles commented at 2018-05-29 18:37:

I mark this one as fixed, as the problem described in the title is solved by the different patches referenced in this thread.

schabrolles commented at 2018-05-29 18:42:

@bern66,

The fact that the snapshot is now 1 looks good to me. ReaR doesn't back up the snapshots layer. It backs up the system as it is when you run the rear mkbackup command and recreates a new system during recovery that can still work with btrfs snapshots and snapper (so it restarts with snapshot 1).

bern66 commented at 2018-05-29 18:49:

Ok, that is what I understood. Thanks!

jsmeix commented at 2018-06-04 12:01:

FYI regarding
"after a rear recover the system shows snapshot 1",
see in
usr/share/rear/conf/examples/SLE12-SP2-btrfs-example.conf
the comment

# Regarding btrfs snapshots:
# Recovery of btrfs snapshot subvolumes is not possible.
# Only recovery of "normal" btrfs subvolumes is possible.
# On SLE12-SP1 and SP2 the only exception is the btrfs snapshot subvolume
# that is mounted at '/' but that one is not recreated but instead
# it is created anew from scratch during the recovery installation with the
# default first btrfs snapper snapshot subvolume path "@/.snapshots/1/snapshot"
# by the SUSE tool "installation-helper --step 1" (cf. below).
# Other snapshots like "@/.snapshots/234/snapshot" are not recreated.

[Export of Github issue for rear/rear.]