#2451 Issue closed: Unable to see multipath devices during rear recover

Labels: support / question, special hardware or VM, no-issue-activity

makamp1 opened issue at 2020-07-02 23:58:

Relax-and-Recover (ReaR) Issue Template

Fill in the following items before submitting a new issue
(quick response is not guaranteed with free support):

  • ReaR version ("/usr/sbin/rear -V"):
    Relax-and-Recover 2.6

  • OS version ("cat /etc/os-release" or "lsb_release -a" or "cat /etc/rear/os.conf"):
    SLES 12 SP4

  • ReaR configuration files ("cat /etc/rear/site.conf" and/or "cat /etc/rear/local.conf"):
    cat /etc/rear/local.conf

#Backup Parameters
BACKUP=NETFS
BACKUP_OPTIONS="nfsvers=3,nolock"
BACKUP_URL=nfs://10.154.248.111/FileUpload/manju/rear
NETFS_KEEP_OLD_BACKUP_COPY=no
REQUIRED_PROGS=( "${REQUIRED_PROGS[@]}" snapper chattr lsattr parted multipath dmsetup kpartx multipathd )
COPY_AS_IS=( "${COPY_AS_IS[@]}" /usr/lib/snapper/installation-helper /etc/snapper/config-templates/default )
BACKUP_PROG_EXCLUDE=( "${BACKUP_PROG_EXCLUDE[@]}" '/.snapshots/*' '/var/crash' )
BACKUP_PROG_INCLUDE=( ` findmnt -n -r -o TARGET -t btrfs | grep -v '^/$' | egrep -v 'snapshots|crash' `)
EXCLUDE_VG=(`vgs | awk 'NR>1 {print $1}'`)
SSH_ROOT_PASSWORD="..."

#ISO Parameters
OUTPUT=ISO
ISO_MKISOFS_BIN=/usr/bin/ebiso
ISO_PREFIX="rear-`date +%Y%m%d`-$HOSTNAME"

#Boot from SAN Configurations
AUTOEXCLUDE_MULTIPATH=n
BOOT_OVER_SAN=1
USING_UEFI_BOOTLOADER=1
UEFI_BOOTLOADER=( 'shim*.efi' 'grub.efi' )

  • Hardware (PC or PowerNV BareMetal or ARM) or virtual machine (KVM guest or PowerVM LPAR):
    BM

  • System architecture (x86 compatible or PPC64/PPC64LE or what exact ARM device):
    X86

  • Firmware (BIOS or UEFI or Open Firmware) and bootloader (GRUB or ELILO or Petitboot):
    UEFI, GRUB

  • Storage (local disk or SSD) and/or SAN (FC or iSCSI or FCoE) and/or multipath (DM or NVMe):
    SAN

  • Storage layout ("lsblk -ipo NAME,KNAME,PKNAME,TRAN,TYPE,FSTYPE,SIZE,MOUNTPOINT" or "lsblk" as makeshift):

  • Description of the issue (ideally so that others can reproduce it):
    I am doing some tests of "rear recover" on my HPE SuperdomeX server running SLES 12 SP4.

When I tried rear 2.4, multipath devices were not listed by "multipath -l" during recovery.
I then updated to rear 2.6; it seems to detect the SAN LUNs, but not all of them, and it misses many LUNs.
After a few "rear recover" commands are run in RESCUE mode, the LUNs appear .. and paths go faulty / offline, preventing the restore from completing successfully.

Finally I managed to get 1-2 "rear recover" runs to work.

Below are some outputs.

RESCUE cs900r01os2:~ # rear -v recover
Relax-and-Recover 2.4 / Git
Running rear recover (PID 11534)
Using log file: /var/log/rear/rear-cs900r01os2.log
Running workflow recover within the ReaR rescue/recovery system
Starting required daemons for NFS: RPC portmapper (portmap or rpcbind) and rpc.statd if available.
Started RPC portmapper 'rpcbind'.
RPC portmapper 'rpcbind' available.
RPC status rpc.statd available.
Starting rpc.idmapd failed.
Using backup archive '/tmp/rear.8X8Fe8j9OxWVg9f/outputfs/cs900r01os2/backup.tar.gz'
Will do driver migration (recreating initramfs/initrd)
Calculating backup archive size
Backup archive size is 2.2G     /tmp/rear.8X8Fe8j9OxWVg9f/outputfs/cs900r01os2/backup.tar.gz (compressed)
Setting up multipathing
Activating multipath
multipath activated
Starting multipath daemon
multipathd started
Listing multipath device found
   <<<<<<<<<<<<<<<<<< NO DISKS LISTED >>>>>>>>>>>>>>>>>
Comparing disks

RESCUE cs900r01os2:~ # rear -d recover
Relax-and-Recover 2.6 / 2020-06-17
Running rear recover (PID 10801)
Using log file: /var/log/rear/rear-cs900r01os2.log
Running workflow recover within the ReaR rescue/recovery system
Starting required daemons for NFS: RPC portmapper (portmap or rpcbind) and rpc.statd if available.
Started RPC portmapper 'rpcbind'.
RPC portmapper 'rpcbind' available.
Started rpc.statd.
RPC status rpc.statd available.
Starting rpc.idmapd failed.
Using backup archive '/tmp/rear.1UhCfbuhglF2WRb/outputfs/cs900r01os2/backup.tar.gz'
Will do driver migration (recreating initramfs/initrd)
Calculating backup archive size
Backup archive size is 1.8G     /tmp/rear.1UhCfbuhglF2WRb/outputfs/cs900r01os2/backup.tar.gz (compressed)
Setting up multipathing
Activating multipath
multipath activated
Starting multipath daemon
multipathd started
Listing multipath device found
VV0026 (360002ac0000000000000001a0001da07) dm-12 3PARdata,VV size=64G
VV0075 (360002ac0000000000000004b0001da07) dm-22 3PARdata,VV size=2.9T
360002ac0000000000000000a0001da07 dm-0 3PARdata,VV size=256G
VV0074 (360002ac0000000000000004a0001da07) dm-21 3PARdata,VV size=2.9T
VV0073 (360002ac000000000000000490001da07) dm-20 3PARdata,VV size=2.9T
VV0107 (360002ac0000000000000006b0001da07) dm-11 3PARdata,VV size=2.9T
VV0072 (360002ac000000000000000480001da07) dm-19 3PARdata,VV size=2.9T
VV0106 (360002ac0000000000000006a0001da07) dm-10 3PARdata,VV size=2.9T

<<<<<<<<<<<<<<<<< ALL THE DISKS ARE LISTED >>>>>>>>>>>>>>>>>>>>

RESCUE cs900r01os2:~ # multipath -ll
Jul 02 08:01:23 | DM multipath kernel driver not loaded

RESCUE cs900r01os2:~ # rear -D recover
Relax-and-Recover 2.6 / 2020-06-17
Running rear recover (PID 10283)
Using log file: /var/log/rear/rear-cs900r01os2.log
Running workflow recover within the ReaR rescue/recovery system
Starting required daemons for NFS: RPC portmapper (portmap or rpcbind) and rpc.statd if available.
Started RPC portmapper 'rpcbind'.
RPC portmapper 'rpcbind' available.
Started rpc.statd.
RPC status rpc.statd available.
Starting rpc.idmapd failed.
Using backup archive '/tmp/rear.jmd07XlHRLNKh2e/outputfs/cs900r01os2/backup.tar.gz'
Will do driver migration (recreating initramfs/initrd)
Calculating backup archive size
Backup archive size is 1.8G     /tmp/rear.jmd07XlHRLNKh2e/outputfs/cs900r01os2/backup.tar.gz (compressed)
Setting up multipathing
Activating multipath
multipath activated
Starting multipath daemon
multipathd started
Listing multipath device found
360002ac0000000000000000a0001da07 dm-0 3PARdata,VV size=256G
Comparing disks
<<<<<<<<<<<<<<< ONLY ONE DISK LISTED >>>>>>>>>>>

No HW issues are seen, as ALL the paths work fine when booted into the OS.

PLEASE SEE ATTACHED DOCUMENT WITH DETAILS ON THE CONFIG AND OUTPUTS FROM THE DIFFERENT TESTS

QUESTION
Is there any specific configuration required to force detection of SAN/multipath LUNs?

  • Workaround, if any:

  • Attachments, as applicable ("rear -D mkrescue/mkbackup/recover" debug log files):

makamp1 commented at 2020-07-03 01:16:

The issue seems to match https://github.com/rear/rear/issues/2002
and https://github.com/rear/rear/issues/2016,
so I tried running the udevadm commands manually to see if that fixes the issue.

Using SLES 12 SP3 and rear 2.6, I did further testing, using udev commands to reload,
but I am still having issues detecting SAN LUNs in ReaR RESCUE mode.

RESCUE cs900r01os2:~ # uname -a
Linux cs900r01os2 4.4.162-94.69-default #1 SMP Mon Nov 5 18:58:52 UTC 2018 (9e06c56) x86_64 x86_64 x86_64 GNU/Linux
RESCUE cs900r01os2:~ #

RESCUE cs900r01os2:~ # multipath -ll
RESCUE cs900r01os2:~ # udevadm control --reload-rules
RESCUE cs900r01os2:~ # udevadm trigger
RESCUE cs900r01os2:~ # multipath -r
RESCUE cs900r01os2:~ # multipath -l
RESCUE cs900r01os2:~ #

RESCUE cs900r01os2:~ # rear -d -v recover
Relax-and-Recover 2.6 / 2020-06-17
Running rear recover (PID 27585)
Using log file: /var/log/rear/rear-cs900r01os2.log
Running workflow recover within the ReaR rescue/recovery system
Starting required daemons for NFS: RPC portmapper (portmap or rpcbind) and rpc.statd if available.
Started RPC portmapper 'rpcbind'.
RPC portmapper 'rpcbind' available.
RPC status rpc.statd available.
Starting rpc.idmapd failed.
Using backup archive '/tmp/rear.yZKMUAkxBYcdoh7/outputfs/cs900r01os2/backup.tar.gz'
Will do driver migration (recreating initramfs/initrd)
Calculating backup archive size
Backup archive size is 1.3G     /tmp/rear.yZKMUAkxBYcdoh7/outputfs/cs900r01os2/backup.tar.gz (compressed)
Setting up multipathing
Activating multipath
multipath activated
Starting multipath daemon
multipathd started
Listing multipath device found

Comparing disks
Ambiguous possible target disks need manual configuration (more than one with same size found)
Switching to manual disk layout configuration
Using /dev/sda (same size) for recreating /dev/mapper/360002ac0000000000000000a0001da07
Current disk mapping table (source => target):
  /dev/mapper/360002ac0000000000000000a0001da07 => /dev/sda

UserInput -I LAYOUT_MIGRATION_CONFIRM_MAPPINGS needed in /usr/share/rear/layout/prepare/default/300_map_disks.sh line 275
Confirm or edit the disk mapping
1) Confirm disk mapping and continue 'rear recover'
2) n/a
3) Edit disk mapping (/var/lib/rear/layout/disk_mappings)
4) Use Relax-and-Recover shell and return back to here
5) Abort 'rear recover'
(default '1' timeout 300 seconds)

RESCUE cs900r01os2:~ #  udevadm info -n sda
P: /devices/pci0000:10/0000:10:03.0/0000:14:00.0/host1/rport-1:0-0/target1:0:0/1:0:0:10/block/sda
N: sda
S: disk/by-path/pci-0000:14:00.0-fc-0x22010002ac01da07-lun-10
E: COMPAT_SYMLINK_GENERATION=1
E: DEVLINKS=/dev/disk/by-path/pci-0000:14:00.0-fc-0x22010002ac01da07-lun-10
E: DEVNAME=/dev/sda
E: DEVPATH=/devices/pci0000:10/0000:10:03.0/0000:14:00.0/host1/rport-1:0-0/target1:0:0/1:0:0:10/block/sda
E: DEVTYPE=disk
E: ID_PART_TABLE_TYPE=gpt
E: ID_PART_TABLE_UUID=1a06a760-d525-49cc-ab70-67dd20bbbfe7
E: ID_PATH=pci-0000:14:00.0-fc-0x22010002ac01da07-lun-10
E: ID_PATH_TAG=pci-0000_14_00_0-fc-0x22010002ac01da07-lun-10
E: MAJOR=8
E: MINOR=0
E: MPATH_SBIN_PATH=/sbin
E: SUBSYSTEM=block
E: TAGS=:systemd:
E: USEC_INITIALIZED=44344547

RESCUE cs900r01os2:~ #

jsmeix commented at 2020-07-03 07:50:

@makamp1
I have no personal experience with multipath
so I cannot really help with multipath issues
(I try to help but multipath issues get soon beyond my imagination).

Usually @schabrolles is our expert for SAN and multipath on IBM hardware
but in this case here it is a HPE SuperdomeX server
so perhaps @gozora could have a look here (as time permits)?

gozora commented at 2020-07-03 07:57:

IMHO this will be one of many FC driver + multipath issues on enterprise-grade HW (I'm guessing HPE Superdome 2?). My best guess would be to experiment with driver module options.
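
For example, module options could be passed on the rescue system's kernel command line via /etc/rear/local.conf. A minimal sketch, where 'scsi_mod.scan=sync' (synchronous SCSI scanning) is only an example option, not a verified fix; substitute options of the actual FC HBA driver (e.g. lpfc or qla2xxx) as appropriate:

# /etc/rear/local.conf -- sketch: append example module options
# to the rescue system's kernel command line
KERNEL_CMDLINE="$KERNEL_CMDLINE scsi_mod.scan=sync"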

V.

gozora commented at 2020-07-03 08:02:

@makamp1 maybe try to check whether the /etc/multipath.conf in the ReaR recovery system is the same as on the original system.
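
For example, once a recovery attempt has mounted the target system at /mnt/local, a quick check from the rescue shell could be (a sketch):

# Sketch: compare the rescue system's multipath.conf with the restored copy
diff /etc/multipath.conf /mnt/local/etc/multipath.conf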

V.

makamp1 commented at 2020-07-03 09:34:

Thank you for the responses.

The issue is on an HPE SuperdomeX .. the driver module is included in the RESCUE system and gets loaded OK .. it is just that the devices do not get detected when I do multipath -l.

/etc/multipath.conf in RESCUE mode is the same as on the original system.

In the same SAN/Storage environment, a small system like a DL360 SAN-boot server .. rear recover works fine.

It sounds like something to do with the size of the system, the number of FC cards in the system, the time taken to scan, etc.

jsmeix commented at 2020-07-03 09:55:

The

... sounds some thing to do with the size of the system and
number of FC cards in the system and time taken to scan etc ..

seems to also point in the direction of

... things happen too fast during "rear recover" in
usr/share/rear/layout/prepare/GNU/Linux/210_load_multipath.sh

in https://github.com/rear/rear/issues/2016#issuecomment-454338861
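
If timing is indeed the culprit, a crude countermeasure could be to wait in the rescue shell until the expected number of multipath maps exists before starting "rear recover". A minimal sketch, where EXPECTED_MAPS=20 is only a placeholder for this system's known LUN count:

# Sketch: retry 'multipath -r' until the known number of maps appears
EXPECTED_MAPS=20
for attempt in $(seq 1 30) ; do
    # count device-mapper devices with a multipath target
    # ('dmsetup ls' prints "No devices found" when there are none)
    found=$(dmsetup ls --target multipath 2>/dev/null | grep -vc 'No devices found')
    test "$found" -ge "$EXPECTED_MAPS" && break
    multipath -r
    sleep 10
done
echo "found $found multipath maps after $attempt attempts"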

makamp1 commented at 2020-07-03 10:53:

Any way to slow it down? ;-) OR manually scan the devices before kicking off "rear recover"?

I tried modprobe, multipath -r, and udev commands in the rescue shell before ... no luck finding the SAN LUNs.
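
For completeness, a by-hand rescan in the rescue shell would look roughly like the sketch below (host numbering and the settle time are assumptions, they are system-specific):

# Sketch: force a LIP on all FC hosts, rescan all SCSI hosts,
# let the (large) fabric settle, then rebuild the multipath maps
for lip in /sys/class/fc_host/host*/issue_lip ; do echo 1 > "$lip" ; done
for scan in /sys/class/scsi_host/host*/scan ; do echo '- - -' > "$scan" ; done
sleep 30
multipath -r
multipath -ll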

makamp1 commented at 2020-07-03 11:34:

Below is another example where I kept repeating "rear recover"
and LUNs started appearing each time it tried to reload multipath ..
after 5-6 attempts, I was able to successfully complete the rear recover.

RESCUE cs900r01os2:/ # rear -v recover
Relax-and-Recover 2.6 / 2020-06-17
Running rear recover (PID 10089)
Using log file: /var/log/rear/rear-cs900r01os2.log
Running workflow recover within the ReaR rescue/recovery system
Starting required daemons for NFS: RPC portmapper (portmap or rpcbind) and rpc.statd if available.
Started RPC portmapper 'rpcbind'.
RPC portmapper 'rpcbind' available.
Started rpc.statd.
RPC status rpc.statd available.
Starting rpc.idmapd failed.
Using backup archive '/tmp/rear.NqojL9PWqKXslJg/outputfs/cs900r01os2/backup.tar.gz'
Will do driver migration (recreating initramfs/initrd)
Calculating backup archive size
Backup archive size is 1.8G     /tmp/rear.NqojL9PWqKXslJg/outputfs/cs900r01os2/backup.tar.gz (compressed)
Setting up multipathing
Activating multipath
multipath activated
Starting multipath daemon
multipathd started
Listing multipath device found
VV0107 (360002ac0000000000000006b0001da07) dm-0 3PARdata,VV size=2.9T
VV0106 (360002ac0000000000000006a0001da07) dm-3 3PARdata,VV size=2.9T
VV0072 (360002ac000000000000000480001da07) dm-1 3PARdata,VV size=2.9T
VV0105 (360002ac000000000000000690001da07) dm-2 3PARdata,VV size=2.9T
VV0068 (360002ac000000000000000440001da07) dm-4 3PARdata,VV size=512G
Comparing disks
Ambiguous possible target disks need manual configuration (more than one with same size found)
Switching to manual disk layout configuration
Using /dev/sda (same size) for recreating /dev/mapper/360002ac0000000000000000a0001da07
Current disk mapping table (source => target):
  /dev/mapper/360002ac0000000000000000a0001da07 => /dev/sda

Confirm or edit the disk mapping
1) Confirm disk mapping and continue 'rear recover'
2) n/a
3) Edit disk mapping (/var/lib/rear/layout/disk_mappings)
4) Use Relax-and-Recover shell and return back to here
5) Abort 'rear recover'
(default '1' timeout 300 seconds)
5
ERROR: User chose to abort 'rear recover' in /usr/share/rear/layout/prepare/default/300_map_disks.sh
Some latest log messages since the last called script 300_map_disks.sh:
  2020-07-03 10:55:21.979902738 2) n/a
  2020-07-03 10:55:21.981459631 3) Edit disk mapping (/var/lib/rear/layout/disk_mappings)
  2020-07-03 10:55:21.982880663 4) Use Relax-and-Recover shell and return back to here
  2020-07-03 10:55:21.984435895 5) Abort 'rear recover'
  2020-07-03 10:55:21.985999029 (default '1' timeout 300 seconds)
  2020-07-03 10:55:25.941551158 UserInput: 'read' got as user input '5'
  2020-07-03 10:55:25.945888095 Error detected during restore.
  2020-07-03 10:55:25.948060119 Restoring saved original /var/lib/rear/layout/disklayout.conf
Aborting due to an error, check /var/log/rear/rear-cs900r01os2.log for details
Exiting rear recover (PID 10089) and its descendant processes ...
Running exit tasks
Terminated

RESCUE cs900r01os2:/ # multipath -l
VV0107 (360002ac0000000000000006b0001da07) dm-0 3PARdata,VV
size=2.9T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  `- 10:0:0:107 sdex 129:144 active undef running
VV0106 (360002ac0000000000000006a0001da07) dm-3 3PARdata,VV
size=2.9T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  `- 12:0:0:106 sdfs 130:224 active undef running
VV0072 (360002ac000000000000000480001da07) dm-1 3PARdata,VV
size=2.9T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  `- 10:0:0:72  sdek 128:192 active undef running
VV0105 (360002ac000000000000000690001da07) dm-2 3PARdata,VV
size=2.9T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  `- 12:0:0:105 sdfr 130:208 active undef running
VV0068 (360002ac000000000000000440001da07) dm-4 3PARdata,VV
size=512G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  `- 12:0:0:68  sdfc 129:224 active undef running

RESCUE cs900r01os2:/ # multipath -r

RESCUE cs900r01os2:/ # multipath -l
VV0107 (360002ac0000000000000006b0001da07) dm-0 3PARdata,VV
size=2.9T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  `- 10:0:0:107 sdex 129:144 active undef running
VV0106 (360002ac0000000000000006a0001da07) dm-3 3PARdata,VV
size=2.9T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  `- 12:0:0:106 sdfs 130:224 active undef running
VV0072 (360002ac000000000000000480001da07) dm-1 3PARdata,VV
size=2.9T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  `- 10:0:0:72  sdek 128:192 active undef running
VV0105 (360002ac000000000000000690001da07) dm-2 3PARdata,VV
size=2.9T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  `- 12:0:0:105 sdfr 130:208 active undef running
VV0068 (360002ac000000000000000440001da07) dm-4 3PARdata,VV
size=512G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  `- 12:0:0:68  sdfc 129:224 active undef running

RESCUE cs900r01os2:/ # udevadm control --reload-rules

RESCUE cs900r01os2:/ #  udevadm trigger

RESCUE cs900r01os2:/ # multipath -l
VV0107 (360002ac0000000000000006b0001da07) dm-0 ##,##
size=2.9T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  `- #:#:#:#    sdex 129:144 active undef running
VV0106 (360002ac0000000000000006a0001da07) dm-3 ##,##
size=2.9T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  `- #:#:#:#    sdfs 130:224 active undef running
VV0072 (360002ac000000000000000480001da07) dm-1 3PARdata,VV
size=2.9T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  `- 10:0:0:72  sdek 128:192 active undef running
VV0105 (360002ac000000000000000690001da07) dm-2 3PARdata,VV
size=2.9T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  `- 12:0:0:105 sdfr 130:208 active undef running
VV0068 (360002ac000000000000000440001da07) dm-4 ##,##
size=512G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  `- #:#:#:#    sdfc 129:224 active undef running

RESCUE cs900r01os2:/ # multipath -r

RESCUE cs900r01os2:/ # multipath -l
VV0072 (360002ac000000000000000480001da07) dm-1 3PARdata,VV
size=2.9T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='round-robin 0' prio=0 status=enabled
  `- 10:0:0:72  sdek 128:192 active undef running
VV0105 (360002ac000000000000000690001da07) dm-2 3PARdata,VV
size=2.9T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='round-robin 0' prio=0 status=enabled
  `- 12:0:0:105 sdfr 130:208 active undef running

RESCUE cs900r01os2:/ # rear -v recover
Relax-and-Recover 2.6 / 2020-06-17
Running rear recover (PID 18790)
Using log file: /var/log/rear/rear-cs900r01os2.log
Running workflow recover within the ReaR rescue/recovery system
Starting required daemons for NFS: RPC portmapper (portmap or rpcbind) and rpc.statd if available.
Started RPC portmapper 'rpcbind'.
RPC portmapper 'rpcbind' available.
RPC status rpc.statd available.
Starting rpc.idmapd failed.
Using backup archive '/tmp/rear.izXSYtlTrKt8vbn/outputfs/cs900r01os2/backup.tar.gz'
Will do driver migration (recreating initramfs/initrd)
Calculating backup archive size
Backup archive size is 1.8G     /tmp/rear.izXSYtlTrKt8vbn/outputfs/cs900r01os2/backup.tar.gz (compressed)
Setting up multipathing
Activating multipath
multipath activated
Starting multipath daemon
multipathd started
Listing multipath device found
VV0075 (360002ac0000000000000004b0001da07) dm-8 3PARdata,VV size=2.9T
VV0107 (360002ac0000000000000006b0001da07) dm-4 3PARdata,VV size=2.9T
VV0106 (360002ac0000000000000006a0001da07) dm-3 3PARdata,VV size=2.9T
VV0072 (360002ac000000000000000480001da07) dm-1 3PARdata,VV size=2.9T
VV0105 (360002ac000000000000000690001da07) dm-2 ##,## size=2.9T
VV0068 (360002ac000000000000000440001da07) dm-12 3PARdata,VV size=512G
VV0070 (360002ac000000000000000460001da07) dm-5 3PARdata,VV size=512G
VV0104 (360002ac000000000000000680001da07) dm-0 3PARdata,VV size=2.9T
VV0067 (360002ac000000000000000430001da07) dm-13 3PARdata,VV size=32G
VV0099 (360002ac000000000000000630001da07) dm-11 3PARdata,VV size=32G
VV0103 (360002ac000000000000000670001da07) dm-6 3PARdata,VV size=512G
VV0066 (360002ac000000000000000420001da07) dm-7 3PARdata,VV size=32G
VV0102 (360002ac000000000000000660001da07) dm-10 3PARdata,VV size=512G
VV0101 (360002ac000000000000000650001da07) dm-9 3PARdata,VV size=512G
Comparing disks
Ambiguous possible target disks need manual configuration (more than one with same size found)
Switching to manual disk layout configuration
Using /dev/sda (same size) for recreating /dev/mapper/360002ac0000000000000000a0001da07
Current disk mapping table (source => target):
  /dev/mapper/360002ac0000000000000000a0001da07 => /dev/sda

Confirm or edit the disk mapping
1) Confirm disk mapping and continue 'rear recover'
2) n/a
3) Edit disk mapping (/var/lib/rear/layout/disk_mappings)
4) Use Relax-and-Recover shell and return back to here
5) Abort 'rear recover'
(default '1' timeout 300 seconds)
5
ERROR: User chose to abort 'rear recover' in /usr/share/rear/layout/prepare/default/300_map_disks.sh
Some latest log messages since the last called script 300_map_disks.sh:
  2020-07-03 10:57:07.354063064 2) n/a
  2020-07-03 10:57:07.355419768 3) Edit disk mapping (/var/lib/rear/layout/disk_mappings)
  2020-07-03 10:57:07.356983461 4) Use Relax-and-Recover shell and return back to here
  2020-07-03 10:57:07.358365543 5) Abort 'rear recover'
  2020-07-03 10:57:07.359794716 (default '1' timeout 300 seconds)
  2020-07-03 10:57:11.861521119 UserInput: 'read' got as user input '5'
  2020-07-03 10:57:11.865954171 Error detected during restore.
  2020-07-03 10:57:11.867657676 Restoring saved original /var/lib/rear/layout/disklayout.conf
Aborting due to an error, check /var/log/rear/rear-cs900r01os2.log for details
Exiting rear recover (PID 18790) and its descendant processes ...
Running exit tasks
Terminated

RESCUE cs900r01os2:/ # rear -v recover
Relax-and-Recover 2.6 / 2020-06-17
Running rear recover (PID 24456)
Using log file: /var/log/rear/rear-cs900r01os2.log
Running workflow recover within the ReaR rescue/recovery system
Starting required daemons for NFS: RPC portmapper (portmap or rpcbind) and rpc.statd if available.
Started RPC portmapper 'rpcbind'.
RPC portmapper 'rpcbind' available.
RPC status rpc.statd available.
Starting rpc.idmapd failed.
Using backup archive '/tmp/rear.kjMb3lDqrS131oC/outputfs/cs900r01os2/backup.tar.gz'
Will do driver migration (recreating initramfs/initrd)
Calculating backup archive size
Backup archive size is 1.8G     /tmp/rear.kjMb3lDqrS131oC/outputfs/cs900r01os2/backup.tar.gz (compressed)
Setting up multipathing
Activating multipath
multipath activated
Starting multipath daemon
multipathd started
Listing multipath device found
VV0075 (360002ac0000000000000004b0001da07) dm-8 ##,## size=2.9T
360002ac0000000000000000a0001da07 dm-14 3PARdata,VV size=256G
VV0074 (360002ac0000000000000004a0001da07) dm-18 3PARdata,VV size=2.9T
VV0073 (360002ac000000000000000490001da07) dm-20 3PARdata,VV size=2.9T
VV0107 (360002ac0000000000000006b0001da07) dm-4 3PARdata,VV size=2.9T
VV0106 (360002ac0000000000000006a0001da07) dm-3 ##,## size=2.9T
VV0072 (360002ac000000000000000480001da07) dm-1 ##,## size=2.9T
VV0105 (360002ac000000000000000690001da07) dm-2 ##,## size=2.9T
VV0068 (360002ac000000000000000440001da07) dm-12 ##,## size=512G
VV0070 (360002ac000000000000000460001da07) dm-5 3PARdata,VV size=512G
VV0104 (360002ac000000000000000680001da07) dm-0 ##,## size=2.9T
VV0067 (360002ac000000000000000430001da07) dm-13 3PARdata,VV size=32G
VV0099 (360002ac000000000000000630001da07) dm-11 3PARdata,VV size=32G
VV0103 (360002ac000000000000000670001da07) dm-6 3PARdata,VV size=512G
VV0066 (360002ac000000000000000420001da07) dm-7 3PARdata,VV size=32G
VV0098 (360002ac000000000000000620001da07) dm-19 3PARdata,VV size=32G
VV0102 (360002ac000000000000000660001da07) dm-10 ##,## size=512G
VV0101 (360002ac000000000000000650001da07) dm-9 ##,## size=512G
Comparing disks
Ambiguous possible target disks need manual configuration (more than one with same size found)
Switching to manual disk layout configuration
Using /dev/mapper/360002ac0000000000000000a0001da07 (same name and same size) for recreating /dev/mapper/360002ac0000000000000000a0001da07
Current disk mapping table (source => target):
  /dev/mapper/360002ac0000000000000000a0001da07 => /dev/mapper/360002ac0000000000000000a0001da07

Confirm or edit the disk mapping
1) Confirm disk mapping and continue 'rear recover'
2) Confirm identical disk mapping and proceed without manual configuration
3) Edit disk mapping (/var/lib/rear/layout/disk_mappings)
4) Use Relax-and-Recover shell and return back to here
5) Abort 'rear recover'
(default '1' timeout 300 seconds)
Disk '/dev/mapper/360002ac0000000000000000a0001da07': creating 'gpt' partition table
The disk layout recreation script failed
1) Rerun disk recreation script (/var/lib/rear/layout/diskrestore.sh)
2) View 'rear recover' log file (/var/log/rear/rear-cs900r01os2.log)
3) Edit disk recreation script (/var/lib/rear/layout/diskrestore.sh)
4) View original disk space usage (/var/lib/rear/layout/config/df.txt)
5) Use Relax-and-Recover shell and return back to here
6) Abort 'rear recover'
(default '1' timeout 300 seconds)

User reruns disk recreation script
Start system layout restoration.
Disk '/dev/mapper/360002ac0000000000000000a0001da07': creating 'gpt' partition table
The disk layout recreation script failed
1) Rerun disk recreation script (/var/lib/rear/layout/diskrestore.sh)
2) View 'rear recover' log file (/var/log/rear/rear-cs900r01os2.log)
3) Edit disk recreation script (/var/lib/rear/layout/diskrestore.sh)
4) View original disk space usage (/var/lib/rear/layout/config/df.txt)
5) Use Relax-and-Recover shell and return back to here
6) Abort 'rear recover'
(default '1' timeout 300 seconds)
1
User reruns disk recreation script
Start system layout restoration.
Disk '/dev/mapper/360002ac0000000000000000a0001da07': creating 'gpt' partition table
The disk layout recreation script failed
1) Rerun disk recreation script (/var/lib/rear/layout/diskrestore.sh)
2) View 'rear recover' log file (/var/log/rear/rear-cs900r01os2.log)
3) Edit disk recreation script (/var/lib/rear/layout/diskrestore.sh)
4) View original disk space usage (/var/lib/rear/layout/config/df.txt)
5) Use Relax-and-Recover shell and return back to here
6) Abort 'rear recover'
(default '1' timeout 300 seconds)
6
ERROR: User chose to abort 'rear recover' in /usr/share/rear/layout/recreate/default/200_run_layout_code.sh
Some latest log messages since the last called script 200_run_layout_code.sh:
  2020-07-03 10:59:03.515055385 3) Edit disk recreation script (/var/lib/rear/layout/diskrestore.sh)
  2020-07-03 10:59:03.516897424 4) View original disk space usage (/var/lib/rear/layout/config/df.txt)
  2020-07-03 10:59:03.518687005 5) Use Relax-and-Recover shell and return back to here
  2020-07-03 10:59:03.520561003 6) Abort 'rear recover'
  2020-07-03 10:59:03.522393627 (default '1' timeout 300 seconds)
  2020-07-03 10:59:17.705687596 UserInput: 'read' got as user input '6'
  2020-07-03 10:59:17.710549191 Error detected during restore.
  2020-07-03 10:59:17.712580752 Restoring saved original /var/lib/rear/layout/disklayout.conf
Aborting due to an error, check /var/log/rear/rear-cs900r01os2.log for details
Exiting rear recover (PID 24456) and its descendant processes ...
Running exit tasks
Terminated

RESCUE cs900r01os2:/ # multipath -ll
VV0075 (360002ac0000000000000004b0001da07) dm-8 ##,##
size=2.9T features='0' hwhandler='0' wp=rw
360002ac0000000000000000a0001da07 dm-14 ##,##
size=256G features='0' hwhandler='0' wp=rw
VV0074 (360002ac0000000000000004a0001da07) dm-18 ##,##
size=2.9T features='0' hwhandler='0' wp=rw
VV0073 (360002ac000000000000000490001da07) dm-20 ##,##
size=2.9T features='0' hwhandler='0' wp=rw
VV0107 (360002ac0000000000000006b0001da07) dm-4 3PARdata,VV
size=2.9T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='round-robin 0' prio=50 status=active
  `- 10:0:0:107 sdex 129:144 active ready running
VV0106 (360002ac0000000000000006a0001da07) dm-3 ##,##
size=2.9T features='0' hwhandler='0' wp=rw
VV0072 (360002ac000000000000000480001da07) dm-1 ##,##
size=2.9T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  `- #:#:#:#    sdcs 70:0    active faulty running
VV0105 (360002ac000000000000000690001da07) dm-2 ##,##
size=2.9T features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
  `- #:#:#:#    sdfr 130:208 active faulty running
VV0068 (360002ac000000000000000440001da07) dm-12 ##,##
size=512G features='0' hwhandler='0' wp=rw
VV0070 (360002ac000000000000000460001da07) dm-5 3PARdata,VV
size=512G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='round-robin 0' prio=50 status=active
  `- 4:0:0:70   sday 67:32   active ready running
VV0104 (360002ac000000000000000680001da07) dm-0 ##,##
size=2.9T features='0' hwhandler='0' wp=rw
VV0067 (360002ac000000000000000430001da07) dm-13 3PARdata,VV
size=32G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='round-robin 0' prio=50 status=active
  |- 4:0:0:67   sdav 66:240  active ready running
  `- 9:0:0:67   sddj 71:16   active ready running
VV0099 (360002ac000000000000000630001da07) dm-11 3PARdata,VV
size=32G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='round-robin 0' prio=50 status=active
  `- 3:0:0:99   sdaj 66:48   active ready running
VV0103 (360002ac000000000000000670001da07) dm-6 3PARdata,VV
size=512G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='round-robin 0' prio=50 status=active
  `- 10:0:0:103 sdet 129:80  active ready running
VV0066 (360002ac000000000000000420001da07) dm-7 3PARdata,VV
size=32G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='round-robin 0' prio=50 status=active
  `- 7:0:0:66   sdcm 69:160  active ready running
VV0098 (360002ac000000000000000620001da07) dm-19 ##,##
size=32G features='0' hwhandler='0' wp=rw
VV0102 (360002ac000000000000000660001da07) dm-10 ##,##
size=512G features='0' hwhandler='0' wp=rw
VV0101 (360002ac000000000000000650001da07) dm-9 ##,##
size=512G features='0' hwhandler='0' wp=rw

RESCUE cs900r01os2:/ # rear -v recover
Relax-and-Recover 2.6 / 2020-06-17
Running rear recover (PID 31478)
Using log file: /var/log/rear/rear-cs900r01os2.log
Running workflow recover within the ReaR rescue/recovery system
Starting required daemons for NFS: RPC portmapper (portmap or rpcbind) and rpc.statd if available.
Started RPC portmapper 'rpcbind'.
RPC portmapper 'rpcbind' available.
RPC status rpc.statd available.
Starting rpc.idmapd failed.
Using backup archive '/tmp/rear.dpL2BLgK0ImvWiG/outputfs/cs900r01os2/backup.tar.gz'
Will do driver migration (recreating initramfs/initrd)
Calculating backup archive size
Backup archive size is 1.8G     /tmp/rear.dpL2BLgK0ImvWiG/outputfs/cs900r01os2/backup.tar.gz (compressed)
Setting up multipathing
Activating multipath
multipath activated
Starting multipath daemon
multipathd started
Listing multipath device found
VV0026 (360002ac0000000000000001a0001da07) dm-21 3PARdata,VV size=64G
VV0075 (360002ac0000000000000004b0001da07) dm-8 ##,## size=2.9T
360002ac0000000000000000a0001da07 dm-14 3PARdata,VV size=256G
VV0074 (360002ac0000000000000004a0001da07) dm-18 ##,## size=2.9T
VV0073 (360002ac000000000000000490001da07) dm-20 ##,## size=2.9T
VV0107 (360002ac0000000000000006b0001da07) dm-4 3PARdata,VV size=2.9T
VV0106 (360002ac0000000000000006a0001da07) dm-3 3PARdata,VV size=2.9T
VV0072 (360002ac000000000000000480001da07) dm-1 3PARdata,VV size=2.9T
VV0071 (360002ac000000000000000470001da07) dm-22 3PARdata,VV size=512G
VV0105 (360002ac000000000000000690001da07) dm-2 3PARdata,VV size=2.9T
VV0068 (360002ac000000000000000440001da07) dm-12 ##,## size=512G
VV0070 (360002ac000000000000000460001da07) dm-5 3PARdata,VV size=512G
VV0104 (360002ac000000000000000680001da07) dm-0 3PARdata,VV size=2.9T
VV0067 (360002ac000000000000000430001da07) dm-13 3PARdata,VV size=32G
VV0099 (360002ac000000000000000630001da07) dm-11 3PARdata,VV size=32G
VV0103 (360002ac000000000000000670001da07) dm-6 ##,## size=512G
VV0066 (360002ac000000000000000420001da07) dm-7 ##,## size=32G
VV0098 (360002ac000000000000000620001da07) dm-19 3PARdata,VV size=32G
VV0102 (360002ac000000000000000660001da07) dm-10 3PARdata,VV size=512G
VV0101 (360002ac000000000000000650001da07) dm-9 3PARdata,VV size=512G
Comparing disks
Ambiguous possible target disks need manual configuration (more than one with same size found)
Switching to manual disk layout configuration
Using /dev/mapper/360002ac0000000000000000a0001da07 (same name and same size) for recreating /dev/mapper/360002ac0000000000000000a0001da07
Current disk mapping table (source => target):
  /dev/mapper/360002ac0000000000000000a0001da07 => /dev/mapper/360002ac0000000000000000a0001da07

Confirm or edit the disk mapping
1) Confirm disk mapping and continue 'rear recover'
2) Confirm identical disk mapping and proceed without manual configuration
3) Edit disk mapping (/var/lib/rear/layout/disk_mappings)
4) Use Relax-and-Recover shell and return back to here
5) Abort 'rear recover'
(default '1' timeout 300 seconds)

User confirmed disk mapping
Confirm or edit the disk layout file
1) Confirm disk layout and continue 'rear recover'
2) Edit disk layout (/var/lib/rear/layout/disklayout.conf)
3) View disk layout (/var/lib/rear/layout/disklayout.conf)
4) View original disk space usage (/var/lib/rear/layout/config/df.txt)
5) Use Relax-and-Recover shell and return back to here
6) Abort 'rear recover'
(default '1' timeout 300 seconds)

User confirmed disk layout file
Doing SLES-like btrfs subvolumes setup for /dev/mapper/360002ac0000000000000000a0001da07-part3 on / (BTRFS_SUBVOLUME_SLES_SETUP contains /dev/mapper/360002ac0000000000000000a0001da07-part3)
SLES12-SP1 (and later) btrfs subvolumes setup needed for /dev/mapper/360002ac0000000000000000a0001da07-part3 (default subvolume path contains '@/.snapshots/')
Confirm or edit the disk recreation script
1) Confirm disk recreation script and continue 'rear recover'
2) Edit disk recreation script (/var/lib/rear/layout/diskrestore.sh)
3) View disk recreation script (/var/lib/rear/layout/diskrestore.sh)
4) View original disk space usage (/var/lib/rear/layout/config/df.txt)
5) Use Relax-and-Recover shell and return back to here
6) Abort 'rear recover'
(default '1' timeout 300 seconds)

User confirmed disk recreation script
Start system layout restoration.
Disk '/dev/mapper/360002ac0000000000000000a0001da07': creating 'gpt' partition table
Disk '/dev/mapper/360002ac0000000000000000a0001da07': creating partition number 1 with name ''360002ac0000000000000000a0001da07-pa''
Disk '/dev/mapper/360002ac0000000000000000a0001da07': creating partition number 2 with name ''360002ac0000000000000000a0001da07-pa''
Disk '/dev/mapper/360002ac0000000000000000a0001da07': creating partition number 3 with name ''360002ac0000000000000000a0001da07-pa''
Creating swap on /dev/mapper/360002ac0000000000000000a0001da07-part2
Creating filesystem of type btrfs with mount point / on /dev/mapper/360002ac0000000000000000a0001da07-part3.
Mounting filesystem /
Running snapper/installation-helper
Creating filesystem of type vfat with mount point /boot/efi on /dev/mapper/360002ac0000000000000000a0001da07-part1.
Mounting filesystem /boot/efi
Disk layout created.
Confirm the recreated disk layout or go back one step
1) Confirm recreated disk layout and continue 'rear recover'
2) Go back one step to redo disk layout recreation
3) Use Relax-and-Recover shell and return back to here
4) Abort 'rear recover'
(default '1' timeout 300 seconds)

User confirmed recreated disk layout
Restoring from '/tmp/rear.dpL2BLgK0ImvWiG/outputfs/cs900r01os2/backup.tar.gz' (restore log in /var/lib/rear/restore/recover.backup.tar.gz.31478.restore.log) ...
Restored 3721 MiB [avg. 119098 KiB/sec] OK
Restored 3741 MiB in 33 seconds [avg. 116096 KiB/sec]
Restoring finished (verify backup restore log messages in /var/lib/rear/restore/recover.backup.tar.gz.31478.restore.log)
Created SELinux /mnt/local/.autorelabel file : after reboot SELinux will relabel all files
Recreating directories (with permissions) from /var/lib/rear/recovery/directories_permissions_owner_group
Migrating disk-by-id mappings in certain restored files in /mnt/local to current disk-by-id mappings ...
Migrating filesystem UUIDs in certain restored files in /mnt/local to current UUIDs ...
Patching filesystem UUIDs in boot/grub2/grub.cfg to current UUIDs
Patching filesystem UUIDs in etc/sysconfig/bootloader to current UUIDs
Skip patching symlink etc/mtab target /mnt/local/proc/44183/mounts on /proc/ /sys/ /dev/ or /run/
Patching filesystem UUIDs in etc/fstab to current UUIDs
Patching filesystem UUIDs in etc/mtools.conf to current UUIDs
Patching filesystem UUIDs in etc/smartd.conf to current UUIDs
Patching filesystem UUIDs in etc/sysconfig/smartmontools to current UUIDs
Patching filesystem UUIDs in etc/security/pam_mount.conf.xml to current UUIDs
Patching filesystem UUIDs in boot/efi/EFI/boot/grub.cfg to current UUIDs
Patching filesystem UUIDs in boot/efi/EFI/sles/grub.cfg to current UUIDs
Confirm restored config files are OK or adapt them as needed
1) Confirm it is OK to recreate initrd and reinstall bootloader and continue 'rear recover'
2) Edit restored etc/fstab (/mnt/local/etc/fstab)
3) View restored etc/fstab (/mnt/local/etc/fstab)
4) Use Relax-and-Recover shell and return back to here
5) Abort 'rear recover'
(default '1' timeout 300 seconds)

User confirmed restored files
Running mkinitrd...
Recreated initrd (/sbin/mkinitrd).
Creating  EFI Boot Manager entry 'SUSE_LINUX 12.4' for 'EFI\sles\shim.efi' (UEFI_BOOTLOADER='/boot/efi/EFI/sles/shim.efi')
Installing secure boot loader (shim)...
Finished 'recover'. The target system is mounted at '/mnt/local'.
Exiting rear recover (PID 31478) and its descendant processes ...
Running exit tasks

gozora commented at 2020-07-15 12:50:

I just accidentally ran into the same behavior on a Superdome2 16s x86.
ReaR managed to correctly assemble multipath when the server was POWERED OFF beforehand. Without a power-off (just a plain reboot), multipath in the ReaR rescue system was not assembled correctly.

V.

makamp1 commented at 2020-07-15 13:17:

@gozora thanks for the update .. when you say "powered off" .. do you mean power pulled from the server OR the partition powered off (poweroff partition)?

For my issue mentioned above, I reduced the partition from 4 blades to 2 blades .. and multipath was able to scan through all the LUNs fine in the rescue shell .. once the rear recovery completed, I added the blades back to the partition and rebooted.

It is more of a workaround than a solution.

I wonder how OS install media scans the LUNs on a large system before we get to select the LUN to install to (I never had an issue with OS installs on large systems) .. can something like that be implemented in the ReaR rescue system?
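
One idea along those lines could be to carry the sg3_utils rescan helper into the rescue system, so a full bus rescan can be run by hand before "rear recover". A sketch, assuming sg3_utils (rescan-scsi-bus.sh) is installed on the original system:

# /etc/rear/local.conf -- sketch: include the sg3_utils rescan script
REQUIRED_PROGS=( "${REQUIRED_PROGS[@]}" rescan-scsi-bus.sh )

In the rescue shell, something like "rescan-scsi-bus.sh -a" could then be run before "rear recover".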

jsmeix commented at 2020-07-15 13:17:

The description in
https://github.com/rear/rear/issues/2451#issuecomment-658748098
looks similar to what @OliverO2 described in his
https://github.com/rear/rear/pull/2455
(excerpts):

The machine affected, an HPE ML10Gen9 server, still hung frequently,
but not always, when trying to transfer control to the kernel after unlocking.

After some more research, it turned out that the only reliable way to boot
was using a power cycle after Opal disks were unlocked.
It looks like the firmware did not initialize properly during a 'simple' reboot
and got screwed by the changed state of the boot disk after unlocking.

A reboot including a power cycle could be achieved on this machine
via the `reboot=efi` kernel parameter.
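
If a power-cycle-like reboot is what makes the difference, the same kernel parameter could be tried here, purely speculatively, for the reboot into the ReaR rescue system (a sketch; whether reboot=efi behaves this way on SuperdomeX firmware is untested):

# /etc/rear/local.conf -- sketch: have the rescue kernel reboot via EFI
KERNEL_CMDLINE="$KERNEL_CMDLINE reboot=efi"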

jsmeix commented at 2020-07-15 13:21:

@makamp1
because you wrote "I never had issue with OS install on large system",
I would like to ask whether you perhaps always did a "power cycle boot" before installing an OS?

I am not an HP server user, so I know nothing about the different boot methods for them.
I only ask (possibly nonsense questions) based on what I think I understand here.

gozora commented at 2020-07-15 14:40:

@makamp1

.. you mean power pulled from the server OR partition powered off

I just powered off the blade through iLO.

My Superdome X hardware was in some overall strange state, because it hung even during regular OS boot, waiting for SAN disks to appear ("thanks" to the systemd compressed boot output format, I was not able to see the LUN WWNs ...). I had to do a fuse reset of the first blade, and after power-on the system booted just fine.
Now more than before I think that this is a pure HW/FW problem, and there is not much ReaR can do to fix it.

V.

makamp1 commented at 2020-07-16 01:10:

@gozora looks like you had a different issue with the HW .. and it may not be related to ReaR.

@jsmeix I did the reboot of the partition for both OS install and rear recover .. no difference there ..

If I have a large system/partition, the rear rescue system's multipath is not able to SCAN the LUNs and prepare them for recovery ..
If I reduce the partition to fewer blades (fewer CPUs / MEM / NICs / FC HBAs), rear rescue/recover seems to handle it fine.

Running the udevadm commands in the rescue shell seems to be HIT and MISS in scanning all the LUNs every time.

jsmeix commented at 2020-07-16 14:55:

Only FYI as another mostly blind shot into the dark:

Does it perhaps help to zero out the target disk(s)
(i.e. the disk devices on which the system should be recreated)
before "rear recover" is done?

Ideally before the ReaR recovery system is booted
so that the ReaR recovery system is booted with storage
that behaves same as pristine new storage, see the section
"Prepare replacement hardware for disaster recovery" in
https://en.opensuse.org/SDB:Disaster_Recovery
therein in particular the part about
"you must completely zero out your replacement storage"

Cf. https://github.com/rear/rear/issues/2019
in particular starting at
https://github.com/rear/rear/issues/2019#issuecomment-476598723
and see the subsequent comments
(in particular enjoy our by-the-way research finding about
how communication with an iPhone "in between" actually works ;-)

The current bottom line from that issue is summarized in the section
"Prepare replacement hardware for disaster recovery" in
https://en.opensuse.org/SDB:Disaster_Recovery

It seems wipefs plus parted mklabel alone is not sufficient, see
https://github.com/rear/rear/issues/2019#issuecomment-598802903
and
https://github.com/rear/rear/issues/799#issuecomment-598626162
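
From the rescue shell, completely zeroing one target LUN would look like the sketch below. DESTRUCTIVE, and the device name is merely the boot LUN from this issue, used as an example:

# Sketch -- DESTRUCTIVE: completely zero out one target multipath device
# before retrying "rear recover" (device name is an example)
dd if=/dev/zero of=/dev/mapper/360002ac0000000000000000a0001da07 bs=1M status=progress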

github-actions commented at 2020-09-15 01:34:

Stale issue message


[Export of Github issue for rear/rear.]