#2035 Issue closed: 'rear recover' hangs while running mkinitrd (bind mount of /run missing in TARGET_FS_ROOT)

Labels: enhancement, support / question, fixed / solved / done

procurve86 opened issue at 2019-02-08 15:31:

Relax-and-Recover (ReaR) Issue Template

Fill in the following items before submitting a new issue
(quick response is not guaranteed with free support):

  • ReaR version ("/usr/sbin/rear -V"):
    Relax-and-Recover 2.4 / Git

  • OS version ("cat /etc/rear/os.conf" or "lsb_release -a" or "cat /etc/os-release"):
    Red Hat Enterprise Linux Server release 7.6 (Maipo)

  • ReaR configuration files ("cat /etc/rear/site.conf" and/or "cat /etc/rear/local.conf"):

```
MODULES=( 'all_modules' )
FIRMWARE_FILES=( 'yes' )
OUTPUT=ISO
BACKUP=NETFS
NETFS_KEEP_OLD_BACKUP_COPY=no
EXCLUDE_RECREATE=( '/dev/sdc' )
BACKUP_PROG_EXCLUDE=( '/u02/*' )
NETFS_URL=nfs://10.1.30.20/u02/rear_backups
```

  • Hardware (PC or PowerNV BareMetal or ARM) or virtual machine (KVM guest or PowerVM LPAR):
    HPE DL380 Gen10

  • System architecture (x86 compatible or PPC64/PPC64LE or what exact ARM device):
    x86

  • Firmware (BIOS or UEFI or Open Firmware) and bootloader (GRUB or ELILO or Petitboot):
    UEFI

  • Storage (local disk or SSD) and/or SAN (FC or iSCSI or FCoE) and/or multipath (DM or NVMe):
    local SSDs

  • Description of the issue (ideally so that others can reproduce it):
    When executing rear recover (on the same machine where the backup was created) everything works as expected until it says "Running mkinitrd...".
    At this point it gets stuck. After 30 minutes I aborted with CTRL+C.

Below is the part of the log where it gets stuck:

+++ cat /tmp/rear.sX7R9psO2ZewG3R/tmp/storage_drivers
++ NEW_INITRD_MODULES=($(tr " " "\n" <<< "${NEW_INITRD_MODULES[*]}" | sort | uniq -u))
+++ tr ' ' '\n'
+++ sort
+++ uniq -u
++ Log 'New INITRD_MODULES='\'' scsi_transport_sas' sd_mod ses sg smartpqi sr_mod 'uas'\'''
+++ date '+%Y-%m-%d %H:%M:%S.%N '
++ local 'timestamp=2019-02-08 15:10:38.119409143 '
++ test 7 -gt 0
++ echo '2019-02-08 15:10:38.119409143 New INITRD_MODULES='\'' scsi_transport_sas sd_mod ses sg smartpqi sr_mod uas'\'''
2019-02-08 15:10:38.119409143 New INITRD_MODULES=' scsi_transport_sas sd_mod ses sg smartpqi sr_mod uas'
++ INITRD_MODULES=' scsi_transport_sas sd_mod ses sg smartpqi sr_mod uas'
+++ printf '%s\n' scsi_transport_sas sd_mod ses sg smartpqi sr_mod uas
+++ awk '{printf "--with=%s ", $1}'
++ WITH_INITRD_MODULES='--with=scsi_transport_sas --with=sd_mod --with=ses --with=sg --with=smartpqi --with=sr_mod --with=uas '
++ mount -t proc none /mnt/local/proc
++ mount -t sysfs none /mnt/local/sys
++ unalias ls
+++ egrep -v '(kdump|rescue|plymouth)'
+++ ls /mnt/local/boot/initramfs-0-rescue-d7ce629be5bf4c0a95967a86593f7968.img /mnt/local/boot/initramfs-3.10.0-693.el7.x86_64.img /mnt/local/boot/initramfs-3.10.0-693.el7.x86_64kdump.img /mnt/local/boot/initramfs-3.10.0-957.5.1.el7.x86_64.img /mnt/local/boot/initramfs-4.1.12-124.25.1.el7uek.x86_64.img /mnt/local/boot/initramfs-4.1.12-124.25.1.el7uek.x86_64kdump.img /mnt/local/boot/initramfs-4.1.12-94.3.9.el7uek.x86_64.img /mnt/local/boot/initramfs-4.1.12-94.3.9.el7uek.x86_64kdump.img /mnt/local/boot/initrd-plymouth.img
++ for INITRD_IMG in '$( ls $TARGET_FS_ROOT/boot/initramfs-*.img $TARGET_FS_ROOT/boot/initrd-*.img | egrep -v '\''(kdump|rescue|plymouth)'\'' )'
+++ cut -f2- -d-
+++ sed 's/\.img//'
++++ echo /mnt/local/boot/initramfs-3.10.0-693.el7.x86_64.img
+++ basename /mnt/local/boot/initramfs-3.10.0-693.el7.x86_64.img
++ kernel_version=3.10.0-693.el7.x86_64
+++ echo /mnt/local/boot/initramfs-3.10.0-693.el7.x86_64.img
+++ egrep -o '/boot/.*'
++ INITRD=/boot/initramfs-3.10.0-693.el7.x86_64.img
++ LogPrint 'Running mkinitrd...'
++ Log 'Running mkinitrd...'
+++ date '+%Y-%m-%d %H:%M:%S.%N '
++ local 'timestamp=2019-02-08 15:10:38.133292052 '
++ test 1 -gt 0
++ echo '2019-02-08 15:10:38.133292052 Running mkinitrd...'
2019-02-08 15:10:38.133292052 Running mkinitrd...
++ Print 'Running mkinitrd...'
++ test 1
++ echo -e 'Running mkinitrd...'
+++ chroot /mnt/local /bin/bash -c 'PATH=/sbin:/usr/sbin:/usr/bin:/bin type -P mkinitrd'
++ local mkinitrd_binary=/usr/bin/mkinitrd
++ test /usr/bin/mkinitrd
++ chroot /mnt/local /usr/bin/mkinitrd -v -f --with=scsi_transport_sas --with=sd_mod --with=ses --with=sg --with=smartpqi --with=sr_mod --with=uas /boot/initramfs-3.10.0-693.el7.x86_64.img 3.10.0-693.el7.x86_64
Executing: /sbin/dracut -v -f --add-drivers " scsi_transport_sas sd_mod ses sg smartpqi sr_mod uas" /boot/initramfs-3.10.0-693.el7.x86_64.img 3.10.0-693.el7.x86_64
++ LogPrint 'WARNING:
Failed to create initrd for kernel version '\''3.10.0-693.el7.x86_64'\''.
Check '\''/var/log/rear/rear-bosarip1.log'\'' to see the error messages in detail
and decide yourself, whether the system will boot or not.
'
  • Workaround, if any:

  • Attachments, as applicable ("rear -D mkrescue/mkbackup/recover" debug log files):

gozora commented at 2019-02-08 19:11:

Did you try to check the log file mentioned?
Can you try running `chroot /mnt/local /usr/bin/mkinitrd -v -f --with=scsi_transport_sas --with=sd_mod --with=ses --with=sg --with=smartpqi --with=sr_mod --with=uas /boot/initramfs-3.10.0-693.el7.x86_64.img 3.10.0-693.el7.x86_64` manually after rear recover has finished?

V.

procurve86 commented at 2019-02-11 07:02:

Hi gozora

Thanks for your support.
rear recover hangs at "Running mkinitrd..." and never finishes.

I stopped the process using CTRL+C.
Then I manually executed the command you mentioned and it was successful; it took 37 seconds to complete.

I've added some debug echoes.

It seems that it hangs in 550_rebuild_initramfs.sh at the following line:

```
if chroot $TARGET_FS_ROOT $mkinitrd_binary -v -f ${WITH_INITRD_MODULES[@]} $INITRD $kernel_version >&2; then
```

Regards

gozora commented at 2019-02-11 08:49:

@procurve86
It would be helpful if you could post here the output of `rear -d -D recover` from the hung session.

V.

procurve86 commented at 2019-02-11 08:56:

Below is the debug output:

2019-02-08 15:10:38.107691837 Entering debugscripts mode via 'set -x'.
+ source /usr/share/rear/finalize/Fedora/i386/550_rebuild_initramfs.sh
++ is_false yes
++ case "$1" in
++ return 1
++ is_true yes
++ case "$1" in
++ return 0
++ have_udev
++ test -d /etc/udev/rules.d
++ has_binary udevadm udevtrigger udevsettle udevinfo udevstart
++ for bin in '$@'
++ type udevadm
++ return 0
++ return 0
++ '[' -f /var/lib/rear/recovery/initrd_modules ']'
++ OLD_INITRD_MODULES=($(cat $VAR_DIR/recovery/initrd_modules))
+++ cat /var/lib/rear/recovery/initrd_modules
++ Log 'Original OLD_INITRD_MODULES='\'''\'''
+++ date '+%Y-%m-%d %H:%M:%S.%N '
++ local 'timestamp=2019-02-08 15:10:38.114451579 '
++ test 1 -gt 0
++ echo '2019-02-08 15:10:38.114451579 Original OLD_INITRD_MODULES='\'''\'''
2019-02-08 15:10:38.114451579 Original OLD_INITRD_MODULES=''
++ NEW_INITRD_MODULES=(${OLD_INITRD_MODULES[@]} ${OLD_INITRD_MODULES[@]} $( cat $TMP_DIR/storage_drivers ))
+++ cat /tmp/rear.sX7R9psO2ZewG3R/tmp/storage_drivers
++ NEW_INITRD_MODULES=($(tr " " "\n" <<< "${NEW_INITRD_MODULES[*]}" | sort | uniq -u))
+++ tr ' ' '\n'
+++ sort
+++ uniq -u
++ Log 'New INITRD_MODULES='\'' scsi_transport_sas' sd_mod ses sg smartpqi sr_mod 'uas'\'''
+++ date '+%Y-%m-%d %H:%M:%S.%N '
++ local 'timestamp=2019-02-08 15:10:38.119409143 '
++ test 7 -gt 0
++ echo '2019-02-08 15:10:38.119409143 New INITRD_MODULES='\'' scsi_transport_sas sd_mod ses sg smartpqi sr_mod uas'\'''
2019-02-08 15:10:38.119409143 New INITRD_MODULES=' scsi_transport_sas sd_mod ses sg smartpqi sr_mod uas'
++ INITRD_MODULES=' scsi_transport_sas sd_mod ses sg smartpqi sr_mod uas'
+++ printf '%s\n' scsi_transport_sas sd_mod ses sg smartpqi sr_mod uas
+++ awk '{printf "--with=%s ", $1}'
++ WITH_INITRD_MODULES='--with=scsi_transport_sas --with=sd_mod --with=ses --with=sg --with=smartpqi --with=sr_mod --with=uas '
++ mount -t proc none /mnt/local/proc
++ mount -t sysfs none /mnt/local/sys
++ unalias ls
+++ egrep -v '(kdump|rescue|plymouth)'
+++ ls /mnt/local/boot/initramfs-0-rescue-d7ce629be5bf4c0a95967a86593f7968.img /mnt/local/boot/initramfs-3.10.0-693.el7.x86_64.img /mnt/local/boot/initramfs-3.10.0-693.el7.x86_64kdump.img /mnt/local/boot/initramfs-3.10.0-957.5.1.el7.x86_64.img /mnt/local/boot/initramfs-4.1.12-124.25.1.el7uek.x86_64.img /mnt/local/boot/initramfs-4.1.12-124.25.1.el7uek.x86_64kdump.img /mnt/local/boot/initramfs-4.1.12-94.3.9.el7uek.x86_64.img /mnt/local/boot/initramfs-4.1.12-94.3.9.el7uek.x86_64kdump.img /mnt/local/boot/initrd-plymouth.img
++ for INITRD_IMG in '$( ls $TARGET_FS_ROOT/boot/initramfs-*.img $TARGET_FS_ROOT/boot/initrd-*.img | egrep -v '\''(kdump|rescue|plymouth)'\'' )'
+++ cut -f2- -d-
+++ sed 's/\.img//'
++++ echo /mnt/local/boot/initramfs-3.10.0-693.el7.x86_64.img
+++ basename /mnt/local/boot/initramfs-3.10.0-693.el7.x86_64.img
++ kernel_version=3.10.0-693.el7.x86_64
+++ echo /mnt/local/boot/initramfs-3.10.0-693.el7.x86_64.img
+++ egrep -o '/boot/.*'
++ INITRD=/boot/initramfs-3.10.0-693.el7.x86_64.img
++ LogPrint 'Running mkinitrd...'
++ Log 'Running mkinitrd...'
+++ date '+%Y-%m-%d %H:%M:%S.%N '
++ local 'timestamp=2019-02-08 15:10:38.133292052 '
++ test 1 -gt 0
++ echo '2019-02-08 15:10:38.133292052 Running mkinitrd...'
2019-02-08 15:10:38.133292052 Running mkinitrd...
++ Print 'Running mkinitrd...'
++ test 1
++ echo -e 'Running mkinitrd...'
+++ chroot /mnt/local /bin/bash -c 'PATH=/sbin:/usr/sbin:/usr/bin:/bin type -P mkinitrd'
++ local mkinitrd_binary=/usr/bin/mkinitrd
++ test /usr/bin/mkinitrd
++ chroot /mnt/local /usr/bin/mkinitrd -v -f --with=scsi_transport_sas --with=sd_mod --with=ses --with=sg --with=smartpqi --with=sr_mod --with=uas /boot/initramfs-3.10.0-693.el7.x86_64.img 3.10.0-693.el7.x86_64
Executing: /sbin/dracut -v -f --add-drivers " scsi_transport_sas sd_mod ses sg smartpqi sr_mod uas" /boot/initramfs-3.10.0-693.el7.x86_64.img 3.10.0-693.el7.x86_64
++ LogPrint 'WARNING:
Failed to create initrd for kernel version '\''3.10.0-693.el7.x86_64'\''.
Check '\''/var/log/rear/rear-bosarip1.log'\'' to see the error messages in detail
and decide yourself, whether the system will boot or not.
'
++ Log 'WARNING:
Failed to create initrd for kernel version '\''3.10.0-693.el7.x86_64'\''.
Check '\''/var/log/rear/rear-bosarip1.log'\'' to see the error messages in detail
and decide yourself, whether the system will boot or not.
'
+++ date '+%Y-%m-%d %H:%M:%S.%N '
++ local 'timestamp=2019-02-08 15:12:01.099743557 '
++ test 1 -gt 0
++ echo '2019-02-08 15:12:01.099743557 WARNING:
Failed to create initrd for kernel version '\''3.10.0-693.el7.x86_64'\''.
Check '\''/var/log/rear/rear-bosarip1.log'\'' to see the error messages in detail
and decide yourself, whether the system will boot or not.
'
2019-02-08 15:12:01.099743557 WARNING:
Failed to create initrd for kernel version '3.10.0-693.el7.x86_64'.
Check '/var/log/rear/rear-bosarip1.log' to see the error messages in detail
and decide yourself, whether the system will boot or not.

++ Print 'WARNING:
Failed to create initrd for kernel version '\''3.10.0-693.el7.x86_64'\''.
Check '\''/var/log/rear/rear-bosarip1.log'\'' to see the error messages in detail
and decide yourself, whether the system will boot or not.
'
++ test 1
++ echo -e 'WARNING:
Failed to create initrd for kernel version '\''3.10.0-693.el7.x86_64'\''.
Check '\''/var/log/rear/rear-bosarip1.log'\'' to see the error messages in detail
and decide yourself, whether the system will boot or not.
'
++ for INITRD_IMG in '$( ls $TARGET_FS_ROOT/boot/initramfs-*.img $TARGET_FS_ROOT/boot/initrd-*.img | egrep -v '\''(kdump|rescue|plymouth)'\'' )'
+++ cut -f2- -d-
+++ sed 's/\.img//'
++++ echo /mnt/local/boot/initramfs-3.10.0-957.5.1.el7.x86_64.img
+++ basename /mnt/local/boot/initramfs-3.10.0-957.5.1.el7.x86_64.img
++ kernel_version=3.10.0-957.5.1.el7.x86_64
+++ egrep -o '/boot/.*'
+++ echo /mnt/local/boot/initramfs-3.10.0-957.5.1.el7.x86_64.img
++ INITRD=/boot/initramfs-3.10.0-957.5.1.el7.x86_64.img
++ LogPrint 'Running mkinitrd...'
++ Log 'Running mkinitrd...'
+++ date '+%Y-%m-%d %H:%M:%S.%N '
++ local 'timestamp=2019-02-08 15:12:01.159119863 '
++ test 1 -gt 0
++ echo '2019-02-08 15:12:01.159119863 Running mkinitrd...'
2019-02-08 15:12:01.159119863 Running mkinitrd...
++ Print 'Running mkinitrd...'
++ test 1
++ echo -e 'Running mkinitrd...'
+++ chroot /mnt/local /bin/bash -c 'PATH=/sbin:/usr/sbin:/usr/bin:/bin type -P mkinitrd'
++ local mkinitrd_binary=/usr/bin/mkinitrd
++ test /usr/bin/mkinitrd
++ chroot /mnt/local /usr/bin/mkinitrd -v -f --with=scsi_transport_sas --with=sd_mod --with=ses --with=sg --with=smartpqi --with=sr_mod --with=uas /boot/initramfs-3.10.0-957.5.1.el7.x86_64.img 3.10.0-957.5.1.el7.x86_64
Executing: /sbin/dracut -v -f --add-drivers " scsi_transport_sas sd_mod ses sg smartpqi sr_mod uas" /boot/initramfs-3.10.0-957.5.1.el7.x86_64.img 3.10.0-957.5.1.el7.x86_64
++ LogPrint 'WARNING:
Failed to create initrd for kernel version '\''3.10.0-957.5.1.el7.x86_64'\''.
Check '\''/var/log/rear/rear-bosarip1.log'\'' to see the error messages in detail
and decide yourself, whether the system will boot or not.
'
++ Log 'WARNING:
Failed to create initrd for kernel version '\''3.10.0-957.5.1.el7.x86_64'\''.
Check '\''/var/log/rear/rear-bosarip1.log'\'' to see the error messages in detail
and decide yourself, whether the system will boot or not.
'
+++ date '+%Y-%m-%d %H:%M:%S.%N '
++ local 'timestamp=2019-02-08 15:12:19.979411963 '
++ test 1 -gt 0
++ echo '2019-02-08 15:12:19.979411963 WARNING:
Failed to create initrd for kernel version '\''3.10.0-957.5.1.el7.x86_64'\''.
Check '\''/var/log/rear/rear-bosarip1.log'\'' to see the error messages in detail
and decide yourself, whether the system will boot or not.
'
2019-02-08 15:12:19.979411963 WARNING:
Failed to create initrd for kernel version '3.10.0-957.5.1.el7.x86_64'.
Check '/var/log/rear/rear-bosarip1.log' to see the error messages in detail
and decide yourself, whether the system will boot or not.

++ Print 'WARNING:
Failed to create initrd for kernel version '\''3.10.0-957.5.1.el7.x86_64'\''.
Check '\''/var/log/rear/rear-bosarip1.log'\'' to see the error messages in detail
and decide yourself, whether the system will boot or not.
'
++ test 1
++ echo -e 'WARNING:
Failed to create initrd for kernel version '\''3.10.0-957.5.1.el7.x86_64'\''.
Check '\''/var/log/rear/rear-bosarip1.log'\'' to see the error messages in detail
and decide yourself, whether the system will boot or not.

gozora commented at 2019-02-11 09:58:

@procurve86

I don't see anything obviously wrong in the provided logs. But somehow I doubt that this is a problem with ReaR. Maybe you can try to re-run the recovery without

```
MODULES=( 'all_modules' )
FIRMWARE_FILES=( 'yes' )
```

When I have a bit of time, I'll try to download RHEL and reproduce your problem with the provided configuration.

I'll keep you posted.

V.

gozora commented at 2019-02-11 10:01:

@rmetrich I've dared to assign you here just because this is RHEL; maybe it will be useful for you if a similar problem arises in the future ...

V.

rmetrich commented at 2019-02-11 10:08:

Hi @procurve86, are you a Red Hat customer? If so, we should work on this internally and then report the status here.
Also, what happens when you enter the chroot after recovery and execute the following command:

```
/sbin/dracut -v -f --add-drivers " scsi_transport_sas sd_mod ses sg smartpqi sr_mod uas" /boot/initramfs-3.10.0-957.5.1.el7.x86_64.img 3.10.0-957.5.1.el7.x86_64
```

procurve86 commented at 2019-02-14 08:37:

@gozora

It still gets stuck when I remove

```
MODULES=( 'all_modules' )
FIRMWARE_FILES=( 'yes' )
```

jsmeix commented at 2019-02-14 09:54:

@procurve86
to get the contents of the recovery system noticeably smaller
you need to explicitly specify that, e.g. via

```
MODULES=( 'loaded_modules' )
FIRMWARE_FILES=( 'no' )
```

see the description of each of these config variables in
usr/share/rear/conf/default.conf, also available online at
https://raw.githubusercontent.com/rear/rear/master/usr/share/rear/conf/default.conf

For some comparison of how much space the recovery system ISO image
may need (it is compressed therein), see "Space requirements" at
https://github.com/rear/rear/issues/2041#issue-409740875

To inspect the contents of the recovery system use KEEP_BUILD_DIR
(see its description in default.conf) so that you could also
chroot into TMPDIR/rear.XXXXXXXXXXXXXXX/rootfs/
to try out things inside the recovery system.
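
A minimal sketch of that setup, assuming the variable name from default.conf (untested in this thread):

```shell
# Hedged sketch -- in /etc/rear/local.conf, keep the ReaR build area
# for inspection; see the KEEP_BUILD_DIR description in
# usr/share/rear/conf/default.conf:
KEEP_BUILD_DIR='yes'
```

After the next "rear mkrescue" the build directory then remains under TMPDIR as rear.XXXXXXXXXXXXXXX, and its rootfs/ subdirectory can be entered with chroot to try out things inside the recovery system.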

procurve86 commented at 2019-02-14 09:54:

Hi @rmetrich

We're using Oracle Linux but we also have a Red Hat subscription.

When I run

```
/sbin/dracut -v -f --add-drivers " scsi_transport_sas sd_mod ses sg smartpqi sr_mod uas" /boot/initramfs-3.10.0-957.5.1.el7.x86_64.img 3.10.0-957.5.1.el7.x86_64
```

It says `-bash: /sbin/dracut: No such file or directory`

rmetrich commented at 2019-02-14 10:06:

@procurve86 it's `/usr/bin/dracut`.
When entering the chroot, make sure to mount the procfs first:

```
export TARGET_FS_ROOT=/mnt/local
mount -t proc none $TARGET_FS_ROOT/proc
mount -t sysfs sys $TARGET_FS_ROOT/sys
mount -o bind /dev $TARGET_FS_ROOT/dev
```

procurve86 commented at 2019-02-14 10:19:

@rmetrich

Thank you.
What I did:

```
export TARGET_FS_ROOT=/mnt/local
mount -t proc none $TARGET_FS_ROOT/proc
mount -t sysfs sys $TARGET_FS_ROOT/sys
mount -o bind /dev $TARGET_FS_ROOT/dev
chroot $TARGET_FS_ROOT
/sbin/dracut -v -f --add-drivers " scsi_transport_sas sd_mod ses sg smartpqi sr_mod uas" /boot/initramfs-3.10.0-957.5.1.el7.x86_64.img 3.10.0-957.5.1.el7.x86_64
```

And now I have the same problem as when I run rear recover.
It's hanging!

After a few minutes I got some output regarding missing modules:

```
dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
dracut module 'dmsquash-live-ntfs' will not be installed, because command 'ntfs-3g' could not be found!
```

Nevertheless the command is still hanging...

procurve86 commented at 2019-02-14 10:49:

@jsmeix

Thanks for your hint - I'm going to test it this afternoon.

rmetrich commented at 2019-02-14 10:54:

@procurve86 Please run with the following arguments and provide /mnt/local/tmp/dracut.out:

```
/usr/bin/dracut --debug -v -f --add-drivers " scsi_transport_sas sd_mod ses sg smartpqi sr_mod uas" /boot/initramfs-3.10.0-957.5.1.el7.x86_64.img 3.10.0-957.5.1.el7.x86_64 >/tmp/dracut.out 2>&1
```


#### [procurve86](https://github.com/procurve86) commented at [2019-02-14 12:21](https://github.com/rear/rear/issues/2035#issuecomment-463607461):

@rmetrich 
Please find dracut.out attached to this post.
After 1.5 hours I aborted the hanging command...

[dracut.out.txt](https://github.com/rear/rear/files/2864923/dracut.out.txt)

#### [rmetrich](https://github.com/rmetrich) commented at [2019-02-14 12:36](https://github.com/rear/rear/issues/2035#issuecomment-463611762):

Looks like `lvm vgs --noheadings -o pv_name VolGroup_System` is hanging.
Can you strace the command?

#### [procurve86](https://github.com/procurve86) commented at [2019-02-14 14:46](https://github.com/rear/rear/issues/2035#issuecomment-463653471):

@jsmeix

When I disable FIRMWARE_FILES, the driver for the 25G network adapter is missing.

@rmetrich

When I run:

`/usr/sbin/lvm vgs --noheadings -o pv_name VolGroup_System`

It takes a long time but eventually finishes, with the following output:

```
WARNING: Failed to connect to lvmetad. Falling back to device scanning.
  WARNING: Device /dev/sda not initialized in udev database even after waiting 10000000 microseconds.
  WARNING: Device /dev/VolGroup_System/LogVol_u01 not initialized in udev database even after waiting 10000000 microseconds.
  WARNING: Device /dev/sda1 not initialized in udev database even after waiting 10000000 microseconds.
  WARNING: Device /dev/VolGroup_System/LogVol_swap not initialized in udev database even after waiting 10000000 microseconds.
  WARNING: Device /dev/sda2 not initialized in udev database even after waiting 10000000 microseconds.
  WARNING: Device /dev/VolGroup_System/LogVol_var not initialized in udev database even after waiting 10000000 microseconds.
  WARNING: Device /dev/sda3 not initialized in udev database even after waiting 10000000 microseconds.
  WARNING: Device /dev/VolGroup_System/LogVol_usr not initialized in udev database even after waiting 10000000 microseconds.
  WARNING: Device /dev/VolGroup_System/LogVol_tmp not initialized in udev database even after waiting 10000000 microseconds.
  WARNING: Device /dev/VolGroup_System/LogVol_root not initialized in udev database even after waiting 10000000 microseconds.
  WARNING: Device /dev/sdb not initialized in udev database even after waiting 10000000 microseconds.
  /dev/sdc: open failed: No medium found
  WARNING: Device /dev/VolGroup_System/LogVol_u01 not initialized in udev database even after waiting 10000000 microseconds.
  WARNING: Device /dev/sda1 not initialized in udev database even after waiting 10000000 microseconds.
  WARNING: Device /dev/VolGroup_System/LogVol_swap not initialized in udev database even after waiting 10000000 microseconds.
  WARNING: Device /dev/sda2 not initialized in udev database even after waiting 10000000 microseconds.
  WARNING: Device /dev/VolGroup_System/LogVol_var not initialized in udev database even after waiting 10000000 microseconds.
  WARNING: Device /dev/sda3 not initialized in udev database even after waiting 10000000 microseconds.
  WARNING: Device /dev/VolGroup_System/LogVol_usr not initialized in udev database even after waiting 10000000 microseconds.
  WARNING: Device /dev/VolGroup_System/LogVol_tmp not initialized in udev database even after waiting 10000000 microseconds.
  WARNING: Device /dev/VolGroup_System/LogVol_root not initialized in udev database even after waiting 10000000 microseconds.
  WARNING: Device /dev/sdb not initialized in udev database even after waiting 10000000 microseconds.
  /dev/sda3
```

#### [gozora](https://github.com/gozora) commented at [2019-02-14 18:58](https://github.com/rear/rear/issues/2035#issuecomment-463748941):

@procurve86

can you do the following bind mount on top of @rmetrich's commands:

> ```
> export TARGET_FS_ROOT=/mnt/local
> mount -t proc none $TARGET_FS_ROOT/proc 
> mount -t sysfs sys $TARGET_FS_ROOT/sys
> mount -o bind /dev $TARGET_FS_ROOT/dev
> ```

```
mount --bind /run $TARGET_FS_ROOT/run
```
and run again:

```
chroot $TARGET_FS_ROOT
/sbin/dracut -v -f --add-drivers " scsi_transport_sas sd_mod ses sg smartpqi sr_mod uas" /boot/initramfs-3.10.0-957.5.1.el7.x86_64.img 3.10.0-957.5.1.el7.x86_64
```

and let us know if `dracut` runs faster this time?

V.

#### [procurve86](https://github.com/procurve86) commented at [2019-02-15 06:20](https://github.com/rear/rear/issues/2035#issuecomment-463922300):

@gozora

`mount --bind /run $TARGET_FS_ROOT/run`

does the trick!  👍

How can we force this in the recovery process?

#### [jsmeix](https://github.com/jsmeix) commented at [2019-02-15 07:53](https://github.com/rear/rear/issues/2035#issuecomment-463942102):

@procurve86 
see your https://github.com/rear/rear/issues/2035#issuecomment-462254983
```
+ source /usr/share/rear/finalize/Fedora/i386/550_rebuild_initramfs.sh
...
++ mount -t proc none /mnt/local/proc
++ mount -t sysfs none /mnt/local/sys
...
++ chroot /mnt/local /usr/bin/mkinitrd -v -f --with=scsi_transport_sas --with=sd_mod --with=ses --with=sg --with=smartpqi --with=sr_mod --with=uas /boot/initramfs-3.10.0-693.el7.x86_64.img 3.10.0-693.el7.x86_64
```
I.e. enhance your /usr/share/rear/finalize/Fedora/i386/550_rebuild_initramfs.sh
to mount whatever additional things are needed in your particular case.

Then redo "rear mkrescue/mkbackup" to get your enhanced
finalize/Fedora/i386/550_rebuild_initramfs.sh script into the
ReaR recovery system so that it works during "rear recover".

#### [gozora](https://github.com/gozora) commented at [2019-02-15 07:54](https://github.com/rear/rear/issues/2035#issuecomment-463942391):

@procurve86

You can try the following patch:

```
diff --git a/usr/share/rear/finalize/Fedora/i386/550_rebuild_initramfs.sh b/usr/share/rear/finalize/Fedora/i386/550_rebuild_initramfs.sh
index 40ccce92..4f3bf146 100644
--- a/usr/share/rear/finalize/Fedora/i386/550_rebuild_initramfs.sh
+++ b/usr/share/rear/finalize/Fedora/i386/550_rebuild_initramfs.sh
@@ -66,6 +66,11 @@ WITH_INITRD_MODULES=$( printf '%s\n' ${INITRD_MODULES[@]} | awk '{printf "--with
 mount -t proc none $TARGET_FS_ROOT/proc
 mount -t sysfs none $TARGET_FS_ROOT/sys

+if mountpoint -q /run && ! mountpoint -q $TARGET_FS_ROOT/run; then
+  mkdir -p $TARGET_FS_ROOT/run
+  mount --bind /run $TARGET_FS_ROOT/run
+fi
+
 # Recreate any initrd or initramfs image under $TARGET_FS_ROOT/boot/ with new drivers
 # Images ignored:
 # kdump images as they are build by kdump
```

It tells ReaR to mount _/run_ into $TARGET_FS_ROOT/run if something is mounted on _/run_ and _$TARGET_FS_ROOT/run_ is not yet mounted. I did not test it though, so good luck ;-)

To test it, it should be enough to boot your ReaR recovery system and add the previously listed 5 lines to _/usr/share/rear/finalize/Fedora/i386/550_rebuild_initramfs.sh_

V.

#### [gozora](https://github.com/gozora) commented at [2019-02-15 07:56](https://github.com/rear/rear/issues/2035#issuecomment-463942941):

@jsmeix this time you were quicker 🥇 !

V.

#### [jsmeix](https://github.com/jsmeix) commented at [2019-02-15 08:07](https://github.com/rear/rear/issues/2035#issuecomment-463945426):

@gozora 
yes, this time by luck, but what a poor win for me ;-)
Of course you had the real win because you found the actual solution :-)

I wonder how you got the idea that the bind mount of /run was missing here.
I would assume that running a program within `chroot $TARGET_FS_ROOT`
should not need anything from the `/run` directory of the "parent" system.

@gozora @rmetrich 
could you perhaps explain what `chroot /mnt/local /usr/bin/mkinitrd`
needs from the `/run` directory of the "parent" system?

#### [jsmeix](https://github.com/jsmeix) commented at [2019-02-15 08:17](https://github.com/rear/rear/issues/2035#issuecomment-463947934):

Perhaps in this case something is not right
with the `run` directory in $TARGET_FS_ROOT?

For comparison:
On my SLES12-like openSUSE Leap 15.0 system, "rear mkrescue" results in
var/lib/rear/recovery/directories_permissions_owner_group
which contains these entries for `run`:
```
/run 755 root root
/var/lock -> /run/lock
/var/run -> /run
```
cf. usr/share/rear/prep/default/400_save_directories.sh
and usr/share/rear/restore/default/900_create_missing_directories.sh
and DIRECTORY_ENTRIES_TO_RECOVER

#### [gozora](https://github.com/gozora) commented at [2019-02-15 08:24](https://github.com/rear/rear/issues/2035#issuecomment-463949782):

@jsmeix

> I wonder how you had the idea that bind mount of /run is missing here.
> I would assume that running a program within `chroot $TARGET_FS_ROOT`
> should not need anything from the `/run` directory of the "parent" system.

I was just running `strace` on `lvs` commands and checking what files are accessed.
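
For illustration, a hedged sketch of that approach, run from the rescue system (assuming an strace binary is available there; paths match this thread):

```shell
# Trace file-open syscalls of the hanging LVM command inside the chroot
# and show which paths under /run/udev it tries to read:
strace -f -e trace=open,openat \
    chroot /mnt/local /usr/sbin/lvm vgs --noheadings -o pv_name VolGroup_System \
    2>&1 | grep '/run/udev'
```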

Just a blind guess, but since Red Hat always uses cutting-edge LVM versions, could it be that we are dealing with some new LVM feature?

V.

#### [gozora](https://github.com/gozora) commented at [2019-02-15 08:30](https://github.com/rear/rear/issues/2035#issuecomment-463951425):

@jsmeix

> Perhaps in this case here something is not right
> with the `run` directory in $TARGET_FS_ROOT?
> ...

I guess that this code just creates _$TARGET_FS_ROOT/run_, but it is empty.

`strace` showed me that LVM tried to access files (most probably created by systemd-udevd) in _/run/udev/data/_ ...

V.

#### [gozora](https://github.com/gozora) commented at [2019-02-15 08:37](https://github.com/rear/rear/issues/2035#issuecomment-463953344):

@rmetrich 
Just FYI, if you are interested in reproducing this problem (on RHEL 7.6) on a running system (not sure if it is reproducible on other distros):
1. remove /run/udev (`rm /run/udev`)
2. restart lvmetad
3. run `lvs`

V.

#### [jsmeix](https://github.com/jsmeix) commented at [2019-02-15 08:39](https://github.com/rear/rear/issues/2035#issuecomment-463953847):

Yes, usr/share/rear/restore/default/900_create_missing_directories.sh
only creates empty directories inside $TARGET_FS_ROOT as specified in
var/lib/rear/recovery/directories_permissions_owner_group
e.g. also for `sys` and `dev`
```
# egrep 'sys|dev' var/lib/rear/recovery/directories_permissions_owner_group
/dev 755 root root
/sys 555 root root
```

#### [rmetrich](https://github.com/rmetrich) commented at [2019-02-15 08:53](https://github.com/rear/rear/issues/2035#issuecomment-463957796):

@gozora Thanks, will do

#### [procurve86](https://github.com/procurve86) commented at [2019-02-18 14:29](https://github.com/rear/rear/issues/2035#issuecomment-464751059):

@gozora

Thank you for your patch.
Unfortunately it's not working, because the `mountpoint` command is not available in the recovery system.

What am I doing wrong?

#### [gozora](https://github.com/gozora) commented at [2019-02-18 14:49](https://github.com/rear/rear/issues/2035#issuecomment-464758609):

Crap! This is what you get without testing ...

Just for testing purposes, try this one (just remove the condition with `mountpoint`):

```
diff --git a/usr/share/rear/finalize/Fedora/i386/550_rebuild_initramfs.sh b/usr/share/rear/finalize/Fedora/i386/550_rebuild_initramfs.sh
index 40ccce92..4f3bf146 100644
--- a/usr/share/rear/finalize/Fedora/i386/550_rebuild_initramfs.sh
+++ b/usr/share/rear/finalize/Fedora/i386/550_rebuild_initramfs.sh
@@ -66,6 +66,11 @@ WITH_INITRD_MODULES=$( printf '%s\n' ${INITRD_MODULES[@]} | awk '{printf "--with
 mount -t proc none $TARGET_FS_ROOT/proc
 mount -t sysfs none $TARGET_FS_ROOT/sys

+  mkdir -p $TARGET_FS_ROOT/run
+  mount --bind /run $TARGET_FS_ROOT/run
+
 # Recreate any initrd or initramfs image under $TARGET_FS_ROOT/boot/ with new drivers
 # Images ignored:
 # kdump images as they are build by kdump
```
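
For reference, the `mountpoint` check can also be approximated without the binary, by consulting /proc/self/mounts. A hedged sketch, not tested inside ReaR (the function names here are illustrative, not part of ReaR):

```shell
# Alternative to the `mountpoint` binary (missing from the rescue system):
# check whether a path appears, surrounded by spaces, in /proc/self/mounts,
# which is good enough for mount points like /run.
is_mounted() {
    grep -q " $1 " /proc/self/mounts
}

bind_run_into_target() {
    # Bind-mount /run into the target system root (e.g. /mnt/local).
    # Requires root; mirrors the conditional patch above.
    local target="$1"
    if is_mounted /run && ! is_mounted "$target/run"; then
        mkdir -p "$target/run"
        mount --bind /run "$target/run"
    fi
}
```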

#### [procurve86](https://github.com/procurve86) commented at [2019-02-18 14:59](https://github.com/rear/rear/issues/2035#issuecomment-464763082):

@gozora

Without the condition it's working :)
So for the moment we need to adjust the script on all 7.6 servers?
Or is it possible to get an RPM with the fix included?

#### <img src="https://avatars.githubusercontent.com/u/1788608?u=925fc54e2ce01551392622446ece427f51e2f0ce&v=4" width="50">[jsmeix](https://github.com/jsmeix) commented at [2019-02-18 15:24](https://github.com/rear/rear/issues/2035#issuecomment-464773208):

What a nice coincidence - I was hit by the same right now while
implementing https://github.com/rear/rear/issues/2045
```
mountpoint no such file or directory
```

@procurve86 
put this in your etc/rear/local.conf
```
REQUIRED_PROGS=( "${REQUIRED_PROGS[@]}" mountpoint )
```
to force its inclusion in the recovery system; see 'default.conf'.

#### <img src="https://avatars.githubusercontent.com/u/12116358?u=1c5ba9dcee5ca3082f03029a7fbe647efd30eb49&v=4" width="50">[gozora](https://github.com/gozora) commented at [2019-02-18 15:50](https://github.com/rear/rear/issues/2035#issuecomment-464784658):

@procurve86

> So for the moment we need to adjust the script on all 7.6 servers?
> Or is it possible to get an rpm with the fix included?

There is a good chance that this will be fixed in ReaR 2.5.
So if you wait until then, you don't need to patch anything ...

V.

#### <img src="https://avatars.githubusercontent.com/u/12116358?u=1c5ba9dcee5ca3082f03029a7fbe647efd30eb49&v=4" width="50">[gozora](https://github.com/gozora) commented at [2019-02-18 15:52](https://github.com/rear/rear/issues/2035#issuecomment-464785452):

Looks like PR https://github.com/rear/rear/pull/2047, which should fix this, has been born ...

#### <img src="https://avatars.githubusercontent.com/u/20219180?v=4" width="50">[procurve86](https://github.com/procurve86) commented at [2019-02-19 09:26](https://github.com/rear/rear/issues/2035#issuecomment-465054671):

@gozora @rmetrich @jsmeix

Thanks a lot for your support - you're awesome!

#### <img src="https://avatars.githubusercontent.com/u/1788608?u=925fc54e2ce01551392622446ece427f51e2f0ce&v=4" width="50">[jsmeix](https://github.com/jsmeix) commented at [2019-02-19 14:47](https://github.com/rear/rear/issues/2035#issuecomment-465158105):

@procurve86 
of course we are! ;-))

You could do me a big favour and test if
https://github.com/rear/rear/pull/2047
makes things work for you on RHEL 7.6, cf.
https://github.com/rear/rear/pull/2047#issuecomment-465136524

How to do such a test:

Basically "git clone" my current code from https://github.com/rear/rear/pull/2047
into a separated directory and then configure and run ReaR
from within that directory, cf.
https://github.com/rear/rear/pull/2047#issuecomment-465123176
e.g. something like:
```
# git clone https://github.com/jsmeix/rear.git

# mv rear rear.jsmeix

# cd rear.jsmeix

# git branch -a

# git checkout remotes/origin/bind_mount_proc_sys_dev_run_at_one_place_issue2045

# vi etc/rear/local.conf

# usr/sbin/rear -D mkbackup
```
Note the relative paths "etc/rear/" and "usr/sbin/".

#### <img src="https://avatars.githubusercontent.com/u/20219180?v=4" width="50">[procurve86](https://github.com/procurve86) commented at [2019-02-20 14:37](https://github.com/rear/rear/issues/2035#issuecomment-465602717):

@jsmeix

sure :)
I've tested #2047 on RHEL 7.6

Strange, it seems that mkinitrd has worked but I get the following message:
![rear_warning](https://user-images.githubusercontent.com/20219180/53099158-7a188480-3525-11e9-84c1-7b82310ed9b2.png)

#### <img src="https://avatars.githubusercontent.com/u/12116358?u=1c5ba9dcee5ca3082f03029a7fbe647efd30eb49&v=4" width="50">[gozora](https://github.com/gozora) commented at [2019-02-20 16:06](https://github.com/rear/rear/issues/2035#issuecomment-465642305):

@procurve86 looks like a bug ;-), I've submitted https://github.com/rear/rear/pull/2051 to fix this ...

Thanks for reporting!

V.

#### <img src="https://avatars.githubusercontent.com/u/1788608?u=925fc54e2ce01551392622446ece427f51e2f0ce&v=4" width="50">[jsmeix](https://github.com/jsmeix) commented at [2019-02-20 16:54](https://github.com/rear/rear/issues/2035#issuecomment-465663479):

@procurve86 
thank you so much for testing it.
It helps us so much to find such hidden issues as
https://github.com/rear/rear/issues/2035#issuecomment-465602717

Could you attach the "rear -D recover" log file of that failed attempt in
https://github.com/rear/rear/issues/2035#issuecomment-465602717
so that we can see the exact details of why it fails in your case?

Cf. "Debugging issues with Relax-and-Recover" in
https://en.opensuse.org/SDB:Disaster_Recovery

When you change your
usr/share/rear/finalize/Linux-i386/670_run_efibootmgr.sh
(it was usr/share/rear/finalize/Linux-i386/630_run_efibootmgr.sh before)
as @gozora did in
https://github.com/rear/rear/pull/2051/files#diff-50b1e7333a3c6fa5c0d8a2409ca686f3
and re-do "rear mkrescue" and then again "rear recover",
does "rear recover" then work in your case?

#### <img src="https://avatars.githubusercontent.com/u/12116358?u=1c5ba9dcee5ca3082f03029a7fbe647efd30eb49&v=4" width="50">[gozora](https://github.com/gozora) commented at [2019-02-20 18:31](https://github.com/rear/rear/issues/2035#issuecomment-465699764):

@jsmeix, @procurve86 
Here is what is happening in current ReaR master code:
```
+ source /usr/share/rear/finalize/Linux-i386/630_run_efibootmgr.sh
++ is_true 1
++ case "$1" in
++ return 0
++ is_true no
++ case "$1" in
++ return 1
++ test -f /boot/efi/EFI/redhat/grubx64.efi  <=== problematic test
++ return 0
+ source_return_code=0
+ test 0 -eq 0
+ test 1
```

_630_run_efibootmgr.sh_ later creates the UEFI boot entry and sets `NOBOOTLOADER=''`, but since we returned prematurely, the boot entry is not created and `NOBOOTLOADER=1` is kept, which later results in the warning message mentioned in https://github.com/rear/rear/issues/2035#issuecomment-465602717.

With #2051 the same condition looks like this:
```
+ source /usr/share/rear/finalize/Linux-i386/630_run_efibootmgr.sh
++ is_true 1
++ case "$1" in
++ return 0
++ is_true no
++ case "$1" in
++ return 1
++ test -f /mnt/local//boot/efi/EFI/redhat/grubx64.efi <=== now checking in /mnt/local/
+++ df -P /mnt/local//boot/efi/EFI/redhat/grubx64.efi
+++ tail -1
+++ awk '{print $6}'
++ esp_mountpoint=/mnt/local/boot/efi
++ test /mnt/local/boot/efi
++ test -d /mnt/local/boot/efi
+++ mount
+++ grep /mnt/local/boot/efi
...
```
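The essence of the #2051 fix traced above is that checks must target paths inside the restored system under `$TARGET_FS_ROOT`, not the recovery environment's own filesystem. A minimal sketch of that path-prefixing idea (the `target_path` helper is hypothetical, not actual ReaR code):

```
# Hypothetical illustration of the idea behind #2051: file tests must
# look inside the restored system ($TARGET_FS_ROOT), not the running
# recovery environment.
TARGET_FS_ROOT=/mnt/local

target_path() {
    # Join TARGET_FS_ROOT and an absolute path without doubling slashes.
    echo "${TARGET_FS_ROOT%/}/${1#/}"
}

target_path /boot/efi/EFI/redhat/grubx64.efi
# prints /mnt/local/boot/efi/EFI/redhat/grubx64.efi
```

With this, `test -f "$(target_path /boot/efi/EFI/redhat/grubx64.efi)"` checks the recreated system instead of the (empty) recovery ramdisk, which is why the trace above now shows `/mnt/local//boot/efi/...`.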

V.

#### <img src="https://avatars.githubusercontent.com/u/12116358?u=1c5ba9dcee5ca3082f03029a7fbe647efd30eb49&v=4" width="50">[gozora](https://github.com/gozora) commented at [2019-02-20 18:35](https://github.com/rear/rear/issues/2035#issuecomment-465701188):

In general it makes no sense to work with the `$UEFI_BOOTLOADER` variable **when restoring**, unless we are inside a `chroot`.

V.

#### <img src="https://avatars.githubusercontent.com/u/20219180?v=4" width="50">[procurve86](https://github.com/procurve86) commented at [2019-02-21 08:20](https://github.com/rear/rear/issues/2035#issuecomment-465905111):

@jsmeix

Please find the recover debug log below:
[rear-recover.log.txt](https://github.com/rear/rear/files/2888288/rear-recover.log.txt)

#### <img src="https://avatars.githubusercontent.com/u/20219180?v=4" width="50">[procurve86](https://github.com/procurve86) commented at [2019-02-21 08:48](https://github.com/rear/rear/issues/2035#issuecomment-465913236):

@jsmeix @gozora

I've adjusted /usr/share/rear/finalize/Linux-i386/670_run_efibootmgr.sh
as @gozora did.

I was able to relax and the system was recovered :)

many thanks for your help!

best regards

#### <img src="https://avatars.githubusercontent.com/u/1788608?u=925fc54e2ce01551392622446ece427f51e2f0ce&v=4" width="50">[jsmeix](https://github.com/jsmeix) commented at [2019-02-21 09:44](https://github.com/rear/rear/issues/2035#issuecomment-465932144):

@procurve86 
thank you very much for your testing and
your verification that @gozora 's fix helps and is sufficient.

#### <img src="https://avatars.githubusercontent.com/u/20219180?v=4" width="50">[procurve86](https://github.com/procurve86) commented at [2019-02-21 09:50](https://github.com/rear/rear/issues/2035#issuecomment-465933957):

@jsmeix

my pleasure! I'm really impressed by the effort you all put into this project and by your kind support 👍

best regards from Switzerland

#### <img src="https://avatars.githubusercontent.com/u/1788608?u=925fc54e2ce01551392622446ece427f51e2f0ce&v=4" width="50">[jsmeix](https://github.com/jsmeix) commented at [2019-02-21 12:44](https://github.com/rear/rear/issues/2035#issuecomment-465986497):

With https://github.com/rear/rear/pull/2047 merged
this issue should be fixed.
This issue can be reopened if this particular issue is not yet fixed.
But new, separate issues should be reported separately
(like regressions because of the changes in https://github.com/rear/rear/pull/2047).

#### <img src="https://avatars.githubusercontent.com/u/1788608?u=925fc54e2ce01551392622446ece427f51e2f0ce&v=4" width="50">[jsmeix](https://github.com/jsmeix) commented at [2019-02-21 12:48](https://github.com/rear/rear/issues/2035#issuecomment-465987439):

@procurve86 
FYI how to try out our current ReaR GitHub master code
where https://github.com/rear/rear/pull/2047 is merged:

Basically "git clone" our current ReaR upstream GitHub master code
into a separated directory and then configure and run ReaR
from within that directory like:
```
# git clone https://github.com/rear/rear.git

# mv rear rear.github.master

# cd rear.github.master

# vi etc/rear/local.conf

# usr/sbin/rear -D mkbackup
```
Note the relative paths "etc/rear/" and "usr/sbin/".

#### <img src="https://avatars.githubusercontent.com/u/48693522?v=4" width="50">[jcarter3d](https://github.com/jcarter3d) commented at [2019-03-18 19:25](https://github.com/rear/rear/issues/2035#issuecomment-474065443):

I'm hitting this while trying to "rear recover" RHEL 7.6 images (on 10 machines).
Is there a work-around I can run after "rear recover" hangs on the "Running mkinitrd..." screen?

The previously mentioned work-arounds seem to be for those who wish to build a new image. I'm just hoping to recover the images I already have for now.

#### <img src="https://avatars.githubusercontent.com/u/20219180?v=4" width="50">[procurve86](https://github.com/procurve86) commented at [2019-03-19 07:01](https://github.com/rear/rear/issues/2035#issuecomment-474221716):

@jcarter3d

execute the following commands before running "rear recover":

```
export TARGET_FS_ROOT=/mnt/local
mount --bind /run $TARGET_FS_ROOT/run
```


-------------------------------------------------------------------------------



[Export of Github issue for [rear/rear](https://github.com/rear/rear).]