#2045 Issue closed: bind-mount proc sys dev and run at one central place in finalize stage

Labels: enhancement, bug, cleanup, fixed / solved / done

jsmeix opened issue at 2019-02-18 10:04:

Currently during the finalize stage proc and sys
are mounted and umounted in various scripts:

# find usr/share/rear/finalize/ -type f | xargs grep 'mount ' | grep -v ':#' | egrep '/proc|/sys|/dev' | cut -b16-

finalize/Fedora/i386/550_rebuild_initramfs.sh:mount -t proc none $TARGET_FS_ROOT/proc
finalize/Fedora/i386/550_rebuild_initramfs.sh:mount -t sysfs none $TARGET_FS_ROOT/sys
finalize/Fedora/i386/550_rebuild_initramfs.sh:umount $TARGET_FS_ROOT/proc $TARGET_FS_ROOT/sys
finalize/Debian/i386/550_rebuild_initramfs.sh:mount -t proc none $TARGET_FS_ROOT/proc
finalize/Debian/i386/550_rebuild_initramfs.sh:mount -t sysfs none $TARGET_FS_ROOT/sys
finalize/Debian/i386/550_rebuild_initramfs.sh:umount $TARGET_FS_ROOT/proc $TARGET_FS_ROOT/sys
finalize/Linux-i386/610_install_grub.sh:mount -t proc none $TARGET_FS_ROOT/proc
finalize/Linux-i386/610_install_grub.sh:umount $TARGET_FS_ROOT/proc
finalize/Linux-i386/610_install_lilo.sh:mount -t proc none $TARGET_FS_ROOT/proc
finalize/Linux-i386/610_install_lilo.sh:umount $TARGET_FS_ROOT/proc
finalize/Linux-i386/620_install_elilo.sh:mount -t proc none $TARGET_FS_ROOT/proc
finalize/Linux-i386/620_install_elilo.sh:umount $TARGET_FS_ROOT/proc
finalize/SUSE_LINUX/i386/550_rebuild_initramfs.sh:mount -t proc none $TARGET_FS_ROOT/proc
finalize/SUSE_LINUX/i386/550_rebuild_initramfs.sh:mount -t sysfs none $TARGET_FS_ROOT/sys
finalize/SUSE_LINUX/i386/550_rebuild_initramfs.sh:umount $TARGET_FS_ROOT/proc $TARGET_FS_ROOT/sys

I think this is basically an overcomplicated mess.

I think proc, sys, dev and run are in general needed during the finalize stage,
so all of them should be bind-mounted at the beginning of the
finalize stage in one single dedicated script.

I even think umounting them is not needed, cf.
usr/share/rear/finalize/Linux-i386/620_install_grub2.sh

# Make /proc /sys /dev available in TARGET_FS_ROOT
# so that later things work in the "chroot TARGET_FS_ROOT" environment,
# cf. https://github.com/rear/rear/issues/1828#issuecomment-398717889
# and do not umount them when leaving this script because
# it is better when also after "rear recover" things still
# work in the "chroot TARGET_FS_ROOT" environment so that
# the user could more easily adapt things after "rear recover":
for mount_device in proc sys dev ; do
    umount $TARGET_FS_ROOT/$mount_device && sleep 1
    mount --bind /$mount_device $TARGET_FS_ROOT/$mount_device
done

According to
https://github.com/rear/rear/issues/2035#issuecomment-463748941
nowadays run is also needed inside TARGET_FS_ROOT.

Each of proc, sys, dev and run can only be bind-mounted
into TARGET_FS_ROOT if it is actually mounted in
the recovery system.
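
For illustration (not code from any ReaR script), a quick way to verify
that in the recovery system would be:

for mount_device in proc sys dev run ; do
    mountpoint /$mount_device
done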

I don't know what is best if one of them is already mounted in TARGET_FS_ROOT.

Perhaps the code should be as in
https://github.com/rear/rear/issues/2035#issuecomment-463942391
i.e. without re-mounting if already mounted.
Or is an enforced re-mount in any case, even if already mounted,
better to enforce a clean state at the beginning of the finalize stage?

E.g. with enforced re-mounting in any case even if already mounted:

for mount_device in proc sys dev run ; do
    # Note the leading slash: test whether e.g. /proc is mounted
    # in the recovery system (a plain 'proc' would be a relative path):
    if ! mountpoint /$mount_device ; then
        Log "/$mount_device not mounted - cannot bind-mount it at $TARGET_FS_ROOT"
        continue
    fi
    # Umount whatever may be already mounted at the target
    # (when something was umounted, wait a second to settle down),
    # then bind-mount the recovery system's instance in any case:
    umount $TARGET_FS_ROOT/$mount_device && sleep 1
    mount --bind /$mount_device $TARGET_FS_ROOT/$mount_device
done

Or without re-mounting if already mounted:

for mount_device in proc sys dev run ; do
    if mountpoint /$mount_device ; then
        if mountpoint $TARGET_FS_ROOT/$mount_device ; then
            Log "$mount_device already mounted at $TARGET_FS_ROOT/$mount_device"
            continue
        fi
        umount $TARGET_FS_ROOT/$mount_device && sleep 1
        mount --bind /$mount_device $TARGET_FS_ROOT/$mount_device
    fi
done

jsmeix commented at 2019-02-18 10:04:

@rear/contributors
what do you think about it?
Should I do a pull request?

gozora commented at 2019-02-18 11:21:

@jsmeix I fully agree that proc, sys, dev and run should be mounted only once, and they should not be explicitly unmounted during the whole existence of the ReaR rescue system. It will make life easier for people who want to e.g. recreate the initrd or install the bootloader even after a successful run of "rear recover". They will not need to check/mount these file systems again.
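
For instance (an illustration, assuming the usual TARGET_FS_ROOT of /mnt/local),
after a successful run of "rear recover" one could directly do

chroot /mnt/local /bin/bash --login

and e.g. rebuild the initrd or reinstall the bootloader right away,
because /proc /sys /dev /run are still mounted inside TARGET_FS_ROOT.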

jsmeix commented at 2019-02-18 13:05:

@gozora
exactly my reasoning.
I will do a pull request soon so that we can decide about real code...

jsmeix commented at 2019-02-18 13:48:

I found usr/share/rear/finalize/default/100_populate_dev.sh

# many systems now use udev and thus have an empty /dev
# this prevents our chrooted grub install later on, so we copy
# the /dev from our rescue system to the freshly installed system
cp -fa /dev $TARGET_FS_ROOT/

which looks somewhat scary to me, at least at first glance.

Shouldn't a bind-mount of /dev to $TARGET_FS_ROOT/dev
actually be the right way to do that?

@gdha
git log -p --follow usr/share/rear/finalize/default/100_populate_dev.sh
indicates that you made some changes in that file 10 years ago, in 2009.
Do you perhaps know the reason for that script?

I would like to remove that script and replace it with a
bind-mount of /dev to $TARGET_FS_ROOT/dev.
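
A minimal sketch of that replacement (assuming the mountpoint directory
was already recreated by the backup restore; otherwise a 'mkdir -p'
would be needed first):

mkdir -p $TARGET_FS_ROOT/dev
mount --bind /dev $TARGET_FS_ROOT/dev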

gdha commented at 2019-02-18 14:09:

@jsmeix If I remember something of 10 years ago (cfr https://github.com/rear/rear/issues/2045#issuecomment-464737610) - hmm - I cannot even remember what I did last week ;-)
That being said, I fully agree that it should become a bind-mount, as after the copy operation a bind-mount will happen anyhow.
Yeah, a good cleanup would be nice, that is true.
I would even say (cfr https://github.com/rear/rear/issues/2045#issuecomment-464694089) that if /run is available we should take it along as well.

Conclusion: yes, please prepare a nice PR for it - thanks a lot!

jsmeix commented at 2019-02-22 07:46:

With https://github.com/rear/rear/pull/2047 merged this issue is fixed.

jsmeix commented at 2019-04-09 10:27:

Since https://github.com/rear/rear/pull/2047 is merged
"rear recover" fails on SLES 11:

On SLES 11 it does not work to bind-mount /dev into TARGET_FS_ROOT
because within the ReaR recovery system /dev is not a mountpoint
(in a running SLES 11 system 'udev' is mounted on /dev),
and subsequently 'mkinitrd' fails within the ReaR recovery system,
so that the recreated system cannot be booted.
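
I.e. within the SLES 11 ReaR recovery system one gets (an illustration):

# mountpoint /dev
/dev is not a mountpoint

so a bind-mount loop guarded by 'mountpoint' skips /dev
and the chroot lacks proper device nodes.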

Because SLES 11 is by now rather old,
every SLES 11 user should meanwhile have
a working disaster recovery procedure with ReaR < 2.5,
so there should be no need for SLES 11 users
to upgrade ReaR to version 2.5.

Therefore I dropped ReaR upstream support for SLES 11
and documented that in the ReaR 2.5 release notes via
https://github.com/rear/rear.github.com/commit/277094fbc506671eaccb078ec663d3c3af0ec696

jsmeix commented at 2019-04-11 08:35:

With https://github.com/rear/rear/pull/2113 merged
ReaR 2.5 works again also on older systems like SLES 11
where /dev is not a mountpoint in the recovery system
(in particular "chroot TARGET_FS_ROOT mkinitrd" during "rear recover"), cf.
https://github.com/rear/rear/commit/65de6701c7095e7bf5b545f10c831bef729e9305
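
The basic idea is a fallback (a sketch only, not necessarily the exact
code of that commit): when /dev is not a mountpoint in the recovery
system, copy the device nodes instead of silently skipping /dev:

for mount_device in proc sys dev run ; do
    if mountpoint /$mount_device ; then
        mount --bind /$mount_device $TARGET_FS_ROOT/$mount_device
    elif test $mount_device = dev ; then
        # fallback for systems like SLES 11 where /dev is not a mountpoint
        # in the recovery system, so that "chroot TARGET_FS_ROOT mkinitrd"
        # still finds usable device nodes:
        cp -fa /dev $TARGET_FS_ROOT/
    fi
done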

jsmeix commented at 2019-04-11 08:40:

I re-added ReaR 2.5 upstream support for SLES 11
and documented that in the ReaR 2.5 release notes via
https://github.com/rear/rear.github.com/commit/da99073e881b2bb99b12fef8d89d2832aa4506a2


[Export of GitHub issue for rear/rear.]