#2200 Issue closed: New use-case for BLOCKCLONE backup method

Labels: enhancement, fixed / solved / done

petroniusniger opened issue at 2019-08-02 12:25:

  • ReaR version ("/usr/sbin/rear -V"): git latest

  • OS version ("cat /etc/rear/os.conf" or "lsb_release -a" or "cat /etc/os-release"):
    NAME="openSUSE Leap"
    VERSION="15.0"
    ID="opensuse-leap"
    ID_LIKE="suse opensuse"
    VERSION_ID="15.0"
    PRETTY_NAME="openSUSE Leap 15.0"
    ANSI_COLOR="0;32"
    CPE_NAME="cpe:/o:opensuse:leap:15.0"
    BUG_REPORT_URL="https://bugs.opensuse.org"
    HOME_URL="https://www.opensuse.org/"

  • ReaR configuration files ("cat /etc/rear/site.conf" and/or "cat /etc/rear/local.conf"):
    Custom

  • Hardware (PC or PowerNV BareMetal or ARM) or virtual machine (KVM guest or PowerVM LPAR): VMware VM

  • System architecture (x86 compatible or PPC64/PPC64LE or what exact ARM device): x86

  • Firmware (BIOS or UEFI or Open Firmware) and bootloader (GRUB or ELILO or Petitboot): EFI + grub2-uefi

  • Storage (local disk or SSD) and/or SAN (FC or iSCSI or FCoE) and/or multipath (DM or NVMe): local virtual disk + LVM + LUKS

  • Description of the issue (ideally so that others can reproduce it):

The current support for LUKS-encrypted filesystems means that the data they contain is backed up while the filesystem is unlocked (decrypted), and that during the recovery phase a new encrypted filesystem is created with new encryption keys, into which the data is then restored.

This is all very good and it works very well, but what if you have a more complex LUKS setup, with different passphrases or keys stored in some of the 8 key slots LUKS provides for that purpose? In that case you might care as much about recovering the exact LUKS setup you created, including those keys, as about recovering the data itself.

So I thought about using the "multiple backups" approach: ignore the encrypted filesystem during the "base system" backup phase, then run a second BLOCKCLONE phase that uses dd to image the underlying block device (a logical volume in my case) containing the encrypted filesystem (see the configuration sketch after the list below). This mostly worked out of the box; I only faced 2 minor issues:

  • I still got prompted to create a new encryption key for the target device -- this is purely cosmetic, as the key just gets overwritten by the BLOCKCLONE restore
  • since the various scripts made no attempt to unmount the filesystem before the backup and the restore phases, the recovered system failed to boot because it considered the filesystem corrupted, and I had to run the filesystem repair command manually.
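
For context, the relevant bits of such a multiple-backups setup could look roughly like the sketch below. This is only an illustration with placeholder names, device paths and backup URL (not my actual configuration), using the standard BLOCKCLONE variables from the User Guide plus the new variable proposed further down:

    # /etc/rear/base_system.conf -- run as "rear -C base_system mkbackup"
    BACKUP=NETFS
    BACKUP_URL=nfs://backup.example.com/rear
    # keep the LUKS filesystem (mounted on /data here) out of the file-level backup
    BACKUP_PROG_EXCLUDE+=( '/data/*' )

    # /etc/rear/luks_image.conf -- run as "rear -C luks_image mkbackup"
    BACKUP=BLOCKCLONE
    BACKUP_URL=nfs://backup.example.com/rear
    BACKUP_PROG_ARCHIVE="luks_lv"
    BACKUP_PROG_SUFFIX=".img"
    BACKUP_PROG_COMPRESS_SUFFIX=""
    BLOCKCLONE_PROG=dd
    BLOCKCLONE_PROG_OPTS="bs=4k"
    # the logical volume holding the LUKS container
    BLOCKCLONE_SOURCE_DEV="/dev/mapper/vg00-data"
    BLOCKCLONE_TRY_UNMOUNT="yes"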

To remedy the second of these shortcomings, I've created a new configuration variable and improved the BLOCKCLONE backup and restore scripts:

  • if BLOCKCLONE_TRY_UNMOUNT is set to "yes", both scripts will now try to unmount the source/target device before performing the backup/restore. If the unmounting fails, this is not considered fatal and a warning message is issued (as before); a sketch of the resulting logic follows this list
  • if the device was initially mounted and it could be unmounted, then the backup script will try to remount it once its job is over (the restore script doesn't try to remount, just issues a message stating that the user may do so manually if needed)
  • I've also documented this new use-case in the User Guide, chapter 12, "BLOCKCLONE"
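
To make the intended behavior concrete, the backup side amounts to something like the following simplified sketch. The helper names mirror the global functions mentioned in this issue, but this is an outline, not the actual script:

    # try to get the source device into a quiescent state before imaging it
    if is_device_mounted "$BLOCKCLONE_SOURCE_DEV" ; then
        if is_true "$BLOCKCLONE_TRY_UNMOUNT" ; then
            mountpoint=$( get_mountpoint "$BLOCKCLONE_SOURCE_DEV" )
            umount_mountpoint "$mountpoint" && was_mounted="yes"
        fi
        # still mounted? warn, but do not abort (as before)
        is_device_mounted "$BLOCKCLONE_SOURCE_DEV" \
            && LogPrint "Warning: $BLOCKCLONE_SOURCE_DEV is mounted, the image may be inconsistent"
    fi

    # ... the dd image of $BLOCKCLONE_SOURCE_DEV is taken here ...

    # remount only what we unmounted ourselves; the restore script skips this step
    [ "$was_mounted" == "yes" ] && mount "$BLOCKCLONE_SOURCE_DEV" "$mountpoint"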

I'll prepare a branch with the updated code for review.

petroniusniger commented at 2019-08-02 13:09:

In addition to the aforementioned changes, the branch will also contain:

  • bug fix in global function umount_mountpoint(): the return value is now visible to the calling script
  • cosmetics in global function is_device_mounted(): local variables now defined as local
  • new global function get_mountpoint() (a sketch of these two changes follows this list)
  • fixed typo in User Guide, chapter 11, "multiple backups"
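
To illustrate the kind of changes involved, here is a plausible sketch of the two functions. This is illustrative only, assuming findmnt is available, and is not the actual ReaR code:

    # print where a device is mounted; fails (non-zero exit) if it is not mounted
    function get_mountpoint () {
        local device="$1"
        findmnt -n -o TARGET --source "$device"
    }

    # the umount_mountpoint fix, in essence: do not let a trailing command
    # (logging, cleanup) overwrite the umount exit code seen by the caller
    function umount_mountpoint () {
        local mountpoint="$1" ret
        umount "$mountpoint"
        ret=$?
        (( ret == 0 )) || Log "umount of $mountpoint failed with exit code $ret"
        return $ret
    }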

petroniusniger commented at 2019-08-02 13:16:

All current changes pushed to branch "issue-2200".

jsmeix commented at 2019-09-03 11:58:

With https://github.com/rear/rear/pull/2201 merged, I think this issue is done.


[Export of Github issue for rear/rear.]