#1793 Issue closed: diskrestore.sh fails with non-consecutively numbered partitions

Labels: enhancement, bug, fixed / solved / done

chymian opened issue at 2018-05-03 13:32:

  • ReaR version: Relax-and-Recover 2.3 / Git from debian buster/testing 2.3+dfsg-1

  • OS version: Debian stretch

  • System architecture (x86 compatible): amd64 bare metal

  • Are you using BIOS or UEFI: BIOS

  • Issue: if the partitions are not numbered consecutively in ascending order, diskrestore.sh fails

  • Work-around, if any: use fdisk to renumber the partitions consecutively, run grub-install, and reboot before running rear mkrescue (see the sketch after the configuration below)

  • ReaR configuration files ("cat /etc/rear/site.conf" or "cat /etc/rear/local.conf"):

OUTPUT=USB
USB_DEVICE=/dev/disk/by-label/REAR-000

BACKUP=BAREOS
BAREOS_CLIENT=hansa-fd
BAREOS_FILESET=LinuxHansa
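
The reported work-around, spelled out as a sketch (assuming /dev/sda and util-linux fdisk, whose expert menu provides 'f' to fix the partition order; the device name is an example):

fdisk /dev/sda          # on the running system, before rear mkrescue:
                        #   x  enter expert mode
                        #   f  fix (sort) the partition order
                        #   r  return to the main menu
                        #   w  write the table and quit
grub-install /dev/sda   # reinstall the boot loader after the renumbering
reboot                  # reboot, then take a fresh rescue image:
rear mkrescue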

cat rear-recover_df.txt

Filesystem                               Size  Used Avail Use% Mounted on
/dev/sdb3                                 47G   18G   28G  40% /
hubic                                    110G   28G   83G  26% /media/hubic
/dev/mapper/vg_hdd-lxc--coiner--data     150G   46G  103G  31% /mnt/.btrfs/vg_hdd/lxc-coiner-data
/dev/loop2                                87M   87M     0 100% /snap/core/4486
/dev/sdb3                                 47G   18G   28G  40% /mnt/.btrfs/vg_vec/root
/dev/loop5                                87M   87M     0 100% /snap/core/4407
/dev/sda5                                 46G   15G   31G  32% /mnt/.btrfs/vg_hdd/root
/dev/mapper/vg_vec-srv--projects          40G   24G   16G  60% /srv/virt/projects
/dev/mapper/vg_vec-srv--projects          40G   24G   16G  60% /usr/src
/dev/mapper/vg_vec-srv--projects          40G   24G   16G  60% /mnt/.btrfs/vg_vec/srv-projects
/dev/mapper/vg_vec-srv--containers       240G  197G   41G  83% /var/cache/lxc
/dev/mapper/vg_vec-srv--containers       240G  197G   41G  83% /var/lib/docker
/dev/mapper/vg_vec-srv--containers       240G  197G   41G  83% /var/lib/lxc
/dev/mapper/vg_vec-srv--containers       240G  197G   41G  83% /mnt/.btrfs/vg_vec/srv-containers
/dev/mapper/vg_vec-home                   91G   69G   22G  76% /home
/dev/mapper/vg_vec-home                   91G   69G   22G  76% /mnt/.btrfs/vg_vec/home
/dev/mapper/vg_hdd-home                  230G  190G   39G  84% /mnt/.btrfs/vg_hdd/home
/dev/mapper/vg_hdd-home                  230G  190G   39G  84% /home/guenter/HDD
/dev/mapper/vg_hdd-srv--files            450G  285G  164G  64% /mnt/.btrfs/vg_hdd/srv-files
/dev/mapper/vg_hdd-srv--files            450G  285G  164G  64% /srv/media
/dev/mapper/vg_hdd-srv--files            450G  285G  164G  64% /srv/files
/dev/mapper/vg_hdd-srv--files            450G  285G  164G  64% /home/guenter/Videos
/home/guenter/.Private                    91G   69G   22G  76% /home/guenter/Private
/dev/mapper/vg_hdd-srv--containers--bkp  250G  196G   53G  79% /mnt/.btrfs/vg_hdd/srv-containers-bkp
/dev/loop3                               140M  140M     0 100% /snap/atom/148
/dev/loop4                               171M  171M     0 100% /snap/filebot/9
/dev/loop1                                82M   82M     0 100% /snap/keepassxc/37
/dev/loop0                                39M   39M     0 100% /snap/telegram-sergiusens/68
/dev/mapper/vg_vec-srv--containers       240G  197G   41G  83% /var/lib/libvirt
/dev/mapper/vg_hdd-vm--xxx                80G   37G   43G  47% /home/guenter/.aMule
/dev/loop6                               140M  140M     0 100% /snap/atom/150

cat rear-recover.log

...
  WARNING: Failed to connect to lvmetad. Falling back to device scanning.
+++ create_component /dev/sda disk
+++ local device=/dev/sda
+++ local type=disk
+++ local touchfile=disk--dev-sda
+++ '[' -e /tmp/rear.wvMmrPsK0PL1A1R/tmp/touch/disk--dev-sda ']'
+++ return 0
+++ Log 'Stop mdadm'
++++ date '+%Y-%m-%d %H:%M:%S.%N '
+++ local 'timestamp=2018-04-30 17:31:02.869612260 '
+++ test 1 -gt 0
+++ echo '2018-04-30 17:31:02.869612260 Stop mdadm'
2018-04-30 17:31:02.869612260 Stop mdadm
+++ grep -q md /proc/mdstat
+++ Log 'Erasing MBR of disk /dev/sda'
++++ date '+%Y-%m-%d %H:%M:%S.%N '
+++ local 'timestamp=2018-04-30 17:31:02.872839625 '
+++ test 1 -gt 0
+++ echo '2018-04-30 17:31:02.872839625 Erasing MBR of disk /dev/sda'
2018-04-30 17:31:02.872839625 Erasing MBR of disk /dev/sda
+++ dd if=/dev/zero of=/dev/sda bs=512 count=1
1+0 records in
1+0 records out
512 bytes copied, 0.000209927 s, 2.4 MB/s
+++ sync
+++ LogPrint 'Creating partitions for disk /dev/sda (gpt)'
+++ Log 'Creating partitions for disk /dev/sda (gpt)'
++++ date '+%Y-%m-%d %H:%M:%S.%N '
+++ local 'timestamp=2018-04-30 17:31:02.876990817 '
+++ test 1 -gt 0
+++ echo '2018-04-30 17:31:02.876990817 Creating partitions for disk /dev/sda (gpt)'
2018-04-30 17:31:02.876990817 Creating partitions for disk /dev/sda (gpt)
+++ Print 'Creating partitions for disk /dev/sda (gpt)'
+++ test 1
+++ echo -e 'Creating partitions for disk /dev/sda (gpt)'
+++ my_udevsettle
+++ has_binary udevadm
+++ for bin in $@
+++ type udevadm
+++ return 0
+++ udevadm settle
+++ return 0
+++ parted -s /dev/sda mklabel gpt
+++ my_udevsettle
+++ has_binary udevadm
+++ for bin in $@
+++ type udevadm
+++ return 0
+++ udevadm settle
+++ return 0
+++ my_udevsettle
+++ has_binary udevadm
+++ for bin in $@
+++ type udevadm
+++ return 0
+++ udevadm settle
+++ return 0
+++ parted -s /dev/sda mkpart efi 2097152B 208899275B
+++ my_udevsettle
+++ has_binary udevadm
+++ for bin in $@
+++ type udevadm
+++ return 0
+++ udevadm settle
+++ return 0
+++ my_udevsettle
+++ has_binary udevadm
+++ for bin in $@
+++ type udevadm
+++ return 0
+++ udevadm settle
+++ return 0
+++ parted -s /dev/sda set 1 boot on
+++ my_udevsettle
+++ has_binary udevadm
+++ for bin in $@
+++ type udevadm
+++ return 0
+++ udevadm settle
+++ return 0
+++ my_udevsettle
+++ has_binary udevadm
+++ for bin in $@
+++ type udevadm
+++ return 0
+++ udevadm settle
+++ return 0
+++ parted -s /dev/sda set 1 esp on
+++ my_udevsettle
+++ has_binary udevadm
+++ for bin in $@
+++ type udevadm
+++ return 0
+++ udevadm settle
+++ return 0
+++ my_udevsettle
+++ has_binary udevadm
+++ for bin in $@
+++ type udevadm
+++ return 0
+++ udevadm settle
+++ return 0
+++ parted -s /dev/sda name 1 efi
+++ my_udevsettle
+++ has_binary udevadm
+++ for bin in $@
+++ type udevadm
+++ return 0
+++ udevadm settle
+++ return 0
+++ my_udevsettle
+++ has_binary udevadm
+++ for bin in $@
+++ type udevadm
+++ return 0
+++ udevadm settle
+++ return 0
+++ parted -s /dev/sda mkpart swap 208900096B 106033772100B
Warning: The resulting partition is not properly aligned for best performance.
+++ my_udevsettle
+++ has_binary udevadm
+++ for bin in $@
+++ type udevadm
+++ return 0
+++ udevadm settle
+++ return 0
+++ my_udevsettle
+++ has_binary udevadm
+++ for bin in $@
+++ type udevadm
+++ return 0
+++ udevadm settle
+++ return 0
+++ parted -s /dev/sda name 3 swap
Error: Partition doesn't exist.
2018-04-30 17:31:03.357061256 UserInput: called in /usr/share/rear/layout/recreate/default/200_run_layout_code.sh line 80
2018-04-30 17:31:03.360305387 UserInput: Default input in choices - using choice number 1 as default input
2018-04-30 17:31:03.361585976 The disk layout recreation script failed
2018-04-30 17:31:03.364282943 1) Rerun disk recreation script (/var/lib/rear/layout/diskrestore.sh)
2018-04-30 17:31:03.367017468 2) View 'rear recover' log file (/var/log/rear/rear-hansa.log)
2018-04-30 17:31:03.369763490 3) Edit disk recreation script (/var/lib/rear/layout/diskrestore.sh)
2018-04-30 17:31:03.372419560 4) View original disk space usage (/var/lib/rear/layout/config/df.txt)
2018-04-30 17:31:03.375079236 5) Use Relax-and-Recover shell and return back to here
2018-04-30 17:31:03.377906690 6) Abort 'rear recover'
2018-04-30 17:31:03.380748401 (default '1' timeout 300 seconds)
2018-04-30 17:31:19.352617106 UserInput: 'read' got as user input '5'
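
The trace makes the root cause visible: disklayout.conf recorded the swap partition as number 3, but parted on a GPT label assigns a new partition the next free number, so the second mkpart creates partition 2 and the scripted "name 3 swap" has nothing to rename. A minimal reproduction on a scratch loop device (image path and sizes are made up for illustration; run as root):

truncate -s 1G /tmp/scratch.img
dev=$(losetup --show -f /tmp/scratch.img)
parted -s "$dev" mklabel gpt
parted -s "$dev" mkpart efi 2MiB 200MiB      # becomes partition 1
parted -s "$dev" mkpart swap 200MiB 500MiB   # becomes partition 2, not 3
parted -s "$dev" name 3 swap                 # Error: Partition doesn't exist.
losetup -d "$dev"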

Attachments:
rear-hansa_debug.log
rear-recover_df.txt
rear-recover.log

jsmeix commented at 2018-05-03 13:44:

@chymian
without an actual error message I can only guess, but this one looks the same as
https://github.com/rear/rear/issues/1681
which we will not fix for the upcoming ReaR 2.4 release
(currently it is postponed to the ReaR 2.5 release)
unless you provide us a GitHub pull request that fixes it
within the currently planned time until the ReaR 2.4 release,
cf. https://github.com/rear/rear/issues/1790#issue-319663141

jsmeix commented at 2018-05-03 14:13:

@chymian
I don't know what exactly
Relax-and-Recover 2.3 / Git from debian buster/testing 2.3+dfsg-1
is, but if that is recent ReaR upstream GitHub master code,
there is another - perhaps more usable - workaround:

Before running rear -D recover you may

# export MIGRATION_MODE=yes

to enforce MIGRATION_MODE during 'rear -D recover',
because in MIGRATION_MODE ReaR does not "just proceed"
but lets you edit disklayout.conf and diskrestore.sh, which is
perhaps easier than your current manual commands.
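
Made concrete, the suggested session inside the rescue system looks like this (the disklayout.conf edit is a hypothetical illustration of what one would change for this particular issue):

export MIGRATION_MODE=yes
rear -D recover
# When ReaR stops and offers to edit /var/lib/rear/layout/disklayout.conf,
# renumber the partition devices consecutively, e.g. change the swap
# entry's /dev/sda3 to /dev/sda2 (hypothetical values), then let ReaR
# regenerate diskrestore.sh and continue.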

jsmeix commented at 2018-05-03 14:18:

An addendum:
If you switch in MIGRATION_MODE
from a non-consecutive partitioning scheme to a consecutive one
(e.g. from /dev/sda1 /dev/sda3 /dev/sda4 to /dev/sda1 /dev/sda2 /dev/sda3),
you will likely get a wrong/outdated etc/fstab restored during the backup restore,
which you need to adapt to the new consecutive partitioning scheme
after the backup has been restored.
With current ReaR GitHub master code you can do that,
provided you are in MIGRATION_MODE, cf.
https://github.com/rear/rear/pull/1758
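
A minimal sketch of that adaptation from the rescue system, assuming the example renumbering above and ReaR's usual restore mount point /mnt/local:

# Order matters: free up /dev/sda3 before reusing it as the new name of /dev/sda4.
sed -i -e 's|/dev/sda3|/dev/sda2|g' -e 's|/dev/sda4|/dev/sda3|g' /mnt/local/etc/fstab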

jsmeix commented at 2018-05-04 07:48:

According to
https://github.com/rear/rear/issues/1795#issuecomment-386527903
what is called
Relax-and-Recover 2.3 / Git from debian buster 2.3+dfsg-1
is not recent ReaR upstream GitHub master code
(but probably the 2.3 release plus whatever GitHub stuff).

I really recommend working with
our current ReaR upstream GitHub master code,
because that is the only place where we at ReaR upstream fix bugs.

jsmeix commented at 2019-10-23 15:11:

With https://github.com/rear/rear/pull/2081 merged
I think this issue is fixed.
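
For reference, the failure shown in the log can be avoided by not hard-coding the partition number at all. A sketch of that idea (only an illustration; it is not the actual change merged in https://github.com/rear/rear/pull/2081):

# Ask parted which number the freshly created partition actually received;
# the machine-readable print output lists partitions in ascending number order.
num=$(parted -sm /dev/sda print | tail -n1 | cut -d: -f1)
parted -s /dev/sda name "$num" swap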


[Export of GitHub issue for rear/rear.]