#2424 Issue closed: Missing disk entry in disklayout.conf leads to GPT disks being initialized as msdos
Labels: support / question, fixed / solved / done
abbbi opened issue at 2020-06-10 17:14:
ReaR 2.5 running on SLES12 SP4
Today I came across the following situation, which I would like to discuss. The recovery worked with workarounds, but I am quite interested in whether this is a bug or a misconfiguration.
- SLES12 SP4, the operating system uses multipath SAN-attached disks (without LVM)
- Some LVM volumes exist, but only one volume group and the local disks should be recovered.
The following configuration was used:
AUTOEXCLUDE_MULTIPATH=n
BOOT_OVER_SAN=y
ONLY_INCLUDE_VG=( 'included_vg' )
REAR_INITRD_COMPRESSION="lzma"
The OS setup consists of one disk (/dev/mapper/OS) with two partitions: one is the root volume (btrfs), the other is the EFI partition.
During mklayout the log shows that this disk's filesystems are part of the recovery:
2020-06-09 19:00:48.558006452 Saving filesystem layout (using the findmnt command).
2020-06-09 19:00:48.578050243 Processing filesystem 'vfat' on '/dev/mapper/OS-part1' mounted at '/boot/efi'
2020-06-09 19:00:48.639323912 Processing filesystem 'btrfs' on '/dev/mapper/OS-part2' mounted at '/'
The resulting disklayout.conf, however, contains only "part" entries for that disk and no "disk" entry, just a "multipath" entry, which carries no information about the partition table type to be used for initialization:
multipath /dev/mapper/OS 107374182400 /dev/sda,/dev/sdam,/dev/sdbf,/dev/sdt
part /dev/mapper/OS 251658240 16777216 primary boot /dev/mapper/OS-part1
part /dev/mapper/OS 107105730048 268435456 primary none /dev/mapper/OS-part2
This leads to a situation where the disk layout restore always initializes the disk with msdos as the default partition table! Since there is no disk entry in disklayout.conf, the code in 100_include_partition_code.sh always falls back to msdos for primary partitions:
while read part disk size pstart name junk ; do
    names=( "${names[@]}" $name )
    case $name in
        (primary|extended|logical)
            if [[ -z "$label" ]] ; then
                Log "Disk label for $device detected as msdos."
                label="msdos"
            fi
            ;;
    esac
done < <( grep "^part $device " "$LAYOUT_FILE" )
Now, this becomes a problem if GPT is really required, for example when Secure Boot is used. The recovered system simply does not boot, as GRUB is unable to access the EFI directory because the OS disk ends up as MBR instead of GPT.
Now my question is: why is the disk entry missing from disklayout.conf (due to multipath?), or is the existing code during recovery at fault here?
The workaround was to add the following line at the top of the disklayout.conf:
disk /dev/mapper/OS 107374182400 gpt
before running "rear recover", which then worked flawlessly.
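For illustration, with that workaround in place the relevant disklayout.conf entries read as follows (values copied from the excerpts above; only the disk line is new):

disk /dev/mapper/OS 107374182400 gpt
multipath /dev/mapper/OS 107374182400 /dev/sda,/dev/sdam,/dev/sdbf,/dev/sdt
part /dev/mapper/OS 251658240 16777216 primary boot /dev/mapper/OS-part1
part /dev/mapper/OS 107105730048 268435456 primary none /dev/mapper/OS-part2

With an explicit "gpt" label available for the device, the msdos fallback in the 100_include_partition_code.sh excerpt quoted above is not triggered.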
jsmeix commented at 2020-06-15 08:02:
As far as I see from the description this issue here is a duplicate of
https://github.com/rear/rear/issues/2234
that is fixed in current ReaR GitHub master code via
https://github.com/rear/rear/pull/2235
Additionally there was
https://github.com/rear/rear/issues/2236
that is fixed in current ReaR GitHub master code via
https://github.com/rear/rear/pull/2237
@abbbi
please verify that things also work for you
with current ReaR GitHub master code
as described in the section
"Testing current ReaR upstream GitHub master code" in
https://en.opensuse.org/SDB:Disaster_Recovery
We need your verification rather soon because
we will release the next ReaR 2.6 soon, cf.
https://github.com/rear/rear/issues/2368
and I would like to be sure that the above fixes
(which worked for us at SUSE and for our customer)
also work in your particular case.
abbbi commented at 2020-06-15 08:50:
@jsmeix
Thanks for your quick reply! I will try to test things this week if I get access to the systems where I spotted this issue, and will report back!
abbbi commented at 2020-06-15 17:57:
@jsmeix
I may not be able to test on the real system as it requires downtime to be scheduled. However, I patched version 2.5 with both of the commits you mentioned and tested with a QEMU VM, simulating multipath by attaching multiple disks with the same UUID (a rough sketch of such a setup follows at the end of this comment), and it worked flawlessly!
So I think this issue is fixed with the commits you mentioned.
The patchset also addresses the following issue, which I was not able to reproduce using 2.5 with the patchset applied:
https://github.com/rear/rear/issues/2319
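For reference, a rough sketch of how such a multipath test VM can be set up with QEMU. The image path, serial and WWN values here are made up for illustration and are not the exact commands I used:

# Create one backing image and expose it twice on a virtio-scsi bus with the
# same serial/WWN, so multipathd inside the guest sees two paths to one LUN.
qemu-img create -f qcow2 /var/tmp/mpath-test.qcow2 20G

qemu-system-x86_64 -enable-kvm -m 4096 \
  -device virtio-scsi-pci,id=scsi0 \
  -drive file=/var/tmp/mpath-test.qcow2,if=none,id=path1,file.locking=off \
  -drive file=/var/tmp/mpath-test.qcow2,if=none,id=path2,file.locking=off \
  -device scsi-hd,bus=scsi0.0,drive=path1,serial=MPATHTEST,wwn=0x5000c50015ea71ac,share-rw=on \
  -device scsi-hd,bus=scsi0.0,drive=path2,serial=MPATHTEST,wwn=0x5000c50015ea71ac,share-rw=on
  # remaining options (boot disk, network, installation media) omitted

Inside the guest, "multipath -ll" should then show a single map with two paths, which is enough to exercise the multipath handling in ReaR.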
jsmeix commented at 2020-06-16 07:49:
@abbbi
thank you so much for your inventive extempore test
and your prompt feedback!
[Export of Github issue for rear/rear.]