#2428 Issue closed: MIGRATION_MODE: Autodetect when required disk mappings are missing
Labels: enhancement, minor bug, no-issue-activity
gozora opened issue at 2020-06-16 21:20:
- ReaR version ("/usr/sbin/rear -V"): https://github.com/rear/rear/commit/fb23c5d711af9ee505a9b03ea7324a098f90891d
- OS version ("cat /etc/os-release" or "lsb_release -a" or "cat /etc/rear/os.conf"): CentOS 7
- ReaR configuration files ("cat /etc/rear/site.conf" and/or "cat /etc/rear/local.conf"):
BACKUP=NETFS
BACKUP_OPTIONS="nfsvers=3,nolock"
OUTPUT=ISO
BACKUP_URL=nfs://backup/mnt/rear
OUTPUT_URL=nfs://backup/mnt/rear/iso
SSH_FILES="yes"
SSH_UNPROTECTED_PRIVATE_KEYS="yes"
PROGS+=( /usr/libexec/openssh/sftp-server )
COPY_AS_IS+=( /usr/libexec/openssh/sftp-server )
USE_RESOLV_CONF="no"
USE_DHCLIENT="no"
NETWORKING_PREPARATION_COMMANDS=( 'ip addr add 192.168.56.200/24 dev enp0s8' 'ip link set dev enp0s8 up' 'return' )
EXCLUDE_RECREATE+=( /dev/mapper/data )
BOOT_OVER_SAN="yes"
AUTOEXCLUDE_MULTIPATH="no"
- Hardware (PC or PowerNV BareMetal or ARM) or virtual machine (KVM guest or PowerVM LPAR): VirtualBox
- System architecture (x86 compatible or PPC64/PPC64LE or what exact ARM device): x86_64
- Firmware (BIOS or UEFI or Open Firmware) and bootloader (GRUB or ELILO or Petitboot): UEFI
- Storage (local disk or SSD) and/or SAN (FC or iSCSI or FCoE) and/or multipath (DM or NVMe): local disk
- Storage layout ("lsblk -ipo NAME,KNAME,PKNAME,TRAN,TYPE,FSTYPE,SIZE,MOUNTPOINT" or "lsblk" as makeshift):
NAME KNAME PKNAME TRAN TYPE FSTYPE SIZE MOUNTPOINT
/dev/sda /dev/sda sata disk 8G
|-/dev/sda1 /dev/sda1 /dev/sda part vfat 200M /boot/efi
|-/dev/sda2 /dev/sda2 /dev/sda part xfs 1G /boot
`-/dev/sda3 /dev/sda3 /dev/sda part LVM2_member 6.8G
|-/dev/mapper/centos-root /dev/dm-0 /dev/sda3 lvm xfs 6G /
`-/dev/mapper/centos-swap /dev/dm-1 /dev/sda3 lvm swap 820M [SWAP]
/dev/sdb /dev/sdb sata disk mpath_member 8G
`-/dev/mapper/disk_2 /dev/dm-3 /dev/sdb mpath linux_raid_member 8G
`-/dev/md0 /dev/md0 /dev/dm-3 raid1 xfs 8G /data
/dev/sdc /dev/sdc sata disk mpath_member 8G
`-/dev/mapper/disk_1 /dev/dm-2 /dev/sdc mpath linux_raid_member 8G
`-/dev/md0 /dev/md0 /dev/dm-2 raid1 xfs 8G /data
- Description of the issue (ideally so that others can reproduce it):
When I try to recover to a slightly different VM (different disk sizes), right after confirming the following disk mapping:
Current disk mapping table (source => target):
/dev/sda => /dev/sdc
/dev/mapper/disk_2 => /dev/mapper/mpatha
/dev/mapper/disk_1 => /dev/mapper/mpathb
I got the following error:
Failed to apply layout mappings to /var/lib/rear/layout/disklayout.conf for /dev/sdc (probably no mapping for /dev/sdc in /var/lib/rear/layout/disk_mappings)
Failed to apply disk layout mappings to /var/lib/rear/layout/disklayout.conf
Applied disk layout mappings to /var/lib/rear/layout/config/df.txt
Applied disk layout mappings to /etc/rear/rescue.conf
ERROR: Failed to apply disk layout mappings
Some latest log messages since the last called script 320_apply_mappings.sh:
2020-06-16 22:53:36.110912048 Including layout/prepare/default/320_apply_mappings.sh
2020-06-16 22:53:36.111942350 Entering debugscript mode via 'set -x'.
2020-06-16 22:53:36.164819410 Failed to apply layout mappings to /var/lib/rear/layout/disklayout.conf for /dev/sdc (probably no mapping for /dev/sdc in /var/lib/rear/layout/disk_mappings)
2020-06-16 22:53:36.170025168 Failed to apply disk layout mappings to /var/lib/rear/layout/disklayout.conf
2020-06-16 22:53:36.224116028 Applied disk layout mappings to /var/lib/rear/layout/config/df.txt
2020-06-16 22:53:36.281945958 Applied disk layout mappings to /etc/rear/rescue.conf
Aborting due to an error, check /var/log/rear/rear-centos7.log for details
Exiting rear recover (PID 503) and its descendant processes ...
Running exit tasks
You should also rm -Rf /tmp/rear.lFXgLlXD5BQgp7w
The problem is probably with the multipath slaves (sdb, sdc) in disklayout.conf:
multipath /dev/mapper/disk_2 8589934592 unknown /dev/sdb
multipath /dev/mapper/disk_1 8589934592 unknown /dev/sdc
These slaves are not listed in disk_mappings (hence they are not considered full-featured disks), but ReaR still tries to replace them with apply_layout_mappings(), which results in a disklayout.conf like this:
multipath /dev/mapper/mpatha 8589934592 unknown /dev/sdb
multipath /dev/mapper/mpathb 8589934592 unknown _REAR1_
and the above error.
- Workaround, if any:
Remove the multipath slaves from disklayout.conf before starting rear recover,
e.g.
multipath /dev/mapper/mpatha 8589934592 unknown
multipath /dev/mapper/mpathb 8589934592 unknown
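For instance, a minimal sketch of such an edit (assuming the 'multipath <device> <size> <label type> <slaves>' line format shown above; keeping a backup copy is just a precaution):
cp /var/lib/rear/layout/disklayout.conf /var/lib/rear/layout/disklayout.conf.bak
# strip the trailing slave device field from every 'multipath' line
sed -i -E 's#^(multipath +[^ ]+ +[0-9]+ +[^ ]+) +/dev/.*$#\1#' /var/lib/rear/layout/disklayout.conf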
- Attachments, as applicable ("rear -D mkrescue/mkbackup/recover"
debug log files):
rear-centos7.log
jsmeix commented at 2020-06-17 06:35:
This is a result of how the current apply_layout_mappings function in usr/share/rear/lib/layout-functions.sh works. In particular, the behaviour that leads to this issue here is described in the comments of the apply_layout_mappings function (excerpts):
# Step 0:
# For each original device in the mapping file generate a unique word (the "replacement").
# E.g. when the mapping file content is
# /dev/sda /dev/sdb
# /dev/sdb /dev/sda
# /dev/sdd /dev/sdc
# the replacement file will contain
# /dev/sda _REAR0_
# /dev/sdb _REAR1_
# /dev/sdd _REAR2_
# /dev/sdc _REAR3_
...
# Step 1:
# Replace all original devices with their replacements.
# E.g. when the file_to_migrate content is
# disk /dev/sda
# disk /dev/sdb
# disk /dev/sdc
# disk /dev/sdd
# it will get temporarily replaced (with the replacement file content in step 0 above) by
# disk _REAR0_
# disk _REAR1_
# disk _REAR3_
# disk _REAR2_
...
# Step 2:
# Replace all unique replacement words with the matching target device of the source device in the mapping file.
# E.g. when the file_to_migrate content was in step 1 above temporarily changed to
# disk _REAR0_
# disk _REAR1_
# disk _REAR3_
# disk _REAR2_
# it will now get finally replaced (with the replacement file and mapping file contents in step 0 above) by
# disk /dev/sdb
# disk /dev/sda
# disk _REAR3_
# disk /dev/sdc
# where the temporary replacement "disk _REAR3_" from step 1 above is left because
# there is (erroneously) no mapping for /dev/sdc (as source device) in the mapping file (in step 0 above).
...
# Step 3:
# Verify that there are none of those temporary replacement words from step 1 left in file_to_migrate
# to ensure the replacement was done correctly and completely (cf. the above example where '_REAR3_' is left).
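A heavily simplified bash sketch of that placeholder scheme (for illustration only; the real apply_layout_mappings function is more careful, e.g. about word boundaries such as /dev/sda versus /dev/sda1):
# Step 0: one unique placeholder word per device in the mapping file
i=0
while read source target ; do
    for dev in "$source" "$target" ; do
        grep -q "^$dev " replacement_file 2>/dev/null && continue
        echo "$dev _REAR${i}_" >> replacement_file
        i=$((i+1))
    done
done < disk_mappings
# Step 1: replace every original device with its placeholder word
while read dev word ; do
    sed -i "s|$dev|$word|g" file_to_migrate
done < replacement_file
# Step 2: replace the placeholder of each mapping SOURCE with its target;
# placeholders of devices that are only mapping targets are left behind
while read source target ; do
    word=$(awk -v d="$source" '$1 == d { print $2 }' replacement_file)
    sed -i "s|$word|$target|g" file_to_migrate
done < disk_mappings
# Step 3: a leftover _REAR<N>_ word means the mapping was incomplete
grep -q '_REAR[0-9]*_' file_to_migrate && echo "incomplete mapping" >&2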
So the root cause of this issue here is in the
disk mapping table (source => target)
/dev/sda => /dev/sdc
/dev/mapper/disk_2 => /dev/mapper/mpatha
/dev/mapper/disk_1 => /dev/mapper/mpathb
where all counterpart mappings are missing, i.e.
/dev/sda is mapped to /dev/sdc
but there is no counterpart mapping for /dev/sdc
/dev/mapper/disk_2 is mapped to /dev/mapper/mpatha
but there is no counterpart mapping for /dev/mapper/mpatha
/dev/mapper/disk_1 is mapped to /dev/mapper/mpathb
but there is no counterpart mapping for /dev/mapper/mpathb
Simply put:
The current disk mapping code only works
when all mapping targets are also specified as a mapping source.
The mapping file is created by
usr/share/rear/layout/prepare/default/300_map_disks.sh
so I think there could be an issue therein when it creates
a mapping file with missing counterpart mappings.
According to the comments in
usr/share/rear/layout/prepare/default/300_map_disks.sh
I think it should create a mapping file with counterpart mappings,
but from what I see in your
https://github.com/rear/rear/files/4789134/rear-centos7.log
it seems it creates an incomplete mapping,
because it seems you got only these user dialogs (excerpts):
Using user provided mapping file disk_mappings
Using /dev/sdc (same size) for recreating /dev/sda
Original disk /dev/mapper/disk_2 does not exist (with same size) in the target system
/dev/sdc excluded from device mapping choices (is already used as mapping target)
sr0 excluded from device mapping choices (is a removable device)
UserInput: called in /usr/share/rear/layout/prepare/default/300_map_disks.sh line 238
UserInput: Default input not in choices
UserInput -I LAYOUT_MIGRATION_REPLACEMENT_DISK2 needed in /usr/share/rear/layout/prepare/default/300_map_disks.sh line 238
Choose an appropriate replacement for /dev/mapper/disk_2
1) /dev/mapper/mpatha
2) /dev/mapper/mpathb
3) /dev/sda
4) /dev/sdb
5) Do not map /dev/mapper/disk_2
6) Use Relax-and-Recover shell and return back to here
(default '1' timeout 300 seconds)
UserInput: 'read' got as user input '1'
UserInput: Valid choice number result '/dev/mapper/mpatha'
Using /dev/mapper/mpatha (chosen by user) for recreating /dev/mapper/disk_2
...
Original disk /dev/mapper/disk_1 does not exist (with same size) in the target system
/dev/mapper/mpatha excluded from device mapping choices (is already used as mapping target)
/dev/sdc excluded from device mapping choices (is already used as mapping target)
sr0 excluded from device mapping choices (is a removable device)
UserInput: called in /usr/share/rear/layout/prepare/default/300_map_disks.sh line 238
UserInput: Default input not in choices
UserInput -I LAYOUT_MIGRATION_REPLACEMENT_DISK1 needed in /usr/share/rear/layout/prepare/default/300_map_disks.sh line 238
Choose an appropriate replacement for /dev/mapper/disk_1
1) /dev/mapper/mpathb
2) /dev/sda
3) /dev/sdb
4) Do not map /dev/mapper/disk_1
5) Use Relax-and-Recover shell and return back to here
(default '1' timeout 300 seconds)
UserInput: 'read' got as user input '1'
UserInput: Valid choice number result '/dev/mapper/mpathb'
Using /dev/mapper/mpathb (chosen by user) for recreating /dev/mapper/disk_1
gozora commented at 2020-06-17 07:09:
@jsmeix thanks for your input, I was basically just guessing what is going on because I did not work with this part of the ReaR code before.
In general I think that missing multipath slaves in disk_mappings is OK, because mapping sometimes hundreds of disks could be really cumbersome and annoying. IMHO it would be enough to avoid apply_layout_mappings() replacing multipath slaves in disklayout.conf.
V.
gozora commented at 2020-06-17 07:12:
I've found some code that uses multipath slave entries in disklayout.conf when running backup, but didn't find any code that would need multipath slaves during restore. So maybe another approach to modifying apply_layout_mappings() could be to remove slave entries entirely when restoring...
V.
jsmeix commented at 2020-06-17 07:58:
No, the apply_layout_mappings function must not make decisions whether or not entries in the disk mapping file are valid or needed.
In contrast, whatever creates the disk mapping file is the right place to decide what the valid and needed mappings are and to create a valid disk mapping file for the needed mappings.
I know basically nothing about multipath (I never used it myself).
I assume to access a single unique disk that is connected via
multipath
one must only use the one single unique high level device node
that matches the single unique disk
but one must never use one of the several lower level device nodes
that match the several hardware paths to the single unique disk.
In your case
/dev/sdb /dev/sdb sata disk mpath_member 8G
`-/dev/mapper/disk_2 /dev/dm-3 /dev/sdb mpath linux_raid_member 8G
`-/dev/md0 /dev/md0 /dev/dm-3 raid1 xfs 8G /data
/dev/sdc /dev/sdc sata disk mpath_member 8G
`-/dev/mapper/disk_1 /dev/dm-2 /dev/sdc mpath linux_raid_member 8G
`-/dev/md0 /dev/md0 /dev/dm-2 raid1 xfs 8G /data
I assume this shows a single unique disk that is connected via
multipath
where that one disk appears via two paths as /dev/sdb and /dev/sdc
and the one single unique high level device node for that disk
is /dev/md0
which is the only device node that should be used
e.g. by parted to create partitions on that disk
but the various lower level device nodes that match the two paths
to the single unique disk must not be used to access that disk.
This would match that you "didn't find any code that would need multipath slaves during restore".
If my above assumptions are right I would think there should be
not any mapping of any of the several lower level device nodes
that match the several hardware paths to the single unique disk.
Accordingly I would think that in your example
there should be no mapping that contains any of
/dev/mapper/disk_1
/dev/mapper/disk_2
/dev/mapper/mpatha
/dev/mapper/mpathb
as mapping source or as mapping target.
What puzzles me is your mapping
/dev/sda => /dev/sdc
because I would think /dev/sda is a normal single-path disk
while /dev/sdb and /dev/sdc are one and the same disk via two paths,
so a mapping of /dev/sda to /dev/sdb or /dev/sdc looks wrong.
Or what do I misunderstand here?
@gozora
could you describe the disks on your original system versus those on your replacement system in more detail, so that I can better imagine what a mapping could look like?
gozora commented at 2020-06-17 08:35:
Hello @jsmeix,
I assume to access a single unique disk that is connected via multipath
one must only use the one single unique high level device node
that matches the single unique disk
but one must never use one of the several lower level device nodes
that match the several hardware paths to the single unique disk.
This assumption is right!
The other one is not ;-). It is partly my fault, because the setup I've created is quite contrived, to simulate multipath in a very simple way. Normally multipath consists of several slaves. To illustrate, I'll show you how multipath looks on my other server which is running a cluster. Please note that the following output is not related to this issue and serves just as a demonstration of multipath output:
node1:~ # multipath -l
site_A_3 (360000000000000000e00e6b900000003) dm-4 IET,VIRTUAL-DISK
size=512M features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
|- 7:0:0:3 sdi 8:128 active undef running
|- 9:0:0:3 sdm 8:192 active undef running
`- 8:0:0:3 sdp 8:240 active undef running
site_B_3 (360000000000000000e00e5a500000003) dm-9 IET,VIRTUAL-DISK
size=512M features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
|- 13:0:0:3 sdw 65:96 active undef running
|- 15:0:0:3 sdab 65:176 active undef running
`- 14:0:0:3 sdae 65:224 active undef running
site_A_2 (360000000000000000e00e6b900000002) dm-3 IET,VIRTUAL-DISK
size=512M features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
|- 7:0:0:2 sdh 8:112 active undef running
|- 9:0:0:2 sdl 8:176 active undef running
`- 8:0:0:2 sdo 8:224 active undef running
site_B_2 (360000000000000000e00e5a500000002) dm-8 IET,VIRTUAL-DISK
size=512M features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
|- 13:0:0:2 sdv 65:80 active undef running
|- 15:0:0:2 sdaa 65:160 active undef running
`- 14:0:0:2 sdad 65:208 active undef running
site_A_1 (360000000000000000e00e6b900000001) dm-2 IET,VIRTUAL-DISK
size=5.0G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
|- 7:0:0:1 sdf 8:80 active undef running
|- 9:0:0:1 sdk 8:160 active undef running
`- 8:0:0:1 sdn 8:208 active undef running
site_B_1 (360000000000000000e00e5a500000001) dm-7 IET,VIRTUAL-DISK
size=5.0G features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
|- 13:0:0:1 sdu 65:64 active undef running
|- 15:0:0:1 sdz 65:144 active undef running
`- 14:0:0:1 sdac 65:192 active undef running
iscsi1_lun2 (360000000000000000e00e6b900000005) dm-0 IET,VIRTUAL-DISK
size=256M features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
|- 5:0:0:2 sdc 8:32 active undef running
|- 4:0:0:2 sde 8:64 active undef running
`- 6:0:0:2 sdj 8:144 active undef running
iscsi1_lun1 (360000000000000000e00e6b900000004) dm-1 IET,VIRTUAL-DISK
size=512M features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
|- 4:0:0:1 sdd 8:48 active undef running
|- 5:0:0:1 sdb 8:16 active undef running
`- 6:0:0:1 sdg 8:96 active undef running
iscsi2_lun2 (360000000000000000e00e5a500000005) dm-6 IET,VIRTUAL-DISK
size=256M features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
|- 11:0:0:2 sdr 65:16 active undef running
|- 10:0:0:2 sdt 65:48 active undef running
`- 12:0:0:2 sdy 65:128 active undef running
iscsi2_lun1 (360000000000000000e00e5a500000004) dm-5 IET,VIRTUAL-DISK
size=512M features='1 queue_if_no_path' hwhandler='0' wp=rw
`-+- policy='round-robin 0' prio=0 status=active
|- 11:0:0:1 sdq 65:0 active undef running
|- 10:0:0:1 sds 65:32 active undef running
`- 12:0:0:1 sdx 65:112 active undef running
Here you can see that we have one high-level device (e.g. site_A_3) with 3 slaves (sdi, sdm, sdp).
The corresponding lsblk output looks something like this:
node1:~ # lsblk -ipo NAME,KNAME,PKNAME,TRAN,TYPE,FSTYPE,SIZE,MOUNTPOINT
NAME KNAME PKNAME TRAN TYPE FSTYPE SIZE MOUNTPOINT
/dev/sda /dev/sda sata disk 50G
|-/dev/sda1 /dev/sda1 /dev/sda part swap 995M [SWAP]
`-/dev/sda2 /dev/sda2 /dev/sda part btrfs 49G /
/dev/sdb /dev/sdb iscsi disk linux_raid_member 512M
`-/dev/mapper/iscsi1_lun1 /dev/dm-1 /dev/sdb mpath linux_raid_member 512M
/dev/sdc /dev/sdc iscsi disk linux_raid_member 256M
`-/dev/mapper/iscsi1_lun2 /dev/dm-0 /dev/sdc mpath linux_raid_member 256M
/dev/sdd /dev/sdd iscsi disk linux_raid_member 512M
`-/dev/mapper/iscsi1_lun1 /dev/dm-1 /dev/sdd mpath linux_raid_member 512M
/dev/sde /dev/sde iscsi disk linux_raid_member 256M
`-/dev/mapper/iscsi1_lun2 /dev/dm-0 /dev/sde mpath linux_raid_member 256M
/dev/sdf /dev/sdf iscsi disk linux_raid_member 5G
`-/dev/mapper/site_A_1 /dev/dm-2 /dev/sdf mpath linux_raid_member 5G
/dev/sdg /dev/sdg iscsi disk linux_raid_member 512M
`-/dev/mapper/iscsi1_lun1 /dev/dm-1 /dev/sdg mpath linux_raid_member 512M
/dev/sdh /dev/sdh iscsi disk linux_raid_member 512M
`-/dev/mapper/site_A_2 /dev/dm-3 /dev/sdh mpath linux_raid_member 512M
/dev/sdi /dev/sdi iscsi disk linux_raid_member 512M
`-/dev/mapper/site_A_3 /dev/dm-4 /dev/sdi mpath linux_raid_member 512M
/dev/sdj /dev/sdj iscsi disk linux_raid_member 256M
`-/dev/mapper/iscsi1_lun2 /dev/dm-0 /dev/sdj mpath linux_raid_member 256M
/dev/sdk /dev/sdk iscsi disk linux_raid_member 5G
`-/dev/mapper/site_A_1 /dev/dm-2 /dev/sdk mpath linux_raid_member 5G
/dev/sdl /dev/sdl iscsi disk linux_raid_member 512M
`-/dev/mapper/site_A_2 /dev/dm-3 /dev/sdl mpath linux_raid_member 512M
/dev/sdm /dev/sdm iscsi disk linux_raid_member 512M
`-/dev/mapper/site_A_3 /dev/dm-4 /dev/sdm mpath linux_raid_member 512M
/dev/sdn /dev/sdn iscsi disk linux_raid_member 5G
`-/dev/mapper/site_A_1 /dev/dm-2 /dev/sdn mpath linux_raid_member 5G
/dev/sdo /dev/sdo iscsi disk linux_raid_member 512M
`-/dev/mapper/site_A_2 /dev/dm-3 /dev/sdo mpath linux_raid_member 512M
/dev/sdp /dev/sdp iscsi disk linux_raid_member 512M
`-/dev/mapper/site_A_3 /dev/dm-4 /dev/sdp mpath linux_raid_member 512M
/dev/sr0 /dev/sr0 ata rom 1024M
/dev/sdq /dev/sdq iscsi disk linux_raid_member 512M
`-/dev/mapper/iscsi2_lun1 /dev/dm-5 /dev/sdq mpath linux_raid_member 512M
/dev/sdr /dev/sdr iscsi disk linux_raid_member 256M
`-/dev/mapper/iscsi2_lun2 /dev/dm-6 /dev/sdr mpath linux_raid_member 256M
/dev/sds /dev/sds iscsi disk linux_raid_member 512M
`-/dev/mapper/iscsi2_lun1 /dev/dm-5 /dev/sds mpath linux_raid_member 512M
/dev/sdt /dev/sdt iscsi disk linux_raid_member 256M
`-/dev/mapper/iscsi2_lun2 /dev/dm-6 /dev/sdt mpath linux_raid_member 256M
/dev/sdu /dev/sdu iscsi disk linux_raid_member 5G
`-/dev/mapper/site_B_1 /dev/dm-7 /dev/sdu mpath linux_raid_member 5G
/dev/sdv /dev/sdv iscsi disk linux_raid_member 512M
`-/dev/mapper/site_B_2 /dev/dm-8 /dev/sdv mpath linux_raid_member 512M
/dev/sdw /dev/sdw iscsi disk linux_raid_member 512M
`-/dev/mapper/site_B_3 /dev/dm-9 /dev/sdw mpath linux_raid_member 512M
/dev/sdx /dev/sdx iscsi disk linux_raid_member 512M
`-/dev/mapper/iscsi2_lun1 /dev/dm-5 /dev/sdx mpath linux_raid_member 512M
/dev/sdy /dev/sdy iscsi disk linux_raid_member 256M
`-/dev/mapper/iscsi2_lun2 /dev/dm-6 /dev/sdy mpath linux_raid_member 256M
/dev/sdz /dev/sdz iscsi disk linux_raid_member 5G
`-/dev/mapper/site_B_1 /dev/dm-7 /dev/sdz mpath linux_raid_member 5G
/dev/sdaa /dev/sdaa iscsi disk linux_raid_member 512M
`-/dev/mapper/site_B_2 /dev/dm-8 /dev/sdaa mpath linux_raid_member 512M
/dev/sdab /dev/sdab iscsi disk linux_raid_member 512M
`-/dev/mapper/site_B_3 /dev/dm-9 /dev/sdab mpath linux_raid_member 512M
/dev/sdac /dev/sdac iscsi disk linux_raid_member 5G
`-/dev/mapper/site_B_1 /dev/dm-7 /dev/sdac mpath linux_raid_member 5G
/dev/sdad /dev/sdad iscsi disk linux_raid_member 512M
`-/dev/mapper/site_B_2 /dev/dm-8 /dev/sdad mpath linux_raid_member 512M
/dev/sdae /dev/sdae iscsi disk linux_raid_member 512M
`-/dev/mapper/site_B_3 /dev/dm-9 /dev/sdae mpath linux_raid_member 512M
In general this setup has 3 layers:
- /dev/sd* (single block device)
- /dev/mapper (multipath device with /dev/sd* as slaves)
- /dev/md (software RAID with /dev/mapper/site* as mirror sites)
This shows how a multipath setup might look in reality :-)
But now back to my testing CentOS 7 ...
My setup is very similar to the one described in the demonstration before, with one small exception: the multipath devices have only one path. (In reality such a setup doesn't make any sense, but this is only for testing purposes.)
Source server
[root@centos7 ~]# multipath -l
disk_2 (VBOX_HARDDISK_VB2127148a-83767d0d) dm-2 ATA ,VBOX HARDDISK
size=8.0G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=0 status=active
`- 3:0:0:0 sdb 8:16 active undef running
disk_1 (VBOX_HARDDISK_VBaa6af5b7-df6dd760) dm-3 ATA ,VBOX HARDDISK
size=8.0G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=0 status=active
`- 4:0:0:0 sdc 8:32 active undef running
[root@centos7 ~]# lsblk -ipo NAME,KNAME,PKNAME,TRAN,TYPE,FSTYPE,SIZE,MOUNTPOINT
NAME KNAME PKNAME TRAN TYPE FSTYPE SIZE MOUNTPOINT
/dev/sda /dev/sda sata disk 8G
|-/dev/sda1 /dev/sda1 /dev/sda part vfat 200M /boot/efi
|-/dev/sda2 /dev/sda2 /dev/sda part xfs 1G /boot
`-/dev/sda3 /dev/sda3 /dev/sda part LVM2_member 6.8G
|-/dev/mapper/centos-root /dev/dm-0 /dev/sda3 lvm xfs 6G /
`-/dev/mapper/centos-swap /dev/dm-1 /dev/sda3 lvm swap 820M [SWAP]
/dev/sdb /dev/sdb sata disk mpath_member 8G
`-/dev/mapper/disk_2 /dev/dm-2 /dev/sdb mpath linux_raid_member 8G
`-/dev/md0 /dev/md0 /dev/dm-2 raid1 xfs 8G /data
/dev/sdc /dev/sdc sata disk mpath_member 8G
`-/dev/mapper/disk_1 /dev/dm-3 /dev/sdc mpath linux_raid_member 8G
`-/dev/md0 /dev/md0 /dev/dm-3 raid1 xfs 8G /data
/dev/sr0 /dev/sr0 ata rom 1024M
and there is a RAID1 built on top of /dev/mapper/disk_2 and /dev/mapper/disk_1
Destination server
RESCUE centos7:~ # multipath -l
mpathc (VBOX_HARDDISK_VB3b781a87-111c1fed) dm-0 ATA ,VBOX HARDDISK
size=20G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=0 status=active
`- 3:0:0:0 sda 8:0 active undef running
mpatha (VBOX_HARDDISK_VB10e9f9a8-6b9c578b) dm-1 ATA ,VBOX HARDDISK
size=10G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=0 status=active
`- 4:0:0:0 sdb 8:16 active undef running
RESCUE centos7:~ # lsblk -ipo NAME,KNAME,PKNAME,TRAN,TYPE,FSTYPE,SIZE,MOUNTPOINT
NAME KNAME PKNAME TRAN TYPE FSTYPE SIZE MOUNTPOINT
/dev/sda /dev/sda sata disk mpath_member 20G
`-/dev/mapper/mpathc /dev/dm-0 /dev/sda mpath 20G
/dev/sdb /dev/sdb sata disk mpath_member 10G
`-/dev/mapper/mpatha /dev/dm-1 /dev/sdb mpath 10G
/dev/sdc /dev/sdc sata disk 8G
/dev/sr0 /dev/sr0 sata rom udf 319.6M
Note that this lsblk output is from shortly after starting rear recover and does not contain the RAID yet.
V.
jsmeix commented at 2020-06-17 08:51:
Correction of my above
https://github.com/rear/rear/issues/2428#issuecomment-645181468
Simply put:
The current disk mapping code only works
when all mapping targets are also specified as a mapping source.
As far as I see this is wrong because the following
simple mapping can also work (source => target)
/dev/sda => /dev/sdb
provided the mapping target /dev/sdb does not exist
in a file where that mapping should be applied
in particular provided the mapping target /dev/sdb
does not exist in the disklayout.conf file.
This simple mapping can happen when the ReaR recovery system
was booted from a removable disk (e.g. a USB stick or a USB disk)
where on the replacement hardware the ReaR recovery system
became /dev/sda and the actual target system disk is /dev/sdb
and on the original system there was only one disk /dev/sda
so that disklayout.conf contains /dev/sda but not /dev/sdb.
Then the apply_layout_mappings function generates from the
/dev/sda /dev/sdb
mapping file entry a replacement file that contains
/dev/sda _REAR0_
/dev/sdb _REAR1_
and replaces e.g. in disklayout.conf
all /dev/sda by _REAR0_
and afterwards
all _REAR0_ by its matching target /dev/sdb.
So correctly it must be:
Simply put:
The current disk mapping code only works
when those mapping targets that appear in a file where mappings are applied
are also specified as a mapping source.
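A tiny self-contained demo of that distinction (hypothetical file names, plain sed instead of the real function):
printf 'disk /dev/sda\n' > layout_ok                    # mapping target /dev/sdb absent
printf 'disk /dev/sda\ndisk /dev/sdb\n' > layout_bad    # mapping target /dev/sdb present
for f in layout_ok layout_bad ; do
    # apply the single mapping /dev/sda => /dev/sdb via placeholder words
    sed -e 's|/dev/sda|_REAR0_|g' -e 's|/dev/sdb|_REAR1_|g' \
        -e 's|_REAR0_|/dev/sdb|g' "$f" > "$f.migrated"
    grep -q '_REAR[0-9]*_' "$f.migrated" && echo "$f: incomplete mapping"
done
# prints only: layout_bad: incomplete mapping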
gdha commented at 2020-06-17 08:54:
@gozora Your server itself cannot be covered by ReaR because iSCSI is not (yet) supported...
jsmeix commented at 2020-06-17 09:06:
@gozora
thank you for explaining your disk layout to me!
I was already puzzled by the raid1 TYPE entries of your lsblk output, but I could not make sense of them, so I just mixed them up with multipath.
gozora commented at 2020-06-17 09:43:
@gdha as I've already stated:
To illustrate, I'll show you how multipath looks on my other server which is running a cluster. Please note that the following output is not related to this issue and serves just as a demonstration of multipath output:
So the output I've pasted (with iSCSI) is not the problematic/restored server; it was pasted only to show @jsmeix how a multipath setup can look.
The real server disk layout is mentioned in
https://github.com/rear/rear/issues/2428#issue-639979434
(issue template).
In general I'm guessing that this problem only arises when one restores a server that has multipath enabled to different HW, because when restoring to the original HW you normally don't need to do any mapping.
V.
gozora commented at 2020-06-17 09:57:
Just to add some more info, here is a complete restore session with the following setup:
RESCUE centos7:~ # lsblk -ipo NAME,KNAME,PKNAME,TRAN,TYPE,FSTYPE,SIZE,MOUNTPOINT
NAME KNAME PKNAME TRAN TYPE FSTYPE SIZE MOUNTPOINT
/dev/sda /dev/sda sata disk 20G
`-/dev/mapper/mpatha /dev/dm-0 /dev/sda mpath 20G
/dev/sdb /dev/sdb sata disk 10G
`-/dev/mapper/mpathb /dev/dm-1 /dev/sdb mpath 10G
/dev/sdc /dev/sdc sata disk 8G
/dev/sr0 /dev/sr0 sata rom udf 319.6M
RESCUE centos7:~ # multipath -l
mpathb (VBOX_HARDDISK_VB10e9f9a8-6b9c578b) dm-1 ATA ,VBOX HARDDISK
size=10G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=0 status=active
`- 4:0:0:0 sdb 8:16 active undef running
mpatha (VBOX_HARDDISK_VB3b781a87-111c1fed) dm-0 ATA ,VBOX HARDDISK
size=20G features='0' hwhandler='0' wp=rw
`-+- policy='service-time 0' prio=0 status=active
`- 3:0:0:0 sda 8:0 active undef running
- the session
RESCUE centos7:~ # rear -d -D recover
Relax-and-Recover 2.5 / Git
Running rear recover (PID 478)
Using log file: /var/log/rear/rear-centos7.log
Running workflow recover within the ReaR rescue/recovery system
Starting required daemons for NFS: RPC portmapper (portmap or rpcbind) and rpc.statd if available.
Started RPC portmapper 'rpcbind'.
RPC portmapper 'rpcbind' available.
Started rpc.statd.
RPC status rpc.statd available.
Using backup archive '/tmp/rear.OKkEbdifOMHNZhJ/outputfs/centos7/backup.tar.gz'
Will do driver migration (recreating initramfs/initrd)
Calculating backup archive size
Backup archive size is 1.1G /tmp/rear.OKkEbdifOMHNZhJ/outputfs/centos7/backup.tar.gz (compressed)
Setting up multipathing
Activating multipath
multipath activated
Starting multipath daemon
multipathd started
Listing multipath device found
mpathb (VBOX_HARDDISK_VB10e9f9a8-6b9c578b) dm-1 ATA ,VBOX HARDDISK size=10G
mpatha (VBOX_HARDDISK_VB3b781a87-111c1fed) dm-0 ATA ,VBOX HARDDISK size=20G
Comparing disks
Ambiguous disk layout needs manual configuration (more than one disk with same size used in '/var/lib/rear/layout/disklayout.conf')
Switching to manual disk layout configuration
Using /dev/sdc (same size) for recreating /dev/sda
Original disk /dev/mapper/disk_2 does not exist (with same size) in the target system
UserInput -I LAYOUT_MIGRATION_REPLACEMENT_DISK2 needed in /usr/share/rear/layout/prepare/default/300_map_disks.sh line 238
Choose an appropriate replacement for /dev/mapper/disk_2
1) /dev/mapper/mpatha
2) /dev/mapper/mpathb
3) /dev/sda
4) /dev/sdb
5) Do not map /dev/mapper/disk_2
6) Use Relax-and-Recover shell and return back to here
(default '1' timeout 300 seconds)
2
UserInput: Valid choice number result '/dev/mapper/mpathb'
Using /dev/mapper/mpathb (chosen by user) for recreating /dev/mapper/disk_2
Original disk /dev/mapper/disk_1 does not exist (with same size) in the target system
UserInput -I LAYOUT_MIGRATION_REPLACEMENT_DISK1 needed in /usr/share/rear/layout/prepare/default/300_map_disks.sh line 238
Choose an appropriate replacement for /dev/mapper/disk_1
1) /dev/mapper/mpatha
2) /dev/sda
3) /dev/sdb
4) Do not map /dev/mapper/disk_1
5) Use Relax-and-Recover shell and return back to here
(default '1' timeout 300 seconds)
1
UserInput: Valid choice number result '/dev/mapper/mpatha'
Using /dev/mapper/mpatha (chosen by user) for recreating /dev/mapper/disk_1
Current disk mapping table (source => target):
/dev/sda => /dev/sdc
/dev/mapper/disk_2 => /dev/mapper/mpathb
/dev/mapper/disk_1 => /dev/mapper/mpatha
UserInput -I LAYOUT_MIGRATION_CONFIRM_MAPPINGS needed in /usr/share/rear/layout/prepare/default/300_map_disks.sh line 275
Confirm or edit the disk mapping
1) Confirm disk mapping and continue 'rear recover'
2) n/a
3) Edit disk mapping (/var/lib/rear/layout/disk_mappings)
4) Use Relax-and-Recover shell and return back to here
5) Abort 'rear recover'
(default '1' timeout 300 seconds)
1
UserInput: Valid choice number result 'Confirm disk mapping and continue 'rear recover''
User confirmed disk mapping
Failed to apply layout mappings to /var/lib/rear/layout/disklayout.conf for /dev/sdc (probably no mapping for /dev/sdc in /var/lib/rear/layout/disk_mappings)
Failed to apply disk layout mappings to /var/lib/rear/layout/disklayout.conf
Applied disk layout mappings to /var/lib/rear/layout/config/df.txt
Applied disk layout mappings to /etc/rear/rescue.conf
ERROR: Failed to apply disk layout mappings
Some latest log messages since the last called script 320_apply_mappings.sh:
2020-06-17 11:51:03.680543288 Including layout/prepare/default/320_apply_mappings.sh
2020-06-17 11:51:03.681513284 Entering debugscript mode via 'set -x'.
2020-06-17 11:51:03.734287374 Failed to apply layout mappings to /var/lib/rear/layout/disklayout.conf for /dev/sdc (probably no mapping for /dev/sdc in /var/lib/rear/layout/disk_mappings)
2020-06-17 11:51:03.739325049 Failed to apply disk layout mappings to /var/lib/rear/layout/disklayout.conf
2020-06-17 11:51:03.793384670 Applied disk layout mappings to /var/lib/rear/layout/config/df.txt
2020-06-17 11:51:03.848300496 Applied disk layout mappings to /etc/rear/rescue.conf
Aborting due to an error, check /var/log/rear/rear-centos7.log for details
Exiting rear recover (PID 478) and its descendant processes ...
Running exit tasks
You should also rm -Rf /tmp/rear.OKkEbdifOMHNZhJ
Terminated
- disk mapping
RESCUE centos7:~ # less /tmp/rear.OKkEbdifOMHNZhJ/tmp/replacement_file
/dev/sda _REAR0_
/dev/sdc _REAR1_
/dev/mapper/disk_2 _REAR2_
/dev/mapper/mpathb _REAR3_
/dev/mapper/disk_1 _REAR4_
/dev/mapper/mpatha _REAR5_
- disklayout.conf BEFORE the error
RESCUE centos7:/var/lib/rear/layout # cat disklayout.conf.20200617115859.recover.489.orig
# Disk layout dated 20200616202902 (YYYYmmddHHMMSS)
# NAME KNAME PKNAME TRAN TYPE FSTYPE SIZE MOUNTPOINT
# /dev/sda /dev/sda sata disk 8G
# |-/dev/sda1 /dev/sda1 /dev/sda part vfat 200M /boot/efi
# |-/dev/sda2 /dev/sda2 /dev/sda part xfs 1G /boot
# `-/dev/sda3 /dev/sda3 /dev/sda part LVM2_member 6.8G
# |-/dev/mapper/centos-root /dev/dm-0 /dev/sda3 lvm xfs 6G /
# `-/dev/mapper/centos-swap /dev/dm-1 /dev/sda3 lvm swap 820M [SWAP]
# /dev/sdb /dev/sdb sata disk mpath_member 8G
# `-/dev/mapper/disk_2 /dev/dm-2 /dev/sdb mpath linux_raid_member 8G
# `-/dev/md0 /dev/md0 /dev/dm-2 raid1 xfs 8G /data
# /dev/sdc /dev/sdc sata disk mpath_member 8G
# `-/dev/mapper/disk_1 /dev/dm-3 /dev/sdc mpath linux_raid_member 8G
# `-/dev/md0 /dev/md0 /dev/dm-3 raid1 xfs 8G /data
# /dev/sr0 /dev/sr0 ata rom 1024M
# Disk /dev/sda
# Format: disk <devname> <size(bytes)> <partition label type>
disk /dev/sda 8589934592 gpt
# Partitions on /dev/sda
# Format: part <device> <partition size(bytes)> <partition start(bytes)> <partition type|name> <flags> /dev/<partition>
part /dev/sda 209715200 1048576 EFI%20System%20Partition boot /dev/sda1
part /dev/sda 1073741824 210763776 rear-noname none /dev/sda2
part /dev/sda 7304380416 1284505600 rear-noname lvm /dev/sda3
raid /dev/md0 metadata=1.2 level=raid1 raid-devices=2 uuid=a672fba8:7628a3f7:05753a9f:d9b53313 name=0 devices=/dev/mapper/disk_1,/dev/mapper/disk_2
# Format for LVM PVs
# lvmdev <volume_group> <device> [<uuid>] [<size(bytes)>]
lvmdev /dev/centos /dev/sda3 3mI5ya-szZe-iX2Y-Jh5p-kzGL-3WhB-gZw0RD 14266368
# Format for LVM VGs
# lvmgrp <volume_group> <extentsize> [<size(extents)>] [<size(bytes)>]
lvmgrp /dev/centos 4096 1741 7131136
# Format for LVM LVs
# lvmvol <volume_group> <name> <size(bytes)> <layout> [key:value ...]
lvmvol /dev/centos root 6442450944b linear
lvmvol /dev/centos swap 859832320b linear
# Filesystems (only ext2,ext3,ext4,vfat,xfs,reiserfs,btrfs are supported).
# Format: fs <device> <mountpoint> <fstype> [uuid=<uuid>] [label=<label>] [<attributes>]
fs /dev/mapper/centos-root / xfs uuid=b57117c1-a7f0-4d11-84e2-1631ec6e95ae label= options=rw,relatime,attr2,inode64,noquota
fs /dev/md0 /data xfs uuid=19cba473-2aca-43bd-bb03-77b3c715a1d3 label= options=rw,relatime,attr2,inode64,noquota
fs /dev/sda1 /boot/efi vfat uuid=892A-2713 label= options=rw,relatime,fmask=0077,dmask=0077,codepage=437,iocharset=ascii,shortname=winnt,errors=remount-ro
fs /dev/sda2 /boot xfs uuid=cc788f46-c117-4b68-bcfb-3aa238f8d6cf label= options=rw,relatime,attr2,inode64,noquota
# Swap partitions or swap files
# Format: swap <filename> uuid=<uuid> label=<label>
swap /dev/mapper/centos-swap uuid=cafa5a50-4b28-4aad-92c3-879d5727055d label=
multipath /dev/mapper/disk_2 8589934592 unknown /dev/sdb
multipath /dev/mapper/disk_1 8589934592 unknown /dev/sdc
- disklayout.conf AFTER the error
# Disk layout dated 20200616202902 (YYYYmmddHHMMSS)
# NAME KNAME PKNAME TRAN TYPE FSTYPE SIZE MOUNTPOINT
# /dev/sdc /dev/sdc sata disk 8G
# |-/dev/sdc1 /dev/sdc1 /dev/sdc part vfat 200M /boot/efi
# |-/dev/sdc2 /dev/sdc2 /dev/sdc part xfs 1G /boot
# `-/dev/sdc3 /dev/sdc3 /dev/sdc part LVM2_member 6.8G
# |-/dev/mapper/centos-root /dev/dm-0 /dev/sdc3 lvm xfs 6G /
# `-/dev/mapper/centos-swap /dev/dm-1 /dev/sdc3 lvm swap 820M [SWAP]
# /dev/sdb /dev/sdb sata disk mpath_member 8G
# `-/dev/mapper/mpathb /dev/dm-2 /dev/sdb mpath linux_raid_member 8G
# `-/dev/md0 /dev/md0 /dev/dm-2 raid1 xfs 8G /data
# _REAR1_ _REAR1_ sata disk mpath_member 8G
# `-/dev/mapper/mpatha /dev/dm-3 _REAR1_ mpath linux_raid_member 8G
# `-/dev/md0 /dev/md0 /dev/dm-3 raid1 xfs 8G /data
# /dev/sr0 /dev/sr0 ata rom 1024M
# Disk /dev/sdc
# Format: disk <devname> <size(bytes)> <partition label type>
disk /dev/sdc 8589934592 gpt
# Partitions on /dev/sdc
# Format: part <device> <partition size(bytes)> <partition start(bytes)> <partition type|name> <flags> /dev/<partition>
part /dev/sdc 209715200 1048576 EFI%20System%20Partition boot /dev/sdc1
part /dev/sdc 1073741824 210763776 rear-noname none /dev/sdc2
part /dev/sdc 7304380416 1284505600 rear-noname lvm /dev/sdc3
raid /dev/md0 metadata=1.2 level=raid1 raid-devices=2 uuid=a672fba8:7628a3f7:05753a9f:d9b53313 name=0 devices=/dev/mapper/mpatha,/dev/mapper/mpathb
# Format for LVM PVs
# lvmdev <volume_group> <device> [<uuid>] [<size(bytes)>]
lvmdev /dev/centos /dev/sdc3 3mI5ya-szZe-iX2Y-Jh5p-kzGL-3WhB-gZw0RD 14266368
# Format for LVM VGs
# lvmgrp <volume_group> <extentsize> [<size(extents)>] [<size(bytes)>]
lvmgrp /dev/centos 4096 1741 7131136
# Format for LVM LVs
# lvmvol <volume_group> <name> <size(bytes)> <layout> [key:value ...]
lvmvol /dev/centos root 6442450944b linear
lvmvol /dev/centos swap 859832320b linear
# Filesystems (only ext2,ext3,ext4,vfat,xfs,reiserfs,btrfs are supported).
# Format: fs <device> <mountpoint> <fstype> [uuid=<uuid>] [label=<label>] [<attributes>]
fs /dev/mapper/centos-root / xfs uuid=b57117c1-a7f0-4d11-84e2-1631ec6e95ae label= options=rw,relatime,attr2,inode64,noquota
fs /dev/md0 /data xfs uuid=19cba473-2aca-43bd-bb03-77b3c715a1d3 label= options=rw,relatime,attr2,inode64,noquota
fs /dev/sdc1 /boot/efi vfat uuid=892A-2713 label= options=rw,relatime,fmask=0077,dmask=0077,codepage=437,iocharset=ascii,shortname=winnt,errors=remount-ro
fs /dev/sdc2 /boot xfs uuid=cc788f46-c117-4b68-bcfb-3aa238f8d6cf label= options=rw,relatime,attr2,inode64,noquota
# Swap partitions or swap files
# Format: swap <filename> uuid=<uuid> label=<label>
swap /dev/mapper/centos-swap uuid=cafa5a50-4b28-4aad-92c3-879d5727055d label=
multipath /dev/mapper/mpathb 8589934592 unknown /dev/sdb
multipath /dev/mapper/mpatha 8589934592 unknown _REAR1_
V.
jsmeix commented at 2020-06-17 09:58:
Because of
https://github.com/rear/rear/issues/2428#issuecomment-645246213
I did
https://github.com/rear/rear/issues/2429
gozora commented at 2020-06-17 10:07:
@gdha is there some really special code in ReaR that would fail for iSCSI? Honestly I've never tried this, but the structure (e.g. the multipath -l output) for iSCSI and Fibre Channel looks quite similar to me.
Maybe it is time for me to try it out ;-)
V.
jsmeix commented at 2020-06-17 12:19:
@schabrolles
I dared to also assign this issue to you
because it is primarily about multipath
but also to some extent about iSCSI
cf.
https://github.com/rear/rear/issues/2429
@schabrolles
do you perhaps have personal experience in using ReaR
on systems with iSCSI disks?
jsmeix commented at 2020-06-17 12:21:
@gozora
did you try out if
Edit disk mapping (/var/lib/rear/layout/disk_mappings)
could help in your case to avoid the useless mapping attempts?
gozora commented at 2020-06-17 12:36:
@jsmeix not sure that I understand your question.
My disk_mappings file looked something like this after the error was thrown:
RESCUE centos7:~ # cat /var/lib/rear/layout/disk_mappings
/dev/sda /dev/sdc
/dev/mapper/disk_2 /dev/mapper/mpathb
/dev/mapper/disk_1 /dev/mapper/mpatha
As mentioned in
https://github.com/rear/rear/issues/2428#issue-639979434,
I've successfully restored the system after the multipath slaves in disklayout.conf were removed:
multipath /dev/mapper/mpatha 8589934592 unknown
multipath /dev/mapper/mpathb 8589934592 unknown
gozora commented at 2020-06-17 12:38:
Small correction of my
https://github.com/rear/rear/issues/2428#issuecomment-645347360
where I pasted the disklayout.conf after replacement.
This is the original one:
multipath /dev/mapper/disk_2 8589934592 unknown
multipath /dev/mapper/disk_1 8589934592 unknown
schabrolles commented at 2020-06-17 12:50:
@jsmeix
I never really played with iSCSI ... only real SAN disks.
gozora commented at 2020-06-17 12:54:
@schabrolles iSCSI is a real SAN disk too ;-)
V.
jsmeix commented at 2020-06-17 13:53:
@gozora
sorry, I confused things (too many device name indirections drive me nuts), so my "avoid the useless mapping attempts" in
https://github.com/rear/rear/issues/2428#issuecomment-645340673
was plain wrong and misleading.
But manually using 'Edit disk mapping' should help:
Now I had a closer look, and as far as I understand it you would like to migrate the original system
NAME KNAME PKNAME TRAN TYPE FSTYPE SIZE MOUNTPOINT
/dev/sda /dev/sda sata disk 8G
|-/dev/sda1 /dev/sda1 /dev/sda part vfat 200M /boot/efi
|-/dev/sda2 /dev/sda2 /dev/sda part xfs 1G /boot
`-/dev/sda3 /dev/sda3 /dev/sda part LVM2_member 6.8G
|-/dev/mapper/centos-root /dev/dm-0 /dev/sda3 lvm xfs 6G /
`-/dev/mapper/centos-swap /dev/dm-1 /dev/sda3 lvm swap 820M [SWAP]
/dev/sdb /dev/sdb sata disk mpath_member 8G
`-/dev/mapper/disk_2 /dev/dm-3 /dev/sdb mpath linux_raid_member 8G
`-/dev/md0 /dev/md0 /dev/dm-3 raid1 xfs 8G /data
/dev/sdc /dev/sdc sata disk mpath_member 8G
`-/dev/mapper/disk_1 /dev/dm-2 /dev/sdc mpath linux_raid_member 8G
`-/dev/md0 /dev/md0 /dev/dm-2 raid1 xfs 8G /data
to the replacement system
NAME KNAME PKNAME TRAN TYPE FSTYPE SIZE MOUNTPOINT
/dev/sda /dev/sda sata disk 20G
`-/dev/mapper/mpatha /dev/dm-0 /dev/sda mpath 20G
/dev/sdb /dev/sdb sata disk 10G
`-/dev/mapper/mpathb /dev/dm-1 /dev/sdb mpath 10G
/dev/sdc /dev/sdc sata disk 8G
Because of your disk mapping file
/dev/sda /dev/sdc
/dev/mapper/disk_2 /dev/mapper/mpathb
/dev/mapper/disk_1 /dev/mapper/mpatha
I think the intended migration is:
original /dev/sda should
become /dev/sdc on replacement hardware
original /dev/sdb /dev/mapper/disk_2 should
become /dev/sdb /dev/mapper/mpathb on replacement hardware
original /dev/sdc /dev/mapper/disk_1 should
become /dev/sda /dev/mapper/mpatha on replacement hardware
Your unchanged disklayout.conf contains the following device names:
/dev/centos
/dev/mapper/centos-root
/dev/mapper/centos-swap
/dev/mapper/disk_1
/dev/mapper/disk_2
/dev/md0
/dev/sda
/dev/sda1
/dev/sda2
/dev/sda3
/dev/sdb
/dev/sdc
According to
https://github.com/rear/rear/issues/2428#issuecomment-645244888
Simply put:
The current disk mapping code only works
when those mapping targets that appear in a file where mappings are applied
are also specified as a mapping source.
it means for your disk mapping targets
/dev/sdc
/dev/mapper/mpathb
/dev/mapper/mpatha
that those that already exist in your unchanged disklayout.conf must also be specified as a mapping source.
In your case /dev/sdc is the only one of your disk mapping targets that already exists in your unchanged disklayout.conf.
So - as far as I understand it - what seems to be missing is
to manually add an additional disk mapping target for /dev/sdc
in your disk mapping file like
/dev/sda /dev/sdc
/dev/mapper/disk_2 /dev/mapper/mpathb
/dev/mapper/disk_1 /dev/mapper/mpatha
/dev/sdc /dev/sda
according to what "I think the intended migration is" (see above).
@gozora
could you try out if it helps to manually add
an additional disk mapping target for /dev/sdc
in your disk mapping file so that in the end it looks like
/dev/sda /dev/sdc
/dev/mapper/disk_2 /dev/mapper/mpathb
/dev/mapper/disk_1 /dev/mapper/mpatha
/dev/sdc /dev/sda
This way the problematic disklayout.conf line
multipath /dev/mapper/disk_1 8589934592 unknown /dev/sdc
should get changed to
multipath /dev/mapper/mpatha 8589934592 unknown /dev/sda
i.e. now also the multipath slave gets adapted to match
/dev/mapper/mpatha
(regardless of the fact that the multipath slave value is not really needed here),
but I would like to understand what exactly goes wrong here with ReaR's semi-automated disk mapping functionality.
jsmeix commented at 2020-06-17 14:04:
FYI
how to manually derive the disk mapping file contents
from an intended disk migration:
When the intended migration is
(I)
original /dev/sda should
become /dev/sdc on replacement hardware
(II)
original /dev/sdb /dev/mapper/disk_2 should
become /dev/sdb /dev/mapper/mpathb on replacement hardware
(III)
original /dev/sdc /dev/mapper/disk_1 should
become /dev/sda /dev/mapper/mpatha on replacement hardware
then (I) results in this disk mapping entry
/dev/sda /dev/sdc
and (II) results in these disk mapping entries
/dev/sdb /dev/sdb
/dev/mapper/disk_2 /dev/mapper/mpathb
and (III) results in these disk mapping entries
/dev/sdc /dev/sda
/dev/mapper/disk_1 /dev/mapper/mpatha
so (I) and (II) and (III) result in these disk mapping entries
/dev/sda /dev/sdc
/dev/sdb /dev/sdb
/dev/mapper/disk_2 /dev/mapper/mpathb
/dev/sdc /dev/sda
/dev/mapper/disk_1 /dev/mapper/mpatha
where the /dev/sdb /dev/sdb entry is an identical mapping that can be omitted,
so in the end the relevant disk mapping entries are
/dev/sda /dev/sdc
/dev/mapper/disk_2 /dev/mapper/mpathb
/dev/sdc /dev/sda
/dev/mapper/disk_1 /dev/mapper/mpatha
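A hypothetical sketch of how such counterpart entries could be derived automatically (this is not existing ReaR code; it only covers the simple "swap" case seen in this issue, where reversing the mapping is the sensible proposal):
layout=/var/lib/rear/layout/disklayout.conf
mappings=/var/lib/rear/layout/disk_mappings
proposals=$(mktemp)
while read source target ; do
    [ "$source" = "$target" ] && continue          # identical mapping: nothing to do
    grep -q "$target" "$layout" || continue        # target not in disklayout.conf: harmless
    grep -q "^$target " "$mappings" && continue    # counterpart mapping already exists
    echo "$target $source" >> "$proposals"         # propose the reversed mapping
done < "$mappings"
cat "$proposals" >> "$mappings" && rm -f "$proposals"
With the mapping file above this would append exactly the /dev/sdc /dev/sda entry.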
gozora commented at 2020-06-17 14:50:
@jsmeix
Once I manually added /dev/sdc /dev/sda to disk_mappings, everything works fine!
Final /var/lib/rear/layout/disk_mappings looks like:
RESCUE centos7:/var/lib/rear/layout # cat disk_mappings
/dev/sda /dev/sdc
/dev/mapper/disk_2 /dev/mapper/mpathb
/dev/mapper/disk_1 /dev/mapper/mpatha
/dev/sdc /dev/sda
Corresponding /var/lib/rear/layout/disklayout.conf:
RESCUE centos7:/var/lib/rear/layout # cat /var/lib/rear/layout/disklayout.conf
# Disk layout dated 20200616202902 (YYYYmmddHHMMSS)
# NAME KNAME PKNAME TRAN TYPE FSTYPE SIZE MOUNTPOINT
# /dev/sdc /dev/sdc sata disk 8G
# |-/dev/sdc1 /dev/sdc1 /dev/sdc part vfat 200M /boot/efi
# |-/dev/sdc2 /dev/sdc2 /dev/sdc part xfs 1G /boot
# `-/dev/sdc3 /dev/sdc3 /dev/sdc part LVM2_member 6.8G
# |-/dev/mapper/centos-root /dev/dm-0 /dev/sdc3 lvm xfs 6G /
# `-/dev/mapper/centos-swap /dev/dm-1 /dev/sdc3 lvm swap 820M [SWAP]
# /dev/sdb /dev/sdb sata disk mpath_member 8G
# `-/dev/mapper/mpathb /dev/dm-2 /dev/sdb mpath linux_raid_member 8G
# `-/dev/md0 /dev/md0 /dev/dm-2 raid1 xfs 8G /data
# /dev/sda /dev/sda sata disk mpath_member 8G
# `-/dev/mapper/mpatha /dev/dm-3 /dev/sda mpath linux_raid_member 8G
# `-/dev/md0 /dev/md0 /dev/dm-3 raid1 xfs 8G /data
# /dev/sr0 /dev/sr0 ata rom 1024M
# Disk /dev/sdc
# Format: disk <devname> <size(bytes)> <partition label type>
disk /dev/sdc 8589934592 gpt
# Partitions on /dev/sdc
# Format: part <device> <partition size(bytes)> <partition start(bytes)> <partition type|name> <flags> /dev/<partition>
part /dev/sdc 209715200 1048576 EFI%20System%20Partition boot /dev/sdc1
part /dev/sdc 1073741824 210763776 rear-noname none /dev/sdc2
part /dev/sdc 7304380416 1284505600 rear-noname lvm /dev/sdc3
raid /dev/md0 metadata=1.2 level=raid1 raid-devices=2 uuid=a672fba8:7628a3f7:05753a9f:d9b53313 name=0 devices=/dev/mapper/mpatha,/dev/mapper/mpathb
# Format for LVM PVs
# lvmdev <volume_group> <device> [<uuid>] [<size(bytes)>]
lvmdev /dev/centos /dev/sdc3 3mI5ya-szZe-iX2Y-Jh5p-kzGL-3WhB-gZw0RD 14266368
# Format for LVM VGs
# lvmgrp <volume_group> <extentsize> [<size(extents)>] [<size(bytes)>]
lvmgrp /dev/centos 4096 1741 7131136
# Format for LVM LVs
# lvmvol <volume_group> <name> <size(bytes)> <layout> [key:value ...]
lvmvol /dev/centos root 6442450944b linear
lvmvol /dev/centos swap 859832320b linear
# Filesystems (only ext2,ext3,ext4,vfat,xfs,reiserfs,btrfs are supported).
# Format: fs <device> <mountpoint> <fstype> [uuid=<uuid>] [label=<label>] [<attributes>]
fs /dev/mapper/centos-root / xfs uuid=b57117c1-a7f0-4d11-84e2-1631ec6e95ae label= options=rw,relatime,attr2,inode64,noquota
fs /dev/md0 /data xfs uuid=19cba473-2aca-43bd-bb03-77b3c715a1d3 label= options=rw,relatime,attr2,inode64,noquota
fs /dev/sdc1 /boot/efi vfat uuid=892A-2713 label= options=rw,relatime,fmask=0077,dmask=0077,codepage=437,iocharset=ascii,shortname=winnt,errors=remount-ro
fs /dev/sdc2 /boot xfs uuid=cc788f46-c117-4b68-bcfb-3aa238f8d6cf label= options=rw,relatime,attr2,inode64,noquota
# Swap partitions or swap files
# Format: swap <filename> uuid=<uuid> label=<label>
swap /dev/mapper/centos-swap uuid=cafa5a50-4b28-4aad-92c3-879d5727055d label=
multipath /dev/mapper/mpathb 8589934592 unknown /dev/sdb
multipath /dev/mapper/mpatha 8589934592 unknown /dev/sda
Now I'm not sure if this is a bug or a feature :-)
Thanks for your help!
V.
jsmeix commented at 2020-06-17 15:59:
@gozora
many thanks for your prompt testing and your explanatory feedback!
I think the current issue labels "enhancement" and "minor-bug" are exactly the right ones.
The "minor-bug" is that the current semi-automated disk mapping functionality does not verify that the disk mapping entries are complete, i.e. a check is missing that verifies that all mapping targets that appear in a file where mappings are applied are also specified as a mapping source (where, as a first step, testing only disklayout.conf would probably catch 99% of all cases where things would go wrong as in this issue here).
In this case a check is missing that detects that for /dev/sdc a mapping target is missing.
The "enhancement" is that the current semi-automated disk mapping functionality should ask the user, via more such dialogs, what mapping target the user wants for those already specified mapping targets that appear in a file where mappings are applied.
In this case a user dialog is missing that asks the user for the mapping target of /dev/sdc.
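A sketch of what such a completeness check could look like (a hypothetical illustration, not existing ReaR code; the paths are the ones used throughout this issue):
layout=/var/lib/rear/layout/disklayout.conf
mappings=/var/lib/rear/layout/disk_mappings
while read source target ; do
    [ "$source" = "$target" ] && continue      # identical mappings are harmless
    grep -q "$target" "$layout" || continue    # target never appears in disklayout.conf
    grep -q "^$target " "$mappings" \
        || echo "missing counterpart mapping for target $target" >&2
done < "$mappings"
For the mapping file in this issue it would warn about /dev/sdc only, which is exactly the entry that had to be added by hand.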
github-actions commented at 2020-10-14 01:49:
Stale issue message
[Export of Github issue for rear/rear.]