#2812 Issue closed: Migrating to another PowerVM LPAR with different disks: Restore multipath problem: no WWIDs¶
Labels: enhancement, support / question, special hardware or VM, no-issue-activity
markbertolin opened issue at 2022-05-23 15:25:¶
Relax-and-Recover (ReaR) Issue Template¶
Fill in the following items before submitting a new issue
(quick response is not guaranteed with free support):
- ReaR version ("/usr/sbin/rear -V"):
Relax-and-Recover 2.4 / Git
- OS version ("cat /etc/os-release" or "lsb_release -a" or "cat /etc/rear/os.conf"):
NAME="SLES"
VERSION="12-SP3"
VERSION_ID="12.3"
PRETTY_NAME="SUSE Linux Enterprise Server 12 SP3"
ID="sles"
ANSI_COLOR="0;32"
CPE_NAME="cpe:/o:suse:sles_sap:12:sp3"
- ReaR configuration files ("cat /etc/rear/site.conf" and/or "cat /etc/rear/local.conf"):
mm-001-hbp01:/etc/multipath # cat /etc/rear/site.conf
cat: /etc/rear/site.conf: No such file or directory
mm-001-hbp01:/etc/multipath # cat /etc/rear/local.conf|grep -v grep |grep -v '#'
MIGRATION_MODE='true'
BOOT_OVER_SAN=y
BACKUP=NETFS
AUTORESIZE_PARTITIONS=true
OUTPUT=ISO
AUTOEXCLUDE_MULTIPATH=n
REQUIRED_PROGS=( "${REQUIRED_PROGS[@]}" snapper chattr lsattr multipath )
BACKUP_URL=nfs://nfs-export.mtrmil.locale/u01/data
OUTPUT_URL=nfs://nfs-export.mtrmil.locale/u01/data
USE_STATIC_NETWORKING=y
BACKUP_PROG_EXCLUDE=("${BACKUP_PROG_EXCLUDE[@]}" '/hana/shared' '/hana/data' '/hana/log' '/media' '/var/tmp' '/var/crash' '/usr/sap' '/sapmnt/BPP' '/mnt')
SSH_ROOT_PASSWORD="zaq12wsx"
- Hardware vendor/product (PC or PowerNV BareMetal or ARM) or VM (KVM guest or PowerVM LPAR):
PowerVM LPAR
- System architecture (x86 compatible or PPC64/PPC64LE or what exact ARM device):
PPC64
- Firmware (BIOS or UEFI or Open Firmware) and bootloader (GRUB or ELILO or Petitboot):
GRUB2
- Storage (local disk or SSD) and/or SAN (FC or iSCSI or FCoE) and/or multipath (DM or NVMe):
SAN FC and multipath DM
- Storage layout ("lsblk -ipo NAME,KNAME,PKNAME,TRAN,TYPE,FSTYPE,LABEL,SIZE,MOUNTPOINT"):
mm-001-hbp01:~ # lsblk -ipo NAME,KNAME,PKNAME,TRAN,TYPE,FSTYPE,LABEL,SIZE,MOUNTPOINT
NAME KNAME PKNAME TRAN TYPE FSTYPE LABEL SIZE MOUNTPOINT
/dev/sda /dev/sda disk 120G
|-/dev/sda1 /dev/sda1 /dev/sda part 7M
`-/dev/sda2 /dev/sda2 /dev/sda part LVM2_member 120G
/dev/sdb /dev/sdb disk 120G
|-/dev/sdb1 /dev/sdb1 /dev/sdb part 7M
`-/dev/sdb2 /dev/sdb2 /dev/sdb part LVM2_member 120G
/dev/sdc /dev/sdc disk 120G
|-/dev/sdc1 /dev/sdc1 /dev/sdc part 7M
`-/dev/sdc2 /dev/sdc2 /dev/sdc part LVM2_member 120G
/dev/sdd /dev/sdd disk 120G
|-/dev/sdd1 /dev/sdd1 /dev/sdd part 7M
`-/dev/sdd2 /dev/sdd2 /dev/sdd part LVM2_member 120G
`-/dev/mapper/system-root /dev/dm-0 /dev/sdd2 lvm xfs 60G /
/dev/sde /dev/sde disk 120G
`-/dev/mapper/360050763808102f52400000000000080 /dev/dm-1 /dev/sde mpath 120G
/dev/sdf /dev/sdf disk 120G
`-/dev/mapper/360050763808102f52400000000000080 /dev/dm-1 /dev/sdf mpath 120G
/dev/sdg /dev/sdg disk 120G
`-/dev/mapper/360050763808102f52400000000000080 /dev/dm-1 /dev/sdg mpath 120G
/dev/sdh /dev/sdh disk 120G
`-/dev/mapper/360050763808102f52400000000000080 /dev/dm-1 /dev/sdh mpath 120G
/dev/sr0 /dev/sr0 rom iso9660 RELAXRECOVER 92.9M
- Description of the issue (ideally so that others can reproduce it):
I'm trying to restore an LPAR to another IBM Power system;
when restoring the LPAR I get duplicate PVs and
the multipath software does not seem to be working properly:
mm-001-hbp01:~ # pvscan
Found duplicate PV xOiaj3oTeupHtFHRtOPqzB9ALugFrNSl: using /dev/sdb2 not /dev/sdc2
Using duplicate PV /dev/sdb2 which is last seen, replacing /dev/sdc2
Found duplicate PV xOiaj3oTeupHtFHRtOPqzB9ALugFrNSl: using /dev/sda2 not /dev/sdb2
Using duplicate PV /dev/sda2 which is last seen, replacing /dev/sdb2
Found duplicate PV xOiaj3oTeupHtFHRtOPqzB9ALugFrNSl: using /dev/sdc2 not /dev/sda2
Using duplicate PV /dev/sdc2 which is last seen, replacing /dev/sda2
Found duplicate PV xOiaj3oTeupHtFHRtOPqzB9ALugFrNSl: using /dev/sdb2 not /dev/sdc2
Using duplicate PV /dev/sdb2 which is last seen, replacing /dev/sdc2
Found duplicate PV xOiaj3oTeupHtFHRtOPqzB9ALugFrNSl: using /dev/sda2 not /dev/sdb2
Using duplicate PV /dev/sda2 which is last seen, replacing /dev/sdb2
Found duplicate PV xOiaj3oTeupHtFHRtOPqzB9ALugFrNSl: using /dev/sdc2 not /dev/sda2
Using duplicate PV /dev/sdc2 which is last seen, replacing /dev/sda2
Found duplicate PV xOiaj3oTeupHtFHRtOPqzB9ALugFrNSl: using /dev/sdb2 not /dev/sdc2
Using duplicate PV /dev/sdb2 which is last seen, replacing /dev/sdc2
Found duplicate PV xOiaj3oTeupHtFHRtOPqzB9ALugFrNSl: using /dev/sda2 not /dev/sdb2
Using duplicate PV /dev/sda2 which is last seen, replacing /dev/sdb2
PV /dev/sda2 VG system lvm2 [119.99 GiB / 0 free]
Total: 1 [119.99 GiB] / in use: 1 [119.99 GiB] / in no VG: 0 [0 ]
mm-001-hbp01:~ # mutipath -ll
If 'mutipath' is not a typo you can use command-not-found to lookup the package that contains it, like this:
cnf mutipath
- Workaround, if any:
- Attachments, as applicable ("rear -D mkrescue/mkbackup/recover" debug log files):
rear-mm-001-hbp01.log
How could I proceed?
Many thanks,
Marco
pcahyna commented at 2022-05-23 15:49:¶
Your storage layout seems to show that the system disks are not using
multipath (LVM sits directly on top of /dev/sd[a-d]2, without any
multipath devices involved), so I wonder why multipath should be a
problem in this situation. Or is the output of lsblk wrong and there
should actually be multipath devices layered between the disks and LVM?
markbertolin commented at 2022-05-23 20:32:¶
Hi,
the original server has multipath enabled and in a good state:
mm-001-h4p01:~ # multipath -ll
360050763808182f5fc00000000000009 dm-1 IBM,2145
size=4.5T features='1 retain_attached_hw_handler' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 1:0:0:1 sdb 8:16 active ready running
| |- 2:0:0:1 sdr 65:16 active ready running
| |- 3:0:0:1 sdah 66:16 active ready running
| `- 4:0:0:1 sdax 67:16 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
|- 1:0:1:1 sdj 8:144 active ready running
|- 2:0:1:1 sdz 65:144 active ready running
|- 3:0:1:1 sdap 66:144 active ready running
`- 4:0:1:1 sdbf 67:144 active ready running
360050763808182f5fc00000000000037 dm-7 IBM,2145
size=1.0G features='1 retain_attached_hw_handler' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 1:0:1:7 sdp 8:240 active ready running
| |- 2:0:1:7 sdaf 65:240 active ready running
| |- 3:0:1:7 sdav 66:240 active ready running
| `- 4:0:1:7 sdbl 67:240 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
|- 1:0:0:7 sdh 8:112 active ready running
|- 2:0:0:7 sdx 65:112 active ready running
|- 3:0:0:7 sdan 66:112 active ready running
`- 4:0:0:7 sdbd 67:112 active ready running
360050763808182f5fc00000000000036 dm-6 IBM,2145
size=1.0G features='1 retain_attached_hw_handler' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 1:0:0:6 sdg 8:96 active ready running
With the local.conf file set as above,
I run the "rear mkbackup" command and
the ISO file is generated for the recovery
of the operating system and the whole LPAR.
It is copied with scp to the
VIOS Virtual Media Repository and mounted
on the partition with the loadopt command.
Then I start the LPAR, the ReaR recovery menu appears
with the login prompt,
I enter the shell and run the "rear -v recover" command.
It seems that the multipath service starts automatically
(MIGRATION_MODE is true), but I continue to see 4 disks
even though they are really only one.
By changing disklayout.conf according to the proposals given,
I can restore, but multipath no longer works:
the WWIDs disappear from the configuration file and
the machine sees the boot disk as if it were local,
not on the SAN.
I run "multipath -ll" and I see... nothing,
only the prompt!
Only after editing the wwids file with the correct value
do I get:
mm-001-hbp01:~ # cd /etc/multipath/
mm-001-hbp01:/etc/multipath # ll
total 8
-rw------- 1 root root 200 May 23 12:17 bindings
-rw------- 1 root root 226 May 23 15:21 wwids
mm-001-hbp01:/etc/multipath # cat wwids
# Multipath wwids, Version : 1.0
# NOTE: This file is automatically maintained by multipath and multipathd.
# You should not need to edit this file in normal circumstances.
#
# Valid WWIDs:
/360050763808102f52400000000000080/
but
mm-001-hbp01:~ # multipath
May 23 22:25:26 | 360050763808102f5240000000000007d: ignoring map
May 23 22:25:26 | 360050763808102f5240000000000007d: ignoring map
May 23 22:25:27 | 360050763808102f5240000000000007d: ignoring map
May 23 22:25:27 | 360050763808102f5240000000000007d: ignoring map
and
mm-001-hbp01:~ # cat /etc/multipath/bindings
# Multipath bindings, Version : 1.0
# NOTE: this file is automatically maintained by the multipath program.
# You should not need to edit this file in normal circumstances.
#
# Format:
# alias wwid
#
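(Side note: recent multipath-tools can rebuild these bookkeeping files from the currently assembled maps instead of editing them by hand; a minimal sketch, assuming the multipath version in the recovered system supports the -a/-W options:)
# Reset /etc/multipath/wwids so it only contains the currently existing multipath devices
multipath -W
# Or add the WWID of one specific path device to the wwids file, then reload the maps
multipath -a /dev/sde
multipath -r
multipath -ll    # verify that the 3600507... map is assembled again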
Are there any customizations to put into local.conf
or other files to make the multipath software layer
work properly and the recovery run smoothly
without stops?
Many thanks,
Marco
pcahyna commented at 2022-05-24 09:10:¶
The lsblk command was executed on the original server? If so, why
doesn't it show sdb as part of a multipath device?
By the way, please quote the output of commands using triple backticks, otherwise it is unreadable.
markbertolin commented at 2022-05-24 10:07:¶
Hi,
that lsblk was meant to show the disks with multipath inside the recovery shell...
The original system is PowerPC with VIOS and NPIV,
booting over SAN, configured by:
mm-001-hbp02:~ # cat /etc/multipath.conf
defaults {
user_friendly_names no
}
devices {
device {
vendor "IBM"
product "2145"
path_grouping_policy group_by_prio
prio "alua"
path_checker "tur"
Path_selector "service-time 0"
failback "immediate"
rr_weight "priorities"
no_path_retry "fail"
rr_min_io_rq 10
dev_loss_tmo 600
fast_io_fail_tmo 5
}
}
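(Side note: whether this configuration is actually picked up inside the rescue shell can be checked before restoring anything; a small sketch, assuming the usual multipath-tools commands listed in REQUIRED_PROGS are available there:)
multipath -d                     # dry run: show which maps would be created, without creating them
multipath -v2                    # create the maps with some verbosity
multipath -ll                    # list the resulting maps and the state of each path
multipathd show config | less    # if multipathd is running: the effective merged configuration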
Do you have some ideas how to do it?
Without multipathd I cannot restore the system.
Thanks
pcahyna commented at 2022-05-24 10:27:¶
I don't get it. Is your original lsblk output from the original system
or not? If yes, why does it not show the same multipath devices as your
multipath -ll output? And are you restoring on a different system,
with different disks, than where the backup was created? Can you please
provide your /var/lib/rear/layout/disklayout.conf file?
markbertolin commented at 2022-05-24 12:35:¶
That lsblk output isn't from the original system...
I restored on a different system, with different new disks.
The original lsblk and multipath outputs are:
mm-001-h4p01:/etc/multipath # lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 120G 0 disk
├─sda1 8:1 0 7M 0 part
├─sda2 8:2 0 120G 0 part
└─360050763808182f5fc0000000000000d 254:0 0 120G 0 mpath
├─360050763808182f5fc0000000000000d-part1
│ 254:8 0 7M 0 part
└─360050763808182f5fc0000000000000d-part2
254:9 0 120G 0 part
├─system-root 254:10 0 60G 0 lvm /
└─system-swap 254:11 0 60G 0 lvm [SWAP]
sdb 8:16 0 4.6T 0 disk
└─360050763808182f5fc00000000000009 254:1 0 4.6T 0 mpath
└─vg_data-lv_data 254:15 0 4.6T 0 lvm /hana/data
sdc 8:32 0 512G 0 disk
└─360050763808182f5fc0000000000000a 254:2 0 512G 0 mpath
└─vg_log-lv_log 254:13 0 512G 0 lvm /hana/log
sdd 8:48 0 512G 0 disk
└─360050763808182f5fc0000000000000b 254:3 0 512G 0 mpath
└─vg_shared-lv_shared 254:12 0 512G 0 lvm /hana/shared
sde 8:64 0 10G 0 disk
└─360050763808182f5fc0000000000000c 254:4 0 10G 0 mpath
└─vg_usr-lv_usr 254:14 0 10G 0 lvm /usr/sap
sdf 8:80 0 1G 0 disk
└─360050763808182f5fc00000000000031 254:5 0 1G 0 mpath
sdg 8:96 0 1G 0 disk
└─360050763808182f5fc00000000000036 254:6 0 1G 0 mpath
sdh 8:112 0 1G 0 disk
└─360050763808182f5fc00000000000037 254:7 0 1G 0 mpath
sdi 8:128 0 120G 0 disk
├─sdi1 8:129 0 7M 0 part
├─sdi2 8:130 0 120G 0 part
└─360050763808182f5fc0000000000000d 254:0 0 120G 0 mpath
├─360050763808182f5fc0000000000000d-part1
│ 254:8 0 7M 0 part
└─360050763808182f5fc0000000000000d-part2
254:9 0 120G 0 part
├─system-root 254:10 0 60G 0 lvm /
└─system-swap 254:11 0 60G 0 lvm [SWAP]
sdj 8:144 0 4.6T 0 disk
└─360050763808182f5fc00000000000009 254:1 0 4.6T 0 mpath
└─vg_data-lv_data 254:15 0 4.6T 0 lvm /hana/data
sdk 8:160 0 512G 0 disk
└─360050763808182f5fc0000000000000a 254:2 0 512G 0 mpath
└─vg_log-lv_log 254:13 0 512G 0 lvm /hana/log
sdl 8:176 0 512G 0 disk
└─360050763808182f5fc0000000000000b 254:3 0 512G 0 mpath
└─vg_shared-lv_shared 254:12 0 512G 0 lvm /hana/shared
sdm 8:192 0 10G 0 disk
└─360050763808182f5fc0000000000000c 254:4 0 10G 0 mpath
└─vg_usr-lv_usr 254:14 0 10G 0 lvm /usr/sap
sdn 8:208 0 1G 0 disk
└─360050763808182f5fc00000000000031 254:5 0 1G 0 mpath
sdo 8:224 0 1G 0 disk
└─360050763808182f5fc00000000000036 254:6 0 1G 0 mpath
sdp 8:240 0 1G 0 disk
└─360050763808182f5fc00000000000037 254:7 0 1G 0 mpath
sdq 65:0 0 120G 0 disk
├─sdq1 65:1 0 7M 0 part
├─sdq2 65:2 0 120G 0 part
└─360050763808182f5fc0000000000000d 254:0 0 120G 0 mpath
├─360050763808182f5fc0000000000000d-part1
│ 254:8 0 7M 0 part
└─360050763808182f5fc0000000000000d-part2
254:9 0 120G 0 part
├─system-root 254:10 0 60G 0 lvm /
└─system-swap 254:11 0 60G 0 lvm [SWAP]
sdr 65:16 0 4.6T 0 disk
└─360050763808182f5fc00000000000009 254:1 0 4.6T 0 mpath
└─vg_data-lv_data 254:15 0 4.6T 0 lvm /hana/data
sds 65:32 0 512G 0 disk
└─360050763808182f5fc0000000000000a 254:2 0 512G 0 mpath
└─vg_log-lv_log 254:13 0 512G 0 lvm /hana/log
sdt 65:48 0 512G 0 disk
└─360050763808182f5fc0000000000000b 254:3 0 512G 0 mpath
└─vg_shared-lv_shared 254:12 0 512G 0 lvm /hana/shared
sdu 65:64 0 10G 0 disk
└─360050763808182f5fc0000000000000c 254:4 0 10G 0 mpath
└─vg_usr-lv_usr 254:14 0 10G 0 lvm /usr/sap
sdv 65:80 0 1G 0 disk
└─360050763808182f5fc00000000000031 254:5 0 1G 0 mpath
sdw 65:96 0 1G 0 disk
└─360050763808182f5fc00000000000036 254:6 0 1G 0 mpath
sdx 65:112 0 1G 0 disk
└─360050763808182f5fc00000000000037 254:7 0 1G 0 mpath
sdy 65:128 0 120G 0 disk
├─sdy1 65:129 0 7M 0 part
├─sdy2 65:130 0 120G 0 part
└─360050763808182f5fc0000000000000d 254:0 0 120G 0 mpath
├─360050763808182f5fc0000000000000d-part1
│ 254:8 0 7M 0 part
└─360050763808182f5fc0000000000000d-part2
254:9 0 120G 0 part
├─system-root 254:10 0 60G 0 lvm /
└─system-swap 254:11 0 60G 0 lvm [SWAP]
sdz 65:144 0 4.6T 0 disk
└─360050763808182f5fc00000000000009 254:1 0 4.6T 0 mpath
└─vg_data-lv_data 254:15 0 4.6T 0 lvm /hana/data
sdaa 65:160 0 512G 0 disk
└─360050763808182f5fc0000000000000a 254:2 0 512G 0 mpath
└─vg_log-lv_log 254:13 0 512G 0 lvm /hana/log
sdab 65:176 0 512G 0 disk
└─360050763808182f5fc0000000000000b 254:3 0 512G 0 mpath
└─vg_shared-lv_shared 254:12 0 512G 0 lvm /hana/shared
sdac 65:192 0 10G 0 disk
└─360050763808182f5fc0000000000000c 254:4 0 10G 0 mpath
└─vg_usr-lv_usr 254:14 0 10G 0 lvm /usr/sap
sdad 65:208 0 1G 0 disk
└─360050763808182f5fc00000000000031 254:5 0 1G 0 mpath
sdae 65:224 0 1G 0 disk
└─360050763808182f5fc00000000000036 254:6 0 1G 0 mpath
sdaf 65:240 0 1G 0 disk
└─360050763808182f5fc00000000000037 254:7 0 1G 0 mpath
sdag 66:0 0 120G 0 disk
├─sdag1 66:1 0 7M 0 part
├─sdag2 66:2 0 120G 0 part
└─360050763808182f5fc0000000000000d 254:0 0 120G 0 mpath
├─360050763808182f5fc0000000000000d-part1
│ 254:8 0 7M 0 part
└─360050763808182f5fc0000000000000d-part2
254:9 0 120G 0 part
├─system-root 254:10 0 60G 0 lvm /
└─system-swap 254:11 0 60G 0 lvm [SWAP]
sdah 66:16 0 4.6T 0 disk
└─360050763808182f5fc00000000000009 254:1 0 4.6T 0 mpath
└─vg_data-lv_data 254:15 0 4.6T 0 lvm /hana/data
sdai 66:32 0 512G 0 disk
└─360050763808182f5fc0000000000000a 254:2 0 512G 0 mpath
└─vg_log-lv_log 254:13 0 512G 0 lvm /hana/log
sdaj 66:48 0 512G 0 disk
└─360050763808182f5fc0000000000000b 254:3 0 512G 0 mpath
└─vg_shared-lv_shared 254:12 0 512G 0 lvm /hana/shared
sdak 66:64 0 10G 0 disk
└─360050763808182f5fc0000000000000c 254:4 0 10G 0 mpath
└─vg_usr-lv_usr 254:14 0 10G 0 lvm /usr/sap
sdal 66:80 0 1G 0 disk
└─360050763808182f5fc00000000000031 254:5 0 1G 0 mpath
sdam 66:96 0 1G 0 disk
└─360050763808182f5fc00000000000036 254:6 0 1G 0 mpath
sdan 66:112 0 1G 0 disk
└─360050763808182f5fc00000000000037 254:7 0 1G 0 mpath
sdao 66:128 0 120G 0 disk
├─sdao1 66:129 0 7M 0 part
├─sdao2 66:130 0 120G 0 part
└─360050763808182f5fc0000000000000d 254:0 0 120G 0 mpath
├─360050763808182f5fc0000000000000d-part1
│ 254:8 0 7M 0 part
└─360050763808182f5fc0000000000000d-part2
254:9 0 120G 0 part
├─system-root 254:10 0 60G 0 lvm /
└─system-swap 254:11 0 60G 0 lvm [SWAP]
sdap 66:144 0 4.6T 0 disk
└─360050763808182f5fc00000000000009 254:1 0 4.6T 0 mpath
└─vg_data-lv_data 254:15 0 4.6T 0 lvm /hana/data
sdaq 66:160 0 512G 0 disk
└─360050763808182f5fc0000000000000a 254:2 0 512G 0 mpath
└─vg_log-lv_log 254:13 0 512G 0 lvm /hana/log
sdar 66:176 0 512G 0 disk
└─360050763808182f5fc0000000000000b 254:3 0 512G 0 mpath
└─vg_shared-lv_shared 254:12 0 512G 0 lvm /hana/shared
sdas 66:192 0 10G 0 disk
└─360050763808182f5fc0000000000000c 254:4 0 10G 0 mpath
└─vg_usr-lv_usr 254:14 0 10G 0 lvm /usr/sap
sdat 66:208 0 1G 0 disk
└─360050763808182f5fc00000000000031 254:5 0 1G 0 mpath
sdau 66:224 0 1G 0 disk
└─360050763808182f5fc00000000000036 254:6 0 1G 0 mpath
sdav 66:240 0 1G 0 disk
└─360050763808182f5fc00000000000037 254:7 0 1G 0 mpath
sdaw 67:0 0 120G 0 disk
├─sdaw1 67:1 0 7M 0 part
├─sdaw2 67:2 0 120G 0 part
└─360050763808182f5fc0000000000000d 254:0 0 120G 0 mpath
├─360050763808182f5fc0000000000000d-part1
│ 254:8 0 7M 0 part
└─360050763808182f5fc0000000000000d-part2
254:9 0 120G 0 part
├─system-root 254:10 0 60G 0 lvm /
└─system-swap 254:11 0 60G 0 lvm [SWAP]
sdax 67:16 0 4.6T 0 disk
└─360050763808182f5fc00000000000009 254:1 0 4.6T 0 mpath
└─vg_data-lv_data 254:15 0 4.6T 0 lvm /hana/data
sday 67:32 0 512G 0 disk
└─360050763808182f5fc0000000000000a 254:2 0 512G 0 mpath
└─vg_log-lv_log 254:13 0 512G 0 lvm /hana/log
sdaz 67:48 0 512G 0 disk
└─360050763808182f5fc0000000000000b 254:3 0 512G 0 mpath
└─vg_shared-lv_shared 254:12 0 512G 0 lvm /hana/shared
sdba 67:64 0 10G 0 disk
└─360050763808182f5fc0000000000000c 254:4 0 10G 0 mpath
└─vg_usr-lv_usr 254:14 0 10G 0 lvm /usr/sap
sdbb 67:80 0 1G 0 disk
└─360050763808182f5fc00000000000031 254:5 0 1G 0 mpath
sdbc 67:96 0 1G 0 disk
└─360050763808182f5fc00000000000036 254:6 0 1G 0 mpath
sdbd 67:112 0 1G 0 disk
└─360050763808182f5fc00000000000037 254:7 0 1G 0 mpath
sdbe 67:128 0 120G 0 disk
├─sdbe1 67:129 0 7M 0 part
├─sdbe2 67:130 0 120G 0 part
└─360050763808182f5fc0000000000000d 254:0 0 120G 0 mpath
├─360050763808182f5fc0000000000000d-part1
│ 254:8 0 7M 0 part
└─360050763808182f5fc0000000000000d-part2
254:9 0 120G 0 part
├─system-root 254:10 0 60G 0 lvm /
└─system-swap 254:11 0 60G 0 lvm [SWAP]
sdbf 67:144 0 4.6T 0 disk
└─360050763808182f5fc00000000000009 254:1 0 4.6T 0 mpath
└─vg_data-lv_data 254:15 0 4.6T 0 lvm /hana/data
sdbg 67:160 0 512G 0 disk
└─360050763808182f5fc0000000000000a 254:2 0 512G 0 mpath
└─vg_log-lv_log 254:13 0 512G 0 lvm /hana/log
sdbh 67:176 0 512G 0 disk
└─360050763808182f5fc0000000000000b 254:3 0 512G 0 mpath
└─vg_shared-lv_shared 254:12 0 512G 0 lvm /hana/shared
sdbi 67:192 0 10G 0 disk
└─360050763808182f5fc0000000000000c 254:4 0 10G 0 mpath
└─vg_usr-lv_usr 254:14 0 10G 0 lvm /usr/sap
sdbj 67:208 0 1G 0 disk
└─360050763808182f5fc00000000000031 254:5 0 1G 0 mpath
sdbk 67:224 0 1G 0 disk
└─360050763808182f5fc00000000000036 254:6 0 1G 0 mpath
sdbl 67:240 0 1G 0 disk
└─360050763808182f5fc00000000000037 254:7 0 1G 0 mpath
mm-001-h4p01:/etc/multipath # multipath -ll
360050763808182f5fc00000000000009 dm-1 IBM,2145
size=4.5T features='1 retain_attached_hw_handler' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 1:0:0:1 sdb 8:16 active ready running
| |- 2:0:0:1 sdr 65:16 active ready running
| |- 3:0:0:1 sdah 66:16 active ready running
| `- 4:0:0:1 sdax 67:16 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
|- 1:0:1:1 sdj 8:144 active ready running
|- 2:0:1:1 sdz 65:144 active ready running
|- 3:0:1:1 sdap 66:144 active ready running
`- 4:0:1:1 sdbf 67:144 active ready running
360050763808182f5fc00000000000037 dm-7 IBM,2145
size=1.0G features='1 retain_attached_hw_handler' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 1:0:1:7 sdp 8:240 active ready running
| |- 2:0:1:7 sdaf 65:240 active ready running
| |- 3:0:1:7 sdav 66:240 active ready running
| `- 4:0:1:7 sdbl 67:240 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
|- 1:0:0:7 sdh 8:112 active ready running
|- 2:0:0:7 sdx 65:112 active ready running
|- 3:0:0:7 sdan 66:112 active ready running
`- 4:0:0:7 sdbd 67:112 active ready running
360050763808182f5fc00000000000036 dm-6 IBM,2145
size=1.0G features='1 retain_attached_hw_handler' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 1:0:0:6 sdg 8:96 active ready running
| |- 2:0:0:6 sdw 65:96 active ready running
| |- 3:0:0:6 sdam 66:96 active ready running
| `- 4:0:0:6 sdbc 67:96 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
|- 1:0:1:6 sdo 8:224 active ready running
|- 2:0:1:6 sdae 65:224 active ready running
|- 3:0:1:6 sdau 66:224 active ready running
`- 4:0:1:6 sdbk 67:224 active ready running
360050763808182f5fc0000000000000d dm-0 IBM,2145
size=120G features='1 retain_attached_hw_handler' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 1:0:0:0 sda 8:0 active ready running
| |- 2:0:0:0 sdq 65:0 active ready running
| |- 3:0:0:0 sdag 66:0 active ready running
| `- 4:0:0:0 sdaw 67:0 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
|- 1:0:1:0 sdi 8:128 active ready running
|- 2:0:1:0 sdy 65:128 active ready running
|- 3:0:1:0 sdao 66:128 active ready running
`- 4:0:1:0 sdbe 67:128 active ready running
360050763808182f5fc0000000000000c dm-4 IBM,2145
size=10G features='1 retain_attached_hw_handler' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 1:0:1:4 sdm 8:192 active ready running
| |- 2:0:1:4 sdac 65:192 active ready running
| |- 3:0:1:4 sdas 66:192 active ready running
| `- 4:0:1:4 sdbi 67:192 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
|- 1:0:0:4 sde 8:64 active ready running
|- 2:0:0:4 sdu 65:64 active ready running
|- 3:0:0:4 sdak 66:64 active ready running
`- 4:0:0:4 sdba 67:64 active ready running
360050763808182f5fc00000000000031 dm-5 IBM,2145
size=1.0G features='1 retain_attached_hw_handler' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 1:0:1:5 sdn 8:208 active ready running
| |- 2:0:1:5 sdad 65:208 active ready running
| |- 3:0:1:5 sdat 66:208 active ready running
| `- 4:0:1:5 sdbj 67:208 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
|- 1:0:0:5 sdf 8:80 active ready running
|- 2:0:0:5 sdv 65:80 active ready running
|- 3:0:0:5 sdal 66:80 active ready running
`- 4:0:0:5 sdbb 67:80 active ready running
360050763808182f5fc0000000000000b dm-3 IBM,2145
size=512G features='1 retain_attached_hw_handler' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 1:0:0:3 sdd 8:48 active ready running
| |- 2:0:0:3 sdt 65:48 active ready running
| |- 3:0:0:3 sdaj 66:48 active ready running
| `- 4:0:0:3 sdaz 67:48 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
|- 1:0:1:3 sdl 8:176 active ready running
|- 2:0:1:3 sdab 65:176 active ready running
|- 3:0:1:3 sdar 66:176 active ready running
`- 4:0:1:3 sdbh 67:176 active ready running
360050763808182f5fc0000000000000a dm-2 IBM,2145
size=512G features='1 retain_attached_hw_handler' hwhandler='1 alua' wp=rw
|-+- policy='service-time 0' prio=50 status=active
| |- 1:0:1:2 sdk 8:160 active ready running
| |- 2:0:1:2 sdaa 65:160 active ready running
| |- 3:0:1:2 sdaq 66:160 active ready running
| `- 4:0:1:2 sdbg 67:160 active ready running
`-+- policy='service-time 0' prio=10 status=enabled
|- 1:0:0:2 sdc 8:32 active ready running
|- 2:0:0:2 sds 65:32 active ready running
|- 3:0:0:2 sdai 66:32 active ready running
`- 4:0:0:2 sday 67:32 active ready running
jsmeix commented at 2022-05-25 07:01:¶
@markbertolin
first and foremost:
I am not at all a multipath or SAN expert.
I have no personal experience with multipath or SAN and
I cannot reproduce multipath or SAN issues on my homeoffice laptop
so all I can do is offer generic help from what I see in the ReaR code
and from what I can imagine based on the information you show here.
As far as I see you have two systems:
- mm-001-h4p01 is your original system according to your https://github.com/rear/rear/issues/2812#issuecomment-1135864534
- mm-001-hbp01 seems to be your replacement system
On the original system you would normally run "rear mkbackup".
Then you would normally boot the ReaR recovery system
on your replacement hardware, log in as 'root' and
run "rear recover" on your replacement hardware.
But your "rear mkbackup" log file
https://github.com/rear/rear/files/8755935/rear-mm-001-hbp01.log
seems to be from "rear mkbackup" on your replacement system
so I do not understand what that actually is.
Your "rear mkbackup" log file
https://github.com/rear/rear/files/8755935/rear-mm-001-hbp01.log
contains (excerpts)
egrep " echo 'disk | echo 'multipath | echo 'part " rear-mm-001-hbp01.log | cut -d "'" -f2
multipath /dev/mapper/360050763808182f5fc0000000000003a 1073741824 /dev/sdaf,/dev/sdan,/dev/sdav,/dev/sdbd,/dev/sdbl,/dev/sdh,/dev/sdp,/dev/sdx
multipath /dev/mapper/360050763808182f5fc0000000000001c 10737418240 /dev/sdac,/dev/sdak,/dev/sdas,/dev/sdba,/dev/sdbi,/dev/sde,/dev/sdm,/dev/sdu
multipath /dev/mapper/360050763808182f5fc00000000000039 1073741824 /dev/sdae,/dev/sdam,/dev/sdau,/dev/sdbc,/dev/sdbk,/dev/sdg,/dev/sdo,/dev/sdw
multipath /dev/mapper/360050763808182f5fc0000000000001b 128849018880 /dev/sda,/dev/sdag,/dev/sdao,/dev/sdaw,/dev/sdbe,/dev/sdi,/dev/sdq,/dev/sdy
part /dev/mapper/360050763808182f5fc0000000000001b 7340032 1048576 primary boot,prep /dev/mapper/360050763808182f5fc0000000000001b-part1
part /dev/mapper/360050763808182f5fc0000000000001b 128840630272 8388608 primary lvm /dev/mapper/360050763808182f5fc0000000000001b-part2
multipath /dev/mapper/360050763808182f5fc00000000000038 1073741824 /dev/sdad,/dev/sdal,/dev/sdat,/dev/sdbb,/dev/sdbj,/dev/sdf,/dev/sdn,/dev/sdv
multipath /dev/mapper/360050763808182f5fc0000000000001a 137438953472 /dev/sdab,/dev/sdaj,/dev/sdar,/dev/sdaz,/dev/sdbh,/dev/sdd,/dev/sdl,/dev/sdt
multipath /dev/mapper/360050763808182f5fc00000000000019 137438953472 /dev/sdaa,/dev/sdai,/dev/sdaq,/dev/sday,/dev/sdbg,/dev/sdc,/dev/sdk,/dev/sds
multipath /dev/mapper/360050763808182f5fc00000000000018 137438953472 /dev/sdah,/dev/sdap,/dev/sdax,/dev/sdb,/dev/sdbf,/dev/sdj,/dev/sdr,/dev/sdz
so this should be the disk, multipath, and part entries
in your /var/lib/rear/layout/disklayout.conf file
(you don't have normal disk entries because you only have
multipath).
What confuses me are the non-matching WWIDs as far as I see.
None of the /dev/mapper/WWIDs in the disklayout.conf
match the WWIDs that are shown in the 'lsblk' output in
https://github.com/rear/rear/issues/2812#issuecomment-1135864534
which are
360050763808182f5fc00000000000009
360050763808182f5fc0000000000000a
360050763808182f5fc0000000000000b
360050763808182f5fc0000000000000c
360050763808182f5fc0000000000000d
360050763808182f5fc00000000000031
360050763808182f5fc00000000000036
360050763808182f5fc00000000000037
so it seems the disklayout.conf does not match the original system?
But I am not a multipath expert so I may confuse things here.
jsmeix commented at 2022-05-25 07:09:¶
@pcahyna
thank you for your help here - I really need it!
I have a question:
I failed to find out by Googling for "multipath WWID"
how the WWID in /dev/mapper/WWID is determined.
All the documentation I found only says that a WWID is set,
but I didn't find an explanation of how the WWID is determined.
Is the WWID autogenerated from scratch as a random number,
or is the WWID a real existing value from the disk hardware?
For example on my homeoffice laptop I have
# lsblk -ipdo KNAME,WWN,PTUUID,PARTUUID,UUID /dev/sda
KNAME WWN PTUUID PARTUUID UUID
/dev/sda 0x5000039462b83c55 9be0015e-cf90-4c6b-80ac-c4ea89832553
# /usr/lib/udev/scsi_id -gud /dev/sda
35000039462b83c55
# cat /sys/block/sda/device/wwid
naa.5000039462b83c55
so what scsi_id shows matches the lsblk WWN of my /dev/sda
and that WWN matches the /sys/block/sda/device/wwid content
(I do not understand the leading 0x versus 3 versus naa.
differences)
but I don't know if that WWN/WWID is a random number
that is autogenerated by some software (e.g. the kernel)
or if it is a real existing value from my disk hardware?
pcahyna commented at 2022-05-25 07:29:¶
@jsmeix
so it seems the disklayout.conf does not match the original system?
indeed:
I restored on a different system, with different new disks.
and I suspect this may be part of the problem - I restored PowerVM LPARs
with multipath many times with success (not with VIOS, but that should
not matter too much), but always on the same machine and I am not sure
whether MIGRATION_MODE can cope with WWID changes in this situation
(different hardware, different disks).
Concerning
but I didn't find explained how the WWID is determined.
Is the WWID autogenerated from scratch as a random number
or is the WWID a real existing value from the disk hardware?
-- it is a real existing value from the hardware, similar to a MAC address for
NICs. There are several types of such persistent identifiers on disks
and I must admit I am not a big expert on this topic, so I can't answer
your question on the leading 3 or naa (but hopefully this is not very
important for understanding the problem in question).
pcahyna commented at 2022-05-25 07:33:¶
Also WWIDs are not that much related to multipath, it's just that without multipath you can mostly ignore them, with multipath you need them (or some other persistent identifier), because you need to tell which device nodes correspond to the same physical device.
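For instance, which device nodes belong to the same physical device can be listed with a small loop like the following sketch (device names are only examples; it relies on the same /sys wwid attribute shown above):
# Print "WWID device" for every SCSI disk and sort, so that all /dev/sdX paths
# sharing one WWID (i.e. belonging to the same LUN) end up grouped together:
for dev in /sys/block/sd*; do
    printf '%s %s\n' "$(cat "$dev/device/wwid")" "/dev/${dev##*/}"
done | sort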
jsmeix commented at 2022-05-25 07:33:¶
@pcahyna
thank you so much!
I don't need hardware specific details.
That the WWID/WWN is a hardware value is the important information.
So different hardware (in particular different disks)
results in different WWID/WWN values, so the WWID/WWN values
from the original system for multipath in disklayout.conf
cannot match different hardware (in particular different disks)
and therefore "rear recover" cannot "just work".
pcahyna commented at 2022-05-25 07:39:¶
In RHEL, multipath seems to be configured by default to name devices like
mpatha, mpathb, ..., without embedding the WWID in the device name,
so part of the problem is avoided there. Still, I am not sure it would
work properly, because one also needs to transform WWIDs in the
/etc/multipath/* config files.
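Such a transformation could be as simple as the following sketch (the WWID pair is only an illustrative example taken from the outputs in this issue, and /mnt/local is assumed to be where "rear recover" restores the files):
old=360050763808182f5fc0000000000000d    # WWID recorded on the original system
new=360050763808102f5240000000000007e    # WWID of the corresponding new LUN
# Rewrite the restored multipath bookkeeping files in place
sed -i "s/$old/$new/g" /mnt/local/etc/multipath/wwids /mnt/local/etc/multipath/bindings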
jsmeix commented at 2022-05-25 07:51:¶
@markbertolin
as far as we (@pcahyna and @jsmeix) found out
you do a migration onto different hardware with ReaR.
Migration onto different hardware does not "just work".
For some sufficiently simple cases migration onto
only a bit different hardware could be even "relatively easy"
via some basic user dialogs in ReaR's MIGRATION_MODE.
In general regarding different hardware:
When you do not have fully compatible replacement hardware
then recreating the system becomes what we call a MIGRATION.
Cf. "Fully compatible replacement hardware is needed" in
https://en.opensuse.org/SDB:Disaster_Recovery
Migrating a system onto somewhat different hardware
will usually not work "out of the box" with ReaR, see
MIGRATION_MODE in default.conf currently online at
https://github.com/rear/rear/blob/master/usr/share/rear/conf/default.conf#L397
Regarding migration to a system with a bit smaller or a bit bigger
disk
see in conf/default.conf the description of the config variables
AUTORESIZE_PARTITIONS
AUTORESIZE_EXCLUDE_PARTITIONS
AUTOSHRINK_DISK_SIZE_LIMIT_PERCENTAGE
AUTOINCREASE_DISK_SIZE_THRESHOLD_PERCENTAGE
I recommend not using AUTORESIZE_PARTITIONS="yes"
with layout/prepare/default/430_autoresize_all_partitions.sh
because that may result in badly aligned partitions, in particular
badly aligned for what flash memory based disks (i.e. SSDs) need:
they usually need a 4MiB or 8MiB alignment (a too small value
results in lower speed and a shorter lifetime of flash memory devices),
see the comment at USB_PARTITION_ALIGN_BLOCK_SIZE
in default.conf
In general regarding system migration with ReaR
(e.g. to a system with substantially different disk size):
In general migrating a system onto different hardware
(where "hardware" could be also a virtual machine)
does not "just work", cf. "Inappropriate expectations" in
https://en.opensuse.org/SDB:Disaster_Recovery
In sufficiently simple cases it may "just work" but in general
do not expect too much built-in intelligence from a program
(written in plain bash which is not a programming language
that is primarily meant for artificial intelligence ;-)
that would do the annoying legwork for you.
In general ReaR is first and foremost meant to recreate
a system as much as possible exactly as it was before
on as much as possible same replacement hardware.
jsmeix commented at 2022-05-25 07:55:¶
Current ReaR does not support migrating WWIDs.
So migrating WWIDs needs to be done manually,
which means all values in disklayout.conf need to be
manually adapted to match the new hardware,
i.e. not only WWIDs but also all other values
that do not match the new hardware need to be
manually adapted.
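For example, a minimal sketch of such a manual adaptation, done in the booted ReaR recovery system before the layout is recreated (the WWID pair is only an example from the outputs in this issue; every WWID in disklayout.conf that does not match the new LUNs needs its own replacement):
layout=/var/lib/rear/layout/disklayout.conf
old=360050763808182f5fc0000000000000d    # WWID recorded on the original system
new=360050763808102f5240000000000007e    # WWID of the corresponding new LUN
cp "$layout" "$layout.orig"              # keep a copy of the unmodified layout file
sed -i "s/$old/$new/g" "$layout"
grep '^multipath ' "$layout"             # verify the multipath entries now match the new LUNs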
jsmeix commented at 2022-05-25 08:09:¶
Even though I am not at all a SAN storage expert
I am thinking about the following:
On the one hand ReaR supports bare metal recovery.
For example ReaR supports recreating a system
on new compatible hardware with a new built-in disk
(provided the new disk is not substantially smaller).
On the other hand it seems ReaR does not support
recreating a system on a new PowerVM LPAR
with new SAN disks.
I think the difference is that ReaR is only meant
to recreate local disks but ReaR is not meant
to recreate SAN storage.
I think the reason is that in general ReaR is not meant
to recreate any remote things.
pcahyna commented at 2022-05-25 08:14:¶
When you do not have fully compatible replacement hardware
then recreating the system becomes what we call a MIGRATION.
The problem is, if your disk(s) die, and you replace them with a
perfectly compatible disk(s), the WWIDs still change and there is no way
to get the same WWIDs as before. So this case should be handled somehow
even without MIGRATION_MODE, otherwise ReaR does not fulfill even
"appropriate expectations".
pcahyna commented at 2022-05-25 08:16:¶
Note that you don't need a SAN to obtain a multipath setup. It is enough to have e.g. SAS disks and connect them using both ports (SAS disks are dual-port, at least those I saw).
jsmeix commented at 2022-05-25 08:28:¶
@pcahyna
regarding your
https://github.com/rear/rear/issues/2812#issuecomment-1136932550
if your disk(s) die, and you replace them
with a perfectly compatible disk(s),
the WWIDs still change and there is no way
to get the same WWIDs as before
I think this is the reason why ReaR normally
does not store WWID/WWN values in disklayout.conf
except in the case of multipath, when the default
/dev/mapper/WWID device names are used.
I think ReaR should support recreating a system
on a new PowerVM LPAR with new SAN disks
when the new SAN and multipath disk layout
is the same as it was on the original system,
i.e. the same number of disks with the same sizes
and the same multipath structure.
This would be an enhancement for a future ReaR version.
didacog commented at 2022-05-25 08:31:¶
Hello, just trying to help here. We've had good results with MPIO setup migrations, but we use this base configuration for SAN boot:
...
AUTOEXCLUDE_MULTIPATH=n
BOOT_OVER_SAN=y
MODULES=( ${MODULES[@]} dm-multipath )
MODULES_LOAD=( ${MODULES_LOAD[@]} dm-multipath )
...
On the other hand, this is ReaR 2.4, so I'd suggest trying ReaR 2.6, as this is SLES 12 SP3.
Hope this can help.
Kind regards,
Didac
pcahyna commented at 2022-05-25 08:34:¶
I will need to look, because it is possible that it actually works for
this case already, e.g. by regenerating /etc/multipath/wwids
dynamically, or by transforming them with sed, or something similar,
especially if WWIDs are not used as device names under /dev/mapper (but
the sed transformation could handle even this case, because it could
transform even disklayout.conf itself). Maybe it is something that
was fixed in ReaR 2.6, as @didacog suggests?
jsmeix commented at 2022-05-25 08:56:¶
My totally offhand thinking is
that migrating WWID/WWN values in disklayout.conf
could be done similarly to migrating disk device nodes
(like '/dev/sda' -> '/dev/sdb' and '/dev/sdb' -> '/dev/sda')
in disklayout.conf, which we do via the function
apply_layout_mappings() in lib/layout-functions.sh
that is called primarily in
layout/prepare/default/320_apply_mappings.sh
and also in
finalize/GNU/Linux/250_migrate_disk_devices_layout.sh
finalize/GNU/Linux/260_rename_diskbyid.sh
Additionally there could be a user dialog to map WWID/WWN values, as in
layout/prepare/default/300_map_disks.sh,
so the user has the final say about the mapping.
There is something about 'multipath' in
layout/prepare/default/300_map_disks.sh
but I never tested how it behaves with multipath
because I do not use SAN or multipath.
jsmeix commented at 2022-05-25 09:13:¶
During "rear recover" multipath is activated
in the ReaR recovery system with a basic setup via
layout/prepare/GNU/Linux/210_load_multipath.sh
which in particular loads the dm-multipath kernel module.
A "rear -D recover" debug log file could help to see
what actually happened during "rear recover".
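For reference, the basic setup that script performs corresponds roughly to these manual steps inside the recovery system (a sketch, not the actual script code):
modprobe dm-multipath    # load the device-mapper multipath kernel module
multipath -v2            # scan the SCSI paths and create the multipath maps
multipathd               # start the multipath daemon to monitor the paths
multipath -ll            # check that every LUN shows up as one map with all its paths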
pcahyna commented at 2022-05-25 10:49:¶
As the first step I believe the current Git master code, or at least 2.6, should be tested, because it may behave differently or even fix the issue.
jsmeix commented at 2022-05-25 10:58:¶
@markbertolin
see the sections
"Testing current ReaR upstream GitHub master code"
and
"Debugging issues with Relax-and-Recover"
in
https://en.opensuse.org/SDB:Disaster_Recovery
markbertolin commented at 2022-05-26 08:17:¶
Hi,
many thanks for your updates... I will try and test all of it, please stand by...
Marco
markbertolin commented at 2022-05-26 08:21:¶
PS: the Power systems are twins, the systems are identical hardware.
markbertolin commented at 2022-06-01 08:54:¶
Hi,
I have tried several times... multipath comes up but some paths were
dead/unknown:
RESCUE mm-001-hbp02:~ # multipath -d
: 360050763808102f5240000000000008a undef IBM,2145
size=128G features='0' hwhandler='1 alua' wp=undef
|-+- policy='service-time 0' prio=50 status=undef
| |- 1:0:0:1 sdc 8:32 active ready running
| `- 2:0:0:1 sdd 8:48 active ready running
`-+- policy='service-time 0' prio=10 status=undef
|- 1:0:1:1 sds 65:32 undef ready running
`- 2:0:1:1 sdt 65:48 undef ready running
In the /var/lib/rear/layout/disklayout.conf file I also have:
#multipath /dev/mapper/360050763808102f52400000000000086 1073741824 unknown /dev/sdac,/dev/sdad,/dev/sdm,/dev/sdn
#multipath /dev/mapper/360050763808102f52400000000000085 1073741824 unknown /dev/sdaa,/dev/sdab,/dev/sdk,/dev/sdl
multipath /dev/mapper/360050763808102f5240000000000008b 10737418240 unknown /dev/sdi,/dev/sdj,/dev/sdy,/dev/sdz
multipath /dev/mapper/360050763808102f5240000000000008a 137438953472 unknown /dev/sdd,/dev/sde,/dev/sds,/dev/sdt
multipath /dev/mapper/360050763808102f52400000000000089 137438953472 unknown /dev/sdb,/dev/sdf,/dev/sdu,/dev/sdv
multipath /dev/mapper/360050763808102f52400000000000088 137438953472 unknown /dev/sdg,/dev/sdh,/dev/sdw,/dev/sdx
#multipath /dev/mapper/360050763808102f52400000000000087 1073741824 unknown /dev/sdae,/dev/sdaf,/dev/sdo,/dev/sdp
multipath /dev/mapper/360050763808102f5240000000000007e 128849018880 msdos /dev/sda,/dev/sdc,/dev/sdq,/dev/sdr
part /dev/mapper/360050763808102f5240000000000007e 7340032 2097152 primary boot,prep /dev/mapper/360050763808102f5240000000000007e-part1
part /dev/mapper/360050763808102f5240000000000007e 128839577600 9441280 primary lvm /dev/mapper/360050763808102f5240000000000007e-part2
The restore starts well:
RESCUE mm-001-hbp02:~ # rear -v -D recover
Relax-and-Recover 2.6 / 2020-06-17
Running rear recover (PID 17289)
Using log file: /var/log/rear/rear-mm-001-hbp02.log
Running workflow recover within the ReaR rescue/recovery system
Starting required daemons for NFS: RPC portmapper (portmap or rpcbind) and rpc.statd if available.
Started RPC portmapper 'rpcbind'.
RPC portmapper 'rpcbind' available.
RPC status rpc.statd available.
Starting rpc.idmapd failed.
Using backup archive '/tmp/rear.KL96sXAHLH28YHH/outputfs/mm-001-hbp02/backup.tar.gz'
Will do driver migration (recreating initramfs/initrd)
Calculating backup archive size
Backup archive size is 33G /tmp/rear.KL96sXAHLH28YHH/outputfs/mm-001-hbp02/backup.tar.gz (compressed)
Setting up multipathing
Activating multipath
multipath activated
Starting multipath daemon
multipathd started
Listing multipath device found
360050763808102f52400000000000086 dm-8 IBM,2145 size=1.0G
360050763808102f52400000000000085 dm-7 IBM,2145 size=1.0G
360050763808102f5240000000000008b dm-6 IBM,2145 size=10G
360050763808102f5240000000000008a dm-3 IBM,2145 size=128G
360050763808102f52400000000000089 dm-4 IBM,2145 size=128G
360050763808102f52400000000000088 dm-5 IBM,2145 size=128G
360050763808102f52400000000000087 dm-9 IBM,2145 size=1.0G
360050763808102f5240000000000007e dm-0 IBM,2145 size=120G
Enforced manual disk layout configuration (MIGRATION_MODE is 'true')
Using /dev/mapper/360050763808102f5240000000000008b (same name and same size) for recreating /dev/mapper/360050763808102f5240000000000008b
Using /dev/mapper/360050763808102f5240000000000008a (same name and same size) for recreating /dev/mapper/360050763808102f5240000000000008a
Using /dev/mapper/360050763808102f52400000000000089 (same name and same size) for recreating /dev/mapper/360050763808102f52400000000000089
Using /dev/mapper/360050763808102f52400000000000088 (same name and same size) for recreating /dev/mapper/360050763808102f52400000000000088
Using /dev/mapper/360050763808102f5240000000000007e (same name and same size) for recreating /dev/mapper/360050763808102f5240000000000007e
Current disk mapping table (source => target):
/dev/mapper/360050763808102f5240000000000008b => /dev/mapper/360050763808102f5240000000000008b
/dev/mapper/360050763808102f5240000000000008a => /dev/mapper/360050763808102f5240000000000008a
/dev/mapper/360050763808102f52400000000000089 => /dev/mapper/360050763808102f52400000000000089
/dev/mapper/360050763808102f52400000000000088 => /dev/mapper/360050763808102f52400000000000088
/dev/mapper/360050763808102f5240000000000007e => /dev/mapper/360050763808102f5240000000000007e
but after that it falls down at:
UserInput -I LAYOUT_MIGRATION_CONFIRM_MAPPINGS needed in /usr/share/rear/layout/prepare/default/300_map_disks.sh line 275
I attach the log files: [HMCLogs-31052022-2.txt](https://github.com/rear/rear/files/8812649/HMCLogs-31052022-2.txt)
My multipath configuration is:
RESCUE mm-001-hbp02:~ # cat /etc/multipath.conf
defaults {
user_friendly_names no
}
devices {
device {
vendor "IBM"
product "2145"
path_grouping_policy group_by_prio
prio "alua"
path_checker "tur"
path_selector "service-time 0"
failback "immediate"
rr_weight "priorities"
no_path_retry "fail"
rr_min_io_rq 10
dev_loss_tmo 600
fast_io_fail_tmo 5
}
}
Could you help me please?
Thanks and regards,
Marco
pcahyna commented at 2022-06-01 12:03:¶
@markbertolin I don't see the log files... and how did it "fall down"?
UserInput seems to indicate that it merely needs confirmation from you
(on the console).
By the way, it is interesting that the multipath device names like
360050763808102f52400000000000087 did not change (if you are restoring
to different disks, the WWIDs should be different).
pcahyna commented at 2022-06-01 12:10:¶
Actually, have the WWIDs changed or not? You say
In the /var/lib/rear/layout/disklayout.conf file I also have:
#multipath /dev/mapper/360050763808102f52400000000000086 1073741824 unknown /dev/sdac,/dev/sdad,/dev/sdm,/dev/sdn
#multipath /dev/mapper/360050763808102f52400000000000085 1073741824 unknown /dev/sdaa,/dev/sdab,/dev/sdk,/dev/sdl
multipath /dev/mapper/360050763808102f5240000000000008b 10737418240 unknown /dev/sdi,/dev/sdj,/dev/sdy,/dev/sdz
multipath /dev/mapper/360050763808102f5240000000000008a 137438953472 unknown /dev/sdd,/dev/sde,/dev/sds,/dev/sdt
but in your original storage layout I don't see any
360050763808102f524 devices. Where does this
/var/lib/rear/layout/disklayout.conf come from? Is it how it is on the
original system? If so, why does it show different devices than in your
comment https://github.com/rear/rear/issues/2812#issuecomment-1135111644 ?
jsmeix commented at 2022-06-01 13:09:¶
@markbertolin
can you show us the output of the command
# lsblk -ipo NAME,KNAME,PKNAME,TRAN,TYPE,FSTYPE,LABEL,SIZE,MOUNTPOINT,UUID,WWN
on your original system
and also
on your replacement system - you may run that command
inside the booted ReaR recovery system after you have logged in
there as 'root' but without launching "rear recover".
pcahyna commented at 2022-06-01 13:20:¶
@markbertolin I would also be interested in knowing, for your
disklayout.conf snippets, whether they come from the original system or
the replacement system in the rescue environment - I believe that ReaR
does some sed transformation on disklayout.conf, so after running
rear recover the file might be different from the original system.
markbertolin commented at 2022-06-01 16:54:¶
Hi,
attached are the original and rescue log files:
HMCLogs-31052022-2-2.TXT
lsblk_output_original.log
Many thanks for your support!!
Marco
pcahyna commented at 2022-06-02 12:31:¶
@markbertolin Your lsblk_output_original.log does not match the one shown in https://github.com/rear/rear/issues/2812#issuecomment-1135864534 where you said that "The original lsblk and multipath are: ...". So, which one is really the original? Also, can you please answer my question
Where does this /var/lib/rear/layout/disklayout.conf come from? Is it how it is on the original system? If so, why does it show different devices than in your comment https://github.com/rear/rear/issues/2812#issuecomment-1135111644 ?
pcahyna commented at 2022-06-02 12:39:¶
It could also be helpful to have the log file after the disk recreation script has failed ("The disk layout recreation script failed... View 'rear recover' log file (/var/log/rear/rear-mm-001-hbp02.log)")
github-actions commented at 2022-09-06 04:08:¶
Stale issue message
jsmeix commented at 2022-09-19 14:24:¶
Only as a side note FYI:
We already have some obscure WWID migration code in
layout/prepare/default/010_prepare_files.sh
and
finalize/GNU/Linux/250_migrate_lun_wwid.sh
where some LUN_WWID_MAP file $CONFIG_DIR/lun_wwid_mapping.conf
can be used.
But I neither understand that code nor its commits like
https://github.com/rear/rear/commit/e1a704b641e1ae1d92ba1e19dd756e05b128b9b5
and
https://github.com/rear/rear/commit/e822ad69a8ce8dec6132741806008db9c6c3b429
Furthermore I fail to find any documentation
about lun_wwid_mapping or LUN_WWID_MAP
that explains what the idea behind it is.
The above two scripts are the only files in ReaR that
contain lun_wwid_mapping or LUN_WWID_MAP, and
finalize/GNU/Linux/250_migrate_lun_wwid.sh
applies the mapping only to the restored files
etc/elilo.conf (if it exists) and etc/fstab,
so there is no code in ReaR that maps WWIDs
before the storage layout is recreated.
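For illustration only, a comparable hand-rolled mapping over the restored files could look like the sketch below; the mapping file format (one "old new" pair per line) is an assumption of this sketch and not necessarily what 250_migrate_lun_wwid.sh expects, and /mnt/local is assumed to be where "rear recover" restores the files:
# Hypothetical mapping file: one "old_wwid new_wwid" pair per line
cat >/tmp/wwid_map <<'EOF'
360050763808182f5fc0000000000000d 360050763808102f5240000000000007e
EOF
# Rewrite the old WWIDs in the restored fstab (and elilo.conf, if it exists)
while read -r old new ; do
    sed -i "s/$old/$new/g" /mnt/local/etc/fstab
    [ -f /mnt/local/etc/elilo.conf ] && sed -i "s/$old/$new/g" /mnt/local/etc/elilo.conf
done </tmp/wwid_map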
[Export of Github issue for rear/rear.]