#2333 Issue closed: Mapping disks doesn't map corresponding XFS information file¶
Labels: enhancement, bug, fixed / solved / done
rmetrich opened issue at 2020-02-17 15:36:¶
- ReaR version ("/usr/sbin/rear -V"): Latest Upstream
- OS version ("cat /etc/rear/os.conf" or "lsb_release -a" or "cat /etc/os-release"): RHEL 7
- ReaR configuration files ("cat /etc/rear/site.conf" and/or "cat /etc/rear/local.conf"): N/A
- Hardware (PC or PowerNV BareMetal or ARM) or virtual machine (KVM guest or PowerVM LPAR): ALL
- System architecture (x86 compatible or PPC64/PPC64LE or what exact ARM device): ALL
- Firmware (BIOS or UEFI or Open Firmware) and bootloader (GRUB or ELILO or Petitboot): ALL
- Storage (local disk or SSD) and/or SAN (FC or iSCSI or FCoE) and/or multipath (DM or NVMe): ALL
- Storage layout ("lsblk -ipo NAME,KNAME,PKNAME,TRAN,TYPE,FSTYPE,SIZE,MOUNTPOINT" or "lsblk" as makeshift): XFS
- Description of the issue (ideally so that others can reproduce it):
  When a disk mapping is performed and XFS is used, e.g. /dev/sda mapped to /dev/vda, there is no mapping done for
  - the content of /var/lib/rear/layout/xfs/sdaX.xfs
  - the file name /var/lib/rear/layout/xfs/sdaX.xfs, which should be mapped onto /var/lib/rear/layout/xfs/vdaX.xfs
  This makes "rear recover" fail because it does not use the XFS options that should be found in the information file /var/lib/rear/layout/xfs/vdaX.xfs.
- Workaround, if any: the admin must rename the file /var/lib/rear/layout/xfs/sdaX.xfs manually (see the sketch right after this list).
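A minimal sketch of that manual workaround, run inside the rescue system before the disks get recreated ("X" stands for the partition number as in the description above; adjust the names to the actual mapping):
# Rename the saved XFS info file from the original disk name to the mapped
# disk name, so that "rear recover" finds it under the name it looks for:
mv /var/lib/rear/layout/xfs/sdaX.xfs /var/lib/rear/layout/xfs/vdaX.xfs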
Log file excerpt:¶
+++ mkfs.xfs -f -m uuid=93abea19-a62f-4319-b421-8caf4b01e57b /dev/vdb
--> SHOULD HAVE BEEN mkfs.xfs -f -d sunit=1024,swidth=1024 -m uuid=93abea19-a62f-4319-b421-8caf4b01e57b /dev/vdb
...
+++ mount -o rw,relatime,attr2,inode64,sunit=1024,swidth=1024,noquota /dev/vdb /mnt/local/mnt
mount: wrong fs type, bad option, bad superblock on /dev/vdb,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail or so.
Reproducer:¶
- Format an XFS file system with sunit/swidth options (an optional verification example follows this list):
  # mkfs.xfs -d sunit=1024,swidth=1024 /dev/sdb
- Create a ReaR ISO:
  # rear mkrescue
- Recover in a VM with VirtIO instead of SCSI (the disk will be mapped onto /dev/vdb)
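An optional verification step (not part of the original reproducer): before running "rear mkrescue", confirm on the source system that the stripe geometry was actually recorded. Depending on the xfsprogs version, xfs_info may need a mount point rather than the raw device:
# Mount the freshly formatted file system and look for the stripe settings:
mount /dev/sdb /mnt
xfs_info /mnt | grep sunit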
rmetrich commented at 2020-02-17 15:46:¶
Probably /usr/share/rear/layout/prepare/default/320_apply_mappings.sh
should be updated to also migrate the XFS files:
for file_to_migrate in "$LAYOUT_FILE" "$original_disk_space_usage_file" "$rescue_config_file" $VAR_DIR/layout/xfs/*.xfs; do
But this isn't sufficient because the files must also be renamed.
Unfortunately I don't have time for this right now.
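A rough sketch of that additional renaming step (illustration only, not actual ReaR code): it assumes the disk mappings are available as "source target" device pairs, one per line, in the mappings file that ReaR references as $MAPPING_FILE, and it still relies on the partition-style file names that jsmeix questions further down in the thread.
# For every mapped disk, rename the per-partition XFS info files from the
# old disk name to the new one, e.g. sda1.xfs -> vda1.xfs:
while read -r source target junk ; do
    source_base=${source##*/}
    target_base=${target##*/}
    for xfs_file in $VAR_DIR/layout/xfs/${source_base}[0-9]*.xfs ; do
        test -e "$xfs_file" || continue
        mv "$xfs_file" "$VAR_DIR/layout/xfs/${target_base}${xfs_file##*/$source_base}"
    done
done < "$MAPPING_FILE"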
gozora commented at 2020-02-17 21:57:¶
I'll take a look.
V.
jsmeix commented at 2020-02-18 08:19:¶
Cf.
https://github.com/rear/rear/pull/2005#discussion_r242087432
I also set "enhancement" here because the ReaR migration mode
never promised to automatically do 100% of what needs to be done,
for example see things like
https://github.com/rear/rear/issues/1437
https://github.com/rear/rear/issues/2312
https://github.com/rear/rear/issues/1854
...
gozora commented at 2020-02-18 08:25:¶
So far I have the following:
https://github.com/rear/rear/compare/master...gozora:xfs_mapping
The couple of tests I've done so far look good, but I need to do some more
before creating a PR.
V.
jsmeix commented at 2020-02-18 08:42:¶
@rmetrich
another workaround could be using MKFS_XFS_OPTIONS, cf.
https://github.com/rear/rear/blob/master/usr/share/rear/conf/default.conf#L485
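For illustration, such a workaround might look roughly like the excerpt below in /etc/rear/local.conf; the exact variable syntax is defined in default.conf at the link above, so verify it there before using this verbatim (the value is just the stripe geometry from this report):
# Hypothetical local.conf excerpt - passes fixed options to mkfs.xfs instead
# of relying on the saved per-device XFS info files (syntax assumed here):
MKFS_XFS_OPTIONS="-d sunit=1024,swidth=1024"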
@gozora
perhaps the current behaviour that
# The function xfs_parse in lib/filesystems-functions.sh falls back to mkfs.xfs defaults
# (i.e. xfs_parse outputs nothing) when there is no $xfs_info_filename file where the
# XFS filesystem options of that particular device on the original system were saved
https://github.com/rear/rear/blob/master/usr/share/rear/layout/prepare/GNU/Linux/131_include_filesystem_code.sh#L175
is not right in migration mode?
Perhaps in migration mode it should be "some kind of error"
when there is no $VAR_DIR/layout/xfs/$XFS_DEVICE.xfs.
With "some kind of error" I mean there should be at least
a LogPrintError message that tells about it,
or might there perhaps even be a hard Error exit,
or is it perhaps almost always better to create the XFS
with defaults in migration mode when there is no
$VAR_DIR/layout/xfs/$XFS_DEVICE.xfs?
jsmeix commented at 2020-02-18 09:01:¶
@gozora
at first glance I think the current code in your
https://github.com/rear/rear/compare/master...gozora:xfs_mapping
for file in $(ls ${work_file_source}[0-9]*.xfs); do
suffix=${file##$work_file_source}
cp $file ${work_file_target}${suffix}.restore
depends on the assumption that a device node where
XFS is created is always of the form disk_device[0-9]*.
I fear this is perhaps not always the case,
e.g. with higher-level storage objects
like LVM logical volumes or RAID arrays?
For example on one of my current test systems I have
# lsblk -ipo NAME,KNAME,PKNAME,TRAN,TYPE,FSTYPE,SIZE,MOUNTPOINT
NAME KNAME PKNAME TRAN TYPE FSTYPE SIZE MOUNTPOINT
/dev/sda /dev/sda ata disk 20G
|-/dev/sda1 /dev/sda1 /dev/sda part 1G
|-/dev/sda2 /dev/sda2 /dev/sda part linux_raid_member 4G
| `-/dev/md126 /dev/md126 /dev/sda2 raid1 LVM2_member 4G
| `-/dev/mapper/vg15gib-lvroot /dev/dm-0 /dev/md126 lvm ext4 13G /
|-/dev/sda3 /dev/sda3 /dev/sda part linux_raid_member 5G
| `-/dev/md127 /dev/md127 /dev/sda3 raid1 LVM2_member 5G
| `-/dev/mapper/vg15gib-lvroot /dev/dm-0 /dev/md127 lvm ext4 13G /
|-/dev/sda4 /dev/sda4 /dev/sda part linux_raid_member 6G
| `-/dev/md125 /dev/md125 /dev/sda4 raid1 LVM2_member 6G
| |-/dev/mapper/vg15gib-lvroot /dev/dm-0 /dev/md125 lvm ext4 13G /
| `-/dev/mapper/vg15gib-lvhome /dev/dm-1 /dev/md125 lvm ext3 1G /home
`-/dev/sda5 /dev/sda5 /dev/sda part linux_raid_member 2G
`-/dev/md124 /dev/md124 /dev/sda5 raid0 ext2 3G /other
/dev/sdb /dev/sdb ata disk 21G
|-/dev/sdb1 /dev/sdb1 /dev/sdb part 1G
|-/dev/sdb2 /dev/sdb2 /dev/sdb part linux_raid_member 4G
| `-/dev/md126 /dev/md126 /dev/sdb2 raid1 LVM2_member 4G
| `-/dev/mapper/vg15gib-lvroot /dev/dm-0 /dev/md126 lvm ext4 13G /
|-/dev/sdb3 /dev/sdb3 /dev/sdb part linux_raid_member 5G
| `-/dev/md127 /dev/md127 /dev/sdb3 raid1 LVM2_member 5G
| `-/dev/mapper/vg15gib-lvroot /dev/dm-0 /dev/md127 lvm ext4 13G /
|-/dev/sdb4 /dev/sdb4 /dev/sdb part linux_raid_member 6G
| `-/dev/md125 /dev/md125 /dev/sdb4 raid1 LVM2_member 6G
| |-/dev/mapper/vg15gib-lvroot /dev/dm-0 /dev/md125 lvm ext4 13G /
| `-/dev/mapper/vg15gib-lvhome /dev/dm-1 /dev/md125 lvm ext3 1G /home
|-/dev/sdb5 /dev/sdb5 /dev/sdb part linux_raid_member 1G
| `-/dev/md124 /dev/md124 /dev/sdb5 raid0 ext2 3G /other
`-/dev/sdb6 /dev/sdb6 /dev/sdb part swap 2G [SWAP]
that contains in particular (excerpt)
NAME KNAME PKNAME TRAN TYPE FSTYPE SIZE MOUNTPOINT
| `-/dev/mapper/vg15gib-lvroot /dev/dm-0 /dev/md126 lvm ext4 13G /
| `-/dev/mapper/vg15gib-lvroot /dev/dm-0 /dev/md127 lvm ext4 13G /
| |-/dev/mapper/vg15gib-lvroot /dev/dm-0 /dev/md125 lvm ext4 13G /
| `-/dev/mapper/vg15gib-lvhome /dev/dm-1 /dev/md125 lvm ext3 1G /home
`-/dev/md124 /dev/md124 /dev/sda5 raid0 ext2 3G /other
| `-/dev/mapper/vg15gib-lvroot /dev/dm-0 /dev/md126 lvm ext4 13G /
| `-/dev/mapper/vg15gib-lvroot /dev/dm-0 /dev/md127 lvm ext4 13G /
| |-/dev/mapper/vg15gib-lvroot /dev/dm-0 /dev/md125 lvm ext4 13G /
| `-/dev/mapper/vg15gib-lvhome /dev/dm-1 /dev/md125 lvm ext3 1G /home
| `-/dev/md124 /dev/md124 /dev/sdb5 raid0 ext2 3G /other
so neither NAME (like /dev/mapper/vg15gib-lvroot)
nor KNAME (like /dev/dm-0) match disk_device[0-9]*
(I assume disk_device is not dm- but only dm).
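To illustrate the point with a minimal example (not ReaR's actual code, but following the file naming visible elsewhere in this thread, where the XFS info file is named after the basename of the block device): for higher-level storage objects that basename carries no trace of the underlying disk name, e.g. if XFS were created on one of the logical volumes shown above.
# For a plain partition the device basename still starts with the disk name ...
device=/dev/sda1
echo "$VAR_DIR/layout/xfs/${device##*/}.xfs"    # -> .../layout/xfs/sda1.xfs
# ... but for an LVM logical volume it does not, so disk_device[0-9]*
# can never match it:
device=/dev/mapper/vg15gib-lvroot
echo "$VAR_DIR/layout/xfs/${device##*/}.xfs"    # -> .../layout/xfs/vg15gib-lvroot.xfs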
gozora commented at 2020-02-18 09:03:¶
Perhaps in migration mode it should be "some kind of error"
when there is no $VAR_DIR/layout/xfs/$XFS_PARTITION_DEVICE.xfs.
With "some kind of error" I mean there should be at least
a LogPrintError message that tells about it or
might there perhaps even be a hard Error exit
or is it perhaps almost always better to create the XFS
with defaults in migration mode when there is no
$VAR_DIR/layout/xfs/$XFS_PARTITION_DEVICE.xfs?
Let's try with "just a log message" for a start.
V.
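A rough sketch of that "just a log message" approach (illustrative only, not the final code; it reuses the xfs_device_basename naming and the LogPrintError function already mentioned in this thread):
# Warn, but do not abort, when no saved XFS options exist for this device,
# so that the silent fallback to mkfs.xfs defaults does not go unnoticed
# in migration mode:
if ! test -s "$VAR_DIR/layout/xfs/$xfs_device_basename.xfs" ; then
    LogPrintError "No saved XFS options for $xfs_device_basename - creating it with mkfs.xfs defaults"
fi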
gozora commented at 2020-02-18 09:05:¶
@jsmeix thanks for your https://github.com/rear/rear/issues/2333#issuecomment-587351021. Like I said, it needs some more testing ;-). I'll look into that for sure.
V.
jsmeix commented at 2020-02-18 09:25:¶
@rmetrich
according to
https://github.com/rear/rear/blob/master/usr/share/rear/layout/prepare/GNU/Linux/131_include_filesystem_code.sh#L183
# In case of fallback to mkfs.xfs defaults xfs_opts is empty:
contains_visible_char "$xfs_opts" && Log "Using $xfs_device_basename mkfs.xfs options: $xfs_opts" || LogPrint "Using $xfs_device_basename mkfs.xfs defaults"
I would assume there was a message on the terminal where "rear recover" runs
Using vdb mkfs.xfs defaults
before the
Confirm disk recreation script and continue 'rear recover'
user confirmation dialog that one gets in migration mode?
So I assume there was at least some info shown to the user
that could make them aware of such XFS issues?
Or was there no info and things went "silently wrong"?
jsmeix commented at 2020-02-18 09:31:¶
@gozora
the initial comment here already shows an example
mkfs.xfs -d sunit=1024,swidth=1024 /dev/sdb
where the device node name on which the XFS is created
does not match disk_device[0-9]*.
Here it seems the XFS should be created on the raw disk.
gozora commented at 2020-02-18 09:39:¶
@jsmeix
yep, fully agree!
I wanted to have at least some code that could be improved ;-). That is
how I usually work.
I'll try to make it better before the official PR is created.
V.
gozora commented at 2020-02-18 09:44:¶
@jsmeix @rmetrich if you want, you can of course take over and fix this problem yourself (I'll not be angry at all). I've assigned this issue to myself mainly because it's my code that is not working properly ;-).
V.
rmetrich commented at 2020-02-18 09:49:¶
@gozora no no, assign it to yourself, I really don't have any time to
deal with this now.
And thank you already for your involvement.
gozora commented at 2020-02-18 09:51:¶
@jsmeix
My code gets even funnier:
RESCUE suse-efi:/var/lib/rear/layout/xfs # ll
total 48
-rw-r--r-- 1 root root 673 Feb 18 10:55 sdavgdata-lv_data1.xfs.restore
-rw-r--r-- 1 root root 673 Feb 18 10:55 sdavgdata-lv_data2.xfs.restore
-rw-r--r-- 1 root root 673 Feb 18 10:55 sdavgdata-lv_data3.xfs.restore
-rw-r--r-- 1 root root 673 Feb 18 10:55 sdcsdavgdata-lv_data1.xfs.restore.restore
-rw-r--r-- 1 root root 673 Feb 18 10:55 sdcsdavgdata-lv_data2.xfs.restore.restore
-rw-r--r-- 1 root root 673 Feb 18 10:55 sdcsdavgdata-lv_data3.xfs.restore.restore
-rw-r--r-- 1 root root 673 Feb 18 10:55 sdcvgdata-lv_data1.xfs.restore
-rw-r--r-- 1 root root 673 Feb 18 10:55 sdcvgdata-lv_data2.xfs.restore
-rw-r--r-- 1 root root 673 Feb 18 10:55 sdcvgdata-lv_data3.xfs.restore
-rw-r--r-- 1 root root 673 Feb 18 10:34 vgdata-lv_data1.xfs
-rw-r--r-- 1 root root 673 Feb 18 10:34 vgdata-lv_data2.xfs
-rw-r--r-- 1 root root 673 Feb 18 10:34 vgdata-lv_data3.xfs
It will need much more improvement :-)
V.
jsmeix commented at 2020-02-18 10:55:¶
@gozora
my offhanded idea is to not mess around with file names
but to use a restore sub-directory instead.
gozora commented at 2020-02-18 10:57:¶
Do you mean to copy the relevant (*.xfs) files into /var/lib/rear/layout/xfs/restore?
jsmeix commented at 2020-02-18 13:17:¶
Yes, something like that - but it is only an offhanded idea.
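A minimal sketch of that offhanded idea (not a worked-out design): keep the original *.xfs files untouched and let any renaming for disk mappings happen only on copies in a separate restore subdirectory, which also avoids re-processing already-renamed files like the sdcsda*.xfs.restore.restore ones shown above.
# Copy every saved XFS info file into a dedicated "restore" subdirectory;
# the recovery code would then read (and if needed rename) only the copies:
mkdir -p "$VAR_DIR/layout/xfs/restore"
for xfs_file in "$VAR_DIR"/layout/xfs/*.xfs ; do
    test -e "$xfs_file" || continue
    cp "$xfs_file" "$VAR_DIR/layout/xfs/restore/"
done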
gozora commented at 2020-04-15 10:58:¶
Due to lack of time I'm assigning the "ReaR future" label.
V.
gozora commented at 2020-06-15 21:14:¶
Just a status update after a couple of months :-).
Now I have some seemingly working code in
gozora/xfs_new.
So far I've tested migration on standalone (/dev/sd) and dm-multipath
devices, which worked well.
I'd like to test with software RAID in upcoming days.
Code comments will of course follow soon.
Any comments, tests, or recommendations are more than welcome!
V.
gozora commented at 2020-06-18 21:15:¶
I just successfully migrated a VM from VirtualBox to Qemu. The code in gozora/xfs_new is renaming the source configuration files to the destination names just fine. I'll add comments to the new code and create a PR soon.
V.
gozora commented at 2020-06-22 15:35:¶
With #2431 merged, this issue can be closed.
V.
[Export of Github issue for rear/rear.]