#3521 Issue open: "rear -v mkbackup" failed due to raid50 (2 x raid5 (2+1))

Labels: enhancement, support / question

AntonHPE opened issue at 2025-09-15 13:02:

ReaR version

Relax-and-Recover 2.9 / 2025-01-31

Describe the ReaR bug in detail

There is a Fedora Server 42 system configured with two raid5 arrays (2+1).
There are no issues with backup (rear -v mkbackup), destroying the original disks (dd if=/dev/urandom ...) and restoring from the rear backup.

However, if I create a raid50 on top of the two raid5 arrays, i.e. a raid0 on top of two raid5, I get the following issue during the rear backup:

[root@localhost ~]# cat /proc/mdstat
Personalities : [raid0] [raid4] [raid5] [raid6] [raid10]
md125 : active raid0 md127[1] md126[0]
      12501673984 blocks super 1.2 512k chunks

md126 : active raid5 nvme5n1[3] nvme4n1[1] nvme3n1[0]
      6250969088 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
      bitmap: 0/24 pages [0KB], 65536KB chunk

md127 : active raid5 nvme0n1[3] nvme1n1[1] nvme2n1[0]
      6250969088 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
      bitmap: 0/24 pages [0KB], 65536KB chunk

unused devices: <none>
[root@localhost ~]#
[root@localhost ~]# rear -v mkbackup
Relax-and-Recover 2.9 / 2025-01-31
Running rear mkbackup (PID 99834 date 2025-09-15 15:58:12)
Using log file: /var/log/rear/rear-localhost.log
Running workflow mkbackup on the normal/original system
Using backup archive '/var/tmp/rear.K1XsW8niaF1J4qu/outputfs/localhost/backup.tar.zst'
Using UEFI Boot Loader for Linux (USING_UEFI_BOOTLOADER=1)
Using autodetected kernel '/boot/vmlinuz-6.16.7-200.fc42.x86_64' as kernel in the recovery system
Creating disk layout
Overwriting existing disk layout file /var/lib/rear/layout/disklayout.conf
Ignoring nvme0c0n1: /dev/nvme0c0n1 is not a block device
Ignoring nvme1c1n1: /dev/nvme1c1n1 is not a block device
Ignoring nvme2c2n1: /dev/nvme2c2n1 is not a block device
Ignoring nvme3c3n1: /dev/nvme3c3n1 is not a block device
Ignoring nvme4c4n1: /dev/nvme4c4n1 is not a block device
Ignoring nvme5c5n1: /dev/nvme5c5n1 is not a block device
Ignoring nvme6c6n1: /dev/nvme6c6n1 is not a block device
ERROR: No component devices for RAID /dev/md125
Some latest log messages since the last called script 210_raid_layout.sh:
  2025-09-15 15:58:13.703530951 Saving Software RAID configuration
Some messages from /var/tmp/rear.K1XsW8niaF1J4qu/tmp/rear.mkbackup.stdout_stderr since the last called script 210_raid_layout.sh:
  /usr/share/rear/layout/save/GNU/Linux/210_raid_layout.sh: line 298: test: -gt: unary operator expected
  Error: /dev/md127: unrecognised disk label
  /usr/share/rear/layout/save/GNU/Linux/210_raid_layout.sh: line 298: test: -gt: unary operator expected
  Error: /dev/md126: unrecognised disk label
Use debug mode '-d' for some debug messages or debugscript mode '-D' for full debug messages with 'set -x' output
Error exit of rear mkbackup (PID 99834) and its descendant processes
Exiting subshell 1 (where the actual error happened)
Aborting due to an error, check /var/log/rear/rear-localhost.log for details
Exiting rear mkbackup (PID 99834) and its descendant processes ...
Running exit tasks
Terminated
[root@localhost ~]#
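
For context on the "unrecognised disk label" messages in the output above: parted reports this for any block device that carries no partition table, which is the case for the inner md devices here because they are raw RAID members of /dev/md125. A minimal way to see the same message standalone (assuming the same device names as above):

  # parted prints "unrecognised disk label" for a device without a partition table
  parted -s /dev/md126 print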

Platform

Linux x64

OS version

[root@localhost ~]# cat /etc/os-release
NAME="Fedora Linux" VERSION="42 (Server Edition)"
RELEASE_TYPE=stable ID=fedora VERSION_ID=42
VERSION_CODENAME="" 
PLATFORM_ID="platform:f42" 
PRETTY_NAME="Fedora Linux 42 (Server Edition)"

Backup

NETFS

Storage layout

[root@localhost ~]# lsblk -ipo NAME,KNAME,PKNAME,TRAN,TYPE,FSTYPE,LABEL,SIZE,MOUNTPOINT
NAME                        KNAME          PKNAME         TRAN   TYPE  FSTYPE            LABEL                          SIZE MOUNTPOINT
/dev/zram0                  /dev/zram0                           disk  swap              zram0                            8G [SWAP]
/dev/nvme6n1                /dev/nvme6n1                  nvme   disk                                                 447.1G
|-/dev/nvme6n1p1            /dev/nvme6n1p1 /dev/nvme6n1   nvme   part  vfat                                             600M /boot/efi
|-/dev/nvme6n1p2            /dev/nvme6n1p2 /dev/nvme6n1   nvme   part  xfs                                                1G /boot
`-/dev/nvme6n1p3            /dev/nvme6n1p3 /dev/nvme6n1   nvme   part  LVM2_member                                    445.5G
  `-/dev/mapper/fedora-root /dev/dm-0      /dev/nvme6n1p3        lvm   xfs                                               15G /
/dev/nvme3n1                /dev/nvme3n1                  nvme   disk  linux_raid_member localhost.localdomain:raid52   2.9T
`-/dev/md126                /dev/md126     /dev/nvme3n1          raid5 linux_raid_member localhost.localdomain:raid50   5.8T
  `-/dev/md125              /dev/md125     /dev/md126            raid0                                                 11.6T
/dev/nvme2n1                /dev/nvme2n1                  nvme   disk  linux_raid_member localhost.localdomain:raid51   2.9T
`-/dev/md127                /dev/md127     /dev/nvme2n1          raid5 linux_raid_member localhost.localdomain:raid50   5.8T
  `-/dev/md125              /dev/md125     /dev/md127            raid0                                                 11.6T
/dev/nvme1n1                /dev/nvme1n1                  nvme   disk  linux_raid_member localhost.localdomain:raid51   2.9T
`-/dev/md127                /dev/md127     /dev/nvme1n1          raid5 linux_raid_member localhost.localdomain:raid50   5.8T
  `-/dev/md125              /dev/md125     /dev/md127            raid0                                                 11.6T
/dev/nvme4n1                /dev/nvme4n1                  nvme   disk  linux_raid_member localhost.localdomain:raid52   2.9T
`-/dev/md126                /dev/md126     /dev/nvme4n1          raid5 linux_raid_member localhost.localdomain:raid50   5.8T
  `-/dev/md125              /dev/md125     /dev/md126            raid0                                                 11.6T
/dev/nvme5n1                /dev/nvme5n1                  nvme   disk  linux_raid_member localhost.localdomain:raid52   2.9T
`-/dev/md126                /dev/md126     /dev/nvme5n1          raid5 linux_raid_member localhost.localdomain:raid50   5.8T
  `-/dev/md125              /dev/md125     /dev/md126            raid0                                                 11.6T
/dev/nvme0n1                /dev/nvme0n1                  nvme   disk  linux_raid_member localhost.localdomain:raid51   2.9T
`-/dev/md127                /dev/md127     /dev/nvme0n1          raid5 linux_raid_member localhost.localdomain:raid50   5.8T
  `-/dev/md125              /dev/md125     /dev/md127            raid0                                                 11.6T
[root@localhost ~]#

What steps will reproduce the bug?

Create a raid50 or raid60 on top of two raid5 or raid6 arrays and run "rear -v mkbackup", for example as sketched below.
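
A minimal sketch of such a stacked setup, assuming six unused NVMe devices with the same names as in the report (illustrative commands, not an exact copy of the original ones):

  # two inner raid5 arrays (2+1 each)
  mdadm --create /dev/md126 --level=5 --raid-devices=3 /dev/nvme3n1 /dev/nvme4n1 /dev/nvme5n1
  mdadm --create /dev/md127 --level=5 --raid-devices=3 /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1
  # raid0 on top of the two raid5 arrays, i.e. raid50
  mdadm --create /dev/md125 --level=0 --raid-devices=2 /dev/md126 /dev/md127
  # then run the backup
  rear -v mkbackup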

Workaround, if any

No response

Additional information

No response

jsmeix commented at 2025-09-19 09:32:

As far as I remember we never had a use case
where basic RAID arrays are "stacked"
to create a higher level RAID array.

I didn't investigate the details,
but from what I see at first glance
in the RAID related ReaR scripts
usr/share/rear/layout/save/GNU/Linux/210_raid_layout.sh
(which is run during "rear mkrescue/mkbackup")
and its counterpart
usr/share/rear/layout/prepare/GNU/Linux/120_include_raid_code.sh
(which is run during "rear recover"),
I think that ReaR does not support RAID levels
which are not directly supported by a single mdadm command,
so I think ReaR does not support "stacked RAID arrays"
that are created by stacking basic RAID arrays.
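
As a side note, the "test: -gt: unary operator expected" message that the log attributes to line 298 of 210_raid_layout.sh is the classic symptom of an unquoted, empty variable in a bash numeric test. A minimal illustration of that failure mode (a generic sketch, not the actual ReaR code):

  ndevices=""                        # empty, e.g. because no component devices were recognized
  test $ndevices -gt 1 && echo many  # -> "test: -gt: unary operator expected"
  test "${ndevices:-0}" -gt 1        # quoting plus a default value avoids the error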

You may investigate what you get
after "rear mkrescue/mkbackup" in your
var/lib/rear/layout/disklayout.conf
and
var/lib/rear/layout/diskdeps.conf
files.

If what you get in disklayout.conf looks correct
but the dependencies are not properly listed in diskdeps.conf
then ReaR's dependency tracker may fail to track the
dependencies between higher and lower level RAID arrays.
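
A minimal sketch of how to inspect that, assuming the default paths mentioned above and comparing against what mdadm itself reports:

  # layout files written (possibly only partially) by "rear mkrescue/mkbackup"
  cat /var/lib/rear/layout/disklayout.conf
  cat /var/lib/rear/layout/diskdeps.conf
  # what mdadm knows about the stacked arrays
  mdadm --detail /dev/md125 /dev/md126 /dev/md127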

From my current point of view this issue is not a bug
but rather not yet implemented support for special functionality.

jsmeix commented at 2025-09-19 09:43:

Or in other words:

The current error exit of "rear mkrescue/mkbackup"
basically proves that ReaR currently does not support
the case where RAID array component devices are not "normal" disks.

AntonHPE commented at 2025-09-20 18:30:

Up to 8 disks are recommended for an md raid5.
However, even 2U servers can be equipped with tens of disks, so stacked raid50 or raid60 arrays are very useful.

Ok, I agree that this is not a bug.

I'm new to GitHub.
Could you please move this thread to the "new feature request" category?


[Export of Github issue for rear/rear.]