#2768 PR merged: Fixed the RAID10 layout support code

Labels: cleanup, fixed / solved / done

jsmeix opened issue at 2022-03-08 15:03:

In layout/save/GNU/Linux/210_raid_layout.sh
the RAID10 layout support code was fixed as far as possible for now.
Currently only 'near=...' support is (and was) implemented, initially via
https://github.com/rear/rear/commit/4dbbb288057189076d82cb8e477cd994a4ce1851

This fix was triggered by ShellCheck
SC2034: param appears unused. Verify use (or export if used externally)
https://github.com/koalaman/shellcheck/wiki/SC2034
SC2066: Since you double quoted this, it will not word split, and the loop will only run once
https://github.com/koalaman/shellcheck/wiki/SC2066
cf. https://github.com/rear/rear/issues/1040#issuecomment-1034870262
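
For illustration only (a minimal sketch, not ReaR code) of what SC2066 means here:
a double quoted word is not split, so the loop body runs exactly once with the whole string:

# layout="near=2,far=1"
# for param in "$layout" ; do echo "got $param" ; done
got near=2,far=1
# for param in ${layout//,/ } ; do echo "got $param" ; done
got near=2
got far=1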

jsmeix commented at 2022-03-14 13:56:

@rear/contributors
I cannot really test it because I don't have a RAID10 test system right now.
I only did some checks with the new code's commands on the command line.
So I would appreciate it if someone could at least have a look.
Perhaps you see something obviously wrong.

pcahyna commented at 2022-03-14 14:09:

@jsmeix I think I will be able to get a RAID10 system for tests, stay tuned please...

jsmeix commented at 2022-03-16 11:48:

@pcahyna
a question regarding your (very much appreciated!) RAID testing:
Because I also have
https://github.com/rear/rear/issues/2759
Rename disklayout.conf keyword 'raid' into 'raidarray'
on my todo list,
I would like to ask whether I should implement that right now
so you could (by the way) also test it,
or whether you prefer that I implement it later
so we do not mix up two different things?

pcahyna commented at 2022-03-16 14:14:

@jsmeix
here is the disklayout.conf generated by "rear savelayout" from the ReaR 2.6 release.

# Disk layout dated 20220314131456 (YYYYmmddHHMMSS)
# NAME                                       KNAME      PKNAME    TRAN TYPE   FSTYPE            SIZE MOUNTPOINT
# /dev/vda                                   /dev/vda                  disk                      10G 
# |-/dev/vda1                                /dev/vda1  /dev/vda       part   xfs                 1G /boot
# `-/dev/vda2                                /dev/vda2  /dev/vda       part   LVM2_member         9G 
#   |-/dev/mapper/rhel_kvm--08--guest16-root /dev/dm-0  /dev/vda2      lvm    xfs                 8G /
#   `-/dev/mapper/rhel_kvm--08--guest16-swap /dev/dm-1  /dev/vda2      lvm    swap                1G [SWAP]
# /dev/vdb                                   /dev/vdb                  disk                      10G 
# `-/dev/vdb1                                /dev/vdb1  /dev/vdb       part   linux_raid_member  10G 
#   `-/dev/md127                             /dev/md127 /dev/vdb1      raid10 xfs                20G /home
# /dev/vdc                                   /dev/vdc                  disk                      10G 
# `-/dev/vdc1                                /dev/vdc1  /dev/vdc       part   linux_raid_member  10G 
#   `-/dev/md127                             /dev/md127 /dev/vdc1      raid10 xfs                20G /home
# /dev/vdd                                   /dev/vdd                  disk                      10G 
# `-/dev/vdd1                                /dev/vdd1  /dev/vdd       part   linux_raid_member  10G 
#   `-/dev/md127                             /dev/md127 /dev/vdd1      raid10 xfs                20G /home
# /dev/vde                                   /dev/vde                  disk                      10G 
# `-/dev/vde1                                /dev/vde1  /dev/vde       part   linux_raid_member  10G 
#   `-/dev/md127                             /dev/md127 /dev/vde1      raid10 xfs                20G /home
# /dev/vdf                                   /dev/vdf                  disk                      10G 
# Disk /dev/vda
# Format: disk <devname> <size(bytes)> <partition label type>
disk /dev/vda 10737418240 msdos
# Partitions on /dev/vda
# Format: part <device> <partition size(bytes)> <partition start(bytes)> <partition type|name> <flags> /dev/<partition>
part /dev/vda 1073741824 1048576 primary boot /dev/vda1
part /dev/vda 9662627840 1074790400 primary lvm /dev/vda2
# Disk /dev/vdb
# Format: disk <devname> <size(bytes)> <partition label type>
disk /dev/vdb 10737418240 msdos
# Partitions on /dev/vdb
# Format: part <device> <partition size(bytes)> <partition start(bytes)> <partition type|name> <flags> /dev/<partition>
part /dev/vdb 10736369664 1048576 primary none /dev/vdb1
# Disk /dev/vdc
# Format: disk <devname> <size(bytes)> <partition label type>
disk /dev/vdc 10737418240 msdos
# Partitions on /dev/vdc
# Format: part <device> <partition size(bytes)> <partition start(bytes)> <partition type|name> <flags> /dev/<partition>
part /dev/vdc 10736369664 1048576 primary none /dev/vdc1
# Disk /dev/vdd
# Format: disk <devname> <size(bytes)> <partition label type>
disk /dev/vdd 10737418240 msdos
# Partitions on /dev/vdd
# Format: part <device> <partition size(bytes)> <partition start(bytes)> <partition type|name> <flags> /dev/<partition>
part /dev/vdd 10736369664 1048576 primary none /dev/vdd1
# Disk /dev/vde
# Format: disk <devname> <size(bytes)> <partition label type>
disk /dev/vde 10737418240 msdos
# Partitions on /dev/vde
# Format: part <device> <partition size(bytes)> <partition start(bytes)> <partition type|name> <flags> /dev/<partition>
part /dev/vde 10736369664 1048576 primary none /dev/vde1
# Disk /dev/vdf
# Format: disk <devname> <size(bytes)> <partition label type>
#disk /dev/vdf 10737418240 unknown
# Partitions on /dev/vdf
# Format: part <device> <partition size(bytes)> <partition start(bytes)> <partition type|name> <flags> /dev/<partition>
raid /dev/md127 metadata=1.2 level=raid10 raid-devices=4 uuid=ed2fc124:cb694549:a8802ad3:11da529b name=127_0 layout=n2 chunk=512 devices=/dev/vde1,/dev/vdb1,/dev/vdc1,/dev/vdd1
# Format for LVM PVs
# lvmdev <volume_group> <device> [<uuid>] [<size(bytes)>]
lvmdev /dev/rhel_kvm-08-guest16 /dev/vda2 OntpJo-QT6x-9lwd-dcAA-hzLe-X0IU-FiudhS 18872320
# Format for LVM VGs
# lvmgrp <volume_group> <extentsize> [<size(extents)>] [<size(bytes)>]
lvmgrp /dev/rhel_kvm-08-guest16 4096 2303 9433088
# Format for LVM LVs
# lvmvol <volume_group> <name> <size(bytes)> <layout> [key:value ...]
lvmvol /dev/rhel_kvm-08-guest16 root 8585740288b linear 
lvmvol /dev/rhel_kvm-08-guest16 swap 1073741824b linear 
# Filesystems (only ext2,ext3,ext4,vfat,xfs,reiserfs,btrfs are supported).
# Format: fs <device> <mountpoint> <fstype> [uuid=<uuid>] [label=<label>] [<attributes>]
fs /dev/mapper/rhel_kvm--08--guest16-root / xfs uuid=822a6936-2a38-4e28-9707-7bed38d1e3d8 label=  options=rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota
fs /dev/md127 /home xfs uuid=1f0561d9-eecc-4d0f-87ee-4dc8bf2ccaa5 label=  options=rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,sunit=1024,swidth=2048,noquota
fs /dev/vda1 /boot xfs uuid=8dcc2dcb-66a9-4317-b47a-92aae301be09 label=  options=rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota
# Swap partitions or swap files
# Format: swap <filename> uuid=<uuid> label=<label>
swap /dev/mapper/rhel_kvm--08--guest16-swap uuid=33452b6f-eaf6-4fbc-995c-543f6dee49f2 label=

I will try your patched version as well and compare the output. I don't see anything obviously wrong that needs fixing, though, do you?

jsmeix commented at 2022-03-17 09:41:

Only from looking at the old code I think

raid ... level=raid10 ... layout=n2 ...

is supported with the old code
and should (hopefully) still work with my new code,
which is all this pull request is about, at least for now.

According to "man mdadm" (at least for mdadm v4.1 in openSUSE Leap 15.3)
layout=n2 is the default for RAID10 ("man mdadm" excerpts):

-p, --layout=
...
The layout options for RAID10 are one of 'n', 'o' or 'f'
followed by a small number.
The default is 'n2'.
The supported options are:
'n' signals 'near' copies...
'o' signals 'offset' copies...
'f' signals 'far' copies...
See md(4) for more detail about 'near', 'offset', and 'far'.

So a real test would be a RAID10 layout that is not n2.
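
For example (untested here, the device names are only placeholders for test disks)
such non-default layouts could be created with commands like

mdadm --create --verbose /dev/md127 --level=raid10 --layout=f2 --raid-devices=4 /dev/vdb1 /dev/vdc1 /dev/vdd1 /dev/vde1
mdadm --create --verbose /dev/md127 --level=raid10 --layout=o2 --raid-devices=4 /dev/vdb1 /dev/vdc1 /dev/vdd1 /dev/vde1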

But I think (again: only from looking at the old code)
that something like

raid ... level=raid10 ... layout=f2 ...

is not supported with the old code
and is still not (yet) supported with my new code
i.e.
I think when the "Layout :" line in the detailed mdadm output looks like

          Layout : near=2

it is supported but when it looks like

          Layout : near=2, far=2

or

          Layout : near=2, far=1

the far part is not supported.

I am wondering if it is possible to set up a RAID10 array
so that the "Layout :" line in the detailed mdadm output looks like

          Layout : far=2

i.e. without a near value.
This would also not be supported (judging only from the code).

What also seems to be unsupported (judging only from the code) is
when the "Layout :" line in the detailed mdadm output looks like

          Layout : near=1

see my TODO comment in my new code.

What is also not supported is a RAID10 layout with 'offset' copies
i.e. what is set with something like mdadm ... --layout=o2.

Perhaps some of the not supported cases can be ignored
because they cannot appear in practice or are plain wrong
but for wrong setups ReaR should at least report to the user
that such a setup is not supported when ReaR detects it.

But I think some support is likely still missing for RAID10 layout
and if missing support can be easily added I would like to add it
via this pull request or via a separate subsequent pull request.
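
For example, reporting an unsupported setup to the user (as mentioned above)
could look like this rough sketch
(with a hypothetical $raiddevice variable - not actual code in this pull request):

# Sketch only: warn when the mdadm "Layout :" value contains combined RAID10 layouts
# like "near=2, far=1" which the code does not support:
case "$layout" in
    (*,*)
        LogPrintError "Unsupported combined RAID10 layout '$layout' of $raiddevice (only a single near/far/offset value is supported)"
        ;;
esac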

pcahyna commented at 2022-03-18 10:30:

So a real test would be a RAID10 layout that is not n2.

I have done many such tests and recorded the results

But I think (again: only from looking at the old code) that something like

raid ... level=raid10 ... layout=f2 ...

is not supported with the old code and is still not (yet) supported with my new code i.e. I think when the "Layout :" line in the detailed mdadm output looks like

          Layout : near=2

it is supported but when it looks like

          Layout : near=2, far=2

or

          Layout : near=2, far=1

the far part is not supported.

I have never been able to get in a situation where mdadm prints this, though. I have tried RHEL 6.10, 7.9, 8.5. According to the replies to the (quite old) stackexchange post, this had not been printed anymore for some time before the post was written, so I suspect it referred to some very old mdadm version. And apparently it is not possible to create an array with both "near" and "far" copies either, except by editing the disk metadata by hand. So, I was not able to test this case.

I am wondering if it is possible to set up a RAID10 array so that the "Layout :" line in the detailed mdadm output looks like

          Layout : far=2

i.e. without a near value. This would also not be supported (judging only from the code).

Yes, that's actually the only thing I was able to get mdadm to print when using the "far" layout, so it is very common. And, surprisingly, ReaR (version 2.6) saves and recreates this just fine.

What also seems to be unsupported (judging only from the code) is when the "Layout :" line in the detailed mdadm output looks like

          Layout : near=1

see my TODO comment in my new code.

I was not able to get mdadm to create this, so I was not able to test it:

# mdadm --create --verbose /dev/md127 --level=raid10 --layout=n1 --raid-devices=4 /dev/vde1 /dev/vdf1 /dev/vdc1 /dev/vdd1
mdadm: Defaulting to version 1.2 metadata
mdadm: RUN_ARRAY failed: Invalid argument

What is also not supported is a RAID10 layout with 'offset' copies i.e. what is set with something like mdadm ... --layout=o2.

That works for me, just as with the f2 layout. (ReaR 2.6, again.) I tested full backup and recovery and examined the storage layout after.

Perhaps some of the not supported cases can be ignored because they cannot appear in practice or are plain wrong but for wrong setups ReaR should at least report to the user that such a setup is not supported when ReaR detects it.

The challenge is to create such setups for testing. To summarize, I was not able to find a single setup that could be realistically created and posed a problem for ReaR (version 2.6).

I have detailed information saved from these ReaR and mdadm runs in various situations and can provide details if you want.

But I think some support is likely still missing for RAID10 layout and if missing support can be easily added I would like to add it via this pull request or via a separate subsequent pull request.

If we find some unsupported setups, sure. Can you perhaps try some old versions of SLES to see whether they allow creation of "less standard" layouts? (Perhaps with loop devices.)
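
For example the loop devices could be prepared like this
(only a sketch with made-up file names - the actual loop device names come from losetup):

for i in 1 2 3 4 ; do
    dd if=/dev/zero of=/tmp/raidtest$i.img bs=1M count=128
    losetup --find --show /tmp/raidtest$i.img
done
# losetup prints the assigned devices, for example /dev/loop0 ... /dev/loop3, then e.g.:
mdadm --create --verbose /dev/md127 --level=raid10 --layout=f2 --raid-devices=4 /dev/loop0 /dev/loop1 /dev/loop2 /dev/loop3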

jsmeix commented at 2022-03-18 11:35:

@pcahyna
thank you so much for your tests.
Off the top of my head I think I now have an idea of how the old code worked.
My new code no longer supports the "far" layout and/or the "offset" layout.
I will enhance my code right now...

jsmeix commented at 2022-03-18 11:46:

@pcahyna
could you please post here the detailed mdadm output
for a "far" layout and for an "offset" layout
and the matching "raid" entry in disklayout.conf

Is it possible to combine any of "near" "far" and "offset" layouts?
It seems "near" and "far" exclude each other.
But what about "near" plus "offset" or "far" plus "offset"?
Do the latter also exclude each other?

pcahyna commented at 2022-03-18 12:23:

@pcahyna could you please post here the detailed mdadm output for a "far" layout and for an "offset" layout and the matching "raid" entry in disklayout.conf

/dev/md127:
           Version : 1.2
     Creation Time : Thu Mar 17 08:20:43 2022
        Raid Level : raid10
        Array Size : 20951040 (19.98 GiB 21.45 GB)
     Used Dev Size : 10475520 (9.99 GiB 10.73 GB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Thu Mar 17 12:24:29 2022
             State : clean 
    Active Devices : 4
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 0

            Layout : offset=2
        Chunk Size : 512K

Consistency Policy : resync

              Name : kvm-08-guest23:127
              UUID : 62a70b90:01229daa:37bbf5de:40646c97
            Events : 29

    Number   Major   Minor   RaidDevice State
       0     252       65        0      active sync   /dev/vde1
       1     252       81        1      active sync   /dev/vdf1
       2     252       33        2      active sync   /dev/vdc1
       3     252       49        3      active sync   /dev/vdd1

raid /dev/md127 metadata=1.2 level=raid10 raid-devices=4 uuid=62a70b90:01229daa:37bbf5de:40646c97 layout=o2 chunk=512 devices=/dev/vde1,/dev/vdf1,/dev/vdc1,/dev/vdd1

Is it possible to combine any of "near" "far" and "offset" layouts? It seems "near" and "far" exclude each other. But what about "near" plus "offset" or "far" plus "offset"? Do the latter also exclude each other?

I don't know - what would be the mdadm command to create them?

pcahyna commented at 2022-03-18 12:25:

far layout:

/dev/md127:
           Version : 1.2
     Creation Time : Thu Mar 17 08:19:29 2022
        Raid Level : raid10
        Array Size : 20951040 (19.98 GiB 21.45 GB)
     Used Dev Size : 10475520 (9.99 GiB 10.73 GB)
      Raid Devices : 4
     Total Devices : 4
       Persistence : Superblock is persistent

       Update Time : Thu Mar 17 12:23:01 2022
             State : clean 
    Active Devices : 4
   Working Devices : 4
    Failed Devices : 0
     Spare Devices : 0

            Layout : far=2
        Chunk Size : 512K

Consistency Policy : resync

              Name : kvm-08-guest22:127
              UUID : 4a723d0e:7e4e5c5e:a07ac57c:d33fd39e
            Events : 32

    Number   Major   Minor   RaidDevice State
       0     252       65        0      active sync   /dev/vde1
       1     252       81        1      active sync   /dev/vdf1
       2     252       33        2      active sync   /dev/vdc1
       3     252       49        3      active sync   /dev/vdd1

raid /dev/md127 metadata=1.2 level=raid10 raid-devices=4 uuid=4a723d0e:7e4e5c5e:a07ac57c:d33fd39e layout=f2 chunk=512 devices=/dev/vde1,/dev/vdf1,/dev/vdc1,/dev/vdd1

jsmeix commented at 2022-03-18 13:00:

I did not test my recent commit here
https://github.com/rear/rear/pull/2768/commits/55e38dc76df7c1d8e145fd530465ddd8b71cdd73
This is something for next week - as time permits.

But now I do at least understand what the code should do
so I could now even fix things in that code.

jsmeix commented at 2022-03-18 13:11:

@pcahyna
if you think my current code is needlessly overcomplicated
because it tries to somehow deal even with things like

          Layout : near=2, far=3

or

          Layout : near=2, far=3, offset=4

I could simplify it so that it only works for (mutually exclusive) things like

          Layout : near=2
          Layout : far=3
          Layout : offset=4

and only leave a comment in the code that explains it.
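
For the mutually exclusive cases the mapping is basically only
(a simplified sketch - not necessarily the exact code in this pull request):

# Map the mdadm "Layout :" value to the mdadm --layout syntax used in disklayout.conf,
# e.g. "near=2" -> "n2", "far=3" -> "f3", "offset=4" -> "o4":
copies=${layout%%=*}
number=${layout##*=}
layout="${copies:0:1}$number"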

pcahyna commented at 2022-03-18 13:29:

I am not sure. It appears that a long time ago it used to be possible to create such arrays, so someone might have an old array still in operation. We need to find a reproducer.

jsmeix commented at 2022-03-18 13:41:

By the way FYI:

Regarding code that tinkers with IFS like

OIFS=$IFS
IFS=","
for layout_option in $layout ; do
    ...
done
IFS=$OIFS

Somehow I do not like tinkering with IFS.

Here in particular I do not like it because IFS is changed for the whole for loop body

# ( IFS="," ; for w in word ; do echo $w ; echo -n "$IFS" | od -a ; done )
word
0000000   ,
0000001

and when the for loop body is longer things can easily go wrong
in particular when commands or functions are called in the for loop body
that assume the normal IFS is set.

Recently I learned about code like

while IFS="..." read a b c ; do ... ; done

which is much cleaner
because IFS is only changed for the read command
so one does not have to save and restore IFS.

But this cannot work for the for command in bash
because "man bash" reads (excerpts)

       for name [ [ in [ word ... ] ] ; ] do list ; done
...
       for (( expr1 ; expr2 ; expr3 )) ; do list ; done
...
       while list-1; do list-2; done

so for does not run a command list but only expands words.

This means for loops with changed IFS
need to be converted into while loops with changed IFS
for example like

# string="foo,bar,baz"

# unset A B C

# while IFS="," read a b c ; do A=$a ; B=$b ; C=$c ; echo $A $B $C ; done <<< "$string"
foo bar baz

# echo $A $B $C
foo bar baz

or for better readability (easier to see what 'read' gets as input)
but with "useless use of echo" and a pipe

# string="foo,bar,baz"

# unset A B C

# echo "$string" | while IFS="," read a b c ; do A=$a ; B=$b ; C=$c ; echo $A $B $C ; done
foo bar baz

# echo $A $B $C
[no output]

But the drawback of using a pipe is that the "while read ... do ... done" part
is run as a separate process (in a subshell) so that e.g. one cannot set variables
in the "while read ... do ... done" part that are meant to be used after the pipe,
cf. the "General explanation" in 220_lvm_layout.sh currently starting at
https://github.com/rear/rear/blob/master/usr/share/rear/layout/save/GNU/Linux/220_lvm_layout.sh#L25
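
Alternatively, for a comma separated string like $layout one could avoid
changing IFS for a loop entirely by letting 'read' split it into an array
(a sketch - not necessarily the actual code in this pull request):

# Split the comma separated RAID10 layout options without touching the global IFS
# (IFS is only changed for the 'read' command itself):
IFS="," read -r -a layout_options <<< "$layout"
for layout_option in "${layout_options[@]}" ; do
    echo "layout option: $layout_option"
done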

jsmeix commented at 2022-03-18 14:03:

Regarding things like "Layout : near=2, far=3, offset=4":

I think the old code did not support it.
I think the old code only supported mutually exclusive things like
"Layout : near=2" XOR "Layout : far=3" XOR "Layout : offset=4"
because I did some tests on command line with the old code
and that made me falsely assume the old code did not support
"Layout : far=3" XOR "Layout : offset=4"
because I had tried out what the old code produces for
"Layout : near=2, far=3"
where the "far=3" part was ignored - if I remember correctly.

So if the old code really only supported mutually exclusive things like
"Layout : near=2" XOR "Layout : far=3" XOR "Layout : offset=4"
and because we did not get any issue from a user that
things like "Layout : near=2, far=3, offset=4" do not work
I think we are safe to ignore multiple RAID10 layouts
for one and the same RAID10 array.

Furthermore
https://github.com/rear/rear/blob/master/doc/rear-release-notes.txt
shows (excerpt)

ReaR-2.6 dropped official support for the following Linux based operating
systems:

  o Fedora < 29
  o RHEL < 6
  o CentOS < 6
  o Scientific Linux < 6
  o SLES < 12
  o openSUSE Leap 42.x and before (i.e. openSUSE <= 13)
  o openSUSE Tumbleweed
  o Debian < 8
  o Ubuntu < 16

and you wrote in your
https://github.com/rear/rear/pull/2768#issuecomment-1072278733
"I have tried RHEL 6.10, 7.9, 8.5"
so we should be rather safe when we silently ignore
multiple RAID10 layouts for one and the same RAID10 array.

jsmeix commented at 2022-03-18 14:06:

But now there is weekend time - at least for me :-)

@pcahyna @rear/contributors
I wish you a relaxed and recovering weekend!

jsmeix commented at 2022-03-29 12:32:

@pcahyna
do you think you find time to test my changes in this pull request?

pcahyna commented at 2022-03-30 08:42:

@jsmeix, sure. I was trying to find a system that allows creating some of the more complicated layouts and got distracted by other things.

I think we are safe to ignore multiple RAID10 layouts
for one and the same RAID10 array.

ReaR-2.6 dropped official support for the following Linux based operating
systems:

My worry is that one might have an old array created on a system version that supported the unusual layouts, but still in use on a newer system. So I will still try some of the older, now unsupported releases.

pcahyna commented at 2022-04-04 09:19:

@jsmeix I tested RHEL 5 and even there I was not able to recreate a configuration that would print anything different from newer versions (i.e. it was not printing near=2,far=1, which was reported in the Ubuntu forums 10 years ago, for example).
I think the risk of encountering a "weird" configuration is therefore minimal, so I will abandon further attempts at creating one and focus on testing your changes under usual circumstances.

pcahyna commented at 2022-04-05 17:12:

Hi @jsmeix

your new code correctly saves and recreates a RAID array created with

mdadm --create --verbose /dev/md127 --level=raid10 --layout=o2 --raid-devices=4 /dev/vde1 /dev/vdf1 /dev/vdc1 /dev/vdd1

Here is the relevant part of disklayout.conf

# Software RAID devices (mdadm --detail --scan --config=partitions)
# ARRAY /dev/md127 metadata=1.2 name=127 UUID=cb09fcd3:99a70a82:cf98c4bf:027581fa
# Software RAID  device /dev/md127 (mdadm --misc --detail /dev/md127)
# /dev/md127:
#            Version : 1.2
#      Creation Time : Tue Apr  5 07:44:13 2022
#         Raid Level : raid10
#         Array Size : 20951040 (19.98 GiB 21.45 GB)
#      Used Dev Size : 10475520 (9.99 GiB 10.73 GB)
#       Raid Devices : 4
#      Total Devices : 4
#        Persistence : Superblock is persistent
#        Update Time : Tue Apr  5 07:46:02 2022
#              State : clean, resyncing 
#     Active Devices : 4
#    Working Devices : 4
#     Failed Devices : 0
#      Spare Devices : 0
#             Layout : offset=2
#         Chunk Size : 512K
# Consistency Policy : resync
#      Resync Status : 95% complete
#               Name : 127
#               UUID : cb09fcd3:99a70a82:cf98c4bf:027581fa
#             Events : 23
#     Number   Major   Minor   RaidDevice State
#        0     252       65        0      active sync   /dev/vde1
#        1     252       81        1      active sync   /dev/vdf1
#        2     252       33        2      active sync   /dev/vdc1
#        3     252       49        3      active sync   /dev/vdd1
# RAID device /dev/md127
# Format: raid /dev/<kernel RAID device> level=<RAID level> raid-devices=<nr of active devices> devices=<component device1,component device2,...> [name=<array name>] [metadata=<metadata style>] [uuid=<UUID>] [layout=<data layout>] [chunk=<chunk size>] [spare-devices=<nr of spare devices>] [size=<container size>]
raid /dev/md127 level=raid10 raid-devices=4 devices=/dev/vde1,/dev/vdf1,/dev/vdc1,/dev/vdd1 metadata=1.2 uuid=cb09fcd3:99a70a82:cf98c4bf:027581fa layout=o2 chunk=512
# RAID disk /dev/md127
# Format: raiddisk <devname> <size(bytes)> <partition label type>
raiddisk /dev/md127 21453864960 loop

I can test other RAID10 layouts, and the code without your change, to see if there is a difference in output.

pcahyna commented at 2022-04-05 18:05:

and here is the layout with the Git master version

# Software RAID devices (mdadm --detail --scan --config=partitions)
# ARRAY /dev/md127 metadata=1.2 name=127 UUID=71d25b13:d80ea84b:fab43358:73b78180
# Software RAID  device /dev/md127 (mdadm --misc --detail /dev/md127)
# /dev/md127:
#            Version : 1.2
#      Creation Time : Tue Apr  5 13:49:34 2022
#         Raid Level : raid10
#         Array Size : 20951040 (19.98 GiB 21.45 GB)
#      Used Dev Size : 10475520 (9.99 GiB 10.73 GB)
#       Raid Devices : 4
#      Total Devices : 4
#        Persistence : Superblock is persistent
#        Update Time : Tue Apr  5 13:51:09 2022
#              State : clean, resyncing 
#     Active Devices : 4
#    Working Devices : 4
#     Failed Devices : 0
#      Spare Devices : 0
#             Layout : offset=2
#         Chunk Size : 512K
# Consistency Policy : resync
#      Resync Status : 61% complete
#               Name : 127
#               UUID : 71d25b13:d80ea84b:fab43358:73b78180
#             Events : 17
#     Number   Major   Minor   RaidDevice State
#        0     252       65        0      active sync   /dev/vde1
#        1     252       81        1      active sync   /dev/vdf1
#        2     252       33        2      active sync   /dev/vdc1
#        3     252       49        3      active sync   /dev/vdd1
# RAID device /dev/md127
# Format: raid /dev/<kernel RAID device> level=<RAID level> raid-devices=<nr of active devices> devices=<component device1,component device2,...> [name=<array name>] [metadata=<metadata style>] [uuid=<UUID>] [layout=<data layout>] [chunk=<chunk size>] [spare-devices=<nr of spare devices>] [size=<container size>]
raid /dev/md127 level=raid10 raid-devices=4 devices=/dev/vde1,/dev/vdf1,/dev/vdc1,/dev/vdd1 metadata=1.2 uuid=71d25b13:d80ea84b:fab43358:73b78180 layout=o2 chunk=512
# RAID disk /dev/md127
# Format: raiddisk <devname> <size(bytes)> <partition label type>
raiddisk /dev/md127 21453864960 loop

pcahyna commented at 2022-04-06 08:13:

Hi @jsmeix , do you want more tests with different layouts? So far, the output of old and changed code has looked identical.

jsmeix commented at 2022-04-06 09:02:

@pcahyna
thank you so much for all your testing!
I think you tested more than enough.

jsmeix commented at 2022-04-06 09:03:

@rear/contributors
if there are no objections from you
I would like to merge it tomorrow afternoon.

pcahyna commented at 2022-04-06 09:06:

@jsmeix I will have a look at the code changes; until now I have focused on functional testing.

jsmeix commented at 2022-04-06 09:08:

No longer a 'bug' but a 'cleanup' because the old code had
somehow worked sufficiently for its intended use case, cf.
https://github.com/rear/rear/pull/2768#issuecomment-1072278733

pcahyna commented at 2022-04-07 09:33:

@jsmeix thanks, from what I have read it is better to avoid the -a and -o test operators entirely.

jsmeix commented at 2022-04-07 10:04:

I agree one should avoid the -a and -o operators of test if possible
because each of those two operators has a duplicate meaning
according to "help test" which reads (excerpts):

      -a FILE        True if file exists.
...
      -o OPTION      True if the shell option OPTION is enabled.
...
      EXPR1 -a EXPR2 True if both expr1 AND expr2 are true.
      EXPR1 -o EXPR2 True if either expr1 OR expr2 is true.

Currently we have 67 code lines where test is used with -a or -o:

# find usr/sbin/rear usr/share/rear/ -type f | xargs grep 'test .* -[ao] ' | wc -l
67

(that also lists some false positives).
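
For example (only an illustrative sketch - not a particular line from the ReaR code):

# ambiguous (the deprecated -a operator of 'test'):
test -n "$level" -a -n "$layout" && echo "both are set"
# unambiguous replacements:
test -n "$level" && test -n "$layout" && echo "both are set"
[[ -n "$level" && -n "$layout" ]] && echo "both are set"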

jsmeix commented at 2022-04-07 11:48:

Only for completeness (because I mentioned it in the commit message)
here is the old code

if test "$level" = "raid10" ; then
    OIFS=$IFS
    IFS=","
    for param in "$layout" ; do
        copies=${layout%=*}
        number=${layout#*=}
        test "$number" -gt 1 && layout="${copies:0:1}$number"
    done
    IFS=$OIFS
fi

and what shellcheck reports

for param in "$layout" ; do
^-^ SC2034: param appears unused. Verify use (or export if used externally).
             ^-------^ SC2066: Since you double quoted this, it will not word split, and the loop will only run once.

[Export of Github issue for rear/rear.]