#3400 Issue open: Falsely "automatically excluding disk /dev/nvme4n1 (not used by any mounted filesystem)"¶
Labels: bug, fixed / solved / done
gdha opened issue at 2025-02-13 14:33:¶
ReaR version¶
2.9
Describe the ReaR bug in detail¶
Some real mount points listed in /etc/fstab are commented out in the disklayout.conf file. For example, the /etc/fstab line
UUID=89142993-8500-42fe-8458-e8ccda4a7113 /var/lib/rancher xfs defaults 0 0
appears in the df output as
/dev/nvme4n1 167731200 78233564 89497636 47% /var/lib/rancher
but in the disklayout.conf file the corresponding entry is commented out:
#fs /dev/nvme4n1 /var/lib/rancher xfs uuid=89142993-8500-42fe-8458-e8ccda4a7113 label= options=rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota
Platform¶
Linux x64
OS version¶
RHEL 9.5
Backup¶
No response
Storage layout¶
#-> lsblk -ipo NAME,KNAME,PKNAME,TRAN,TYPE,FSTYPE,LABEL,SIZE,MOUNTPOINT
NAME KNAME PKNAME TRAN TYPE FSTYPE LABEL SIZE MOUNTPOINT
/dev/sda /dev/sda iscsi disk ext4 60G /var/lib/kubelet/pods/2c4faad8-2f2b-45f3-bb83-229059f1ec84/volumes/kubernetes.io~csi/pvc-d2566b87-3cad-4458-9175-72076d342aee/
/dev/sdb /dev/sdb iscsi disk 50G /var/lib/kubelet/pods/64153360-6afb-4f1d-8854-db7ca58d9326/volumes/kubernetes.io~csi/pvc-89ba1477-4a18-4af1-8534-e67fb916c187/
/dev/sdc /dev/sdc iscsi disk 1G /var/log
/dev/sdd /dev/sdd iscsi disk 5G /var/lib/kubelet/pods/d086b3b4-dadb-4da7-8db1-f672f5d3b4b8/volumes/kubernetes.io~csi/pvc-e53f8225-6e04-4d24-b9db-e8fc7dea3f51/
/dev/nvme1n1 /dev/nvme1n1 nvme disk xfs 550G /app/elasticsearch
/dev/nvme0n1 /dev/nvme0n1 nvme disk 30G
|-/dev/nvme0n1p1 /dev/nvme0n1p1 /dev/nvme0n1 nvme part 1M
`-/dev/nvme0n1p2 /dev/nvme0n1p2 /dev/nvme0n1 nvme part xfs 30G /
/dev/nvme2n1 /dev/nvme2n1 nvme disk LVM2_member 20G
`-/dev/mapper/vg_app-lg_app
/dev/dm-0 /dev/nvme2n1 lvm xfs lg_app 20G /app
/dev/nvme4n1 /dev/nvme4n1 nvme disk xfs 160G /var/lib/rancher
/dev/nvme3n1 /dev/nvme3n1 nvme disk 4G
/dev/nvme6n1 /dev/nvme6n1 nvme disk xfs 200G /app/data/longhorn
/dev/nvme5n1 /dev/nvme5n1 nvme disk xfs 20G /var/log
/dev/nvme7n1 /dev/nvme7n1 nvme disk 10G
|-/dev/nvme7n1p1 /dev/nvme7n1p1 /dev/nvme7n1 nvme part vfat REAR-EFI 1G
`-/dev/nvme7n1p2 /dev/nvme7n1p2 /dev/nvme7n1 nvme part ext3 REAR-000 9G
The /etc/fstab file contains:
UUID=c9aa25ee-e65c-4818-9b2f-fa411d89f585 / xfs defaults 0 0
/dev/vg_app/lg_app /app xfs auto,nofail,defaults 1 2
UUID=6281fd1f-d107-427a-a22c-2f575fb7a58c /app/elasticsearch xfs defaults 0 0
UUID=0f12a909-7cbf-487d-8bcb-663eeaabf3c2 /var/log xfs defaults,nodev,nosuid,noexec 0 0
UUID=89142993-8500-42fe-8458-e8ccda4a7113 /var/lib/rancher xfs defaults 0 0
UUID=5a2a74b7-9c6f-4af6-bbc6-9e2ecd52a314 /app/data/longhorn xfs defaults 0 0
What steps will reproduce the bug?¶
#-> sbin/rear -D savelayout
Relax-and-Recover 2.9 / 2025-01-31
Running rear savelayout (PID 1032097 date 2025-02-13 15:23:10)
Command line options: sbin/rear -D savelayout
Using log file: /home/gdhaese1/projects/rear/var/log/rear/rear-AWSABLIRLL000K.log
Using build area: /var/tmp/rear.9rUmpk3qW9R3ff1
Setting TMPDIR to ReaR's '/var/tmp/rear.9rUmpk3qW9R3ff1/tmp' (was unset when ReaR was launched)
Running 'init' stage ======================
Running workflow savelayout on the normal/original system
Running 'layout/save' stage ======================
Creating disk layout
Overwriting existing disk layout file /home/gdhaese1/projects/rear/var/lib/rear/layout/disklayout.conf
Automatically excluding disk /dev/nvme3n1 (not used by any mounted filesystem)
Marking component '/dev/nvme3n1' as done in /home/gdhaese1/projects/rear/var/lib/rear/layout/disktodo.conf
Automatically excluding disk /dev/nvme4n1 (not used by any mounted filesystem)
Marking component '/dev/nvme4n1' as done in /home/gdhaese1/projects/rear/var/lib/rear/layout/disktodo.conf
Dependent component fs:/var/lib/rancher is a child of component /dev/nvme4n1
Marking component 'fs:/var/lib/rancher' as done in /home/gdhaese1/projects/rear/var/lib/rear/layout/disktodo.conf
Automatically excluding disk /dev/nvme5n1 (not used by any mounted filesystem)
Marking component '/dev/nvme5n1' as done in /home/gdhaese1/projects/rear/var/lib/rear/layout/disktodo.conf
Dependent component fs:/var/log is a child of component /dev/nvme5n1
Marking component 'fs:/var/log' as done in /home/gdhaese1/projects/rear/var/lib/rear/layout/disktodo.conf
Automatically excluding disk /dev/nvme7n1 (not used by any mounted filesystem)
Marking component '/dev/nvme7n1' as done in /home/gdhaese1/projects/rear/var/lib/rear/layout/disktodo.conf
Dependent component /dev/nvme7n1p1 is a child of component /dev/nvme7n1
Marking component '/dev/nvme7n1p1' as done in /home/gdhaese1/projects/rear/var/lib/rear/layout/disktodo.conf
Dependent component /dev/nvme7n1p2 is a child of component /dev/nvme7n1
Marking component '/dev/nvme7n1p2' as done in /home/gdhaese1/projects/rear/var/lib/rear/layout/disktodo.conf
Automatically excluding disk /dev/sda (not used by any mounted filesystem)
Marking component '/dev/sda' as done in /home/gdhaese1/projects/rear/var/lib/rear/layout/disktodo.conf
Automatically excluding disk /dev/nvme3n1 (not used by any mounted filesystem)
Component '/dev/nvme3n1' is marked as 'done /dev/nvme3n1' in /home/gdhaese1/projects/rear/var/lib/rear/layout/disktodo.conf
Automatically excluding disk /dev/nvme5n1 (not used by any mounted filesystem)
Component '/dev/nvme5n1' is marked as 'done /dev/nvme5n1' in /home/gdhaese1/projects/rear/var/lib/rear/layout/disktodo.conf
Dependent component fs:/var/log is a child of component /dev/nvme5n1
Component 'fs:/var/log' is marked as 'done fs:/var/log' in /home/gdhaese1/projects/rear/var/lib/rear/layout/disktodo.conf
Automatically excluding disk /dev/nvme4n1 (not used by any mounted filesystem)
Component '/dev/nvme4n1' is marked as 'done /dev/nvme4n1' in /home/gdhaese1/projects/rear/var/lib/rear/layout/disktodo.conf
Dependent component fs:/var/lib/rancher is a child of component /dev/nvme4n1
Component 'fs:/var/lib/rancher' is marked as 'done fs:/var/lib/rancher' in /home/gdhaese1/projects/rear/var/lib/rear/layout/disktodo.conf
Disabling excluded components in /home/gdhaese1/projects/rear/var/lib/rear/layout/disklayout.conf
Disabling component 'disk /dev/nvme3n1' in /home/gdhaese1/projects/rear/var/lib/rear/layout/disklayout.conf
Disabling component 'disk /dev/nvme4n1' in /home/gdhaese1/projects/rear/var/lib/rear/layout/disklayout.conf
Disabling component 'disk /dev/nvme5n1' in /home/gdhaese1/projects/rear/var/lib/rear/layout/disklayout.conf
Disabling component 'disk /dev/nvme7n1' in /home/gdhaese1/projects/rear/var/lib/rear/layout/disklayout.conf
Disabling component 'part /dev/nvme7n1' in /home/gdhaese1/projects/rear/var/lib/rear/layout/disklayout.conf
Component 'part /dev/nvme7n1' is disabled in /home/gdhaese1/projects/rear/var/lib/rear/layout/disklayout.conf
Disabling component 'disk /dev/sda' in /home/gdhaese1/projects/rear/var/lib/rear/layout/disklayout.conf
Component 'disk /dev/nvme3n1' is disabled in /home/gdhaese1/projects/rear/var/lib/rear/layout/disklayout.conf
Component 'disk /dev/nvme5n1' is disabled in /home/gdhaese1/projects/rear/var/lib/rear/layout/disklayout.conf
Component 'disk /dev/nvme4n1' is disabled in /home/gdhaese1/projects/rear/var/lib/rear/layout/disklayout.conf
Disabling component 'fs ... /var/lib/rancher' in /home/gdhaese1/projects/rear/var/lib/rear/layout/disklayout.conf
Disabling component 'fs ... /var/log' in /home/gdhaese1/projects/rear/var/lib/rear/layout/disklayout.conf
GRUB found in first bytes on /dev/nvme0n1 and GRUB 2 is installed, using GRUB2 as a guessed bootloader for 'rear recover'
Skip saving storage layout as 'barrel' devicegraph (no 'barrel' command)
Verifying that the entries in /home/gdhaese1/projects/rear/var/lib/rear/layout/disklayout.conf are correct
Created disk layout (check the results in /home/gdhaese1/projects/rear/var/lib/rear/layout/disklayout.conf)
Exiting rear savelayout (PID 1032097) and its descendant processes ...
Running exit tasks
To remove the build area you may use (with caution): rm -Rf --one-file-system /var/tmp/rear.9rUmpk3qW9R3ff1
#-> grep -v \# ../var/lib/rear/layout/disklayout.conf
disk /dev/nvme0n1 32212254720 gpt
part /dev/nvme0n1 1048576 1048576 rear-noname bios_grub /dev/nvme0n1p1
part /dev/nvme0n1 32210140672 2097152 rear-noname none /dev/nvme0n1p2
disk /dev/nvme1n1 590558003200 loop
disk /dev/nvme2n1 21474836480 unknown
disk /dev/nvme6n1 214748364800 loop
lvmdev /dev/vg_app /dev/nvme2n1 5jSbmk-ooLT-rpTk-A3nB-cum8-FF1v-fmCRG6 21470642176
lvmgrp /dev/vg_app 4096 5119 20967424
lvmvol /dev/vg_app lg_app 21470642176b linear
fs /dev/longhorn/pvc-89ba1477-4a18-4af1-8534-e67fb916c187 /var/lib/kubelet/plugins/kubernetes.io/csi/driver.longhorn.io/5f9dffb77504aaa81768b25cdfc050f65d7128106e43a32e3576b6a80068957a/globalmount ext4 uuid=8d878d19-ffe5-46e2-ad10-00c68af1a855 label= blocksize=4096 reserved_blocks=0% max_mounts=-1 check_interval=0d bytes_per_inode=16384 default_mount_options=user_xattr,acl options=rw,relatime
fs /dev/longhorn/pvc-d2566b87-3cad-4458-9175-72076d342aee /var/lib/kubelet/plugins/kubernetes.io/csi/driver.longhorn.io/3fa3d3d63dba018d962e3388b8ea4f64eba99189f524247debac6f414f1e4b09/globalmount ext4 uuid=f5f54428-765b-42e3-a20f-cfa5e476de36 label= blocksize=4096 reserved_blocks=0% max_mounts=-1 check_interval=0d bytes_per_inode=16384 default_mount_options=user_xattr,acl options=rw,relatime
fs /dev/longhorn/pvc-e53f8225-6e04-4d24-b9db-e8fc7dea3f51 /var/lib/kubelet/plugins/kubernetes.io/csi/driver.longhorn.io/23150dd0f272a1b4923866721c5800414602ece2645428d106e7a8cdfb8ff657/globalmount ext4 uuid=6110a429-e28f-4288-a7b7-285e4958827d label= blocksize=4096 reserved_blocks=0% max_mounts=-1 check_interval=0d bytes_per_inode=16384 default_mount_options=user_xattr,acl options=rw,relatime
fs /dev/mapper/vg_app-lg_app /app xfs uuid=beb48049-5b75-46c2-8e75-a6f8d3881d0e label=lg_app options=rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota
fs /dev/nvme0n1p2 / xfs uuid=c9aa25ee-e65c-4818-9b2f-fa411d89f585 label= options=rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota
fs /dev/nvme1n1 /app/elasticsearch xfs uuid=6281fd1f-d107-427a-a22c-2f575fb7a58c label= options=rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota
fs /dev/nvme6n1 /app/data/longhorn xfs uuid=5a2a74b7-9c6f-4af6-bbc6-9e2ecd52a314 label= options=rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota
Workaround, if any¶
Not yet found.
Additional information¶
The debug log file of the savelayout run:
rear-AWSABLIRLL000K.log
jsmeix commented at 2025-02-13 15:56:¶
@gdha
only a first vague analysis (I could easily be wrong here
because of a "too deeply nested things" mental overflow):
I think it happens in
layout/save/default/320_autoexclude.sh
in this code part
# looking for parent in disk AND multipath device for a given fs:mountpoint
disks=$(find_disk_and_multipath fs:$mountpoint)
for disk in $disks ; do
if ! IsInArray "$disk" "${used_disks[@]}" ; then
used_disks+=( "$disk" )
fi
done
therein in particular what the call
find_disk_and_multipath fs:/var/lib/rancher
returns, because in your rear-AWSABLIRLL000K.log
# cat rear-AWSABLIRLL000K.log \
| sed -n '/ source .*320_autoexclude.sh/,/ source /p' \
| sed -n '/find_disk_and_multipath fs:\/var\/lib\/rancher/,/find_disk_and_multipath fs/p'
shows (excerpts)
+++ find_disk_and_multipath fs:/var/lib/rancher
+++ find_disk fs:/var/lib/rancher
+++ get_parent_components fs:/var/lib/rancher disk
...
+++ echo /dev/nvme0n1
...
++ disks=/dev/nvme0n1
++ for disk in $disks
++ IsInArray /dev/nvme0n1 /dev/nvme0n1 /dev/nvme2n1 /dev/nvme1n1
so it seems the call
get_parent_components fs:/var/lib/rancher disk
falsely returns '/dev/nvme0n1'
as parent 'disk' component of 'fs:/var/lib/rancher'
(and that disk is already in the used_disks array)
so the true parent 'disk' component of 'fs:/var/lib/rancher'
which is '/dev/nvme4n1' is never added to the used_disks array
so that in the end the subsequent code in
layout/save/default/320_autoexclude.sh
while read disk name junk ; do
if ! IsInArray "$name" "${used_disks[@]}" ; then
DebugPrint "Automatically excluding disk $name (not used by any mounted filesystem)"
does
++ read disk name junk
++ IsInArray /dev/nvme4n1 /dev/nvme0n1 /dev/nvme2n1 /dev/nvme1n1 /dev/nvme6n1
++ return 1
++ DebugPrint 'Automatically excluding disk /dev/nvme4n1 (not used by any mounted filesystem)'
gdha commented at 2025-02-14 08:34:¶
The message:
2025-02-14 09:16:26.707378591 No partitions found on /dev/nvme4n1
comes from the script 200_partition_layout.sh: the function extract_partitions doesn't find any partition for device nvme4n1:
#-> ls /sys/block/nvme4n1/nvme*
ls: cannot access '/sys/block/nvme4n1/nvme*': No such file or directory
However, there is one partition available:
#-> parted /dev/nvme4n1 print
Model: Amazon Elastic Block Store (nvme)
Disk /dev/nvme4n1: 172GB
Sector size (logical/physical): 512B/4096B
Partition Table: loop
Disk Flags:
Number Start End Size File system Flags
1 0.00B 172GB 172GB xfs
On the other hand fdisk sees nothing, but we do not rely on fdisk anymore:
#-> fdisk -l /dev/nvme4n1
Disk /dev/nvme4n1: 160 GiB, 171798691840 bytes, 335544320 sectors
Disk model: Amazon Elastic Block Store
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Which is correct? Could it be that our function extract_partitions is incomplete?
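A side note on the sysfs check (an illustration of general sysfs behaviour, not output from the system above): when a disk does have a partition table, the partition directories show up below the disk's sysfs directory, e.g.
ls /sys/block/nvme0n1/ | grep '^nvme'
nvme0n1p1
nvme0n1p2
so an empty glob like /sys/block/nvme4n1/nvme* is consistent with a filesystem created directly on the whole disk.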
gdha commented at 2025-02-14 09:19:¶
The reason is most likely that the file system is mounted on the device itself and not on a partition:
#-> lsblk /dev/nvme4n1
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
nvme4n1 259:5 0 160G 0 disk /var/lib/rancher
#-> blkid -s UUID -o value /dev/nvme4n1
89142993-8500-42fe-8458-e8ccda4a7113
#-> blkid -s UUID -o value /dev/nvme4n1p1
In the /etc/fstab file we have:
UUID=89142993-8500-42fe-8458-e8ccda4a7113 /var/lib/rancher xfs defaults 0 0
Search goes on...
gdha commented at 2025-02-14 11:07:¶
#-> cat /home/gdhaese1/projects/rear/var/lib/rear/layout/disktodo.conf
todo /dev/nvme0n1 disk
todo /dev/nvme0n1p1 part
todo /dev/nvme0n1p2 part
todo /dev/nvme1n1 disk
todo /dev/nvme2n1 disk
todo /dev/nvme3n1 disk
todo /dev/nvme4n1 disk
todo /dev/nvme5n1 disk
todo /dev/nvme6n1 disk
todo /dev/nvme7n1 disk
todo /dev/nvme7n1p1 part
todo /dev/nvme7n1p2 part
todo /dev/sda disk
todo /dev/nvme3n1 disk
todo /dev/nvme5n1 disk
todo /dev/nvme4n1 disk
todo pv:/dev/nvme2n1 lvmdev
todo /dev/vg_app lvmgrp
todo /dev/mapper/vg_app-lg_app lvmvol
todo fs:/app fs
todo fs:/ fs
todo fs:/app/elasticsearch fs
todo fs:/var/lib/rancher fs
todo fs:/var/log fs
todo fs:/app/data/longhorn fs
We noticed that the disks are mentioned twice, and therefore the function get_component_type
get_component_type() {
grep -E "^[^ ]+ $1 " $LAYOUT_TODO | cut -d " " -f 3
}
returns "disk" twice:
++++ get_component_type /dev/nvme4n1
++++ grep -E '^[^ ]+ /dev/nvme4n1 ' /home/gdhaese1/projects/rear/var/lib/rear/layout/disktodo.conf
++++ cut -d ' ' -f 3
+++ type='disk
disk'
+++ [[ disk
disk != \d\i\s\k ]]
+++ continue
To fix this we could append uniq to the function:
grep -E "^[^ ]+ $1 " $LAYOUT_TODO | cut -d " " -f 3 | uniq
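With the duplicated 'todo /dev/nvme4n1 disk' entries in the disktodo.conf shown above, the piped command would then yield a single line (a sketch of the expected effect, not a verified run; the two matching lines are adjacent in the grep output, so plain uniq collapses them):
grep -E "^[^ ]+ /dev/nvme4n1 " /home/gdhaese1/projects/rear/var/lib/rear/layout/disktodo.conf | cut -d " " -f 3 | uniq
disk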
jsmeix commented at 2025-02-14 12:28:¶
Plain 'uniq' needs sorted input e.g.
# echo -e 'one\ntwo\nthree\nthree\ntwo\none'
one
two
three
three
two
one
# echo -e 'one\ntwo\nthree\nthree\ntwo\none' | uniq
one
two
three
two
one
# echo -e 'one\ntwo\nthree\nthree\ntwo\none' | sort | uniq
one
three
two
But sorting disktodo.conf may break how that code is meant to work,
because the entries in disktodo.conf seem to need to stay in the given order,
i.e. in the order in which things must be recreated (first partitions
on disks, then higher level volumes like LVM, finally filesystems).
To help with such issues I made the function 'unique_unsorted'
see usr/share/rear/lib/global-functions.sh
https://github.com/rear/rear/blob/rear-2.9/usr/share/rear/lib/global-functions.sh#L37
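For illustration, an order-preserving deduplication can be done with a one-pass awk filter; a minimal sketch (the actual unique_unsorted() in global-functions.sh may be implemented differently):
unique_unsorted() {
    # print each line only the first time it appears, keeping the input order
    # (reads the given files, or stdin when no file is given)
    awk '!seen[$0]++' "$@"
}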
gdha commented at 2025-02-14 12:44:¶
@jsmeix Thank you for the golden tip. I removed the uniq from the function get_component_type and created a small script to do what you proposed:
#-> cat save/default/305_uniq_disktodo.sh
# make LAYOUT_TODO file uniq but not changing the order in any way
unique_unsorted $LAYOUT_TODO >${LAYOUT_TODO}.new
mv -f ${LAYOUT_TODO}.new $LAYOUT_TODO
This alone does not yet fix the problem, but at least the input files are cleaner without duplicates.
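A quick way to check that no duplicates remain after the script has run (a hypothetical verification, not part of the script above):
sort $LAYOUT_TODO | uniq -d    # prints nothing when every entry is unique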
jsmeix commented at 2025-02-14 12:50:¶
Hmmm...
I think when
# Get the type of a layout component
get_component_type() {
grep -E "^[^ ]+ $1 " $LAYOUT_TODO | cut -d " " -f 3
}
returns more than one word (i.e. more than one line)
and those results are not all identical,
then it is a BugError in ReaR because then
one same layout component is stored with several
different component types in disktodo.conf
On the other hand when all results are identical
things are probably OK because then one same layout component
is stored one or more times with one same component type
in disktodo.conf
So plain 'uniq' could be the exact right thing here
to distinguish the OK case from the BugError case
for example something like (untested code proposal):
# Get the type of a layout component:
# It is OK when one same layout component is stored in disktodo.conf
# several times with one same component type (then 'uniq' returns one value)
# but it is a Bug in ReaR when one same layout component is stored
# in disktodo.conf several times with different component types
# (then 'uniq' returns more than one value).
get_component_type() {
local component="$1"
local component_types=()
component_types=( $( grep -E "^[^ ]+ $component " $LAYOUT_TODO | cut -d " " -f 3 | uniq ) )
test ${#component_types[@]} -lt 1 && return 1
test ${#component_types[@]} -gt 1 && BugError "Layout component '$component' has more than one type in $LAYOUT_TODO"
echo "${component_types[0]}"
}
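With the duplicated 'todo /dev/nvme4n1 disk' entries from the disktodo.conf above, this proposal would behave like (an untested sketch of the expected behaviour):
get_component_type /dev/nvme4n1    # echoes 'disk' (identical duplicates collapse to one value)
get_component_type /dev/sdz        # returns 1 (a made-up component with no disktodo.conf entry)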
gdha commented at 2025-02-14 12:55:¶
It does work with the new script save/default/305_uniq_disktodo.sh (I still had an exit in place for testing, so I couldn't tell before):
#-> grep -v \# /home/gdhaese1/projects/rear/var/lib/rear/layout/disklayout.conf
disk /dev/nvme0n1 32212254720 gpt
part /dev/nvme0n1 1048576 1048576 rear-noname bios_grub /dev/nvme0n1p1
part /dev/nvme0n1 32210140672 2097152 rear-noname none /dev/nvme0n1p2
disk /dev/nvme1n1 590558003200 loop
disk /dev/nvme2n1 21474836480 unknown
disk /dev/nvme4n1 171798691840 loop
disk /dev/nvme5n1 21474836480 loop
disk /dev/nvme6n1 214748364800 loop
disk /dev/nvme5n1 21474836480 loop
disk /dev/nvme4n1 171798691840 loop
lvmdev /dev/vg_app /dev/nvme2n1 5jSbmk-ooLT-rpTk-A3nB-cum8-FF1v-fmCRG6 21470642176
lvmgrp /dev/vg_app 4096 5119 20967424
lvmvol /dev/vg_app lg_app 21470642176b linear
fs /dev/mapper/vg_app-lg_app /app xfs uuid=beb48049-5b75-46c2-8e75-a6f8d3881d0e label=lg_app options=rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota
fs /dev/nvme0n1p2 / xfs uuid=c9aa25ee-e65c-4818-9b2f-fa411d89f585 label= options=rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota
fs /dev/nvme1n1 /app/elasticsearch xfs uuid=6281fd1f-d107-427a-a22c-2f575fb7a58c label= options=rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota
fs /dev/nvme4n1 /var/lib/rancher xfs uuid=89142993-8500-42fe-8458-e8ccda4a7113 label= options=rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota
fs /dev/nvme5n1 /var/log xfs uuid=0f12a909-7cbf-487d-8bcb-663eeaabf3c2 label= options=rw,nosuid,nodev,noexec,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota
fs /dev/nvme6n1 /app/data/longhorn xfs uuid=5a2a74b7-9c6f-4af6-bbc6-9e2ecd52a314 label= options=rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota
However, your proposal for the function get_component_type looks promising too. I can test it with and without the new script!
The updated function get_component_type works both ways = excellent news.
However, without my new script save/default/305_uniq_disktodo.sh the duplicate entries stay, therefore I would propose to keep both updates.
jsmeix commented at 2025-02-14 13:31:¶
The whole layout related code is often very hard for me
to understand, in particular I often fail to see
how all those layout related code parts are meant / intended
to work together.
So usually I don't dare to touch layout related code
because I fail to imagine / foresee what will actually
happen when I change something.
I assume this is because the layout related code is rather
old code from times when 'lsblk' was not yet available
or not available with its current functionality.
I mean:
I think nowadays 'lsblk' could be sufficient to handle
all needed layout components and their dependencies
i.e. parent(s) <-> child(ren) dependencies, e.g.
one disk with several partitions (parent <-> children)
versus two disks for one RAID1 (parents <-> child).
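For example, lsblk can report those parent/child relations directly; a sketch of such a query (not something the current layout code does):
lsblk -po NAME,PKNAME,TYPE,FSTYPE,MOUNTPOINT
Here PKNAME names the parent kernel device of each partition or LVM volume, and a whole-disk filesystem like the one on /dev/nvme4n1 shows up as a 'disk' row with a MOUNTPOINT and no child 'part' rows.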
In the end what I would like to say is:
For layout related code changes I cannot form my own
proper opinion whether some change looks OK to me or not
because I don't have sufficient overview in this area :-(
[Export of Github issue for rear/rear.]