#3175 PR merged: Automatically include mounted btrfs subvolumes in NETFS backups
Labels: enhancement
lzaoral opened issue at 2024-03-07 12:00:
Pull Request Details:
- Type: Enhancement
- Impact: High
- Reference to related issue (URL): https://github.com/rear/rear/issues/2928
- How was this pull request tested? "rear savelayout" and manual inspection of generated files, plus backup/restore of a Fedora Rawhide machine
- Description of the changes in this pull request:
  - automatically include mounted btrfs subvolumes in NETFS backups
  - improve generation of $RESTORE_EXCLUDE_FILE
jsmeix commented at 2024-03-07 13:24:
@lzaoral
thank you for this enhancement!
I will test how it behaves on SLES systems
with their rather complicated default btrfs structure.
Offhandedly I think the main problem is
possibly mounted btrfs snapshot subvolumes, see
"When btrfs is used with snapshots ...
then usual backup and restore cannot work." in
https://en.opensuse.org/SDB:Disaster_Recovery#btrfs
I.e. when there are mounted btrfs "thingies"
listed as 'btrfsmountedsubvol' in disklayout.conf
that are also listed as '#btrfssnapshotsubvol'.
For example a disklayout.conf on SLES15-SP5 (excerpts)
btrfsdefaultsubvol /dev/sda2 / 268 @/.snapshots/1/snapshot
...
#btrfssnapshotsubvol /dev/sda2 / 272 @/.snapshots/2/snapshot
#btrfssnapshotsubvol /dev/sda2 / 273 @/.snapshots/3/snapshot
#btrfssnapshotsubvol /dev/sda2 / 274 @/.snapshots/4/snapshot
...
btrfsmountedsubvol /dev/sda2 / rw,relatime,space_cache,subvolid=268,subvol=/@/.snapshots/1/snapshot @/.snapshots/1/snapshot
btrfsmountedsubvol /dev/sda2 /.snapshots rw,relatime,space_cache,subvolid=267,subvol=/@/.snapshots @/.snapshots
btrfsmountedsubvol /dev/sda2 /boot/grub2/x86_64-efi rw,relatime,space_cache,subvolid=265,subvol=/@/boot/grub2/x86_64-efi @/boot/grub2/x86_64-efi
btrfsmountedsubvol /dev/sda2 /root rw,relatime,space_cache,subvolid=262,subvol=/@/root @/root
btrfsmountedsubvol /dev/sda2 /opt rw,relatime,space_cache,subvolid=263,subvol=/@/opt @/opt
btrfsmountedsubvol /dev/sda2 /home rw,relatime,space_cache,subvolid=264,subvol=/@/home @/home
btrfsmountedsubvol /dev/sda2 /boot/grub2/i386-pc rw,relatime,space_cache,subvolid=266,subvol=/@/boot/grub2/i386-pc @/boot/grub2/i386-pc
btrfsmountedsubvol /dev/sda2 /srv rw,relatime,space_cache,subvolid=261,subvol=/@/srv @/srv
btrfsmountedsubvol /dev/sda2 /tmp rw,relatime,space_cache,subvolid=260,subvol=/@/tmp @/tmp
btrfsmountedsubvol /dev/sda2 /usr/local rw,relatime,space_cache,subvolid=259,subvol=/@/usr/local @/usr/local
btrfsmountedsubvol /dev/sda2 /var rw,relatime,space_cache,subvolid=258,subvol=/@/var @/var
For example, assume that in addition to @/.snapshots/1/snapshot
(the current system snapshot, which is mounted at /)
some other snapshots of the system like
@/.snapshots/2/snapshot and @/.snapshots/4/snapshot
are mounted e.g. at /snapshot2 and /snapshot4.
Then a 'tar' backup would contain the system files
basically three times:
- the files of what is mounted at /
- the files of what is mounted at /snapshot2
- the files of what is mounted at /snapshot4
So the backup would be basically about three times
as big as if only what is mounted at / was backed up.
But during 'tar' restore there is no deduplication
so the restore would basically need about three times
the disk space as the original system needed.
pcahyna commented at 2024-03-07 13:32:
I think the main problem is
possibly mounted btrfs snapshot subvolumes
I am not a btrfs expert at all, but is it possible to distinguish
snapshot subvolumes from "normal" (non-snapshot) subvolumes? Then we
could save this subvolume metadata (snapshot yes/no) and do something
based on the information when recreating and restoring. First we would
probably just skip snapshots, later we could do something more
intelligent if possible.
It is definitely possible to distinguish snapshots from regular
filesystems (filessytems are equivalent to btrfs subvolumes) in ZFS. It
is also possible to recognize snapshots from regular volumes in LVM.
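For btrfs specifically, snapshots can indeed be told apart: `btrfs subvolume list -s <mnt>` lists only snapshots, and `btrfs subvolume show <path>` reports a Parent UUID for snapshots while plain subvolumes show '-' there. A minimal sketch of such a check follows; this is an illustration assuming that output format, not code from this PR, and it reads the output on stdin so the parsing can be tried without root or a btrfs mount:

```shell
# Sketch, not ReaR code: classify a subvolume from `btrfs subvolume show`
# output. Assumption: snapshots carry a real "Parent UUID" while plain
# subvolumes show "-" there.
is_snapshot_show_output() {
    awk -F':' '
        /Parent UUID/ {
            v = $2
            gsub(/[ \t]/, "", v)
            if (v != "" && v != "-") found = 1
        }
        END { exit !found }
    '
}
# Intended use on a real system (root required):
#   btrfs subvolume show /home | is_snapshot_show_output && echo "is a snapshot"
```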
jsmeix commented at 2024-03-07 13:49:
Only an offhanded thought:
I fear btrfs normal subvolumes versus btrfs snapshot subvolumes
is only one example of a very generic problem when by default
every mounted "thingy" is included in the 'tar' backup:
I think it is in general possible that one same
"mountable thingy" can be mounted at the same time
at different mount points e.g. at '/here' and '/there'.
When '/here' and '/there' are included in a 'tar' backup
things may get restored twice as distinct sets of files
and not as one same set of files that is mounted two times
under the mountpoint directories '/here' and '/there'.
Tomorrow I will experiment a bit with that.
jsmeix commented at 2024-03-07 13:52:
Perhaps we can by default have every mounted "thingy"
included in the 'tar' backup
BUT
we may need some check for duplicates in the 'tar' backup
i.e. something that detects when one same "mountable thingy"
will become included in the 'tar' backup more than once.
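One way such a duplicate check could look (a sketch under the assumption that `findmnt -n -r -o SOURCE` identifies each mounted "thingy", including the btrfs subvolume in brackets like /dev/sda2[/@/home]); the detection itself is a plain stdin filter so it can be tried with canned data:

```shell
# Sketch: report each mount source the second time it is seen.
# On a real system one would feed it:  findmnt -n -r -o SOURCE
duplicate_mount_sources() {
    awk 'seen[$0]++ == 1 { print $0 " is mounted more than once" }'
}
```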
jsmeix commented at 2024-03-08 08:49:
Currently I am exploring how 'tar' behaves in general
when exact same files are provided as 'tar' arguments
to be archived.
It seems 'tar' behaves quite forgivingly in this case:
# mkdir test
# cd test
# echo foo >foo
# tar -cvvvf test.tar foo foo
-rw-r--r-- root/root 4 2024-03-08 09:41 foo
hrw-r--r-- root/root 0 2024-03-08 09:41 foo link to foo
# tar -tvvvf test.tar
-rw-r--r-- root/root 4 2024-03-08 09:41 foo
hrw-r--r-- root/root 0 2024-03-08 09:41 foo link to foo
# mkdir untartest
# cd untartest/
# tar -xvvvf ../test.tar
-rw-r--r-- root/root 4 2024-03-08 09:41 foo
hrw-r--r-- root/root 0 2024-03-08 09:41 foo link to foo
# ls -l
total 4
-rw-r--r-- 1 root root 4 Mar 8 09:41 foo
So perhaps only mounted btrfs snapshot subvolumes
added to 'tar' archives cause real problems in practice.
In this case btrfs snapshot subvolumes should be excluded
by default from being added to what 'tar' should archive.
I think it is OK if a user mounts the same stuff
at different mount points and ReaR includes those
mount points by default in the 'tar' backup;
then it is up to the user to manually exclude things
as needed from his backup.
pcahyna commented at 2024-03-08 08:59:
@jsmeix
I fear btrfs normal subvolumes versus btrfs snapshot subvolumes is only one example of a very generic problem when by default every mounted "thingy" is included in the 'tar' backup:
I think it is in general possible that one same "mountable thingy" can be mounted at the same time at different mount points
I disagree that it is an example of this problem. Snapshots are not the same thing mounted at different places. They are different things mounted at different places - snapshots exist because their content is (at least in principle) different.
One filesystem mounted at more places can occur as well, and it will result in an explosion of backup data, but restoring it twice should not increase the size of the restored system, only slow down the restore, because you keep restoring to the same filesystem.
I would not try to solve these two problems in the same way (cf. RFC 1925 item 5).
pcahyna commented at 2024-03-08 09:51:
It seems 'tar' behaves quite forgivingly in this case:
# mkdir test
# cd test
# echo foo >foo
# tar -cvvvf test.tar foo foo
-rw-r--r-- root/root 4 2024-03-08 09:41 foo
hrw-r--r-- root/root 0 2024-03-08 09:41 foo link to foo
# tar -tvvvf test.tar
-rw-r--r-- root/root 4 2024-03-08 09:41 foo
hrw-r--r-- root/root 0 2024-03-08 09:41 foo link to foo
# mkdir untartest
# cd untartest/
# tar -xvvvf ../test.tar
-rw-r--r-- root/root 4 2024-03-08 09:41 foo
hrw-r--r-- root/root 0 2024-03-08 09:41 foo link to foo
# ls -l
total 4
-rw-r--r-- 1 root root 4 Mar 8 09:41 foo
It thinks that the doubled file names are different names for the same files (i.e. hardlinks), which is not entirely correct - not sure if it can have some unwanted consequences or not.
jsmeix commented at 2024-03-08 12:16:
Yes.
The whole point of my experiments with 'tar' here
is to find out if my "fear" above is true or not and
in general to better understand what we have to deal with.
If it is actually only one root problem
then this root problem should be solved
(instead of solving each of its instances).
If it is actually several separate problems then
each separate problem should be solved separately.
https://github.com/rear/rear/pull/3175#issuecomment-1985290555
indicates that it is several separated problems
(but this is only my very first test in this area).
From my experiments with 'tar' in the past I know that
'tar' behaves deterministically (i.e. as programmed and
documented when reading the whole 'tar' manual carefully)
but that could appear rather often 'unexpectedly'
(i.e. different than what one may expect offhandedly), e.g.
https://github.com/rear/rear/issues/2911#issuecomment-1398346148
pcahyna commented at 2024-03-08 12:45:
If it is actually only one root problem then this root problem should be solved (instead of solving each of its instances).
If it is actually several separate problems then each separate problem should be solved separately.
That's an interesting idea. For multiple identical arguments to tar, tar duplicates the backup and considers the secondary copy as a hardlink to the first copy. I checked that the same happens when there is a filesystem mounted multiple times:
# mkdir /mnt/mount1
# mkdir /mnt/mount2
# mount /dev/vdb /mnt/mount1
# mount /dev/vdb /mnt/mount2
# touch /mnt/mount1/foo
# ls -l /mnt/mount2/foo
-rwxr-xr-x. 1 root root 0 Mar 8 07:36 /mnt/mount2/foo
# tar cvvf /dev/null /mnt/mount1 /mnt/mount2
tar: Removing leading `/' from member names
drwxr-xr-x root/root 0 1969-12-31 19:00 /mnt/mount1/
-rwxr-xr-x root/root 0 2024-03-08 07:36 /mnt/mount1/foo
tar: Removing leading `/' from hard link targets
drwxr-xr-x root/root 0 1969-12-31 19:00 /mnt/mount2/
hrwxr-xr-x root/root 0 2024-03-08 07:36 /mnt/mount2/foo link to mnt/mount1/foo
Is the same happening with different btrfs snapshots mounted at different mountpoints? I.e. does tar consider files in different snapshots (originally same, but possibly different when they have been modified since the snapshot was taken) as hardlinks to the same file?
lzaoral commented at 2024-03-08 12:48:
Thank you for the feedback, @jsmeix! I'll amend the code to skip backup of all mounted btrfs snapshot subvolumes.
The duplication of files in the backup when a filesystem/btrfs subvolume is mounted more than once is a different (though related) issue, so I suggest resolving it separately.
jsmeix commented at 2024-03-08 15:35:
I tested it with SLES15-SP5
with the default btrfs structure
on a QEMU/KVM test VM:
# lsblk -ipo NAME,TRAN,TYPE,FSTYPE,SIZE,MOUNTPOINTS
NAME TRAN TYPE FSTYPE SIZE MOUNTPOINTS
/dev/sda ata disk 15G
|-/dev/sda1 part 8M
|-/dev/sda2 part btrfs 13G /var
| /usr/local
| /root
| /tmp
| /srv
| /boot/grub2/i386-pc
| /opt
| /home
| /boot/grub2/x86_64-efi
| /.snapshots
| /
`-/dev/sda3 part swap 2G [SWAP]
I was in particular interested in how things behave
with the "well known" (to SLES users) SUSE-specific
BACKUP_PROG_INCLUDE=( $( findmnt -n -r -o TARGET -t btrfs | grep -v '^/$' | egrep -v 'snapshots|crash' ) )
manual setting in etc/rear/local.conf,
cf. conf/examples/SLE12-SP2-btrfs-example.conf,
so I have
so I have
OUTPUT=ISO
BACKUP=NETFS
BACKUP_OPTIONS="nfsvers=3,nolock"
BACKUP_URL=nfs://192.168.178.66/nfs
REQUIRED_PROGS+=( snapper chattr )
PROGS+=( lsattr )
COPY_AS_IS+=( /usr/lib/snapper/installation-helper /etc/snapper/config-templates/default )
BACKUP_PROG_INCLUDE=( /boot/grub2/i386-pc /boot/grub2/x86_64-efi /home /opt /root /srv /tmp /usr/local /var )
With that I got duplicated things in the backup.tar.gz
To make ReaR behave backward compatible for SLES users
and because it seems to be "the right thing" in general
I implemented
https://github.com/rear/rear/pull/3177
With these additional changes I no longer get
duplicated things in the backup.tar.gz
BUT
I did not yet test "rear recover".
This will be done next week.
@lzaoral @pcahyna @rear/contributors
I wish you a relaxed and recovering weekend!
pcahyna commented at 2024-03-08 16:19:
@jsmeix if you have snapshots, can you please test https://github.com/rear/rear/pull/3175#issuecomment-1985632419 : "does tar consider files in different snapshots (originally same, but possibly different when they have been modified since the snapshot was taken) as hardlinks to the same file?" ?
pcahyna commented at 2024-03-10 13:34:
I was in particular interested in how things behave with the "well known" (to SLES users) SUSE-specific
BACKUP_PROG_INCLUDE=( $( findmnt -n -r -o TARGET -t btrfs | grep -v '^/$' | egrep -v 'snapshots|crash' ) )
manual setting in etc/rear/local.conf, cf. conf/examples/SLE12-SP2-btrfs-example.conf, so I have
OUTPUT=ISO
BACKUP=NETFS
BACKUP_OPTIONS="nfsvers=3,nolock"
BACKUP_URL=nfs://192.168.178.66/nfs
REQUIRED_PROGS+=( snapper chattr )
PROGS+=( lsattr )
COPY_AS_IS+=( /usr/lib/snapper/installation-helper /etc/snapper/config-templates/default )
BACKUP_PROG_INCLUDE=( /boot/grub2/i386-pc /boot/grub2/x86_64-efi /home /opt /root /srv /tmp /usr/local /var )
With that I got duplicated things in the backup.tar.gz
Is it a regression with this PR, or did you get duplicated entries in backup.tar.gz even before? What are the duplicated entries? Aren't you missing BACKUP_ONLY_INCLUDE="yes"? (But then you should probably add /boot to BACKUP_PROG_INCLUDE, unless on SLES you have /boot as part of /.)
pcahyna commented at 2024-03-11 13:23:
@jsmeix if you have snapshots, can you please test #3175 (comment) : "does tar consider files in different snapshots (originally same, but possibly different when they have been modified since the snapshot was taken) as hardlinks to the same file?" ?
I tested ZFS and the same files in a snapshot and in the original
filesystem do not show up as hardlinks to the same file in the tar
output. Of course, although Btrfs is in many ways analogous to ZFS, it
can behave differently in details like that.
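A hedged note on the mechanism (not tested on btrfs here): GNU tar marks a member as a hardlink when it has already archived a file with the same device and inode numbers, and btrfs normally gives each subvolume its own anonymous device number, which suggests files in different snapshots would not collapse into hardlinks. The dev/inode mechanism itself can be demonstrated on any filesystem:

```shell
# Sketch: tar stores the second name of a (dev,inode) pair as a hardlink.
tmp=$(mktemp -d)
echo data > "$tmp/a"
ln "$tmp/a" "$tmp/b"                      # same inode as "a"
tar -cf "$tmp/t.tar" -C "$tmp" a b
links=$(tar -tvf "$tmp/t.tar" | grep -c 'link to')
echo "$links hardlink member(s)"          # expect: 1 hardlink member(s)
rm -rf "$tmp"
```

Two snapshot copies of a file would only be stored this way if stat() reported identical device and inode numbers for both.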
pcahyna commented at 2024-03-26 16:43:
Hi all, reviewing what needs to be done there.
- [ ] what are the duplicated entries in backup.tar.gz currently? https://github.com/rear/rear/pull/3175#issuecomment-1985907542
- [ ] avoid sort -u, to be replaced by uniq_unsorted in #3177
- [ ] test recovery with the SUSE-specific BACKUP_PROG_INCLUDE setting https://github.com/rear/rear/pull/3175#issuecomment-1985907542
- [ ] test snapshots: does tar consider files in different snapshots (originally same, but possibly different when they have been modified since the snapshot was taken) as hardlinks to the same file? https://github.com/rear/rear/pull/3175#issuecomment-1985986432
- [ ] exclude snapshots from backup: https://github.com/rear/rear/pull/3175#issuecomment-1985635686
pcahyna commented at 2024-04-09 10:04:
Hi @jsmeix can you please have a look? I believe the first four items in the checklist above are for you (the second only partially, you implement uniq_unsorted in #3177 and then @lzaoral will use it here).
Is the task list ok?
jsmeix commented at 2024-04-09 12:48:
@pcahyna
I will have a look.
First I would like to implement uniq_unsorted,
or perhaps even better named unique_unsorted, cf.
https://github.com/rear/rear/pull/3177#issuecomment-2045095158
so that @lzaoral could use it here.
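For context, such a helper can be a one-line awk filter; this is only a sketch of the idea (the actual implementation went in via #3177 and may differ): keep the first occurrence of each input line and preserve the original order, which sort -u would destroy.

```shell
# Sketch of a unique_unsorted helper: drop subsequent duplicate lines
# while keeping the original order (sort -u would reorder the list).
unique_unsorted() {
    awk '!seen[$0]++'
}
unique_unsorted <<'EOF'
/var
/home
/var
/tmp
/home
EOF
# prints /var, /home, /tmp - one per line, in first-seen order
```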
lzaoral commented at 2024-04-29 11:23:
@jsmeix Thank you for implementing unique_unsorted! Hopefully, I'll get to the exclusion of snapshot volumes this week.
lzaoral commented at 2024-05-14 07:59:
@jsmeix @pcahyna The automatic exclusion of snapshots from backups is implemented in 027785f6e9f1cabee334b43ac9afa0fc8291dc75.
jsmeix commented at 2024-05-14 08:13:
@lzaoral
see my
https://github.com/rear/rear/commit/47429060f749f6c2968ca867c529f995be053f0a#r141983908
which I copy here to be safe that it is not lost because
https://github.com/rear/rear/commit/027785f6e9f1cabee334b43ac9afa0fc8291dc75
shows
This commit does not belong to any branch on this repository,
and may belong to a fork outside of the repository.
Copy of my
https://github.com/rear/rear/commit/47429060f749f6c2968ca867c529f995be053f0a#r141983908
follows here:
echo "Mounted btrfs snapshot subvolumes are autoexcluded"
Here you must use
echo "# ..."
because here STDOUT gets written into disklayout.conf
because that code is within the
# Begin of group command that appends its stdout to DISKLAYOUT_FILE:
{
...
} 1>>$DISKLAYOUT_FILE
# End of group command that appends its stdout to DISKLAYOUT_FILE
Yes - I know - that is horrible coding style
which needs to be cleaned up - at some time -
as time permits - i.e. "never in practice" :-(
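A minimal sketch of the pattern being described (an illustration with made-up layout values, not the actual ReaR code): inside the group command every plain echo lands in disklayout.conf, so a message must be emitted as a '# ...' comment line (or redirected to stderr) to avoid corrupting the file.

```shell
# Everything echoed inside the braces is appended to DISKLAYOUT_FILE,
# so messages must be written as config comments.
DISKLAYOUT_FILE=$(mktemp)
{
    echo "# Mounted btrfs snapshot subvolumes are autoexcluded"
    echo "btrfsmountedsubvol /dev/sda2 /home rw,relatime @/home"
} 1>>"$DISKLAYOUT_FILE"
grep -c '^#' "$DISKLAYOUT_FILE"    # prints 1: the message became a comment, not a layout entry
```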
jsmeix commented at 2024-05-14 08:47:
The automatic exclusion of snapshots from backups
with fixed echo "# ..."
is implemented in
https://github.com/rear/rear/commit/018c5281a96b33cff49dd23c6a22428554d189e2
jsmeix commented at 2024-05-14 12:50:
@lzaoral
if time permits please have a look at my
https://github.com/rear/rear/pull/3221
if it fits together with your changes here
in particular regarding your changed
layout/save/default/340_generate_mountpoint_device.sh
jsmeix commented at 2024-05-15 07:26:
I tested "rear -D mkbackup"
for this pull request here
together with my changes in my
https://github.com/rear/rear/pull/3221
on my SLES15 SP5 test VM with the
SUSE default btrfs structure
and as far as I see up to now
all looks perfectly well
except one possible issue that I described
near the end of my "Details" at "BUT".
Details:
Disk layout:
# lsblk -ipo NAME,KNAME,PKNAME,TRAN,TYPE,FSTYPE,SIZE,MOUNTPOINTS /dev/sda
NAME KNAME PKNAME TRAN TYPE FSTYPE SIZE MOUNTPOINTS
/dev/sda /dev/sda ata disk 15G
|-/dev/sda1 /dev/sda1 /dev/sda part 8M
|-/dev/sda2 /dev/sda2 /dev/sda part btrfs 13G /var
| /usr/local
| /root
| /tmp
| /boot/grub2/i386-pc
| /srv
| /boot/grub2/x86_64-efi
| /opt
| /home
| /.snapshots
| /
`-/dev/sda3 /dev/sda3 /dev/sda part swap 2G [SWAP]
SUSE default btrfs structure:
# findmnt -t btrfs
TARGET SOURCE FSTYPE OPTIONS
/ /dev/sda2[/@/.snapshots/1/snapshot] btrfs rw,relatime,space_cache,subvolid=268,subvol=/@/.snapshots/1/snapshot
├─/home /dev/sda2[/@/home] btrfs rw,relatime,space_cache,subvolid=264,subvol=/@/home
├─/tmp /dev/sda2[/@/tmp] btrfs rw,relatime,space_cache,subvolid=260,subvol=/@/tmp
├─/root /dev/sda2[/@/root] btrfs rw,relatime,space_cache,subvolid=262,subvol=/@/root
├─/opt /dev/sda2[/@/opt] btrfs rw,relatime,space_cache,subvolid=263,subvol=/@/opt
├─/boot/grub2/x86_64-efi /dev/sda2[/@/boot/grub2/x86_64-efi] btrfs rw,relatime,space_cache,subvolid=265,subvol=/@/boot/grub2/x86_64-efi
├─/var /dev/sda2[/@/var] btrfs rw,relatime,space_cache,subvolid=258,subvol=/@/var
├─/.snapshots /dev/sda2[/@/.snapshots] btrfs rw,relatime,space_cache,subvolid=267,subvol=/@/.snapshots
├─/boot/grub2/i386-pc /dev/sda2[/@/boot/grub2/i386-pc] btrfs rw,relatime,space_cache,subvolid=266,subvol=/@/boot/grub2/i386-pc
├─/srv /dev/sda2[/@/srv] btrfs rw,relatime,space_cache,subvolid=261,subvol=/@/srv
└─/usr/local /dev/sda2[/@/usr/local] btrfs rw,relatime,space_cache,subvolid=259,subvol=/@/usr/local
etc/rear/local.conf
# grep -v '^#' etc/rear/local.conf
OUTPUT=ISO
BACKUP=NETFS
BACKUP_OPTIONS="nfsvers=3,nolock"
BACKUP_URL=nfs://192.168.178.66/nfs
REQUIRED_PROGS+=( snapper chattr )
PROGS+=( lsattr su )
COPY_AS_IS+=( /usr/lib/snapper/installation-helper /etc/snapper/config-templates/default )
BACKUP_PROG_INCLUDE=( /boot/grub2/i386-pc
/boot/grub2/x86_64-efi
/home
/opt
/root
/srv
/tmp
/usr/local
/var
/
/boot/grub2/i386-pc
/boot/grub2/x86_64-efi
/boot/grub2/i386-pc
/home
/
/opt )
BACKUP_PROG_EXCLUDE=( /var/tmp
/qqq
/var/tmp
/tmp )
POST_RECOVERY_SCRIPT=( 'if snapper --no-dbus -r $TARGET_FS_ROOT get-config | grep -q "^QGROUP.*[0-9]/[0-9]" ; then snapper --no-dbus -r $TARGET_FS_ROOT set-config QGROUP= ; snapper --no-dbus -r $TARGET_FS_ROOT setup-quota && echo snapper setup-quota done || echo snapper setup-quota failed ; else echo snapper setup-quota not used ; fi' )
SSH_ROOT_PASSWORD='rear'
USE_DHCLIENT="yes"
PROGRESS_MODE="plain"
PROGRESS_WAIT_SECONDS="5"
MODULES=( loaded_modules )
FIRMWARE_FILES=( no )
The duplicates in BACKUP_PROG_INCLUDE and BACKUP_PROG_EXCLUDE
are intentional to test that subsequent duplicates are ignored.
disklayout.conf
# grep -v '^#' var/lib/rear/layout/disklayout.conf
disk /dev/sda 16106127360 gpt
part /dev/sda 8388608 1048576 rear-noname bios_grub /dev/sda1
part /dev/sda 13949206528 9437184 rear-noname legacy_boot /dev/sda2
part /dev/sda 2147466752 13958643712 rear-noname swap /dev/sda3
fs /dev/sda2 / btrfs uuid=bdec53c2-1ee8-4268-90f9-5ec523774035 label= options=rw,relatime,space_cache,subvolid=268,subvol=/@/.snapshots/1/snapshot
btrfsdefaultsubvol /dev/sda2 / 268 @/.snapshots/1/snapshot
btrfsnormalsubvol /dev/sda2 / 256 @
btrfsnormalsubvol /dev/sda2 / 258 @/var
btrfsnormalsubvol /dev/sda2 / 259 @/usr/local
btrfsnormalsubvol /dev/sda2 / 260 @/tmp
btrfsnormalsubvol /dev/sda2 / 261 @/srv
btrfsnormalsubvol /dev/sda2 / 262 @/root
btrfsnormalsubvol /dev/sda2 / 263 @/opt
btrfsnormalsubvol /dev/sda2 / 264 @/home
btrfsnormalsubvol /dev/sda2 / 265 @/boot/grub2/x86_64-efi
btrfsnormalsubvol /dev/sda2 / 266 @/boot/grub2/i386-pc
btrfsmountedsubvol /dev/sda2 / rw,relatime,space_cache,subvolid=268,subvol=/@/.snapshots/1/snapshot @/.snapshots/1/snapshot
btrfsmountedsubvol /dev/sda2 /.snapshots rw,relatime,space_cache,subvolid=267,subvol=/@/.snapshots @/.snapshots
btrfsmountedsubvol /dev/sda2 /home rw,relatime,space_cache,subvolid=264,subvol=/@/home @/home
btrfsmountedsubvol /dev/sda2 /opt rw,relatime,space_cache,subvolid=263,subvol=/@/opt @/opt
btrfsmountedsubvol /dev/sda2 /boot/grub2/x86_64-efi rw,relatime,space_cache,subvolid=265,subvol=/@/boot/grub2/x86_64-efi @/boot/grub2/x86_64-efi
btrfsmountedsubvol /dev/sda2 /srv rw,relatime,space_cache,subvolid=261,subvol=/@/srv @/srv
btrfsmountedsubvol /dev/sda2 /boot/grub2/i386-pc rw,relatime,space_cache,subvolid=266,subvol=/@/boot/grub2/i386-pc @/boot/grub2/i386-pc
btrfsmountedsubvol /dev/sda2 /tmp rw,relatime,space_cache,subvolid=260,subvol=/@/tmp @/tmp
btrfsmountedsubvol /dev/sda2 /root rw,relatime,space_cache,subvolid=262,subvol=/@/root @/root
btrfsmountedsubvol /dev/sda2 /usr/local rw,relatime,space_cache,subvolid=259,subvol=/@/usr/local @/usr/local
btrfsmountedsubvol /dev/sda2 /var rw,relatime,space_cache,subvolid=258,subvol=/@/var @/var
btrfsnocopyonwrite @/var
swap /dev/sda3 uuid=921157bc-e4d6-4869-8796-7e09207e49a9 label=
What I actually get included in and excluded from the backup
# grep '^2024-05-15 08:42:44.4' var/log/rear/rear-localhost.log
2024-05-15 08:42:44.431801741 Including backup/NETFS/default/500_make_backup.sh
2024-05-15 08:42:44.433571019 Entering debugscript mode via 'set -x'.
2024-05-15 08:42:44.441702528 Making backup (using backup method NETFS)
2024-05-15 08:42:44.444339934 Backup include list (backup-include.txt contents without subsequent duplicates):
2024-05-15 08:42:44.446633014 /
2024-05-15 08:42:44.448576168 /.snapshots
2024-05-15 08:42:44.450386272 /home
2024-05-15 08:42:44.452304609 /opt
2024-05-15 08:42:44.454133219 /boot/grub2/x86_64-efi
2024-05-15 08:42:44.455978840 /srv
2024-05-15 08:42:44.457852313 /boot/grub2/i386-pc
2024-05-15 08:42:44.459744581 /tmp
2024-05-15 08:42:44.462067798 /root
2024-05-15 08:42:44.464037722 /usr/local
2024-05-15 08:42:44.466416713 /var
2024-05-15 08:42:44.468387840 Backup exclude list (backup-exclude.txt contents):
2024-05-15 08:42:44.470268486 /var/tmp
2024-05-15 08:42:44.472209757 /qqq
2024-05-15 08:42:44.474048124 /tmp
2024-05-15 08:42:44.476293264 Creating tar archive '/var/tmp/rear.cxFuYt1rYYpLGtB/outputfs/localhost/backup.tar.gz'
2024-05-15 08:42:44.485411273 tar --warning=no-xdev --sparse --block-number --totals --verbose --no-wildcards-match-slash --one-file-system --ignore-failed-read --anchored --xattrs --xattrs-include=security.capability --xattrs-include=security.selinux --acls --gzip -X /var/tmp/rear.cxFuYt1rYYpLGtB/tmp/backup-exclude.txt -C / -c -f - / /.snapshots /home /opt /boot/grub2/x86_64-efi /srv /boot/grub2/i386-pc /tmp /root /usr/local /var /root/rear.lzaoral-backup-mounted-btrfs-subvolumes/var/log/rear/rear-localhost.log | dd of=/var/tmp/rear.cxFuYt1rYYpLGtB/outputfs/localhost/backup.tar.gz bs=1M
The only thing that worried me is /.snapshots
so I had a closer look at what it means
when /.snapshots gets included in the backup:
# find /.snapshots | wc -l
304850
looks scary - more than 300 thousand files
but intentionally we use
tar ... --one-file-system ...
so what actually matters is
# find /.snapshots -xdev
/.snapshots
/.snapshots/1
/.snapshots/1/snapshot
/.snapshots/1/info.xml
/.snapshots/2
/.snapshots/2/snapshot
/.snapshots/2/info.xml
/.snapshots/2/grub-snapshot.cfg
/.snapshots/3
/.snapshots/3/snapshot
/.snapshots/3/grub-snapshot.cfg
/.snapshots/3/info.xml
/.snapshots/4
/.snapshots/4/snapshot
/.snapshots/4/info.xml
/.snapshots/4/grub-snapshot.cfg
/.snapshots/4/filelist-3.txt
/.snapshots/grub-snapshot.cfg
# du -hs --one-file-system /.snapshots
40K /.snapshots
which looks much better
so I verified what I actually got in the backup
NFS-server # tar -tvzf /nfs/localhost/backup.tar.gz | grep snapshots
drwxr-x--- root/root 0 2024-02-14 13:17 .snapshots/
drwxr-x--- root/root 0 2024-02-14 13:17 .snapshots/
drwxr-xr-x root/root 0 2024-02-14 13:03 .snapshots/1/
drwxr-xr-x root/root 0 2024-03-19 12:41 .snapshots/1/snapshot/
-rw------- root/root 168 2024-02-14 13:03 .snapshots/1/info.xml
drwxr-xr-x root/root 0 2024-02-14 13:07 .snapshots/2/
drwxr-xr-x root/root 0 2024-02-14 13:04 .snapshots/2/snapshot/
-rw------- root/root 268 2024-02-14 13:07 .snapshots/2/info.xml
-rw-r--r-- root/root 504 2024-02-14 13:17 .snapshots/2/grub-snapshot.cfg
drwxr-xr-x root/root 0 2024-02-14 13:17 .snapshots/3/
drwxr-xr-x root/root 0 2024-02-14 13:04 .snapshots/3/snapshot/
-rw-r----- root/root 502 2024-02-14 13:17 .snapshots/3/grub-snapshot.cfg
-rw------- root/root 258 2024-02-14 13:17 .snapshots/3/info.xml
drwxr-xr-x root/root 0 2024-02-14 13:17 .snapshots/4/
drwxr-xr-x root/root 0 2024-02-14 13:04 .snapshots/4/snapshot/
-rw------- root/root 240 2024-02-14 13:17 .snapshots/4/info.xml
-rw-r----- root/root 503 2024-02-14 13:17 .snapshots/4/grub-snapshot.cfg
-rw------- root/root 4939 2024-02-14 13:17 .snapshots/4/filelist-3.txt
-rw-r----- root/root 508 2024-02-14 13:17 .snapshots/grub-snapshot.cfg
so when /.snapshots gets included in the backup
it does not matter regarding the backup size.
BUT
when /.snapshots gets restored from the backup
during "rear recover" those old files likely mess up
the snapshot configuration of the recreated system
because during "rear recover" the whole
SUSE btrfs structure with its snapshot stuff
gets recreated anew from scratch so I assume
when that is recreated from scratch at least
/.snapshots
/.snapshots/1
/.snapshots/1/snapshot
/.snapshots/1/info.xml
got recreated during disk layout recreation
so that later during backup restore
those snapshot configuration files
must not be overwritten by outdated files
from the backup e.g. like
# cat /.snapshots/1/info.xml
<?xml version="1.0"?>
<snapshot>
<type>single</type>
<num>1</num>
<date>2024-02-14 12:03:57</date>
<description>first root filesystem</description>
</snapshot>
I will test what happens when I do "rear recover"
with that backup here which has /.snapshots included...
lzaoral commented at 2024-05-15 11:01:
Thank you for the feedback, @jsmeix! In that case, it might be a good idea to autoexclude snapper_base_subvolume (@/.snapshots) from the backup as well, so that it is still recreated but not restored and we do not overwrite its metadata with stale information from the backup.
edit: The snapper btfrs subvolume is handled here: https://github.com/rear/rear/blob/a86b68e347e6457e40d5a0cb36bf38159396ad09/usr/share/rear/layout/save/GNU/Linux/230_filesystem_layout.sh#L363
jsmeix commented at 2024-05-15 13:24:
Test what happens when I do "rear recover"
with that backup here which has /.snapshots included:
RESCUE localhost:~ # export MIGRATION_MODE='true'
RESCUE localhost:~ # rear -D recover
...
Start system layout restoration.
Disk '/dev/sda': creating 'gpt' partition table
Disk '/dev/sda': creating partition number 1 with name ''sda1''
Disk '/dev/sda': creating partition number 2 with name ''sda2''
Disk '/dev/sda': creating partition number 3 with name ''sda3''
Creating filesystem of type btrfs with mount point / on /dev/sda2.
Mounting filesystem /
Running snapper/installation-helper
Creating swap on /dev/sda3
Disk layout created.
Recreated storage layout:
NAME KNAME TRAN TYPE FSTYPE LABEL SIZE MOUNTPOINTS
/dev/sda /dev/sda ata disk 15G
|-/dev/sda1 /dev/sda1 part 8M
|-/dev/sda2 /dev/sda2 part btrfs 13G /mnt/local/var
| /mnt/local/usr/local
| /mnt/local/root
| /mnt/local/tmp
| /mnt/local/boot/grub2/i386-pc
| /mnt/local/srv
| /mnt/local/boot/grub2/x86_64-efi
| /mnt/local/opt
| /mnt/local/home
| /mnt/local/.snapshots
| /mnt/local
`-/dev/sda3 /dev/sda3 part swap 2G
/dev/sr0 /dev/sr0 ata rom iso9660 REAR-ISO 77.7M
UserInput -I LAYOUT_MIGRATED_CONFIRMATION needed in /usr/share/rear/layout/recreate/default/200_run_layout_code.sh line 168
Confirm the recreated disk layout or go back one step
1) Confirm recreated disk layout and continue 'rear recover'
2) Go back one step to redo disk layout recreation
3) Use Relax-and-Recover shell and return back to here
4) Abort 'rear recover'
(default '1' timeout 300 seconds)
3
UserInput: Valid choice number result 'Use Relax-and-Recover shell and return back to here'
Welcome to Relax-and-Recover.
rear> find /mnt/local/.snapshots
/mnt/local/.snapshots
/mnt/local/.snapshots/1
/mnt/local/.snapshots/1/snapshot
/mnt/local/.snapshots/1/snapshot/etc
/mnt/local/.snapshots/1/snapshot/etc/snapper
/mnt/local/.snapshots/1/snapshot/etc/snapper/configs
/mnt/local/.snapshots/1/snapshot/etc/snapper/configs/root
/mnt/local/.snapshots/1/snapshot/.snapshots
/mnt/local/.snapshots/1/snapshot/home
/mnt/local/.snapshots/1/snapshot/opt
/mnt/local/.snapshots/1/snapshot/boot
/mnt/local/.snapshots/1/snapshot/boot/grub2
/mnt/local/.snapshots/1/snapshot/boot/grub2/x86_64-efi
/mnt/local/.snapshots/1/snapshot/boot/grub2/i386-pc
/mnt/local/.snapshots/1/snapshot/srv
/mnt/local/.snapshots/1/snapshot/tmp
/mnt/local/.snapshots/1/snapshot/root
/mnt/local/.snapshots/1/snapshot/usr
/mnt/local/.snapshots/1/snapshot/usr/local
/mnt/local/.snapshots/1/snapshot/var
/mnt/local/.snapshots/1/info.xml
rear> mv /mnt/local/.snapshots/1 /mnt/local/.snapshots/1.recreated
rear> exit
...
Running 'restore' stage ======================
Restoring from '/var/tmp/rear.tKYATuV8uCkXC3G/outputfs/localhost/backup.tar.gz' (restore log in /var/lib/rear/restore/recover.backup.tar.gz.760.restore.log) ...
Backup restore program 'tar' started in subshell (PID=5671)
Restored 467 MiB [avg. 95775 KiB/sec]
Restored 933 MiB [avg. 95617 KiB/sec]
Restored 1481 MiB [avg. 101156 KiB/sec]
Restored 1917 MiB [avg. 98151 KiB/sec]
Restored 2314 MiB [avg. 94792 KiB/sec]
Restored 2665 MiB [avg. 90992 KiB/sec]
Restored 3021 MiB [avg. 88389 KiB/sec]
Restored 3416 MiB [avg. 87452 KiB/sec]
OK
...
RESCUE localhost:~ # find /mnt/local/.snapshots -xdev -ls
256 0 drwxr-x--- 1 root root 64 Feb 14 13:17 /mnt/local/.snapshots
257 0 drwxr-xr-x 1 root root 32 May 15 14:57 /mnt/local/.snapshots/1.recreated
256 0 drwxr-xr-x 1 root root 188 May 15 15:00 /mnt/local/.snapshots/1.recreated/snapshot
258 4 -rw------- 1 root root 168 May 15 14:57 /mnt/local/.snapshots/1.recreated/info.xml
259 0 drwxr-xr-x 1 root root 32 Feb 14 13:03 /mnt/local/.snapshots/1
260 0 drwxr-xr-x 1 root root 0 Mar 19 12:41 /mnt/local/.snapshots/1/snapshot
261 4 -rw------- 1 root root 168 Feb 14 13:03 /mnt/local/.snapshots/1/info.xml
262 0 drwxr-xr-x 1 root root 66 Feb 14 13:07 /mnt/local/.snapshots/2
263 0 drwxr-xr-x 1 root root 0 Feb 14 13:04 /mnt/local/.snapshots/2/snapshot
264 4 -rw------- 1 root root 268 Feb 14 13:07 /mnt/local/.snapshots/2/info.xml
265 4 -rw-r--r-- 1 root root 504 Feb 14 13:17 /mnt/local/.snapshots/2/grub-snapshot.cfg
266 0 drwxr-xr-x 1 root root 66 Feb 14 13:17 /mnt/local/.snapshots/3
267 0 drwxr-xr-x 1 root root 0 Feb 14 13:04 /mnt/local/.snapshots/3/snapshot
268 4 -rw-r----- 1 root root 502 Feb 14 13:17 /mnt/local/.snapshots/3/grub-snapshot.cfg
269 4 -rw------- 1 root root 258 Feb 14 13:17 /mnt/local/.snapshots/3/info.xml
270 0 drwxr-xr-x 1 root root 94 Feb 14 13:17 /mnt/local/.snapshots/4
271 0 drwxr-xr-x 1 root root 0 Feb 14 13:04 /mnt/local/.snapshots/4/snapshot
272 4 -rw------- 1 root root 240 Feb 14 13:17 /mnt/local/.snapshots/4/info.xml
273 4 -rw-r----- 1 root root 503 Feb 14 13:17 /mnt/local/.snapshots/4/grub-snapshot.cfg
274 8 -rw------- 1 root root 4939 Feb 14 13:17 /mnt/local/.snapshots/4/filelist-3.txt
275 4 -rw-r----- 1 root root 508 Feb 14 13:17 /mnt/local/.snapshots/grub-snapshot.cfg
So /.snapshots must be excluded from the backup restore
to avoid that after "rear recover" one has /.snapshots
messed up with old files. In particular the newly created
and only correct .snapshots/1 would get overwritten
with old files from the backup.
@lzaoral
That /.snapshots must be excluded from the backup restore
is a separate task for me, so you could merge this one
and then I will take care of /.snapshots via a separate
issue and/or pull request.
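Until such an automatic exclusion exists, a user could add it manually in /etc/rear/local.conf. A minimal sketch, assuming the default snapper layout with /.snapshots as the snapshot mountpoint:

```shell
# Sketch for /etc/rear/local.conf: keep snapper's /.snapshots
# out of the backup (and thus out of the restore) until ReaR
# excludes mounted snapshot subvolumes automatically.
BACKUP_PROG_EXCLUDE+=( '/.snapshots' )
```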
jsmeix commented at 2024-05-15 13:48:¶
Mainly for my own information: an addendum that is
unrelated to this pull request, recorded only because
I noticed it during my test above in
https://github.com/rear/rear/pull/3175#issuecomment-2112519793
Excerpts from /var/log/rear/rear-localhost.log
2024-05-15 14:56:33.854911738 Relax-and-Recover 2.7 / Git
2024-05-15 14:56:33.857055807 Running rear recover (PID 760 date 2024-05-15 14:56:33)
2024-05-15 14:56:33.859021352 Command line options: /bin/rear -D recover
2024-05-15 14:56:33.861149993 Using log file: /var/log/rear/rear-localhost.log
2024-05-15 14:56:33.863542518 Using build area: /var/tmp/rear.tKYATuV8uCkXC3G
2024-05-15 14:56:33.866203857 Setting TMPDIR to '/var/tmp' (was unset when ReaR was launched)
...
2024-05-15 15:02:33.797818313 Recreating initrd with /usr/bin/dracut...
++ chroot /mnt/local /bin/bash -c 'PATH=/sbin:/usr/sbin:/usr/bin:/bin /usr/bin/dracut --force'
realpath: /var/tmp: No such file or directory
dracut: Invalid tmpdir '/var/tmp'.
++ LogPrintError 'Warning:
Failed to recreate initrd with /usr/bin/dracut.
Check '\''/var/log/rear/rear-localhost.log'\'' why /usr/bin/dracut failed
and decide if the recreated system will boot
with the initrd '\''as is'\'' from the backup restore.
'
Indeed there is no var/tmp in the recreated system
RESCUE localhost:~ # ls -l /mnt/local/var
total 16
drwxr-xr-x 1 root root 106 Feb 14 13:08 adm
lrwxrwxrwx 1 root root 11 Jan 9 2023 agentx -> /run/agentx
drwxr-xr-x 1 root root 86 Feb 14 13:06 cache
drwxr-xr-x 1 root root 0 Mar 15 2022 crash
drwxr-xr-x 1 root root 402 May 15 15:00 lib
lrwxrwxrwx 1 root root 9 Feb 14 13:04 lock -> /run/lock
drwxr-xr-x 1 root root 716 May 15 15:02 log
lrwxrwxrwx 1 root root 10 Mar 15 2022 mail -> spool/mail
drwxr-xr-x 1 root root 0 Mar 15 2022 opt
lrwxrwxrwx 1 root root 4 Feb 14 13:04 run -> /run
drwxr-xr-x 1 root root 108 Feb 14 13:06 spool
RESCUE localhost:~ # ls -ld /mnt/local/tmp
drwxr-xr-x 1 root root 0 May 15 15:02 /mnt/local/tmp
My current guess is that there is no var/tmp
in the recreated system because I have
BACKUP_PROG_EXCLUDE=( /var/tmp
/qqq
/var/tmp
/tmp )
but what is a real bug is the message
Setting TMPDIR to '/var/tmp' (was unset when ReaR was launched)
because during "rear recover" TMPDIR should not be set at all,
so I added 'set -x' to /bin/rear to see what goes on
RESCUE localhost:~ # rear -D recover
...
+ readonly TMPDIR_ORIG=
+ TMPDIR_ORIG=
+ source /usr/share/rear/conf/default.conf
++ export TMPDIR=/var/tmp
++ TMPDIR=/var/tmp
...
+ test -e /etc/rear-release
+ RECOVERY_MODE=y
+ readonly RECOVERY_MODE
+ test recover '!=' help
++ readlink -e /var/tmp
+ export TMPDIR=/var/tmp
+ TMPDIR=/var/tmp
+ test -d /var/tmp
++ mktemp -d -t rear.XXXXXXXXXXXXXXX
+ BUILD_DIR=/var/tmp/rear.awoCzxeHag4tpr9
+ QuietAddExitTask cleanup_build_area_and_end_program
+ EXIT_TASKS=("$*" "${EXIT_TASKS[@]}")
+ ROOTFS_DIR=/var/tmp/rear.awoCzxeHag4tpr9/rootfs
+ mkdir -p /var/tmp/rear.awoCzxeHag4tpr9/rootfs
+ TMP_DIR=/var/tmp/rear.awoCzxeHag4tpr9/tmp
+ mkdir -p /var/tmp/rear.awoCzxeHag4tpr9/tmp
+ [[ -n y ]]
+ test ''
+ tmpdir_debug_info='Setting TMPDIR to '\''/var/tmp'\'' (was unset when ReaR was launched)'
+ mkdir -p /var/tmp/rear.awoCzxeHag4tpr9/rootfs/var/tmp
+ BACKUP_PROG_EXCLUDE+=("$BUILD_DIR")
+ saved_tmpdir=/var/tmp
I think I found the root cause why there is no var/tmp
in the recreated system
(excerpt from /var/log/rear/rear-localhost.log)
+ source /usr/share/rear/restore/default/900_create_missing_directories.sh
++ local directories_permissions_owner_group_file=/var/lib/rear/recovery/directories_permissions_owner_group
++ test ''
++ pushd /mnt/local
/mnt/local ~
++ test -f /var/lib/rear/recovery/directories_permissions_owner_group
++ popd
~
+ source_return_code=0
i.e. restore/default/900_create_missing_directories.sh,
which should recreate var/tmp in the recreated system
if it is missing, does nothing in my case because
there is no
/var/lib/rear/recovery/directories_permissions_owner_group
because during "rear -D mkbackup" I had
(excerpt from var/log/rear/rear-localhost.log
in a git clone directory)
2024-05-15 08:42:19.453890604 Including prep/default/400_save_directories.sh
2024-05-15 08:42:19.455467932 Entering debugscript mode via 'set -x'.
+ source /root/rear.lzaoral-backup-mounted-btrfs-subvolumes/usr/share/rear/prep/default/400_save_directories.sh
++ local directories_permissions_owner_group_file=/root/rear.lzaoral-backup-mounted-btrfs-subvolumes/var/lib/rear/recovery/directories_permissions_owner_group
++ :
/root/rear.lzaoral-backup-mounted-btrfs-subvolumes/usr/share/rear/prep/default/400_save_directories.sh: line 12: /root/rear.lzaoral-backup-mounted-btrfs-subvolumes/var/lib/rear/recovery/directories_permissions_owner_group: No such file or directory
which is true at that point in time
when "rear mkrescue/mkbackup" is run for the first time
because $VAR_DIR/recovery/
is not yet created
when prep/default/400_save_directories.sh runs
because $VAR_DIR/recovery/
gets created later in
layout/save/GNU/Linux/100_create_layout_file.sh
(the 'prep' stage runs before the 'layout/save' stage).
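A minimal sketch of the kind of fix needed (the actual change is in https://github.com/rear/rear/pull/3223): create the missing parent directory before writing the file. The VAR_DIR stand-in below is only for this self-contained demo, not the real /var/lib/rear:

```shell
# Self-contained demo of the fix idea: create $VAR_DIR/recovery/ before
# writing into it, instead of relying on a later stage having created it.
VAR_DIR=$(mktemp -d)   # stand-in for /var/lib/rear in this sketch
directories_permissions_owner_group_file="$VAR_DIR/recovery/directories_permissions_owner_group"
# Without this mkdir the redirection below fails on a first "rear mkbackup"
# run, because the 'prep' stage runs before layout/save creates $VAR_DIR/recovery:
mkdir -p "${directories_permissions_owner_group_file%/*}"
: > "$directories_permissions_owner_group_file"
echo "created $directories_permissions_owner_group_file"
```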
jsmeix commented at 2024-05-15 15:18:¶
I re-did "rear -D mkbackup"
but now additionally with the change in
https://github.com/rear/rear/pull/3223
and additionally with /.snapshots in BACKUP_PROG_EXCLUDE
BACKUP_PROG_EXCLUDE=( /var/tmp /qqq /var/tmp /tmp /.snapshots )
I also did
# rm -rf var/lib/rear/*
to verify that the change in
https://github.com/rear/rear/pull/3223
works.
After "rear -D mkbackup" I got in particular
# cat /var/tmp/rear.ml6GAML0RZ4F8e4/rootfs/root/rear.lzaoral-backup-mounted-btrfs-subvolumes/var/lib/rear/recovery/directories_permissions_owner_group
/.snapshots 750 root root
/boot/grub2/x86_64-efi 755 root root
/boot/grub2/i386-pc 755 root root
/home 755 root root
/opt 755 root root
/srv 755 root root
/tmp 1777 root root
/usr/local 755 root root
/root 700 root root
/var 755 root root
/bin 755 root root
/boot 755 root root
/dev 755 root root
/etc 755 root root
/etc/opt 755 root root
/etc/X11 755 root root
/lib 755 root root
/lib64 755 root root
/mnt 755 root root
/proc 555 root root
/run 755 root root
/sbin 755 root root
/sys 555 root root
/usr 755 root root
/usr/bin 755 root root
/usr/include 755 root root
/usr/lib 755 root root
/usr/lib64 755 root root
/usr/libexec 755 root root
/usr/sbin 755 root root
/usr/share 755 root root
/usr/src 755 root root
/var/cache 755 root root
/var/lib 755 root root
/var/lock -> /run/lock
/var/log 755 root root
/var/mail -> spool/mail
/var/opt 755 root root
/var/run -> /run
/var/spool 755 root root
/var/spool/mail 1777 root root
/var/tmp 1777 root root
note therein /.snapshots (because it is a mountpoint)
and /tmp and /var/tmp (because they are FHS directories).
With that I re-did "rear -D recover"
RESCUE localhost:~ # rear -D recover
Relax-and-Recover 2.7 / Git
Running rear recover (PID 758 date 2024-05-15 17:04:53)
Command line options: /bin/rear -D recover
Using log file: /var/log/rear/rear-localhost.log
Using build area: /var/tmp/rear.OAs2m4kG2emMuph
Setting TMPDIR to '/var/tmp' (was unset when ReaR was launched)
...
Recreating initrd with /usr/bin/dracut...
Recreated initrd with /usr/bin/dracut
...
Finished 'recover'. The target system is mounted at '/mnt/local'.
In the recreated system I got in particular
RESCUE localhost:~ # find /mnt/local/.snapshots/ -xdev -ls
256 0 drwxr-x--- 1 root root 2 May 15 17:05 /mnt/local/.snapshots/
257 0 drwxr-xr-x 1 root root 32 May 15 17:05 /mnt/local/.snapshots/1
256 0 drwxr-xr-x 1 root root 188 May 15 17:05 /mnt/local/.snapshots/1/snapshot
258 4 -rw------- 1 root root 168 May 15 17:05 /mnt/local/.snapshots/1/info.xml
which looks perfectly right now.
I rebooted the recreated system
and things look OK as far as I currently see
localhost:~ # snapper ls
# | Type | Pre # | Date | User | Used Space | Cleanup | Description | Userdata
---+--------+-------+----------------------------------+------+------------+---------+-----------------------+---------
0 | single | | | root | | | current |
1* | single | | Wed 15 May 2024 05:05:03 PM CEST | root | 2.26 GiB | | first root filesystem |
# cat /.snapshots/1/info.xml
<?xml version="1.0"?>
<snapshot>
<type>single</type>
<num>1</num>
<date>2024-05-15 15:05:03</date>
<description>first root filesystem</description>
</snapshot>
(15:05:03 UTC equals 05:05:03 PM CEST)
jsmeix commented at 2024-05-15 15:28:¶
@lzaoral
I think you cannot merge it yourself,
so perhaps @pcahyna could merge it for you?
Therefore I also assigned this pull request to him
so he could have another look and
merge it if it looks OK to him.
I could also merge it but I already reviewed it and
I would prefer when another ReaR upstream maintainer
gets also involved when a pull request gets merged.
Perhaps @pcahyna may spot something?
jsmeix commented at 2024-05-15 15:53:¶
Oops!
I forgot to test when one has other snapshot subvolumes mounted
(i.e. snapshot subvolumes that are not mounted at '/').
I will test that tomorrow.
jsmeix commented at 2024-05-16 08:18:¶
Test when one has other snapshot subvolumes mounted
(i.e. snapshot subvolumes that are not mounted at '/')
on the same SLES15 SP5 tests VM as above in
https://github.com/rear/rear/pull/3175#issuecomment-2111776529
I mount btrfs snapshot 2 at /snapshot2 and
snapshot 3 two times at /snapshot3 and /snapshot3again
# btrfs subvolume list -a / | grep snapshots
ID 267 gen 3472 top level 256 path <FS_TREE>/@/.snapshots
ID 268 gen 3485 top level 267 path <FS_TREE>/@/.snapshots/1/snapshot
ID 272 gen 43 top level 267 path <FS_TREE>/@/.snapshots/2/snapshot
ID 273 gen 64 top level 267 path <FS_TREE>/@/.snapshots/3/snapshot
ID 274 gen 65 top level 267 path <FS_TREE>/@/.snapshots/4/snapshot
# mkdir /snapshot2
# mkdir /snapshot3
# mkdir /snapshot3again
# mount -t btrfs -o subvolid=272 /dev/sda2 /snapshot2
# mount -t btrfs -o subvolid=273 /dev/sda2 /snapshot3
# mount -t btrfs -o subvolid=273 /dev/sda2 /snapshot3again
# findmnt -at btrfs
TARGET SOURCE FSTYPE OPTIONS
/ /dev/sda2[/@/.snapshots/1/snapshot] btrfs rw,relatime,space_cache,subvolid=268,subvol=/@/.snapshots/1/snapshot
|-/boot/grub2/x86_64-efi /dev/sda2[/@/boot/grub2/x86_64-efi] btrfs rw,relatime,space_cache,subvolid=265,subvol=/@/boot/grub2/x86_64-efi
|-/srv /dev/sda2[/@/srv] btrfs rw,relatime,space_cache,subvolid=261,subvol=/@/srv
|-/root /dev/sda2[/@/root] btrfs rw,relatime,space_cache,subvolid=262,subvol=/@/root
|-/home /dev/sda2[/@/home] btrfs rw,relatime,space_cache,subvolid=264,subvol=/@/home
|-/boot/grub2/i386-pc /dev/sda2[/@/boot/grub2/i386-pc] btrfs rw,relatime,space_cache,subvolid=266,subvol=/@/boot/grub2/i386-pc
|-/.snapshots /dev/sda2[/@/.snapshots] btrfs rw,relatime,space_cache,subvolid=267,subvol=/@/.snapshots
|-/var /dev/sda2[/@/var] btrfs rw,relatime,space_cache,subvolid=258,subvol=/@/var
|-/opt /dev/sda2[/@/opt] btrfs rw,relatime,space_cache,subvolid=263,subvol=/@/opt
|-/tmp /dev/sda2[/@/tmp] btrfs rw,relatime,space_cache,subvolid=260,subvol=/@/tmp
|-/usr/local /dev/sda2[/@/usr/local] btrfs rw,relatime,space_cache,subvolid=259,subvol=/@/usr/local
|-/snapshot2 /dev/sda2[/@/.snapshots/2/snapshot] btrfs rw,relatime,space_cache,subvolid=272,subvol=/@/.snapshots/2/snapshot
|-/snapshot3 /dev/sda2[/@/.snapshots/3/snapshot] btrfs rw,relatime,space_cache,subvolid=273,subvol=/@/.snapshots/3/snapshot
`-/snapshot3again /dev/sda2[/@/.snapshots/3/snapshot] btrfs rw,relatime,space_cache,subvolid=273,subvol=/@/.snapshots/3/snapshot
Did again "rear -D mkbackup" as before in
https://github.com/rear/rear/pull/3175#issuecomment-2112838240
in particular as before with
BACKUP_PROG_EXCLUDE=( /var/tmp /qqq /var/tmp /tmp /.snapshots )
i.e. without explicitly excluding the mounted btrfs snapshots.
backup.tar.gz size from before in
https://github.com/rear/rear/pull/3175#issuecomment-2112838240
was 2.1G and is now the same
disklayout.conf is now
# grep -v '^#' var/lib/rear/layout/disklayout.conf
disk /dev/sda 16106127360 gpt
part /dev/sda 8388608 1048576 rear-noname bios_grub /dev/sda1
part /dev/sda 13949206528 9437184 rear-noname legacy_boot /dev/sda2
part /dev/sda 2147466752 13958643712 rear-noname swap /dev/sda3
fs /dev/sda2 / btrfs uuid=bdec53c2-1ee8-4268-90f9-5ec523774035 label= options=rw,relatime,space_cache,subvolid=268,subvol=/@/.snapshots/1/snapshot
btrfsdefaultsubvol /dev/sda2 / 268 @/.snapshots/1/snapshot
btrfsnormalsubvol /dev/sda2 / 256 @
btrfsnormalsubvol /dev/sda2 / 258 @/var
btrfsnormalsubvol /dev/sda2 / 259 @/usr/local
btrfsnormalsubvol /dev/sda2 / 260 @/tmp
btrfsnormalsubvol /dev/sda2 / 261 @/srv
btrfsnormalsubvol /dev/sda2 / 262 @/root
btrfsnormalsubvol /dev/sda2 / 263 @/opt
btrfsnormalsubvol /dev/sda2 / 264 @/home
btrfsnormalsubvol /dev/sda2 / 265 @/boot/grub2/x86_64-efi
btrfsnormalsubvol /dev/sda2 / 266 @/boot/grub2/i386-pc
btrfsmountedsubvol /dev/sda2 / rw,relatime,space_cache,subvolid=268,subvol=/@/.snapshots/1/snapshot @/.snapshots/1/snapshot
btrfsmountedsubvol /dev/sda2 /.snapshots rw,relatime,space_cache,subvolid=267,subvol=/@/.snapshots @/.snapshots
btrfsmountedsubvol /dev/sda2 /boot/grub2/x86_64-efi rw,relatime,space_cache,subvolid=265,subvol=/@/boot/grub2/x86_64-efi @/boot/grub2/x86_64-efi
btrfsmountedsubvol /dev/sda2 /home rw,relatime,space_cache,subvolid=264,subvol=/@/home @/home
btrfsmountedsubvol /dev/sda2 /opt rw,relatime,space_cache,subvolid=263,subvol=/@/opt @/opt
btrfsmountedsubvol /dev/sda2 /boot/grub2/i386-pc rw,relatime,space_cache,subvolid=266,subvol=/@/boot/grub2/i386-pc @/boot/grub2/i386-pc
btrfsmountedsubvol /dev/sda2 /srv rw,relatime,space_cache,subvolid=261,subvol=/@/srv @/srv
btrfsmountedsubvol /dev/sda2 /root rw,relatime,space_cache,subvolid=262,subvol=/@/root @/root
btrfsmountedsubvol /dev/sda2 /tmp rw,relatime,space_cache,subvolid=260,subvol=/@/tmp @/tmp
btrfsmountedsubvol /dev/sda2 /usr/local rw,relatime,space_cache,subvolid=259,subvol=/@/usr/local @/usr/local
btrfsmountedsubvol /dev/sda2 /var rw,relatime,space_cache,subvolid=258,subvol=/@/var @/var
btrfsnocopyonwrite @/var
swap /dev/sda3 uuid=921157bc-e4d6-4869-8796-7e09207e49a9 label=
in particular the disabled btrfs entries
# grep '^#btrfs' var/lib/rear/layout/disklayout.conf
#btrfssnapshotsubvol /dev/sda2 / 272 @/.snapshots/2/snapshot
#btrfssnapshotsubvol /dev/sda2 / 273 @/.snapshots/3/snapshot
#btrfssnapshotsubvol /dev/sda2 / 274 @/.snapshots/4/snapshot
#btrfsnormalsubvol /dev/sda2 / 267 @/.snapshots
#btrfsnormalsubvol /dev/sda2 / 268 @/.snapshots/1/snapshot
#btrfsmountedsubvol /dev/sda2 /snapshot2 rw,relatime,space_cache,subvolid=272,subvol=/@/.snapshots/2/snapshot @/.snapshots/2/snapshot
#btrfsmountedsubvol /dev/sda2 /snapshot3 rw,relatime,space_cache,subvolid=273,subvol=/@/.snapshots/3/snapshot @/.snapshots/3/snapshot
#btrfsmountedsubvol /dev/sda2 /snapshot3again rw,relatime,space_cache,subvolid=273,subvol=/@/.snapshots/3/snapshot @/.snapshots/3/snapshot
The only entries that backup.tar.gz contains
regarding btrfs snapshots are (on the NFS server)
# tar -tvzf /nfs/localhost/backup.tar.gz
...
drwxr-xr-x root/root ... snapshot2/
drwxr-xr-x root/root ... snapshot3/
drwxr-xr-x root/root ... snapshot3again/
After "rear -D recover"
all looks well in particular
RESCUE localhost:~ # find /mnt/local/snapshot*
/mnt/local/snapshot2
/mnt/local/snapshot3
/mnt/local/snapshot3again
Also 'df -h' looks well
RESCUE localhost:~ # df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 4.0M 8.0K 4.0M 1% /dev
tmpfs 2.0G 0 2.0G 0% /dev/shm
tmpfs 783M 8.5M 775M 2% /run
tmpfs 4.0M 0 4.0M 0% /sys/fs/cgroup
/dev/sda2 13G 3.7G 9.1G 29% /mnt/local
/dev/sda2 13G 3.7G 9.1G 29% /mnt/local/.snapshots
/dev/sda2 13G 3.7G 9.1G 29% /mnt/local/boot/grub2/x86_64-efi
/dev/sda2 13G 3.7G 9.1G 29% /mnt/local/home
/dev/sda2 13G 3.7G 9.1G 29% /mnt/local/opt
/dev/sda2 13G 3.7G 9.1G 29% /mnt/local/boot/grub2/i386-pc
/dev/sda2 13G 3.7G 9.1G 29% /mnt/local/srv
/dev/sda2 13G 3.7G 9.1G 29% /mnt/local/root
/dev/sda2 13G 3.7G 9.1G 29% /mnt/local/tmp
/dev/sda2 13G 3.7G 9.1G 29% /mnt/local/usr/local
/dev/sda2 13G 3.7G 9.1G 29% /mnt/local/var
for comparison on the original system
# df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 4.0M 0 4.0M 0% /dev
tmpfs 983M 4.0K 983M 1% /dev/shm
tmpfs 394M 5.9M 388M 2% /run
tmpfs 4.0M 0 4.0M 0% /sys/fs/cgroup
/dev/sda2 13G 4.5G 7.9G 37% /
/dev/sda2 13G 4.5G 7.9G 37% /.snapshots
/dev/sda2 13G 4.5G 7.9G 37% /boot/grub2/x86_64-efi
/dev/sda2 13G 4.5G 7.9G 37% /home
/dev/sda2 13G 4.5G 7.9G 37% /opt
/dev/sda2 13G 4.5G 7.9G 37% /boot/grub2/i386-pc
/dev/sda2 13G 4.5G 7.9G 37% /srv
/dev/sda2 13G 4.5G 7.9G 37% /root
/dev/sda2 13G 4.5G 7.9G 37% /tmp
/dev/sda2 13G 4.5G 7.9G 37% /usr/local
/dev/sda2 13G 4.5G 7.9G 37% /var
tmpfs 197M 4.0K 197M 1% /run/user/0
/dev/sda2 13G 4.5G 7.9G 37% /snapshot2
/dev/sda2 13G 4.5G 7.9G 37% /snapshot3
jsmeix commented at 2024-05-16 08:31:¶
Mainly for my own information: an addendum that is
unrelated to this pull request, recorded only because
I noticed it during my test above in
https://github.com/rear/rear/pull/3175#issuecomment-2114420830
In the recreated system '/tmp/' has wrong permissions
but '/var/tmp/' has right permissions
RESCUE localhost:~ # ls -ld /mnt/local/tmp/ /mnt/local/var/tmp/
drwxr-xr-x 1 root root 0 May 16 10:08 /mnt/local/tmp/
drwxrwxrwt 1 root root 0 May 16 10:09 /mnt/local/var/tmp/
for comparison on the original system
# ls -ld /tmp /var/tmp
drwxrwxrwt 1 root root 254 May 16 10:21 /tmp
drwxrwxrwt 1 root root 236 May 16 09:19 /var/tmp
Excerpts from the "rear -D recover" log file
+ source /usr/share/rear/restore/default/900_create_missing_directories.sh
...
++ read directory mode owner group junk
++ test /tmp
++ directory=tmp
++ test '->' = 1777
++ test -e tmp
++ continue
...
++ read directory mode owner group junk
++ test /var/tmp
++ directory=var/tmp
++ test '->' = 1777
++ test -e var/tmp
++ test -L var/tmp
++ mkdir -v -p var/tmp
mkdir: created directory 'var/tmp'
++ test 1777
++ chmod -v 1777 var/tmp
mode of 'var/tmp' changed from 0755 (rwxr-xr-x) to 1777 (rwxrwxrwt)
++ test root
++ test root
++ chroot /mnt/local /bin/bash --login -c 'chown -v root:root var/tmp'
ownership of 'var/tmp' retained as root:root
In backup.tar.gz there is neither 'tmp' nor 'var/tmp',
so those directories are not restored from the backup.
Nevertheless /mnt/local/tmp/ somehow gets created
during "rear recover" while /mnt/local/var/tmp/ does not,
so restore/default/900_create_missing_directories.sh
skips creating /mnt/local/tmp/
but creates /mnt/local/var/tmp/
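The logic visible in the trace above can be sketched roughly like this (a simplified illustration, not the real script: symlink entries and the chroot'ed chown are omitted, and the function name is made up for this demo):

```shell
# Simplified sketch of what 900_create_missing_directories.sh does:
# recreate each recorded directory that the backup restore did not
# create, with its recorded mode. Directories that already exist are
# skipped as-is, so a /tmp created earlier keeps whatever mode it
# happened to get.
create_missing_directories() {
    while read directory mode owner group junk ; do
        test "$directory" || continue
        directory="${directory#/}"        # relative to the target system root
        test "$mode" = '->' && continue   # skip recorded symlinks in this sketch
        test -e "$directory" && continue  # already restored from the backup
        mkdir -p "$directory"
        test "$mode" && chmod "$mode" "$directory"
    done
}

# Demo: 'tmp' already exists (as if created during "rear recover"),
# 'var/tmp' is missing and gets created with mode 1777.
target=$(mktemp -d)
cd "$target"
mkdir tmp
printf '%s\n' '/tmp 1777 root root' '/var/tmp 1777 root root' | create_missing_directories
stat -c '%a %n' tmp var/tmp
```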
I think I found the root cause why /mnt/local/tmp
is created during "rear recover"
(excerpts from /var/log/rear/rear-localhost.log)
++ for snapshot_subvolume_device_and_path in $snapshot_subvolumes_devices_and_paths
++ snapshot_subvolume_device=/dev/sda2
++ snapshot_subvolume_path=@/.snapshots/4/snapshot
++ test /dev/sda2 = /dev/sda2 -a @/tmp = @/.snapshots/4/snapshot
++ target_system_mountpoint=/mnt/local/tmp
++ test / = /tmp
++ Log 'Mounting btrfs normal subvolume @/tmp on /dev/sda2 at /tmp (if not something is already mounted there).'
2024-05-16 10:08:32.542234436 Mounting btrfs normal subvolume @/tmp on /dev/sda2 at /tmp (if not something is already mounted there).
++ echo '# Mounting btrfs normal subvolume @/tmp on /dev/sda2 at /mnt/local/tmp (if not something is already mounted there):'
++ echo 'if ! mount -t btrfs | tr -s '\''[:blank:]'\'' '\'' '\'' | grep -q '\'' on /mnt/local/tmp '\'' ; then'
++ echo ' if ! test -d /mnt/local/tmp ; then'
++ echo ' mkdir -p /mnt/local/tmp'
++ echo ' fi'
++ echo ' mount -t btrfs -o rw,relatime,space_cache -o subvol=@/tmp /dev/sda2 /mnt/local/tmp'
++ echo fi
so in .../var/lib/rear/layout/diskrestore.sh there is
# Mounting btrfs normal subvolume @/tmp on /dev/sda2 at /mnt/local/tmp (if not something is already mounted there):
if ! mount -t btrfs | tr -s '[:blank:]' ' ' | grep -q ' on /mnt/local/tmp ' ; then
if ! test -d /mnt/local/tmp ; then
mkdir -p /mnt/local/tmp
fi
mount -t btrfs -o rw,relatime,space_cache -o subvol=@/tmp /dev/sda2 /mnt/local/tmp
fi
This happens because on the original system there is
# findmnt -at btrfs
TARGET SOURCE FSTYPE OPTIONS
...
|-/tmp /dev/sda2[/@/tmp] btrfs rw,relatime,space_cache,subvolid=260,subvol=/@/tmp
The cause is my
BACKUP_PROG_EXCLUDE=( /var/tmp /qqq /var/tmp /tmp /.snapshots )
and the root cause is that I failed to read
our fine documentation
in default.conf (excerpt)
# In /etc/rear/local.conf use BACKUP_PROG_EXCLUDE+=( '/this/*' '/that/*' )
# to specify your particular items that should be excluded from the backup in addition to what
# gets excluded from the backup by default here (see also BACKUP_ONLY_EXCLUDE below):
BACKUP_PROG_EXCLUDE=( '/tmp/*' '/dev/shm/*' "$VAR_DIR/output/*" )
so with
BACKUP_PROG_EXCLUDE+=( '/var/tmp/rear.*' /.snapshots )
all works reasonably well
and after "rear recover" I have
RESCUE localhost:~ # ls -ld /mnt/local/tmp/ /mnt/local/var/tmp/
drwxrwxrwt 1 root root 0 May 16 11:07 /mnt/local/tmp/
drwxrwxrwt 1 root root 156 May 16 11:15 /mnt/local/var/tmp/
because now in my backup.tar.gz there is
# tar -tvzf /nfs/localhost/backup.tar.gz | grep '[0-9] tmp'
drwxrwxrwt root/root 0 2024-05-16 11:07 tmp/
drwxrwxrwt root/root 0 2024-05-16 11:07 tmp/
# tar -tvzf /nfs/localhost/backup.tar.gz | grep '[0-9] var/tmp'
drwxrwxrwt root/root 0 2024-05-16 11:06 var/tmp/
drwx------ root/root 0 2024-05-16 08:31 var/tmp/systemd-private-c8620fb31f694e2ebf200b321b40aa8f-systemd-logind.service-9xVRrj/
drwxrwxrwt root/root 0 2024-05-16 08:31 var/tmp/systemd-private-c8620fb31f694e2ebf200b321b40aa8f-systemd-logind.service-9xVRrj/tmp/
so /tmp/ and /var/tmp/ get restored
with right permissions from the backup
and restore/default/900_create_missing_directories.sh
skips both /tmp/ and /var/tmp/
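The difference between excluding a directory itself and excluding only its contents can be demonstrated with plain tar (a simplified demo; ReaR's actual invocation additionally uses --anchored and an exclude file via -X):

```shell
# Demo: a '/tmp' style pattern drops the directory entry itself from
# the archive, while a '/tmp/*' style pattern keeps the directory entry
# (with its mode bits) and drops only the contents.
work=$(mktemp -d)
mkdir -p "$work/root/tmp"
echo junkfile > "$work/root/tmp/junkfile"
echo keep > "$work/root/keep"

# Exclude the whole directory: no 'tmp' entry ends up in the archive,
# so nothing recreates /tmp with its sticky bit on restore.
tar -C "$work/root" --exclude='tmp' -cf "$work/whole.tar" .
whole_list=$(tar -tf "$work/whole.tar")

# Exclude only the contents: the empty './tmp/' entry is archived and
# restored with the original directory permissions.
tar -C "$work/root" --exclude='tmp/*' -cf "$work/contents.tar" .
contents_list=$(tar -tf "$work/contents.tar")

echo "whole:    $whole_list"
echo "contents: $contents_list"
rm -rf "$work"
```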
But because of this I found out that /var/tmp/rear.*
should be added to BACKUP_PROG_EXCLUDE in default.conf:
because ReaR uses /var/tmp/rear.*
as BUILD_DIR,
one gets at least the whole BUILD_DIR of the current
"rear mkbackup" run in the backup by default.
Via
https://github.com/rear/rear/pull/3224
'/var/tmp/rear.*'
will be added to BACKUP_PROG_EXCLUDE
in default.conf
lzaoral commented at 2024-05-17 15:14:¶
@jsmeix No worries, the exclusion of snapper base subvolume is quite simple. Could you please test the following patch on SLES? Thank you!
diff --git a/usr/share/rear/layout/save/GNU/Linux/230_filesystem_layout.sh b/usr/share/rear/layout/save/GNU/Linux/230_filesystem_layout.sh
index cdeca6de..d34a4881 100644
--- a/usr/share/rear/layout/save/GNU/Linux/230_filesystem_layout.sh
+++ b/usr/share/rear/layout/save/GNU/Linux/230_filesystem_layout.sh
@@ -467,12 +467,14 @@ fi
# see https://btrfs.wiki.kernel.org/index.php/Mount_options
test "/" != "$btrfs_subvolume_path" && btrfs_subvolume_path=${btrfs_subvolume_path#/}
+ # Automatically exclude all mounted snapper and snapshot subvolumes from the backup.
+ # See https://github.com/rear/rear/pull/3175#issuecomment-1983498175 and
+ # https://github.com/rear/rear/pull/3175#issuecomment-2111776529
+ if test "$snapper_base_subvolume" = "$btrfs_subvolume_path" || btrfs_snapshot_subvolume_exists "$subvolume_mountpoint" "$btrfs_subvolume_path"; then
+ echo "#btrfsmountedsubvol $device $subvolume_mountpoint $mount_options $btrfs_subvolume_path"
# Finally, test whether the btrfs subvolume listed as mounted actually exists. A running docker
# daemon apparently can convince the system to list a non-existing btrfs volume as mounted.
# See https://github.com/rear/rear/issues/1496
- if btrfs_snapshot_subvolume_exists "$subvolume_mountpoint" "$btrfs_subvolume_path"; then
- # Exclude mounted snapshot subvolumes
- echo "#btrfsmountedsubvol $device $subvolume_mountpoint $mount_options $btrfs_subvolume_path"
elif btrfs_subvolume_exists "$subvolume_mountpoint" "$btrfs_subvolume_path"; then
echo "btrfsmountedsubvol $device $subvolume_mountpoint $mount_options $btrfs_subvolume_path"
else
jsmeix commented at 2024-05-29 12:30:¶
@lzaoral
I'm afraid I currently don't have time for it
because of maintenance updates for SLES,
a.k.a. "customers first" ;-)
lzaoral commented at 2024-05-29 12:34:¶
@jsmeix No worries, that's completely understandable! Fortunately, this PR is not critical.
jsmeix commented at 2024-06-05 10:37:¶
Our CI automated backup & recovery test
on Fedora 40 and later fails because
it needs this pull request, see
https://github.com/rear/rear-integration-tests/pull/5#issue-2335381474
Unfortunately I still don't have time for this pull request
because of some more maintenance updates for SLES.
@lzaoral @pcahyna
feel free to merge this one "as is" to get your
backup & recovery test on Fedora 40 and later
working again.
If issues appear on SLES when this pull request
was merged "as is" I can care about them later
as time permits (current GitHub master code is under
development and never guaranteed to not have issues).
pcahyna commented at 2024-06-05 13:46:¶
@jsmeix would it make sense to add the patch in https://github.com/rear/rear/pull/3175#issuecomment-2117821632 before merging, even if untested?
pcahyna commented at 2024-06-06 09:54:¶
@jsmeix ok, the CI is still failing anyway ... I got a console log, something is wrong with the backup (can't be restored):
Disk layout created.
Restoring from '/var/tmp/rear.4jIt7b4dUiBF65P/outputfs/backup/backup.tar.gz' (restore log in /var/lib/rear/restore/recover.backup.tar.gz.869.restore.log) ...
Backup restore program 'tar' started in subshell (PID=3778)
OK
Backup restore program tar failed with exit code 2, check /var/log/rear/rear-ip-172-31-17-157.log and /var/lib/rear/restore/recover.backup.tar.gz.869.restore.log and the restored system
Restoring finished (verify backup restore log messages in /var/lib/rear/restore/recover.backup.tar.gz.869.restore.log)
Created SELinux /mnt/local/.autorelabel file : after reboot SELinux will relabel all files
Checking if certain restored files are consistent with the recreated system
Restored files in /mnt/local do not fully match the recreated system
(files in the backup are not same as when the ReaR rescue/recovery system was made)
Manually check if those changed files cause issues in your recreated system
Failed to bind-mount /proc at /mnt/local/proc
Failed to bind-mount /sys at /mnt/local/sys
Failed to bind-mount /run at /mnt/local/run
pcahyna commented at 2024-06-06 10:06:¶
log from backup looks ok:
2024-06-05 13:49:11.553713683 Including backup/NETFS/default/500_make_backup.sh
2024-06-05 13:49:11.562082414 Making backup (using backup method NETFS)
2024-06-05 13:49:11.565502818 Backup include list (backup-include.txt contents without subsequent duplicates):
2024-06-05 13:49:11.571206100 /boot/efi
2024-06-05 13:49:11.574437355 /boot
2024-06-05 13:49:11.577993296 /
2024-06-05 13:49:11.581303954 /home
2024-06-05 13:49:11.584482186 /var
2024-06-05 13:49:11.587748992 Backup exclude list (backup-exclude.txt contents):
2024-06-05 13:49:11.590971757 /tmp/*
2024-06-05 13:49:11.594173340 /dev/shm/*
2024-06-05 13:49:11.597365693 /var/lib/rear/output/*
2024-06-05 13:49:11.600535204 /var/tmp/rear.V6WBOVQcEgUhWvG
2024-06-05 13:49:11.603854423 Creating tar archive '/var/tmp/rear.V6WBOVQcEgUhWvG/tmp/isofs/backup/backup.tar.gz'
2024-06-05 13:49:11.616711219 tar --warning=no-xdev --sparse --block-number --totals --verbose --no-wildcards-match-slash --one-file-system --ignore-failed-read --anchored --xattrs --xattrs-include=security.capability --xattrs-include=security.selinux --acls --gzip -X /var/tmp/rear.V6WBOVQcEgUhWvG/tmp/backup-exclude.txt -C / -c -f - /boot/efi /boot / /home /var /var/log/rear/rear-ip-172-31-17-157.log | dd of=/var/tmp/rear.V6WBOVQcEgUhWvG/tmp/isofs/backup/backup.tar.gz bs=1M
2024-06-05 13:50:36.134079316 Archived 837 MiB in 84 seconds [avg 10203 KiB/sec]
'/var/tmp/rear.V6WBOVQcEgUhWvG/tmp/backup.log' -> '/var/tmp/rear.V6WBOVQcEgUhWvG/tmp/isofs/backup/backup.log'
jsmeix commented at 2024-06-06 10:29:¶
I tested
https://github.com/rear/rear/pull/3175#issuecomment-2117821632
but I implemented it as a more verbose change
--- usr/share/rear/layout/save/GNU/Linux/230_filesystem_layout.sh.unpatched 2024-05-15 08:33:32.752041754 +0200
+++ usr/share/rear/layout/save/GNU/Linux/230_filesystem_layout.sh.patched 2024-06-06 10:10:25.688306824 +0200
@@ -467,12 +467,16 @@ fi
# see https://btrfs.wiki.kernel.org/index.php/Mount_options
test "/" != "$btrfs_subvolume_path" && btrfs_subvolume_path=${btrfs_subvolume_path#/}
+ # Automatically exclude all mounted snapper and snapshot subvolumes from the backup.
+ # See https://github.com/rear/rear/pull/3175#issuecomment-1983498175 and
+ # https://github.com/rear/rear/pull/3175#issuecomment-2111776529
+ if test "$snapper_base_subvolume" = "$btrfs_subvolume_path" || btrfs_snapshot_subvolume_exists "$subvolume_mountpoint" "$btrfs_subvolume_path"; then
+ DebugPrint "Excluded mounted snapper or snapshot subvolume from the backup: $subvolume_mountpoint"
+ echo "# Excluded mounted snapper or snapshot subvolume from the backup:"
+ echo "#btrfsmountedsubvol $device $subvolume_mountpoint $mount_options $btrfs_subvolume_path"
# Finally, test whether the btrfs subvolume listed as mounted actually exists. A running docker
# daemon apparently can convince the system to list a non-existing btrfs volume as mounted.
# See https://github.com/rear/rear/issues/1496
- if btrfs_snapshot_subvolume_exists "$subvolume_mountpoint" "$btrfs_subvolume_path"; then
- # Exclude mounted snapshot subvolumes
- echo "#btrfsmountedsubvol $device $subvolume_mountpoint $mount_options $btrfs_subvolume_path"
elif btrfs_subvolume_exists "$subvolume_mountpoint" "$btrfs_subvolume_path"; then
echo "btrfsmountedsubvol $device $subvolume_mountpoint $mount_options $btrfs_subvolume_path"
else
With the unpatched script I got in the backup
# tar -tvzf /nfs/localhost/backup.tar.gz.unpatched | grep snapshots
drwxr-x--- root/root 0 2024-02-14 13:17 .snapshots/
drwxr-x--- root/root 0 2024-02-14 13:17 .snapshots/
drwxr-xr-x root/root 0 2024-02-14 13:03 .snapshots/1/
drwxr-xr-x root/root 0 2024-05-16 09:00 .snapshots/1/snapshot/
-rw------- root/root 168 2024-02-14 13:03 .snapshots/1/info.xml
drwxr-xr-x root/root 0 2024-02-14 13:07 .snapshots/2/
drwxr-xr-x root/root 0 2024-02-14 13:04 .snapshots/2/snapshot/
-rw------- root/root 268 2024-02-14 13:07 .snapshots/2/info.xml
-rw-r--r-- root/root 504 2024-02-14 13:17 .snapshots/2/grub-snapshot.cfg
drwxr-xr-x root/root 0 2024-02-14 13:17 .snapshots/3/
drwxr-xr-x root/root 0 2024-02-14 13:04 .snapshots/3/snapshot/
-rw-r----- root/root 502 2024-02-14 13:17 .snapshots/3/grub-snapshot.cfg
-rw------- root/root 258 2024-02-14 13:17 .snapshots/3/info.xml
drwxr-xr-x root/root 0 2024-02-14 13:17 .snapshots/4/
drwxr-xr-x root/root 0 2024-02-14 13:04 .snapshots/4/snapshot/
-rw------- root/root 240 2024-02-14 13:17 .snapshots/4/info.xml
-rw-r----- root/root 503 2024-02-14 13:17 .snapshots/4/grub-snapshot.cfg
-rw------- root/root 4939 2024-02-14 13:17 .snapshots/4/filelist-3.txt
-rw-r----- root/root 508 2024-02-14 13:17 .snapshots/grub-snapshot.cfg
With the patched code I got:
# usr/sbin/rear -D mkbackup
...
Excluded mounted snapper or snapshot subvolume from the backup: /.snapshots
...
and in the backup
# tar -tvzf /nfs/localhost/backup.tar.gz.patched | grep snapshots
drwxr-x--- root/root 0 2024-02-14 13:17 .snapshots/
So far things look good in the backup.
BUT:
The patch is at least somewhat wrong,
because what the patch actually does
is not to exclude .snapshots/ from the backup.
Instead the patch excludes .snapshots/
from the disk layout:
# diff -U0 var/lib/rear/layout/disklayout.conf.unpatched var/lib/rear/layout/disklayout.conf.patched
...
-btrfsmountedsubvol /dev/sda2 /.snapshots rw,relatime,space_cache,subvolid=267,subvol=/@/.snapshots @/.snapshots
+# Excluded mounted snapper or snapshot subvolume from the backup:
+#btrfsmountedsubvol /dev/sda2 /.snapshots rw,relatime,space_cache,subvolid=267,subvol=/@/.snapshots @/.snapshots
The result is that after "rear recover"
.snapshots/ is not a mounted btrfs subvolume,
i.e. .snapshots/ exists as a btrfs subvolume but it is not mounted:
RESCUE localhost:~ # rear -D recover
...
Marking component '/dev/sda' as done in /var/lib/rear/layout/disktodo.conf
Marking component '/dev/sda1' as done in /var/lib/rear/layout/disktodo.conf
Marking component '/dev/sda2' as done in /var/lib/rear/layout/disktodo.conf
Marking component '/dev/sda3' as done in /var/lib/rear/layout/disktodo.conf
Doing SLES-like btrfs subvolumes setup for /dev/sda2 on / (BTRFS_SUBVOLUME_SLES_SETUP contains /dev/sda2)
SLES12-SP1 (and later) btrfs subvolumes setup needed for /dev/sda2 (default subvolume path contains '@/.snapshots/')
Marking component 'fs:/' as done in /var/lib/rear/layout/disktodo.conf
Marking component 'btrfsmountedsubvol:/' as done in /var/lib/rear/layout/disktodo.conf
Marking component 'btrfsmountedsubvol:/boot/grub2/i386-pc' as done in /var/lib/rear/layout/disktodo.conf
Marking component 'btrfsmountedsubvol:/boot/grub2/x86_64-efi' as done in /var/lib/rear/layout/disktodo.conf
Marking component 'btrfsmountedsubvol:/home' as done in /var/lib/rear/layout/disktodo.conf
Marking component 'btrfsmountedsubvol:/opt' as done in /var/lib/rear/layout/disktodo.conf
Marking component 'btrfsmountedsubvol:/srv' as done in /var/lib/rear/layout/disktodo.conf
Marking component 'btrfsmountedsubvol:/root' as done in /var/lib/rear/layout/disktodo.conf
Marking component 'btrfsmountedsubvol:/tmp' as done in /var/lib/rear/layout/disktodo.conf
Marking component 'btrfsmountedsubvol:/usr/local' as done in /var/lib/rear/layout/disktodo.conf
Marking component 'btrfsmountedsubvol:/var' as done in /var/lib/rear/layout/disktodo.conf
Marking component 'swap:/dev/sda3' as done in /var/lib/rear/layout/disktodo.conf
...
Finished 'recover'. The target system is mounted at '/mnt/local'.
...
RESCUE localhost:~ # lsblk -ipo NAME,KNAME,PKNAME,TRAN,TYPE,FSTYPE,LABEL,PARTLABEL,SIZE,MOUNTPOINTS
NAME KNAME PKNAME TRAN TYPE FSTYPE LABEL PARTLABEL SIZE MOUNTPOINTS
/dev/sda /dev/sda ata disk 15G
|-/dev/sda1 /dev/sda1 /dev/sda part sda1 8M
|-/dev/sda2 /dev/sda2 /dev/sda part btrfs sda2 13G /mnt/local/var
| /mnt/local/usr/local
| /mnt/local/tmp
| /mnt/local/root
| /mnt/local/srv
| /mnt/local/opt
| /mnt/local/home
| /mnt/local/boot/grub2/x86_64-efi
| /mnt/local/boot/grub2/i386-pc
| /mnt/local
`-/dev/sda3 /dev/sda3 /dev/sda part swap sda3 2G
/dev/sr0 /dev/sr0 ata rom iso9660 REAR-ISO 77.7M
RESCUE localhost:~ # findmnt -a -t btrfs -o TARGET,SOURCE
TARGET SOURCE
/mnt/local /dev/sda2[/@/.snapshots/1/snapshot]
|-/mnt/local/boot/grub2/i386-pc /dev/sda2[/@/boot/grub2/i386-pc]
|-/mnt/local/boot/grub2/x86_64-efi /dev/sda2[/@/boot/grub2/x86_64-efi]
|-/mnt/local/home /dev/sda2[/@/home]
|-/mnt/local/opt /dev/sda2[/@/opt]
|-/mnt/local/srv /dev/sda2[/@/srv]
|-/mnt/local/root /dev/sda2[/@/root]
|-/mnt/local/tmp /dev/sda2[/@/tmp]
|-/mnt/local/usr/local /dev/sda2[/@/usr/local]
`-/mnt/local/var /dev/sda2[/@/var]
RESCUE localhost:~ # btrfs subvolume list -a /mnt/local
ID 256 gen 20 top level 5 path <FS_TREE>/@
ID 258 gen 26 top level 256 path <FS_TREE>/@/var
ID 259 gen 24 top level 256 path <FS_TREE>/@/usr/local
ID 260 gen 24 top level 256 path <FS_TREE>/@/tmp
ID 261 gen 24 top level 256 path <FS_TREE>/@/srv
ID 262 gen 25 top level 256 path <FS_TREE>/@/root
ID 263 gen 24 top level 256 path <FS_TREE>/@/opt
ID 264 gen 24 top level 256 path <FS_TREE>/@/home
ID 265 gen 24 top level 256 path <FS_TREE>/@/boot/grub2/x86_64-efi
ID 266 gen 26 top level 256 path <FS_TREE>/@/boot/grub2/i386-pc
ID 267 gen 20 top level 256 path <FS_TREE>/@/.snapshots
ID 268 gen 27 top level 267 path <FS_TREE>/@/.snapshots/1/snapshot
Interestingly things look OK
on the rebooted recreated system:
# lsblk -ipo NAME,KNAME,PKNAME,TRAN,TYPE,FSTYPE,LABEL,PARTLABEL,SIZE,MOUNTPOINTS
NAME KNAME PKNAME TRAN TYPE FSTYPE LABEL PARTLABEL SIZE MOUNTPOINTS
/dev/sda /dev/sda ata disk 15G
|-/dev/sda1 /dev/sda1 /dev/sda part sda1 8M
|-/dev/sda2 /dev/sda2 /dev/sda part btrfs sda2 13G /var
| /usr/local
| /tmp
| /root
| /srv
| /opt
| /home
| /boot/grub2/i386-pc
| /boot/grub2/x86_64-efi
| /.snapshots
| /
`-/dev/sda3 /dev/sda3 /dev/sda part swap sda3 2G [SWAP]
/dev/sr0 /dev/sr0 ata rom iso9660 REAR-ISO 77.7M
# findmnt -a -t btrfs -o TARGET,SOURCE
TARGET SOURCE
/ /dev/sda2[/@/.snapshots/1/snapshot]
|-/home /dev/sda2[/@/home]
|-/boot/grub2/i386-pc /dev/sda2[/@/boot/grub2/i386-pc]
|-/var /dev/sda2[/@/var]
|-/.snapshots /dev/sda2[/@/.snapshots]
|-/tmp /dev/sda2[/@/tmp]
|-/boot/grub2/x86_64-efi /dev/sda2[/@/boot/grub2/x86_64-efi]
|-/opt /dev/sda2[/@/opt]
|-/srv /dev/sda2[/@/srv]
|-/root /dev/sda2[/@/root]
`-/usr/local /dev/sda2[/@/usr/local]
# btrfs subvolume list -a /
ID 256 gen 20 top level 5 path <FS_TREE>/@
ID 258 gen 29 top level 256 path <FS_TREE>/@/var
ID 259 gen 24 top level 256 path <FS_TREE>/@/usr/local
ID 260 gen 29 top level 256 path <FS_TREE>/@/tmp
ID 261 gen 24 top level 256 path <FS_TREE>/@/srv
ID 262 gen 25 top level 256 path <FS_TREE>/@/root
ID 263 gen 24 top level 256 path <FS_TREE>/@/opt
ID 264 gen 24 top level 256 path <FS_TREE>/@/home
ID 265 gen 24 top level 256 path <FS_TREE>/@/boot/grub2/x86_64-efi
ID 266 gen 26 top level 256 path <FS_TREE>/@/boot/grub2/i386-pc
ID 267 gen 20 top level 256 path <FS_TREE>/@/.snapshots
ID 268 gen 29 top level 267 path <FS_TREE>/@/.snapshots/1/snapshot
On the rebooted recreated system .snapshots/ is
a btrfs subvolume which is mounted.
For comparison how it looks on the original system:
# lsblk -ipo NAME,KNAME,PKNAME,TRAN,TYPE,FSTYPE,LABEL,PARTLABEL,SIZE,MOUNTPOINTS
NAME KNAME PKNAME TRAN TYPE FSTYPE LABEL PARTLABEL SIZE MOUNTPOINTS
/dev/sda /dev/sda ata disk 15G
|-/dev/sda1 /dev/sda1 /dev/sda part 8M
|-/dev/sda2 /dev/sda2 /dev/sda part btrfs 13G /var
| /usr/local
| /tmp
| /root
| /srv
| /opt
| /home
| /boot/grub2/x86_64-efi
| /boot/grub2/i386-pc
| /.snapshots
| /
`-/dev/sda3 /dev/sda3 /dev/sda part swap 2G [SWAP]
/dev/sr0 /dev/sr0 ata rom iso9660 SLE-15-SP5-Full-x86_64120.11.001 14.1G
# findmnt -a -t btrfs -o TARGET,SOURCE
TARGET SOURCE
/ /dev/sda2[/@/.snapshots/1/snapshot]
|-/root /dev/sda2[/@/root]
|-/boot/grub2/x86_64-efi /dev/sda2[/@/boot/grub2/x86_64-efi]
|-/.snapshots /dev/sda2[/@/.snapshots]
|-/var /dev/sda2[/@/var]
|-/boot/grub2/i386-pc /dev/sda2[/@/boot/grub2/i386-pc]
|-/home /dev/sda2[/@/home]
|-/opt /dev/sda2[/@/opt]
|-/srv /dev/sda2[/@/srv]
|-/tmp /dev/sda2[/@/tmp]
`-/usr/local /dev/sda2[/@/usr/local]
# btrfs subvolume list -a /
ID 256 gen 32 top level 5 path <FS_TREE>/@
ID 258 gen 12143 top level 256 path <FS_TREE>/@/var
ID 259 gen 12029 top level 256 path <FS_TREE>/@/usr/local
ID 260 gen 12143 top level 256 path <FS_TREE>/@/tmp
ID 261 gen 12028 top level 256 path <FS_TREE>/@/srv
ID 262 gen 12133 top level 256 path <FS_TREE>/@/root
ID 263 gen 12028 top level 256 path <FS_TREE>/@/opt
ID 264 gen 12027 top level 256 path <FS_TREE>/@/home
ID 265 gen 4724 top level 256 path <FS_TREE>/@/boot/grub2/x86_64-efi
ID 266 gen 4724 top level 256 path <FS_TREE>/@/boot/grub2/i386-pc
ID 267 gen 12050 top level 256 path <FS_TREE>/@/.snapshots
ID 268 gen 12027 top level 267 path <FS_TREE>/@/.snapshots/1/snapshot
ID 272 gen 43 top level 267 path <FS_TREE>/@/.snapshots/2/snapshot
ID 273 gen 64 top level 267 path <FS_TREE>/@/.snapshots/3/snapshot
ID 274 gen 65 top level 267 path <FS_TREE>/@/.snapshots/4/snapshot
For comparison how "rear recover" looks
without the patch:
RESCUE localhost:~ # grep snapshot /var/lib/rear/layout/disklayout.conf
...
btrfsmountedsubvol /dev/sda2 / rw,relatime,space_cache,subvolid=268,subvol=/@/.snapshots/1/snapshot @/.snapshots/1/snapshot
btrfsmountedsubvol /dev/sda2 /.snapshots rw,relatime,space_cache,subvolid=267,subvol=/@/.snapshots @/.snapshots
RESCUE localhost:~ # rear -D recover
...
Marking component '/dev/sda' as done in /var/lib/rear/layout/disktodo.conf
Marking component '/dev/sda1' as done in /var/lib/rear/layout/disktodo.conf
Marking component '/dev/sda2' as done in /var/lib/rear/layout/disktodo.conf
Marking component '/dev/sda3' as done in /var/lib/rear/layout/disktodo.conf
Doing SLES-like btrfs subvolumes setup for /dev/sda2 on / (BTRFS_SUBVOLUME_SLES_SETUP contains /dev/sda2)
SLES12-SP1 (and later) btrfs subvolumes setup needed for /dev/sda2 (default subvolume path contains '@/.snapshots/')
Marking component 'fs:/' as done in /var/lib/rear/layout/disktodo.conf
Marking component 'btrfsmountedsubvol:/' as done in /var/lib/rear/layout/disktodo.conf
Marking component 'btrfsmountedsubvol:/.snapshots' as done in /var/lib/rear/layout/disktodo.conf
Marking component 'btrfsmountedsubvol:/boot/grub2/i386-pc' as done in /var/lib/rear/layout/disktodo.conf
Marking component 'btrfsmountedsubvol:/boot/grub2/x86_64-efi' as done in /var/lib/rear/layout/disktodo.conf
Marking component 'btrfsmountedsubvol:/home' as done in /var/lib/rear/layout/disktodo.conf
Marking component 'btrfsmountedsubvol:/opt' as done in /var/lib/rear/layout/disktodo.conf
Marking component 'btrfsmountedsubvol:/srv' as done in /var/lib/rear/layout/disktodo.conf
Marking component 'btrfsmountedsubvol:/root' as done in /var/lib/rear/layout/disktodo.conf
Marking component 'btrfsmountedsubvol:/tmp' as done in /var/lib/rear/layout/disktodo.conf
Marking component 'btrfsmountedsubvol:/usr/local' as done in /var/lib/rear/layout/disktodo.conf
Marking component 'btrfsmountedsubvol:/var' as done in /var/lib/rear/layout/disktodo.conf
Marking component 'swap:/dev/sda3' as done in /var/lib/rear/layout/disktodo.conf
...
Finished 'recover'. The target system is mounted at '/mnt/local'.
...
RESCUE localhost:~ # lsblk -ipo NAME,KNAME,PKNAME,TRAN,TYPE,FSTYPE,LABEL,PARTLABEL,SIZE,MOUNTPOINTS
NAME KNAME PKNAME TRAN TYPE FSTYPE LABEL PARTLABEL SIZE MOUNTPOINTS
/dev/sda /dev/sda ata disk 15G
|-/dev/sda1 /dev/sda1 /dev/sda part sda1 8M
|-/dev/sda2 /dev/sda2 /dev/sda part btrfs sda2 13G /mnt/local/var
| /mnt/local/usr/local
| /mnt/local/tmp
| /mnt/local/root
| /mnt/local/srv
| /mnt/local/opt
| /mnt/local/home
| /mnt/local/boot/grub2/x86_64-efi
| /mnt/local/boot/grub2/i386-pc
| /mnt/local/.snapshots
| /mnt/local
`-/dev/sda3 /dev/sda3 /dev/sda part swap sda3 2G
/dev/sr0 /dev/sr0 ata rom iso9660 REAR-ISO 77.7M
RESCUE localhost:~ # findmnt -a -t btrfs -o TARGET,SOURCE
TARGET SOURCE
/mnt/local /dev/sda2[/@/.snapshots/1/snapshot]
|-/mnt/local/.snapshots /dev/sda2[/@/.snapshots]
|-/mnt/local/boot/grub2/i386-pc /dev/sda2[/@/boot/grub2/i386-pc]
|-/mnt/local/boot/grub2/x86_64-efi /dev/sda2[/@/boot/grub2/x86_64-efi]
|-/mnt/local/home /dev/sda2[/@/home]
|-/mnt/local/opt /dev/sda2[/@/opt]
|-/mnt/local/srv /dev/sda2[/@/srv]
|-/mnt/local/root /dev/sda2[/@/root]
|-/mnt/local/tmp /dev/sda2[/@/tmp]
|-/mnt/local/usr/local /dev/sda2[/@/usr/local]
`-/mnt/local/var /dev/sda2[/@/var]
RESCUE localhost:~ # btrfs subvolume list -a /mnt/local
ID 256 gen 20 top level 5 path <FS_TREE>/@
ID 258 gen 28 top level 256 path <FS_TREE>/@/var
ID 259 gen 24 top level 256 path <FS_TREE>/@/usr/local
ID 260 gen 24 top level 256 path <FS_TREE>/@/tmp
ID 261 gen 24 top level 256 path <FS_TREE>/@/srv
ID 262 gen 26 top level 256 path <FS_TREE>/@/root
ID 263 gen 24 top level 256 path <FS_TREE>/@/opt
ID 264 gen 23 top level 256 path <FS_TREE>/@/home
ID 265 gen 23 top level 256 path <FS_TREE>/@/boot/grub2/x86_64-efi
ID 266 gen 28 top level 256 path <FS_TREE>/@/boot/grub2/i386-pc
ID 267 gen 25 top level 256 path <FS_TREE>/@/.snapshots
ID 268 gen 26 top level 267 path <FS_TREE>/@/.snapshots/1/snapshot
From a lot of (painful) personal experience
with the (over)complicated SUSE btrfs default structure
I know that small or subtle differences
can have scary (possibly disastrous) consequences.
So I would very much prefer to stay on the safe side
and get things recreated during "rear recover"
as exactly as possible as they were
on the original system.
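An alternative direction (purely a sketch of my own, not the patch discussed above) would be to keep the btrfsmountedsubvol entry in disklayout.conf, so that "rear recover" still mounts /.snapshots exactly as on the original system, and instead exclude only the snapshot contents from the backup via ReaR's BACKUP_PROG_EXCLUDE array, e.g. in /etc/rear/local.conf:

```shell
# Hypothetical local.conf fragment (a sketch, not the proposed patch):
# keep /.snapshots mounted in the recreated layout, but exclude its
# contents from the tar backup via ReaR's BACKUP_PROG_EXCLUDE array.
BACKUP_PROG_EXCLUDE+=( '/.snapshots/*' )
```

This would keep the recreated mount structure identical to the original system while still keeping snapshot data out of the backup archive.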
jsmeix commented at 2024-06-06 10:35:¶
@lzaoral @pcahyna see my above
https://github.com/rear/rear/pull/3175#issuecomment-2112519793
(excerpt)
That /.snapshots must be excluded from the backup restore
is a separate task for me, so you could merge this one
and then I will take care of /.snapshots via a separate
issue and/or pull request.
So feel free to merge it "as is" - i.e. without the patch in
https://github.com/rear/rear/pull/3175#issuecomment-2117821632
and later - as time permits - I will take care of /.snapshots
pcahyna commented at 2024-06-06 10:37:¶
I see this during recovery, is it expected?
Fallback SLES-like btrfs subvolumes setup for /dev/vda4 on / (no match in BTRFS_SUBVOLUME_GENERIC_SETUP or BTRFS_SUBVOLUME_SLES_SETUP)
lzaoral commented at 2024-06-06 10:38:¶
@jsmeix No worries, the patch is not included in this PR anyway. I'll let you implement the snapper support properly in a follow-up PR.
jsmeix commented at 2024-06-06 10:54:¶
@pcahyna
welcome to the hell of btrfs!
I had been toasted there ;-)
I implemented BTRFS_SUBVOLUME_SLES_SETUP
and BTRFS_SUBVOLUME_GENERIC_SETUP in
https://github.com/rear/rear/commit/b144e9082511442b6f2426c9006e66d6c611edf9
from
https://github.com/rear/rear/pull/2080
as a follow up of
https://github.com/rear/rear/pull/2079
Offhand I do not remember the details,
but as far as I remember my basic reasoning was
that from a lot of (painful) personal experience
with the (over)complicated SUSE btrfs default structure
I knew that small or subtle differences
can have scary (possibly disastrous) consequences,
so I preferred to stay on the safe side
and did not change the existing and tested
SLES-specific btrfs setup in ReaR.
pcahyna commented at 2024-06-06 11:56:¶
I see this error when booting the rescue system on Fedora Rawhide:
Running 40-start-udev-or-load-modules.sh...
/etc/scripts/system-setup.d/40-start-udev-or-load-modules.sh: line 24: [[: 256~rc3: syntax error in expression (error token is "~rc3")
Loading storage modules ...
[ 8.926226] 3ware Storage Controller device driver for Linux v1.26.02.003.
[ 8.950701] 3ware 9000 Storage Controller device driver for Linux v2.26.02.014.
[ 9.134259] db_root: cannot open: /etc/target
Module nbd excluded from being autoloaded.
Running 41-load-special-modules.sh...
Running 42-engage-scsi.sh...
Running 45-serial-console.sh...
Serial console support enabled for ttyS0 at speed 115200
Running 55-migrate-network-devices.sh...
Running 58-start-dhclient.sh...
Attempting to start the DHCP client daemon
read_config: /etc/dhcpcd.conf: No such file or directory
dhcpcd-10.0.6 starting
read_config: /etc/dhcpcd.conf: No such file or directory
script_runreason: No such file or directory
script_runreason: No such file or directory
pcahyna commented at 2024-06-06 13:11:¶
# systemd-notify --version 2>/dev/null | grep systemd | awk '{ print $2; }'
256~rc3
RESCUE default-0:~ # systemd_version=$( systemd-notify --version 2>/dev/null | grep systemd | awk '{ print $2; }' )
RESCUE default-0:~ # echo $systemd_version
256~rc3
RESCUE default-0:~ # test "$systemd_version" || systemd_version=0
RESCUE default-0:~ # [[ $systemd_version -gt 190 ]]
-bash: [[: 256~rc3: syntax error in expression (error token is "~rc3")
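A possible fix sketch (my own guess, not a committed change) would be to strip the non-numeric suffix before the arithmetic test, so that a pre-release version string like "256~rc3" compares as the integer 256:

```shell
# Assumed example value as reported by 'systemd-notify --version' above:
systemd_version="256~rc3"
# Keep only the leading digits, so "256~rc3" or "255.4" become integers:
systemd_version_num="${systemd_version%%[!0-9]*}"
test "$systemd_version_num" || systemd_version_num=0
# Plain POSIX test avoids the [[ ... ]] arithmetic parse of "256~rc3":
if [ "$systemd_version_num" -gt 190 ]; then
    echo "systemd is newer than 190"
fi
```

The `%%[!0-9]*` parameter expansion removes everything from the first non-digit character onward, which also handles plain values like "190" unchanged.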
pcahyna commented at 2024-06-06 13:14:¶
@jsmeix what worries me is the "SLES-like btrfs subvolumes setup".
Is it something that will work on non-SLES distros?
jsmeix commented at 2024-06-07 06:32:¶
@pcahyna
of course (as "SLES-like" indicates)
the "SLES-like btrfs subvolumes setup" method
may not work on non-SLES systems.
However, in
https://github.com/rear/rear-integration-tests/pull/5
you mentioned "Kiwi":
the Fedora 40 and later cloud images
changed their layout a bit and have
a separate /var as a btrfs subvolume
due to the switch to Kiwi.
Because Kiwi
https://github.com/OSInside/kiwi
is developed by (open)SUSE, cf.
https://en.wikipedia.org/wiki/KIWI_(openSUSE)
and because a "separate /var as a btrfs subvolume"
indicates that Kiwi may - at least partially - do
a "SLES-like btrfs subvolumes setup",
it may work in your particular case
(or even be the actually right way)
to recreate your btrfs subvolume structure
via the "SLES-like btrfs subvolumes setup" method.
You may experiment with BTRFS_SUBVOLUME_SLES_SETUP
versus BTRFS_SUBVOLUME_GENERIC_SETUP to see what
actually works better in your particular case.
After
https://github.com/rear/rear/pull/2079
and
https://github.com/rear/rear/pull/2080
I never tested BTRFS_SUBVOLUME_GENERIC_SETUP again.
pcahyna commented at 2024-06-11 14:49:¶
Is there any code to actually mount the recovered subvolumes during layout restoration?
After a lot of effort (because the console log from the tests was missing due to Testing Farm's poor handling of serial console output), I see this on the console during recovery:
Start system layout restoration.
Disk '/dev/nvme0n1': creating 'gpt' partition table
Disk '/dev/nvme0n1': creating partition number 1 with name ''p.legacy''
Disk '/dev/nvme0n1': creating partition number 2 with name ''p.UEFI''
Disk '/dev/nvme0n1': creating partition number 3 with name ''p.lxboot''
Disk '/dev/nvme0n1': creating partition number 4 with name ''p.lxroot''
Creating filesystem of type btrfs with mount point / on /dev/nvme0n1p4.
Mounting filesystem /
Creating filesystem of type ext4 with mount point /boot on /dev/nvme0n1p3.
Mounting filesystem /boot
Creating filesystem of type vfat with mount point /boot/efi on /dev/nvme0n1p2.
Mounting filesystem /boot/efi
Disk layout created.
Restoring from '/var/tmp/rear.8TwCBQyXhvIcfv1/outputfs/backup/backup.tar.gz' (restore log in /var/lib/rear/restore/recover.backup.tar.gz.596.restore.log) ...
There is nothing about mounting /var or /home.
I suspect that the backup gets restored into /var
on the / mount, i.e. not into the mounted subvolume,
and thus gets shadowed by an empty /var subvolume
after recovery and reboot, and that's why the tests error out.
jsmeix commented at 2024-06-11 15:08:¶
Yes - of course, because I have needed it on SLES for years - there
is code to properly recreate the SUSE btrfs structure
during "rear recover" before the backup gets restored,
e.g. see how things look for me on SLES in my above
https://github.com/rear/rear/pull/3175#issuecomment-2151940835
But I don't know what the btrfs structure looks like
on your Testing Farm systems.
Is there some documentation (rather foolproof for me)
on how I could install such a typical Testing Farm system
on one of my local KVM/QEMU virtual machines so that
I could have a direct look at such a system?
@pcahyna
I think it could help you a lot if you could install
such a Testing Farm system locally on a virtual machine
so that you have usual direct access to such a system.
pcahyna commented at 2024-06-11 15:17:¶
I think it could help you a lot if you could install
such a Testing Farm system locally on a virtual machine
so that you have usual direct access to such a system.
That's indeed how I was able to debug the previous problems, but now the
local tests pass, so I am lost. (Which also suggests that not mounting
/var is not the full story, as this problem should appear even in
local tests, but I don't have any other hints.)
pcahyna commented at 2024-06-11 15:29:¶
@jsmeix
But I don't know what the btrfs structure looks like on your Testing Farm systems. Is there some documentation (rather foolproof for me) on how I could install such a typical Testing Farm system on one of my local KVM/QEMU virtual machines so that I could have a direct look at such a system?
I think the easiest is to install testcloud
( https://pypi.org/project/testcloud/ )
and run "testcloud create fedora:40".
Here ( https://artifacts.dev.testing-farm.io/c02f01a7-83d0-4187-b69e-91dc043a457a/work-backup-and-restorez5wihv38/tests/plans/backup-and-restore/execute/data/guest/default-0/make-backup-and-restore-iso-1/output.txt ) is the output of lsblk in the system before recovery:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
zram0 252:0 0 3.7G 0 disk [SWAP]
nvme0n1 259:0 0 100G 0 disk
|-nvme0n1p1 259:1 0 2M 0 part
|-nvme0n1p2 259:2 0 100M 0 part /boot/efi
|-nvme0n1p3 259:3 0 1000M 0 part /boot
`-nvme0n1p4 259:4 0 98.9G 0 part /var
/home
/
If that's enough.
pcahyna commented at 2024-06-11 15:49:¶
By the way the image is generated by Kiwi, apparently using the config here: https://pagure.io/fedora-kiwi-descriptions/blob/rawhide/f/teams/cloud/cloud.xml#_92
pcahyna commented at 2024-06-11 15:53:¶
And here's disklayout.conf produced by mkbackup: https://artifacts.dev.testing-farm.io/ce8f73de-fc4e-4e56-85d1-e733a94d3765/work-backup-and-restore32iemcbu/tests/plans/backup-and-restore/execute/data/guest/default-0/make-backup-and-restore-iso-1/data/disklayout.conf
pcahyna commented at 2024-06-11 17:05:¶
@jsmeix sorry for the useless noise / red herring. I believe that the problem lies elsewhere.
jsmeix commented at 2024-06-12 08:01:¶
@pcahyna
no need to be sorry for "noise".
And your "noise" was not useless because now
I became interested to have a look (as time permits)
how such Kiwi made systems look like
in particular regarding their btrfs structure
because their btrfs structure is noticeable
different compared to the SLES btrfs structure.
As far as I can see at first glance the main difference is
that on such Kiwi-made systems what is mounted at '/'
is not a btrfs subvolume but the whole btrfs filesystem itself,
or something like that - 'findmnt -t btrfs' would tell.
Perhaps in this case (i.e. when what is mounted at '/'
is the whole btrfs filesystem) data in btrfs subvolumes
is not accessible via the usual directory paths?
I don't remember such btrfs behavioural details.
If my guess here is right, then when e.g. 'var' is
a btrfs subvolume, one needs an entry in /etc/fstab
to get the 'var' btrfs subvolume mounted at /var,
or something like that.
Currently this is only blind guesswork.
I have to see such a Kiwi-made system on my own.
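The "'findmnt -t btrfs' would tell" check can be sketched as a small parser of the findmnt SOURCE field (a sketch of mine; the sample values are taken from the findmnt outputs quoted elsewhere in this thread):

```shell
# Print the btrfs subvolume path embedded in a findmnt SOURCE value like
# "/dev/sda2[/@/.snapshots/1/snapshot]"; prints nothing when the SOURCE
# has no [...] part, i.e. when the whole filesystem is mounted there.
subvol_of() {
    source_field="$1"
    subvol=""
    case "$source_field" in
        *\[*\]) subvol="${source_field#*\[}" ; subvol="${subvol%\]}" ;;
    esac
    echo "$subvol"
}

subvol_of "/dev/sda2[/@/.snapshots/1/snapshot]"   # SLES: '/' is a subvolume
subvol_of "/dev/vda4[/root]"                      # Kiwi image: also a subvolume
subvol_of "/dev/vda4"                             # no [...]: whole filesystem mounted
```

In practice one would feed it e.g. `findmnt -n -t btrfs -o SOURCE /`; an empty result would indicate that the whole filesystem (the btrfs root subvolume) is mounted at '/'.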
FYI
what comments I put into disklayout.conf on SLES
(excerpts):
# Btrfs default subvolume for /dev/sda2 at /
# Format: btrfsdefaultsubvol <device> <mountpoint> <btrfs_subvolume_ID> <btrfs_subvolume_path>
btrfsdefaultsubvol /dev/sda2 / 268 @/.snapshots/1/snapshot
...
# Btrfs normal subvolumes for /dev/sda2 at /
# Format: btrfsnormalsubvol <device> <mountpoint> <btrfs_subvolume_ID> <btrfs_subvolume_path>
# SLES12-SP1 (and later) btrfs subvolumes setup needed for /dev/sda2 (default subvolume path contains '@/.snapshots/')
# Btrfs subvolumes that belong to snapper are listed here only as documentation.
# Snapper's base subvolume '/@/.snapshots' is deactivated here because during 'rear recover'
# it is created by 'snapper/installation-helper --step 1' (which fails if it already exists).
# Furthermore any normal btrfs subvolume under snapper's base subvolume would be wrong.
# See https://github.com/rear/rear/issues/944#issuecomment-238239926
# and https://github.com/rear/rear/issues/963#issuecomment-240061392
# how to create a btrfs subvolume in compliance with the SLES12 default brtfs structure.
# In short: Normal btrfs subvolumes on SLES12 must be created directly below '/@/'
# e.g. '/@/var/lib/mystuff' (which requires that the btrfs root subvolume is mounted)
# and then the subvolume is mounted at '/var/lib/mystuff' to be accessible from '/'
# plus usually an entry in /etc/fstab to get it mounted automatically when booting.
# Because any '@/.snapshots' subvolume would let 'snapper/installation-helper --step 1' fail
# such subvolumes are deactivated here to not let 'rear recover' fail:
#btrfsnormalsubvol /dev/sda2 / 267 @/.snapshots
#btrfsnormalsubvol /dev/sda2 / 268 @/.snapshots/1/snapshot
btrfsnormalsubvol /dev/sda2 / 256 @
btrfsnormalsubvol /dev/sda2 / 258 @/var
...
# All mounted btrfs subvolumes (including mounted btrfs default subvolumes and mounted btrfs snapshot subvolumes).
# Determined by the findmnt command that shows the mounted btrfs_subvolume_path.
# Format: btrfsmountedsubvol <device> <subvolume_mountpoint> <mount_options> <btrfs_subvolume_path>
btrfsmountedsubvol /dev/sda2 / rw,relatime,space_cache,subvolid=268,subvol=/@/.snapshots/1/snapshot @/.snapshots/1/snapshot
btrfsmountedsubvol /dev/sda2 /.snapshots rw,relatime,space_cache,subvolid=267,subvol=/@/.snapshots @/.snapshots
...
btrfsmountedsubvol /dev/sda2 /var rw,relatime,space_cache,subvolid=258,subvol=/@/var @/var
I guess during "rear mkrescue" you don't get this
on such a Kiwi-made system as I get it on SLES:
Relax-and-Recover 2.7 / Git
...
Creating disk layout
SLES12-SP1 (and later) btrfs subvolumes setup needed for /dev/sda2 (default subvolume path contains '@/.snapshots/')
Added /dev/sda2 to BTRFS_SUBVOLUME_SLES_SETUP in /var/tmp/rear.NtfByEWMhJYsXGQ/rootfs/etc/rear/rescue.conf
...
which would prove that on such a Kiwi-made system
the btrfs structure is noticeably different
compared to the SLES btrfs structure.
jsmeix commented at 2024-06-12 14:10:¶
@pcahyna
regarding your above
https://github.com/rear/rear/pull/3175#issuecomment-2161052473
I think the easiest is to install testcloud
( https://pypi.org/project/testcloud/ ) and run
testcloud create fedora:40.
Sigh :-(
It seems I am again hit by a
"too many indirections stack overflow exit".
I wish there were a directly installable image
of such a Kiwi-made system available somewhere
that I could download and somehow put onto
a virtual disk of a KVM/QEMU virtual machine.
As far as I remember from some experiments with Kiwi
(long ago) it makes some "image" that one can install.
I think a Kiwi image is some self-extracting and/or
self-installing "thingy" that one somehow "dumps"
onto a disk - but all that was long ago so I may
remember things wrongly and/or things may have
changed very much since then.
pcahyna commented at 2024-06-12 14:13:¶
I wish there were a directly installable image
of such a Kiwi-made system available somewhere
that I could download and somehow put onto
a virtual disk of a KVM/QEMU virtual machine.
there is: https://download.fedoraproject.org/pub/fedora/linux/releases/40/Cloud/x86_64/images/
jsmeix commented at 2024-06-12 14:35:¶
@pcahyna
thank you!
I downloaded
https://mirror.23m.com/fedora/linux/releases/40/Cloud/x86_64/images/Fedora-Cloud-Base-Generic.x86_64-40-1.14.qcow2
and run that now in a KVM/QEMU virtual machine.
But I do not know the Fedora 40 default root password
so currently I cannot log in.
Do you know what the Fedora 40 default root password is?
jsmeix commented at 2024-06-12 14:59:¶
I mounted the Fedora-Cloud-Base-Generic.x86_64-40-1.14.qcow2
manually directly as described in
https://gist.github.com/shamil/62935d9b456a6f9877b5
# modprobe nbd max_part=8
# qemu-nbd --connect=/dev/nbd0 Fedora-Cloud-Base-Generic.x86_64-40-1.14.qcow2
# fdisk /dev/nbd0 -l
Disk /dev/nbd0: 5 GiB, 5368709120 bytes, 10485760 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: EDB052F7-644C-490E-807C-FAD7E596DB80
Device Start End Sectors Size Type
/dev/nbd0p1 2048 6143 4096 2M BIOS boot
/dev/nbd0p2 6144 210943 204800 100M EFI System
/dev/nbd0p3 210944 2258943 2048000 1000M Linux extended boot
/dev/nbd0p4 2258944 10485726 8226783 3.9G Linux root (x86-64)
# mkdir fedora40mp
# mount /dev/nbd0p4 fedora40mp
# cd fedora40mp
# ls -l
total 0
drwxr-xr-x. 1 root root 0 Jan 24 01:00 home
drwxrwxr-x. 1 root root 212 Apr 15 00:56 root
drwxr-xr-x. 1 root root 170 Jun 12 16:33 var
# ls -l root
total 24
dr-xr-xr-x. 1 root root 0 Jan 24 01:00 afs
lrwxrwxrwx. 1 root root 7 Jan 24 01:00 bin -> usr/bin
dr-xr-xr-x. 1 root root 0 Apr 15 00:56 boot
-rw-rw-r--. 1 root root 134 Apr 15 00:56 config.bootoptions
-rw-rw-r--. 1 root root 71 Apr 15 00:56 config.partids
drwxrwxr-x. 1 root root 60 Apr 15 00:55 dev
drwxr-xr-x. 1 root root 2610 Jun 12 16:33 etc
drwxrwxr-x. 1 root root 10 Apr 15 00:56 grub2
drwxrwxr-x. 1 root root 0 Apr 15 00:56 home
lrwxrwxrwx. 1 root root 7 Jan 24 01:00 lib -> usr/lib
lrwxrwxrwx. 1 root root 9 Jan 24 01:00 lib64 -> usr/lib64
drwxr-xr-x. 1 root root 0 Jan 24 01:00 media
drwxr-xr-x. 1 root root 0 Jan 24 01:00 mnt
drwxr-xr-x. 1 root root 0 Jan 24 01:00 opt
drwxrwxr-x. 1 root root 0 Apr 15 00:53 proc
dr-xr-x---. 1 root root 98 Apr 15 00:54 root
drwxr-xr-x. 1 root root 38 Apr 15 00:56 run
lrwxrwxrwx. 1 root root 8 Jan 24 01:00 sbin -> usr/sbin
drwxr-xr-x. 1 root root 0 Jan 24 01:00 srv
drwxrwxr-x. 1 root root 0 Apr 15 00:53 sys
drwxrwxrwt. 1 root root 0 Apr 15 00:56 tmp
drwxr-xr-x. 1 root root 100 Apr 15 00:54 usr
drwxrwxr-x. 1 root root 0 Apr 15 00:56 var
# btrfs subvolume list -a .
ID 256 gen 36 top level 5 path root
ID 257 gen 33 top level 5 path home
ID 258 gen 34 top level 5 path var
Ugh!
That doesn't look normal - at least not at first glance.
It seems what one expects to be in '/' is here
in a btrfs subvolume that is called 'root'.
I thought the btrfs subvolume that is called 'root'
contains what '/root/' contains - i.e. the home directory
of the user 'root' - I do hate such meaningless, vague,
unclear, ambiguous words where one has to reverse-engineer
the actual meaning, e.g. 'root' in this case :-(
pcahyna commented at 2024-06-12 15:01:¶
It seems what one expects to be in '/' is here
in a btrfs subvolume that is called 'root'.
that's indeed how I understand it. Does SLES use @ instead?
jsmeix commented at 2024-06-12 15:19:¶
@pcahyna
let me investigate a bit more
how that thingy is set up
and let me try out how "rear mkbackup" and
"rear recover" behave with it,
i.e. please be patient until tomorrow
(or perhaps even later - as time permits).
For the log what I did up to now:
In my manually directly mounted
Fedora-Cloud-Base-Generic.x86_64-40-1.14.qcow2
I changed root/etc/passwd so that 'root' has no password:
# head root/etc/passwd
root::0:0:Super User:/root:/bin/bash
...
# cd
# umount /dev/nbd0p4
# qemu-nbd --disconnect /dev/nbd0
/dev/nbd0 disconnected
# rmmod nbd
and unmounted it and disabled nbd again.
Now I can log in as root when I run that
Fedora-Cloud-Base-Generic.x86_64-40-1.14.qcow2
in a KVM/QEMU virtual machine.
To be able to log in via 'ssh' I had to add
a normal user 'tux' with a (non-empty) password;
then I can log in as 'tux' via 'ssh' and finally
do 'su -' to become 'root' (via 'ssh').
I got:
# lsblk -ipo NAME,KNAME,PKNAME,TRAN,TYPE,FSTYPE,LABEL,PARTLABEL,SIZE,MOUNTPOINTS
NAME KNAME PKNAME TRAN TYPE FSTYPE LABEL PARTLABEL SIZE MOUNTPOINTS
/dev/zram0 /dev/zram0 disk 1.9G [SWAP]
/dev/vda /dev/vda virtio disk 5G
|-/dev/vda1 /dev/vda1 /dev/vda virtio part p.legacy 2M
|-/dev/vda2 /dev/vda2 /dev/vda virtio part vfat EFI p.UEFI 100M /boot/efi
|-/dev/vda3 /dev/vda3 /dev/vda virtio part ext4 BOOT p.lxboot 1000M /boot
`-/dev/vda4 /dev/vda4 /dev/vda virtio part btrfs fedora p.lxroot 3.9G /var
/home
/
# findmnt -at btrfs
TARGET SOURCE FSTYPE OPTIONS
/ /dev/vda4[/root] btrfs rw,relatime,seclabel,compress=zstd:1,discard=async,space_cache=v2,subvolid=256,subvol=/root
|-/home /dev/vda4[/home] btrfs rw,relatime,seclabel,compress=zstd:1,discard=async,space_cache=v2,subvolid=257,subvol=/home
`-/var /dev/vda4[/var] btrfs rw,relatime,seclabel,compress=zstd:1,discard=async,space_cache=v2,subvolid=258,subvol=/var
# btrfs subvolume list -a /
ID 256 gen 50 top level 5 path <FS_TREE>/root
ID 257 gen 46 top level 5 path <FS_TREE>/home
ID 258 gen 49 top level 5 path <FS_TREE>/var
# cat /etc/fstab
UUID=8424dfd0-f878-4302-8ee5-b2ec6e5eb868 / btrfs compress=zstd:1,defaults,subvol=root 0 1
UUID=0c1e380a-7f1b-4e86-8fa0-629d10202a44 /boot ext4 defaults 0 0
UUID=8424dfd0-f878-4302-8ee5-b2ec6e5eb868 /home btrfs compress=zstd:1,subvol=home 0 0
UUID=8424dfd0-f878-4302-8ee5-b2ec6e5eb868 /var btrfs compress=zstd:1,subvol=var 0 0
UUID=F011-3319 /boot/efi vfat defaults,umask=0077,shortname=winnt 0 0
# cat /proc/cmdline
BOOT_IMAGE=(hd0,gpt3)/vmlinuz-6.8.5-301.fc40.x86_64 no_timer_check net.ifnames=0 console=tty1 console=ttyS0,115200n8 root=UUID=8424dfd0-f878-4302-8ee5-b2ec6e5eb868 rootflags=subvol=root
jsmeix commented at 2024-06-12 15:35:¶
SLES sets the btrfs default subvolume
to the one that should appear at '/'.
SLES uses the '/@/' directory
in the btrfs root subvolume (again a 'root'!)
to have the other btrfs subvolumes therein.
The btrfs root subvolume is the 'root' of the
whole btrfs filesystem.
The btrfs default subvolume is the btrfs subvolume
that is used when a btrfs filesystem is mounted
without a mount option that specifies a subvolume
i.e. the btrfs default subvolume appears at the mountpoint
when a btrfs filesystem is "just mounted".
This means that when one mounts a btrfs filesystem
whose btrfs default subvolume is not the btrfs root subvolume,
one does not get the whole btrfs filesystem visible at
the mountpoint, but only the limited view of what is
in and below the btrfs default subvolume.
But you can (as root) always mount a btrfs filesystem
at its btrfs root subvolume at an arbitrary mountpoint
like (here in that running Fedora 40 system)
# mkdir btrfsroot
# mount -t btrfs -o subvolid=0 /dev/vda4 btrfsroot
# ls -l btrfsroot
total 0
drwxr-xr-x. 1 root root 6 Jun 12 15:11 home
drwxrwxr-x. 1 root root 212 Apr 14 22:56 root
drwxr-xr-x. 1 root root 170 Jun 12 14:33 var
# findmnt -at btrfs
TARGET SOURCE FSTYPE OPTIONS
/ /dev/vda4[/root] btrfs rw,relatime,seclabel,compress=zstd:1,discard=async,space_cache=v2,subvolid=256,subvol=/root
|-/home /dev/vda4[/home] btrfs rw,relatime,seclabel,compress=zstd:1,discard=async,space_cache=v2,subvolid=257,subvol=/home
|-/var /dev/vda4[/var] btrfs rw,relatime,seclabel,compress=zstd:1,discard=async,space_cache=v2,subvolid=258,subvol=/var
`-/root/btrfsroot /dev/vda4 btrfs rw,relatime,seclabel,compress=zstd:1,discard=async,space_cache=v2,subvolid=5,subvol=/
so under the 'btrfsroot' mountpoint
now the whole btrfs filesystem appears
in its "natural ordering" of directories,
same as I got above when I directly mounted
Fedora-Cloud-Base-Generic.x86_64-40-1.14.qcow2
in my
https://github.com/rear/rear/pull/3175#issuecomment-2163264761
jsmeix commented at 2024-06-12 15:39:¶
@pcahyna
if you feel a bit confused now,
don't worry - that's normal - and it won't go away ;-)
But now it's time for me to go away :-)
Have a nice evening!
jsmeix commented at 2024-06-13 07:26:¶
I separated the issue
"how to setup ReaR for a Fedora 40 Cloud Base Image"
into
https://github.com/rear/rear/issues/3247
pcahyna commented at 2024-06-13 11:45:¶
Some observations about the CI errors in Fedora 40 and Rawhide, collected by runs in my personal repo https://github.com/pcahyna/rear/pull/16/checks?check_run_id=26139657787
- the backup tarball looks good and seems to contain everything needed
- layout gets restored and backup gets recreated without error
- btrfs subvolumes look properly mounted after layout restoration:
/dev/nvme0n1p4 on /mnt/local/home type btrfs (rw,relatime,compress=zstd:1,ssd,space_cache=v2,subvolid=257,subvol=/home)
/dev/nvme0n1p4 on /mnt/local/var type btrfs (rw,relatime,compress=zstd:1,ssd,space_cache=v2,subvolid=258,subvol=/var)
/dev/nvme0n1p3 on /mnt/local/boot type ext4 (rw,relatime)
/dev/nvme0n1p2 on /mnt/local/boot/efi type vfat (rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=ascii,shortname=mixed,errors=remount-ro)
- the recovered system reboots, but TMT then tries to rsync something to it and complains that it cannot find the target directory:
Command 'rsync -s -p --chmod=755 -e 'ssh -oForwardX11=no -oStrictHostKeyChecking=no -oUserKnownHostsFile=/dev/null -oConnectionAttempts=5 -oConnectTimeout=60 -oServerAliveInterval=5 -oServerAliveCountMax=60 -oIdentitiesOnly=yes -p22 -i /etc/citool.d/id_rsa_artemis -oPasswordAuthentication=no -S/tmp/tmpdsgkztm6' /var/ARTIFACTS/work-backup-and-restoresmh43m1c/tests/plans/backup-and-restore/discover/default-0/tests/tests/make-backup-and-restore-iso/tmt-test-wrapper.sh-default-0-default-0 root@18.191.153.93:/var/ARTIFACTS/work-backup-and-restoresmh43m1c/tests/plans/backup-and-restore/discover/default-0/tests/tests/make-backup-and-restore-iso/tmt-test-wrapper.sh-default-0-default-0' returned 3.
stderr (3 lines)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Warning: Permanently added '18.191.153.93' (ED25519) to the list of known hosts.
rsync: [Receiver] change_dir#3 "/var/ARTIFACTS/work-backup-and-restoresmh43m1c/tests/plans/backup-and-restore/discover/default-0/tests/tests/make-backup-and-restore-iso" failed: No such file or directory (2)
rsync error: errors selecting input/output files, dirs (code 3) at main.c(829) [Receiver=3.3.0]
- the directory
/var/ARTIFACTS/work-backup-and-restoresmh43m1c/tests/plans/backup-and-restore/discover/default-0/tests/tests/make-backup-and-restore-iso/tmt-test-wrapper.sh-default-0-default-0
exists in the system at the end of recovery though:
ls -R /mnt/local//var/ARTIFACTS/work-backup-and-restoresmh43m1c/tests/plans/backup-and-restore/discover
...
/mnt/local//var/ARTIFACTS/work-backup-and-restoresmh43m1c/tests/plans/backup-and-restore/discover/default-0/tests/tests/make-backup-and-restore-iso:
Makefile main.fmf tmt-test-wrapper.sh-default-0-default-0
PURPOSE runtest.sh
I am afraid that further attempts at debugging are a waste of time until Testing Farm can collect console logs from the last reboot (this is missing in the log currently) or let us login to the problematic test VM after the test errors out to examine what's wrong there. I am thus going to turn the Fedora 40 and Rawhide tests off in order to avoid permanently broken test results.
pcahyna commented at 2024-06-13 12:15:¶
@jsmeix
of course (as "SLES-like" indicates)
the "SLES-like btrfs subvolumes setup" method
may not work on non-SLES systems.
My problem is that BTRFS_SUBVOLUME_SLES_SETUP
and
BTRFS_SUBVOLUME_GENERIC_SETUP
are not documented in default.conf
and
thus I do not know what they are supposed to do, so I do not know how to
determine which one is more suitable to our case (other than blind
experimenting or reading the source code).
jsmeix commented at 2024-06-13 13:07:¶
BTRFS_SUBVOLUME_SLES_SETUP and BTRFS_SUBVOLUME_GENERIC_SETUP
are deliberately not (yet) documented in default.conf
see
https://github.com/rear/rear/commit/b144e9082511442b6f2426c9006e66d6c611edf9
Currently it is not documented because it is
work in progress where arbitrary further changes will happen
so one has to inspect the current code
and its comments to see how things currently work.
As I wrote above in
https://github.com/rear/rear/pull/3175#issuecomment-2154181600
After https://github.com/rear/rear/pull/2079
and https://github.com/rear/rear/pull/2080
I never tested BTRFS_SUBVOLUME_GENERIC_SETUP again
so that "work in progress" state
did not change since then - in particular
regarding BTRFS_SUBVOLUME_GENERIC_SETUP.
pcahyna commented at 2024-06-13 16:33:¶
I tried BTRFS_SUBVOLUME_GENERIC_SETUP
and the result is the same as
before - Fedora 39 test passes (the recovered layout appears to be
identical to the original layout) and Fedora 40 fails.
pcahyna commented at 2024-06-13 16:47:¶
There is at least one difference resulting from it.
Original mount output:
/dev/nvme0n1p5 on / type btrfs (rw,relatime,seclabel,compress=zstd:1,ssd,space_cache=v2,subvolid=256,subvol=/root)
/dev/nvme0n1p5 on /home type btrfs (rw,relatime,seclabel,compress=zstd:1,ssd,space_cache=v2,subvolid=257,subvol=/home)
mount output after recovery:
/dev/nvme0n1p5 on / type btrfs (rw,relatime,seclabel,compress=zstd:1,ssd,space_cache=v2,subvolid=257,subvol=/root)
/dev/nvme0n1p5 on /home type btrfs (rw,relatime,seclabel,compress=zstd:1,ssd,space_cache=v2,subvolid=256,subvol=/home)
Note the different subvolid. Not sure if this is significant or if one can expect this to be preserved.
jsmeix commented at 2024-06-14 05:44:¶
@pcahyna
as far as I remember, btrfs subvolume numbers (i.e. 'subvolid')
are meaningless in the same way as disk device node letters
(e.g. sda vs. sdb for two disks) - i.e. they are basically
unpredictable enumeration IDs.
So when btrfs subvolumes are referenced by 'subvolid'
one has to check before that what is referenced by 'subvolid'
is the actually intended btrfs subvolume.
But right now I could not quickly find
btrfs documentation that clearly describes that.
I think
(in contrast to disk device node letters which may
change from system boot to system boot - but are stable
as long as the system is running and as long as a disk
is connected - cf. removable disks e.g. USB disks)
btrfs subvolume 'subvolid' numbers are stable
as long as a btrfs subvolume exists.
This may lead users to the false assumption that they
can use a btrfs subvolume 'subvolid' number as a stable
identifier to reference a btrfs subvolume.
So when after "rear recover" btrfs subvolumes got
recreated with different 'subvolid' numbers
compared to what it was on the original system
this is technically correct behaviour of "rear recover"
but nevertheless some users may get hit by the changed
btrfs subvolume 'subvolid' numbers.
I think also with BTRFS_SUBVOLUME_SLES_SETUP
btrfs subvolumes get recreated with possibly
different 'subvolid' numbers.
I think I experienced that years ago during my tests
while I implemented the BTRFS_SUBVOLUME_SLES_SETUP method.
I never got a SLES user problem report because of this.
I assume system setup stuff deals correctly with the
unpredictable 'subvolid' numbers.
I think only some user's selfmade setup stuff may
falsely rely on stable 'subvolid' numbers.
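Setup scripts can sidestep this pitfall by keying on the stable 'subvol=' path in the mount options instead of the 'subvolid=' number. A minimal sketch (not from ReaR's code; the options string is copied from the findmnt output above):

```shell
# Extract the stable subvolume path from a mount options string,
# instead of relying on the unpredictable subvolid number.
opts='rw,relatime,seclabel,compress=zstd:1,discard=async,space_cache=v2,subvolid=256,subvol=/root'
subvol=$(printf '%s\n' "$opts" | tr ',' '\n' | sed -n 's/^subvol=//p')
printf '%s\n' "$subvol"   # prints: /root
```

The 'subvol=' path survives a recreate-and-restore cycle, while 'subvolid=' in general does not.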
gdha commented at 2024-06-14 12:35:¶
@pcahyna @jsmeix @lzaoral According to the discussions, we had better move the milestone to "3.1", no?
lzaoral commented at 2024-06-14 12:53:¶
@gdha If you think it is fine that ReaR nukes the /home directory on default Fedora disk setups without any warning, then feel free to postpone this PR to 3.1. Also, this PR is necessary (but not sufficient) to fix the CI failures on Fedora 40 and 41.
jsmeix commented at 2024-06-17 07:01:¶
@lzaoral
have there been behavioural changes since my
https://github.com/rear/rear/pull/3175#issuecomment-2151951631
i.e. changes that require that I test its current state again?
If not,
@gdha @lzaoral @pcahyna
I think it can and should be merged in its current state.
It is a major improvement that should not be omitted
from ReaR 3.0 - and because of the major version change
ReaR 3.0 behaviour can be rather backward incompatible
provided backward incompatible changes are sufficiently
well documented - I will care about that (as usual), cf.
https://github.com/rear/rear/issues/3238#issuecomment-2172487767
jsmeix commented at 2024-06-17 07:09:¶
@pcahyna @rear/contributors
I would like to merge it this week on Thursday afternoon
(or sooner if @pcahyna agrees) unless there are objections.
jsmeix commented at 2024-06-17 07:39:¶
@lzaoral
what confuses me are your commits here after my
https://github.com/rear/rear/pull/3175#issuecomment-2151951631
that are shown here (in the GitHub web UI) directly before
https://github.com/rear/rear/pull/3175#issuecomment-2160961648
When I click on "Compare" there, which has the URL
https://github.com/rear/rear/compare/3fe88cc47041a85e0b31ce9eca7cac57a333c6e3..9668297d3980a50514353fa4c599736bfeb50f65
what that shows me looks unrelated to this pull request.
So it seems your commits here after my
https://github.com/rear/rear/pull/3175#issuecomment-2151951631
did not change something that actually belongs
to this pull request but are only other (unrelated)
changes in ReaR because you
"force-pushed your backup-mounted-btrfs-subvolumes branch".
But I cannot be sure if those commits are really only
other (unrelated) changes in ReaR or if perhaps
something that actually belongs to this pull request
is somewhere intermixed?
I don't know how I could clearly see at a glance
whether or not something actually changed here since my
https://github.com/rear/rear/pull/3175#issuecomment-2151951631
Is this somehow possible via the GitHub web UI?
lzaoral commented at 2024-06-17 10:40:¶
@jsmeix The PR itself is unchanged from the last time you reviewed
it. The diff you linked shows changes that were added during a rebase against
main
because @pcahyna wanted to see if #3239 and this PR are enough
to fix the Testing Farm CI.
I'm not sure if it is possible in the GH UI to separate new changes added to a PR from a simultaneous rebase.
Please, let's wait for @pcahyna's review before merging.
pcahyna commented at 2024-06-17 13:20:¶
@jsmeix you are basically asking whether the old version of the PR (before rebase and force-push, 3fe88cc) introduces the same changes as the new version (after rebase and force-push, 9668297). This can be achieved by comparing the diff that the old version introduces with the diff that the new version introduces (I know, comparing diffs, which are themselves comparisons, is a bit clumsy). I am afraid that the easiest way is quite manual: look at the first diff ( https://github.com/rear/rear/compare/master...3fe88cc47041a85e0b31ce9eca7cac57a333c6e3 ) and the second diff ( https://github.com/rear/rear/compare/master...9668297d3980a50514353fa4c599736bfeb50f65 ) side-by-side and you will see that they are the same. Note the three dots in the URL, as opposed to the two dots in the compare URL that you show, https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/about-comparing-branches-in-pull-requests#three-dot-and-two-dot-git-diff-comparisons .
I am sorry that I can't offer anything easier; I know that this is a drawback of updating PRs by rebasing them (OTOH, the history looks cleaner afterwards).
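The two-dot vs. three-dot distinction can be illustrated with a throwaway local repository (all names below are made up): 'git diff A...B' compares B against the merge base of A and B, while 'git diff A..B' compares the two tips directly.

```shell
# Throwaway repo demonstrating two-dot vs. three-dot in 'git diff'.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q -b main repo && cd repo
g() { git -c user.email=t@example.com -c user.name=t "$@"; }
echo base > f && git add f && g commit -qm base
git checkout -qb topic
echo topic > topic-file && git add topic-file && g commit -qm topic
git checkout -q main
echo main > main-file && git add main-file && g commit -qm main
git diff --name-only main...topic   # only the branch's own change: topic-file
git diff --name-only main..topic    # tip-to-tip: main-file and topic-file
```

This is why the three-dot form is the right one for "what does this PR branch introduce": changes made on master after the branch point do not show up in it.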
pcahyna commented at 2024-06-17 14:40:¶
@jsmeix I found a more automated way! You can add .diff
(or .patch if you also want the commit metadata)
to the GitHub comparison URL to download the comparison
in text form, and then diff the results:
diff -u <(wget -O - https://github.com/rear/rear/compare/master...3fe88cc47041a85e0b31ce9eca7cac57a333c6e3.patch) <(wget -O - https://github.com/rear/rear/compare/master...9668297d3980a50514353fa4c599736bfeb50f65.patch)
which shows that the only things that change are the commit SHA ids.
jsmeix commented at 2024-06-17 15:08:¶
@pcahyna
WOW! THANK YOU!
I optimized your command a bit
by using diff -U0 and wget -q
and .diff instead of .patch:
# old=3fe88cc47041a85e0b31ce9eca7cac57a333c6e3 \
new=9668297d3980a50514353fa4c599736bfeb50f65 ; \
diff -U0 \
<(wget -q -O - https://github.com/rear/rear/compare/master...$old.diff) \
<(wget -q -O - https://github.com/rear/rear/compare/master...$new.diff)
[no output]
Perfect!
FYI:
I experimented with "w3m"
(I didn't know that I have to add .diff or .patch
to make things work in "w3m"),
but without .diff or .patch it does not work because
# w3m https://github.com/rear/rear/compare/master...3fe88cc47041a85e0b31ce9eca7cac57a333c6e3 | less
...
This comparison is taking too long to generate.
Unfortunately it looks like we can’t render this comparison for you right now.
It might be too big, or there might be something weird with your repository.
You can try running this command locally to see the comparison on your machine:
git diff master...3fe88cc47041a85e0b31ce9eca7cac57a333c6e3
...
So I tried with a local clone
# git clone https://github.com/lzaoral/rear.git
# mv rear rear.lzaoral
# cd rear.lzaoral
# git checkout backup-mounted-btrfs-subvolumes
but that does not contain the git commit
3fe88cc47041a85e0b31ce9eca7cac57a333c6e3
It only contains the git commit
9668297d3980a50514353fa4c599736bfeb50f65
Also git clone https://github.com/rear/rear.git
does not contain the git commit
3fe88cc47041a85e0b31ce9eca7cac57a333c6e3
so currently I don't know where the git commit
3fe88cc47041a85e0b31ce9eca7cac57a333c6e3
could be found, so I gave up.
pcahyna commented at 2024-06-17 15:43:¶
@jsmeix that's an interesting problem. I did not know how to solve it,
so a quick web search (my search terms were
"git clone unreachable commits")
revealed this:
https://stackoverflow.com/questions/25416003/clone-a-git-repository-and-keep-unreachable-commits#comment53417187_25416117
Adapted to our case (I already have a git remote called 'lzaoral'
in my repo):
$ git fetch lzaoral 3fe88cc47041a85e0b31ce9eca7cac57a333c6e3:refs/remotes/lzaoral/orphaned-backup-mounted-btrfs-subvolumes
From github.com:lzaoral/rear
* [new ref] 3fe88cc47041a85e0b31ce9eca7cac57a333c6e3 -> lzaoral/orphaned-backup-mounted-btrfs-subvolumes
$ git show lzaoral/orphaned-backup-mounted-btrfs-subvolumes
commit 3fe88cc47041a85e0b31ce9eca7cac57a333c6e3 (lzaoral/orphaned-backup-mounted-btrfs-subvolumes, lzaoral/backup-mounted-btrfs-subvolumes)
Author: Lukáš Zaoral <lzaoral@redhat.com>
Date: Thu Mar 7 10:59:24 2024 +0100
...
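The same approach can be reproduced end to end with throwaway local repositories (all names below are made up). The serving repository needs 'uploadpack.allowAnySHA1InWant' enabled for fetching an arbitrary SHA; GitHub apparently permits this, given that the fetch above succeeded.

```shell
# Throwaway repos demonstrating a fetch of an unreachable commit by SHA.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q remote
g() { git -C remote -c user.email=t@example.com -c user.name=t "$@"; }
g commit -q --allow-empty -m orphaned
sha=$(git -C remote rev-parse HEAD)
g commit -q --allow-empty --amend -m current   # "orphaned" becomes unreachable
git -C remote config uploadpack.allowAnySHA1InWant true
git clone -q --no-local remote local           # the orphan is not cloned
git -C local fetch -q origin "$sha:refs/remotes/origin/orphaned"
git -C local show -s --format=%s origin/orphaned   # prints: orphaned
```

Note the '--no-local' on the clone: a plain local-path clone copies the whole objects directory, unreachable commits included, which would hide the effect being demonstrated.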
jsmeix commented at 2024-06-20 08:33:¶
@pcahyna
I would like to merge it today afternoon, cf.
https://github.com/rear/rear/pull/3175#issuecomment-2172463047
unless there are objections.
pcahyna commented at 2024-06-25 09:45:¶
@jsmeix sorry for the delay, let me quickly check the code.
schlomo commented at 2024-07-12 08:43:¶
2 approvals, including one from @jsmeix who knows more about btrfs than me; I don't see a reason not to merge.
lzaoral commented at 2024-07-12 17:00:¶
@schlomo This was still pending a review from @pcahyna who is the ReaR SME for Fedora and RHEL and we both co-maintain ReaR in these distributions. Please, do not merge my PRs unless they are trivial or have been also reviewed by @pcahyna. Thank you!
schlomo commented at 2024-07-12 17:02:¶
Ah, OK. no problem. Next time, please kindly mark them as WIP or draft so that we see that they are not ready for merging.
PRs that have approvals and that don't look like they need more work to be done should be allowed to be merged any time, IMHO.
schlomo commented at 2024-07-12 17:03:¶
@lzaoral about this PR specifically: Do you want me to un-merge it? Or can you and @pcahyna finish the review and fix any potential issues in a new PR or directly on master (for obvious things I prefer this)?
[Export of Github issue for rear/rear.]