#3451 Issue closed: SAS Ultrium9 tape drive: dd: error writing '/dev/nst0': Invalid argument¶
Labels: support / question, fixed / solved / done, special hardware or VM
DebianGuru opened issue at 2025-04-08 13:03:¶
Requesting support or just a question¶
Is this a blocksize issue?
Platform¶
Debian 12,
rear installed from Debian repository.
Relax-and-Recover 2.7 / Git.
New Lenovo Server with Ultrium9 SAS drive.
See the info in
https://github.com/rear/rear/issues/3448
Output¶
I'm new to rear with tape storage.
I am trying to back up to my new tape drive and I am getting the error above.
I looked at dmesg and saw
st 1:0:0:0: [st0] Write not multiple of tape block size.
a few times.
This makes me think there is a "mismatch" between whatever block size rear is trying to use and what my tape drive wants.
I can't seem to find the answer, or even tell whether I'm on the right track.
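(For reference, one way to check which block size and block mode the drive is currently set to is the mt tool from the mt-st package; this is a generic sketch, not output from this system:
mt -f /dev/nst0 status
A reported tape block size of 0 means variable block mode; any other value means fixed-block mode, where every write must be a multiple of that size.)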
jsmeix commented at 2025-04-09 07:17:¶
@DebianGuru
in usr/share/rear/conf/default.conf there is
# Tape block size, default is to leave it up to the tape-device:
TAPE_BLOCKSIZE=
Perhaps setting this appropriately helps?
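A minimal sketch of what such a setting could look like in /etc/rear/local.conf; the 512 here is only an illustrative assumption and should be replaced by whatever the drive actually reports (e.g. via 'mt -f /dev/nst0 status'):
TAPE_DEVICE=/dev/nst0
TAPE_BLOCKSIZE=512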
DebianGuru commented at 2025-04-16 14:22:¶
If this helps: I saw in the logs that the command that was run for the backup was:
tar --warning=no-xdev --sparse --block-number --totals --verbose --no-wildcards-match-slash --one-file-system --ignore-failed-read --anchored --xattrs --xattrs-include=security.capability --xattrs-include=security.selinux --acls --gzip -X /var/tmp/rear.dOwUF559mjej0dx/tmp/backup-exclude.txt -C / -c -f - /home / /tmp /var /boot/efi /boot /var/log/rear/rear-wn-kvm01-pri.281105.log | dd of=/dev/nst0 bs=1M
So I pasted it directly into bash. It fails with "dd: error writing '/dev/nst0': Invalid argument".
I see from this command that tar is writing to stdout with "-f -" and then piping that to the "dd" command. So I did a bit of testing.
Skipping the pipe to dd and writing directly to tape like so:
tar --warning=no-xdev --sparse --block-number --totals --verbose --no-wildcards-match-slash --one-file-system --ignore-failed-read --anchored --xattrs --xattrs-include=security.capability --xattrs-include=security.selinux --acls --gzip -X /var/tmp/rear.dOwUF559mjej0dx/tmp/backup-exclude.txt -C / -c -f /dev/nst0 /home / /tmp /var /boot/efi /boot /var/log/rear/rear-wn-kvm01-pri.281105.log
runs just fine and throws no errors.
I also discovered that just removing the '--gzip' switch in tar allowed
me to run it like so:
tar --warning=no-xdev --sparse --block-number --totals --verbose --no-wildcards-match-slash --one-file-system --ignore-failed-read --anchored --xattrs --xattrs-include=security.capability --xattrs-include=security.selinux --acls -X /var/tmp/rear.dOwUF559mjej0dx/tmp/backup-exclude.txt -C / -c -f - /home / /tmp /var /boot/efi /boot /var/log/rear/rear-wn-kvm01-pri.281105.log | dd of=/dev/nst0 bs=1M
Again, this works with no errors.
So it appears that running tar with gzip and then piping the output through dd is causing an issue.
I then followed up by modifying the two previous examples: in the first example I replaced "-f /dev/nst0" with "-f /root/backup.tar.gz", and in the second I replaced "of=/dev/nst0" with "of=/root/backup2.tar.gz". Both ran fine and generated an appropriate .tar.gz file.
So it appears that the combination of --gzip and dd only fails with a tape drive. If you output to a hard drive, they work fine. Any guesses?
Is there a reason the default is to pipe the output of tar through dd rather than just writing directly to my tape?
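(One plausible explanation, not confirmed anywhere in this thread: on a tape device in fixed-block mode every write() must be a multiple of the configured block size. An uncompressed tar archive is padded to a multiple of tar's 512-byte block size, while a gzipped stream has an arbitrary length, so at least the final short write that dd makes is not block-aligned and is rejected with "Invalid argument"; a regular file has no such restriction. A hedged way to test this is to switch the drive to variable block mode and rerun the failing pipe:
mt -f /dev/nst0 setblk 0
where a block size of 0 means variable block mode, in which writes of any length are accepted.)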
jsmeix commented at 2025-04-16 15:29:¶
I think the script which makes the backup in your case is
usr/share/rear/backup/NETFS/default/500_make_backup.sh
See therein the comment:
# Check if the backup needs to be split or not (on multiple ISOs).
# Dummy split command when the backup is not split (the default case).
# Let 'dd' read and write up to 1M=1024*1024 bytes at a time to speed up things
# for example from only 500KiB/s (with the 'dd' default of 512 bytes)
# via a 100MBit network connection to about its full capacity
# cf. https://github.com/rear/rear/issues/2369
SPLIT_COMMAND="dd of=$backuparchive bs=1M"
Perhaps a tape drive does not like up to 1MiB at once?
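(A quick, hedged way to test that particular guess, destructive because it overwrites whatever is at the current tape position:
mt -f /dev/nst0 rewind
dd if=/dev/zero of=/dev/nst0 bs=1M count=1
mt -f /dev/nst0 rewind
If that write succeeds, the 1 MiB size itself is not the problem, which would point instead at the non-block-aligned final write of the compressed stream.)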
Meanwhile I think that in "man rear"
https://github.com/rear/rear/blob/master/doc/rear.8.adoc
the part
When using BACKUP=NETFS you must provide the backup target location
through the BACKUP_URL variable. Possible BACKUP_URL settings are:
...
BACKUP_URL=tape://
To backup to tape device, use BACKUP_URL=tape:///dev/nst0
or alternatively, simply define TAPE_DEVICE=/dev/nst0
may meanwhile have become outdated and no longer valid,
because I think ReaR with tape devices has not been tested
for a long time and likely has not been used for a long time
either - in particular not with recent ReaR versions.
Perhaps there are some users who still use old ReaR versions
which have worked with their tape devices for a long time?
gdha commented at 2025-04-19 05:56:¶
@DebianGuru As you mentioned that backup to tape works fine without
--gzip, you can make it work by defining the following
in your /etc/rear/local.conf:
BACKUP_PROG_COMPRESS_OPTIONS=( )
(see the details in /usr/share/rear/conf/default.conf)
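A minimal /etc/rear/local.conf sketch that combines this with the tape settings mentioned elsewhere in this thread; BACKUP_PROG_COMPRESS_SUFFIX="" is an assumption, added so the uncompressed archive does not keep a .gz suffix:
BACKUP=NETFS
BACKUP_URL=tape:///dev/nst0
# disable tar's --gzip so a plain tar stream is written to the tape
BACKUP_PROG_COMPRESS_OPTIONS=( )
BACKUP_PROG_COMPRESS_SUFFIX=""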
gdha commented at 2025-04-24 13:48:¶
@DebianGuru can you confirm the above statement, please?
gdha commented at 2025-04-29 14:03:¶
@DebianGuru could you show us your /etc/rear/local.conf
file?
DebianGuru commented at 2025-04-29 14:43:¶
@gdha ,
I kind of abandoned the effort out of frustration. As it turns out, my
need (to back up a bare-metal KVM hypervisor) was met by using ReaR
onto a flash drive. The big data (the VMs) is being sent to the tape
drive via tar. I really do appreciate your help and effort. And yes, at
one point, I did try using BACKUP_PROG_COMPRESS_OPTIONS=( ). It just
seemed like if I could get a backup to run, then the restore would
refuse to run. In the end, I'm using ReaR with a USB flash drive and the
backup and recovery work great. Thanks again for your help.
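For the record, the pattern that worked in the tests above, an uncompressed tar archive written directly to the tape device, would for the VM images look roughly like the following sketch, where the image directory is an illustrative assumption and not a path taken from this thread:
tar --totals -C /var/lib/libvirt/images -cf /dev/nst0 .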
jsmeix commented at 2025-04-30 07:22:¶
@DebianGuru
I think you did "the exact right thing" by separating
disaster recovery of the basic operating system
from restore of the big data, cf.
https://en.opensuse.org/SDB:Disaster_Recovery#Basics
(excerpt)
In this particular case "disaster recovery" means
to recreate the basic operating system
(i.e. what you had initially installed from an
... Linux ... install medium).
In particular special third party applications
(e.g. a third party database system which often requires
special actions to get it installed and set up)
must usually be recreated in an additional separate step.
In your case the basic operating system
is the KVM host system (the hypervisor)
and the "special third party applications"
are the VMs on that KVM host system.
In general I recommend not to have big data
in the backup which is meant for disaster recovery,
because then the disaster recovery process takes very long
(restoring the backup takes most of the time)
until you have your basic system back.
Furthermore I think a huge "all-in-one" backup
may have a higher likelihood of failing
(during backup and in particular during restore)
than several separated backups, e.g. in your case
one backup for the basic operating system
and as needed separated backups for the VMs, cf.
https://github.com/rear/rear/pull/3177#issuecomment-1985926458
Finally I think several separate backups
are easier to handle in practice because
you can back up separate things individually as needed,
e.g. in your case you can back up the basic operating system
independently of the backups of the VMs, cf.
https://en.opensuse.org/SDB:Disaster_Recovery#Relax-and-Recover_versus_backup_and_restore
(excerpts)
after each change of the basic system
(in particular after a change of the disk layout)
"rear mkbackup" needs to be run to create
a new ReaR recovery system together with
a matching new backup of the files
(or when third party backup software is used
"rear mkrescue" needs to be run to create
a new ReaR recovery system and additionally
a matching new backup of the files must be created).
jsmeix commented at 2025-05-23 07:52:¶
From what I wrote above in
https://github.com/rear/rear/issues/3451#issuecomment-2841051182
I derived a new part
ReaR versus "big data" backup
in
https://en.opensuse.org/SDB:Disaster_Recovery#Relax-and-Recover_versus_backup_and_restore
[Export of Github issue for rear/rear.]