#1994 Issue closed: Recovery from GRUB menu with Borg failed due to full disk in certain VM environment

Labels: support / question, fixed / solved / done

Signum opened issue at 2018-12-04 14:38:

  • ReaR version ("/usr/sbin/rear -V"): Relax-and-Recover 2.3 / Git

  • OS version ("cat /etc/rear/os.conf" or "lsb_release -a" or "cat /etc/os-release"): Debian 9

  • ReaR configuration files ("cat /etc/rear/site.conf" and/or "cat /etc/rear/local.conf"):

    OUTPUT=ISO
    OUTPUT_URL="nfs://nas/share/mnt/rear"
    GRUB_RESCUE=y
    SSH_UNPROTECTED_PRIVATE_KEYS=yes

    BACKUP=BORG
    BORGBACKUP_HOST="nas"
    BORGBACKUP_USERNAME="rear"
    BORGBACKUP_REPO="/share/rear/ansible-test"

    export BORG_PASSPHRASE="foobar"
    COPY_AS_IS_BORG=( "/root/.ssh/id_rsa" "/root/rear" )

    export BORG_RELOCATED_REPO_ACCESS_IS_OK=yes
    export BORG_UNKNOWN_UNENCRYPTED_REPO_ACCESS_IS_OK=yes

    export PRE_BACKUP_SCRIPT=/root/rear/pre_backup
    export POST_BACKUP_SCRIPT=/root/rear/post_backup
    export POST_RECOVERY_SCRIPT=/root/rear/post_recovery

    BACKUP_PROG_INCLUDE=( "${BACKUP_PROG_INCLUDE[@]}" '/dbsnap/*' )

    USE_SERIAL_CONSOLE=y

    BORGBACKUP_PRUNE_WEEKLY=2
    export BACKUP_PROG_EXCLUDE=( "/var/lib/postgresql" )

  • Hardware (PC or PowerNV BareMetal or ARM) or virtual machine (KVM guest or PowerVM LPAR):

VirtualBox VM with Debian 9, started by Vagrant and deployed by Ansible.
Used for staging the ReaR configuration.

  • System architecture (x86 compatible or PPC64/PPC64LE or what exact ARM device):

AMD64

  • Firmware (BIOS or UEFI or Open Firmware) and bootloader (GRUB or ELILO or Petitboot):

VirtualBox BIOS
GRUB

  • Storage (local disk or SSD) and/or SAN (FC or iSCSI or FCoE) and/or multipath (DM or NVMe):

Emulated disk via VirtualBox.

  • Description of the issue (ideally so that others can reproduce it):

I deployed a fresh Debian 9 system from a Vagrant box. I installed the standalone "borg" binary
and installed ReaR.

When booting the VM I get "Relax-and-Recover" as the third item in the GRUB boot menu. I can choose that and boot the system. Recovery starts and partitions the disks correctly. Then "borg" is called and quits immediately with messages like "Failed to write all bytes for _bisect.so". Apparently that is caused by the root disk being 100% full, so borg fails to run.

However, when I create a VirtualBox VM manually, there is space left on "/" and recovery from the GRUB menu entry works as expected.

Booting from the ISO image works every time, too.

  • Workaround, if any:

Boot from ISO.

jsmeix commented at 2018-12-04 14:59:

@gozora
I assign it to you because it mentions booting and Borg.

@Signum
I know nothing at all about
"VirtualBox VM with Debian 9 started by Vagrant and deployed by Ansible".
Could you determine the available disks in that VirtualBox VM
or explain what "the root disk being full" actually means?

Also, "there is space left on '/'" is not really meaningful to me,
because within the running ReaR recovery system what is
mounted at '/' is only the ramdisk of the ReaR recovery system.
The actual system disk, i.e. the target system disk (e.g. /dev/sda),
is not mounted at '/' within the running ReaR recovery system.
The target system disk (e.g. /dev/sda) gets partitioned,
filesystems get created, and those filesystems get mounted
at /mnt/local within the ReaR recovery system while "rear recover"
runs according to the data in disklayout.conf.

See also the section
"Attachments, as applicable ('rear -D mkrescue/mkbackup/recover' debug log files)" in
https://github.com/rear/rear/blob/master/.github/ISSUE_TEMPLATE.md
and "Debugging issues with Relax-and-Recover" in
https://en.opensuse.org/SDB:Disaster_Recovery

Signum commented at 2018-12-04 15:11:

@jsmeix
Sorry for the sparse information. It was not easy to get log files out of the broken VM at that time.
Would it be okay if I prepared a simple Git project with the necessary files to run Vagrant in order to reproduce this issue? Or is that asking too much? I would totally understand.

Just as an enhancement idea: should ReaR check whether the recovery binary (e.g. "borg") is available and can run before destroying the physical disks and repartitioning them?

Signum commented at 2018-12-04 15:13:

Typo, by the way… I meant to say that there is no space left on "/".

gozora commented at 2018-12-04 16:25:

Hello @Signum,

Honestly, I've never booted the ReaR rescue system from GRUB (apart from writing a couple of patches...); I simply don't like it ;-).
I'll try to reproduce your problem and we will see.

should REAR check if the recovery binary (e.g. "borg") is available ...

The borg binary should normally be available, since that is checked during rear mkbackup/mkrescue.

and can run before destroying the physical disks and repartitioning them?

We are talking about disaster recovery here; your system is already destroyed :-).

I'll check what can be done in terms of pre-checks.
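
For illustration, a sketch of what such pre-checks could look like (hypothetical code, not an existing ReaR script; has_binary and Error are ReaR's own helper functions):

# Hypothetical pre-check sketch, to run before any destructive step:
has_binary borg || Error "BACKUP=BORG requires a 'borg' binary in the rescue system"
# A standalone (PyInstaller) borg must also be able to unpack itself
# into a temporary directory, so test that it can actually execute:
borg --version >/dev/null 2>&1 || Error "'borg' is present but cannot run"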

V.

gozora commented at 2018-12-04 16:53:

I happen to have Fedora release 26 (Twenty Six) installed and configured with the ReaR Borg backend.
I let ReaR create the GRUB boot entry:

### BEGIN /etc/grub.d/45_rear ###
menuentry 'Relax-and-Recover' --class os {
          search --no-floppy --fs-uuid --set=root 44123a90-d84b-49d9-9fb4-70664dc08174
          echo 'Loading kernel /boot/rear-kernel ...'
          linux /rear-kernel  selinux=0
          echo 'Loading initrd /boot/rear-initrd.cgz (may take a while) ...'
          initrd /rear-initrd.cgz
}
### END /etc/grub.d/45_rear ###

I booted this entry and triggered "rear recover".
Everything works as it should.

After booting into the ReaR recovery system, I got the following filesystem layout:

RESCUE fedora:~ # df -h
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        374M     0  374M   0% /dev
tmpfs           493M     0  493M   0% /dev/shm
tmpfs           493M  6.9M  486M   2% /run
tmpfs           493M     0  493M   0% /sys/fs/cgroup

Then after starting rear recover:

RESCUE fedora:~ # rear recover
Relax-and-Recover 2.4 / Git
Running rear recover (PID 595)
Using log file: /var/log/rear/rear-fedora.log
Running workflow recover within the ReaR rescue/recovery system
Comparing disks
Device sda has expected (same) size 8589934592 (will be used for recovery)
Disk configuration looks identical
Proceed with recovery (yes) otherwise manual disk layout configuration is enforced
(default 'yes' timeout 30 seconds)
yes
User confirmed to proceed with recovery
Start system layout restoration.
Creating partitions for disk /dev/sda (msdos)
Creating LVM PV /dev/sda2
Restoring LVM VG 'fedora'
Sleeping 3 seconds to let udev or systemd-udevd create their devices...
Creating filesystem of type xfs with mount point / on /dev/mapper/fedora-root.
Mounting filesystem /
Creating filesystem of type ext4 with mount point /boot on /dev/sda1.
Mounting filesystem /boot
Creating swap on /dev/mapper/fedora-swap
Disk layout created.
Starting Borg restore

=== Borg archives list ===
Host:       backup
Repository: /mnt/rear/borg/fedora

[1] rear_6  Wed, 2018-05-09 15:25:19
[2] rear_8  Mon, 2018-09-24 18:33:06

[3] Exit

Choose archive to recover from
(timeout 300 seconds)
1
Recovering from Borg archive rear_6
Borg OS restore finished successfully
Recreating directories (with permissions) from /var/lib/rear/recovery/directories_permissions_owner_group
Running mkinitrd...
Updated initrd with new drivers for kernel 4.14.13-200.fc26.x86_64.
Running mkinitrd...
Updated initrd with new drivers for kernel 4.14.16-200.fc26.x86_64.
Running mkinitrd...
Updated initrd with new drivers for kernel 4.15.17-200.fc26.x86_64.
Skip installing GRUB Legacy boot loader because GRUB 2 is installed (grub-probe or grub2-probe exist).
Installing GRUB2 boot loader...
Determining where to install GRUB2 (no GRUB2_INSTALL_DEVICES specified)
Found possible boot disk /dev/sda - installing GRUB2 there
Finished recovering your system. You can explore it under '/mnt/local'.
Exiting rear recover (PID 595) and its descendant processes
Running exit tasks

RESCUE fedora:~ # df -h
Filesystem               Size  Used Avail Use% Mounted on
devtmpfs                 375M     0  375M   0% /dev
tmpfs                    493M     0  493M   0% /dev/shm
tmpfs                    493M  6.9M  486M   2% /run
tmpfs                    493M     0  493M   0% /sys/fs/cgroup
/dev/mapper/fedora-root  6.2G  2.1G  4.2G  34% /mnt/local
/dev/sda1                976M  178M  753M  20% /mnt/local/boot

Now I am starting to think that I did not fully understand what your problem actually is...

How can you have a full disk when running rear recover, when rear recover will newly partition your disk, so that it is empty?
Could it be that you are restoring to a smaller disk than your backup archive?
If so, how come

Then "borg" is called and quits immediately

It should take some time until your disk fills up...

V.

jsmeix commented at 2018-12-05 10:06:

@Signum
I use neither Vagrant nor Ansible.
I only use KVM/QEMU virtual machines that I set up with 'virt-manager'.

I think the crucial point for analyzing the root cause here is not
with what tool or stack of tools the virtual machine was set up.

The crucial point is to inspect your environment (in particular
your disks with "parted -l", your block devices with "lsblk", and
what is mounted where with "findmnt") in your newly booted virtual
machine before you run 'rear recover', to find out what the
difference is between using the ReaR ISO image versus
your boot method via GRUB.
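
Spelled out, those inspection commands would be run in the booted recovery system before 'rear recover':

parted -l    # partition tables of all recognized disks
lsblk        # block device tree
findmnt      # what is mounted where
df -h        # free space, in particular on the recovery system ramdisk '/'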

jsmeix commented at 2018-12-05 10:19:

@gozora
because of

I deployed a fresh Debian 9 system from a Vagrant box.
I installed the standalone "borg" binary and installed REAR.

I have the dim feeling that @Signum may be trying to run 'rear recover'
from inside the target system disk (e.g. from inside /dev/sda1)
instead of running 'rear recover' outside of the target system disk,
as is usual for installation and/or rescue systems that run
on a ramdisk within the target system's main memory.

Signum commented at 2018-12-05 10:22:

@gozora
What I did:

  • "rear mkbackup" on $SERVER with /etc/rear/local.conf having GRUB_RESCUE=yes
  • Reboot $SERVER
  • Choose "Relax-and-Recover" in the GRUB boot menu
  • Log in as "root" once ReaR has gone through the startup scripts
  • Find the "/" partition 100% full

jsmeix commented at 2018-12-05 10:26:

I had overlooked GRUB_RESCUE=y in the initial description.
Now I understand how the ReaR recovery system is booted via GRUB.

@Signum
what exactly is the "/" partition for you here?
In the running ReaR recovery system "/" is
a ramdisk that contains the ReaR recovery system.

gozora commented at 2018-12-05 10:33:

@Signum can you maybe try to add more RAM to your VM?
It might be that the ReaR recovery system (the one you booted from the GRUB menu) has slightly higher memory requirements than the ordinary initrd used for your regular boot...
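
For example, with VirtualBox the VM memory can be raised from the host while the VM is powered off (the VM name here is hypothetical; Vagrant users would set the equivalent in the Vagrantfile provider block):

VBoxManage modifyvm "debian9-rear-staging" --memory 1024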

V.

jsmeix commented at 2018-12-05 11:08:

I tried GRUB_RESCUE=y on my SLES12 system
(not with Borg but with BACKUP=NETFS using tar)
and 'rear recover' (via 'Relax-and-Recover' in GRUB)
did "just work" for me
(with 1 GiB main memory in my KVM/QEMU virtual machine).
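
For reference, a hypothetical minimal /etc/rear/local.conf for such a test (the NFS URL is a placeholder):

# Hypothetical minimal test configuration (placeholder NFS URL):
OUTPUT=ISO
OUTPUT_URL="nfs://server/path"
BACKUP=NETFS
BACKUP_URL="nfs://server/path"
GRUB_RESCUE=y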

Signum commented at 2018-12-05 13:54:

This is my disk usage after booting into the system:

> df -h
Filesystem      Size  Used Avail Use% Mounted on
rootfs          210M  194M   16M  93% /
devtmpfs        210M     0  210M   0% /dev
tmpfs           247M     0  247M   0% /dev/shm
tmpfs           247M  3.4M  243M   2% /run
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           247M     0  247M   0% /sys/fs/cgroup

After calling "borg list" it instantly becomes:

> df -h
Filesystem      Size  Used Avail Use% Mounted on
rootfs          210M  210M     0 100% /
devtmpfs        210M     0  210M   0% /dev
tmpfs           247M     0  247M   0% /dev/shm
tmpfs           247M  3.4M  243M   2% /run
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           247M     0  247M   0% /sys/fs/cgroup

…and borg fails with "Failed to write all bytes for libcrypto.so.1.0.0".

It looks like BorgBackup has created a directory /tmp/_MEIoKn9nB/ that contains:

total 15736
drwx------ 2 root root     920 Dec  4 22:12 .
drwxr-xr-x 3 root root      80 Dec  4 22:12 ..
-rwx------ 1 root root   36054 Dec  4 22:12 _bisect.so
-rwx------ 1 root root   57092 Dec  4 22:12 _bz2.so
-rwx------ 1 root root  181683 Dec  4 22:12 _codecs_cn.so
-rwx------ 1 root root  182206 Dec  4 22:12 _codecs_hk.so
-rwx------ 1 root root   73275 Dec  4 22:12 _codecs_iso2022.so
-rwx------ 1 root root  313190 Dec  4 22:12 _codecs_jp.so
-rwx------ 1 root root  166375 Dec  4 22:12 _codecs_kr.so
-rwx------ 1 root root  134008 Dec  4 22:12 _codecs_tw.so
-rwx------ 1 root root  611525 Dec  4 22:12 _ctypes.so
-rwx------ 1 root root  367413 Dec  4 22:12 _datetime.so
-rwx------ 1 root root 1949218 Dec  4 22:12 _decimal.so
-rwx------ 1 root root   78120 Dec  4 22:12 _hashlib.so
-rwx------ 1 root root   48695 Dec  4 22:12 _heapq.so
-rwx------ 1 root root  152383 Dec  4 22:12 _json.so
-rwx------ 1 root root   65569 Dec  4 22:12 _lsprof.so
-rwx------ 1 root root  110627 Dec  4 22:12 _lzma.so
-rwx------ 1 root root   48385 Dec  4 22:12 _md5.so
-rwx------ 1 root root  130437 Dec  4 22:12 _multibytecodec.so
-rwx------ 1 root root   52074 Dec  4 22:12 _multiprocessing.so
-rwx------ 1 root root   20814 Dec  4 22:12 _opcode.so
-rwx------ 1 root root  515672 Dec  4 22:12 _pickle.so
-rwx------ 1 root root   89207 Dec  4 22:12 _posixsubprocess.so
-rwx------ 1 root root   36731 Dec  4 22:12 _random.so
-rwx------ 1 root root   47306 Dec  4 22:12 _sha1.so
-rwx------ 1 root root   66784 Dec  4 22:12 _sha256.so
-rwx------ 1 root root   72941 Dec  4 22:12 _sha512.so
-rwx------ 1 root root  276203 Dec  4 22:12 _socket.so
-rwx------ 1 root root  148385 Dec  4 22:12 _struct.so
-rwx------ 1 root root  186992 Dec  4 22:12 array.so
-rwx------ 1 root root   81241 Dec  4 22:12 binascii.so
-rwx------ 1 root root  242736 Dec  4 22:12 borg.algorithms.checksums.so
-rwx------ 1 root root  192249 Dec  4 22:12 borg.chunker.so
-rwx------ 1 root root 4839421 Dec  4 22:12 borg.compress.so
-rwx------ 1 root root  563556 Dec  4 22:12 borg.crypto.low_level.so
-rwx------ 1 root root  649683 Dec  4 22:12 borg.hashindex.so
-rwx------ 1 root root  698128 Dec  4 22:12 borg.item.so
-rwx------ 1 root root  817363 Dec  4 22:12 borg.platform.linux.so
-rwx------ 1 root root  135495 Dec  4 22:12 borg.platform.posix.so
-rwx------ 1 root root   44526 Dec  4 22:12 fcntl.so
-rwx------ 1 root root   35988 Dec  4 22:12 grp.so
-rwx------ 1 root root   35320 Dec  4 22:12 libacl.so.1
-rwx------ 1 root root   18672 Dec  4 22:12 libattr.so.1
-rwx------ 1 root root   66824 Dec  4 22:12 libbz2.so.1.0
-rw-r--r-- 1 root root 1396736 Dec  4 22:12 libcrypto.so.1.0.0

…and that directory is almost exactly 16 MB in size.

So either 210 MB is not enough for the root partition, or it is not expected that Borg creates this many files in /tmp.

gozora commented at 2018-12-05 14:16:

@Signum that makes perfect sense.
The standalone borg binary carries all its files and libraries inside the executable and extracts them to a temporary location (usually /tmp) upon execution.

I guess you just need more RAM assigned to your VM and you are safe...
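
Alternatively, an untested sketch: the PyInstaller bootloader that performs this extraction (the /tmp/_MEI... directory is its signature) honors TMPDIR, so borg could be pointed at a tmpfs with more free space:

# Untested sketch: unpack the standalone borg into /run (a separate
# tmpfs in the recovery system) instead of the nearly full rootfs:
mkdir -p /run/borg-tmp
export TMPDIR=/run/borg-tmp
borg list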

V.

Signum commented at 2018-12-05 14:48:

@gozora
You are totally right. I increased the RAM from 512 MB to 1024 MB and now the RAM disk is large enough to start Borg. Oh my. Sorry for not tracking that down earlier. I rest my case. :)

This really seems to be a corner case. It was just unfortunate that ReaR destroys the existing partitions before Borg fails. So perhaps it could be checked whether "borg" is executable. But in the end that's really a minor issue.

Thanks everyone for lending me your brains.

jsmeix commented at 2018-12-05 14:59:

@Signum
thank you for your explanatory feedback on what the root cause was.
It helps us a lot to better understand even special use cases.

@gozora
now I am thinking about adding another test to the recovery system
startup scripts that checks for "sufficient" free ramdisk space in advance,
before things fail in arbitrarily weird ways later during "rear recover".

Currently I don't know how much "sufficient free ramdisk space" would be
to run "rear recover" successfully under usual circumstances.
I guess a bit more than 16 MiB would not be too bad in general ;-)

On my KVM/QEMU system with 1 GiB memory,
"df -h /" shows for the recovery system rootfs
a size of 449M with 228M used (51%) and 222M free
before I run "rear recover"; after "rear recover" finished,
238M are used (54%) and 211M are free.

@Signum
what does "df -h /" show for the recovery system rootfs
before and after "rear recover" in your case?

I would like to get an idea of how much free ramdisk space is usually needed.

gozora commented at 2018-12-05 15:31:

@jsmeix the question of memory can be quite tricky.
On my testing VM with Arch Linux I have a really minimalist initramfs and kernel:

arch-efi:(/root)(root)# ll /boot/initramfs-linux.img /boot/vmlinuz-linux 
-rwxr-xr-x 1 root root 8052959 Oct 29 19:17 /boot/initramfs-linux.img
-rwxr-xr-x 1 root root 5875568 Oct 21 00:06 /boot/vmlinuz-linux

This means that my Arch VM boots without any trouble with 256 MB of RAM.
However, when I run a ReaR backup of Arch, I get an initramfs of ~200 MB:

arch-efi:(/root)(root)# ll /mnt/iso/isolinux/{initrd.cgz,kernel}
-rw------- 1 root root 221129886 Dec  4 18:45 /mnt/iso/isolinux/initrd.cgz
-rwxr-xr-x 1 root root   5875568 Oct 21 00:06 /mnt/iso/isolinux/kernel

That one will fail to boot with 256 MB of RAM.

This is why I think it might be a bit hard to find a universal way to resolve this...
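
A rough way to estimate the memory the rescue initramfs needs: initrd.cgz is a gzip-compressed cpio archive, so its unpacked size is a lower bound for the ramdisk it will occupy (path taken from the listing above):

zcat /mnt/iso/isolinux/initrd.cgz | wc -c    # unpacked size in bytes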

V.

gozora commented at 2018-12-05 15:33:

I guess @Signum was kind of lucky that he was able to boot at all: if his initramfs had been ~16 MB larger (e.g. by using MODULES=( 'all_modules' )), his ReaR recovery system would not have booted at all.

V.

jsmeix commented at 2018-12-05 15:46:

@gozora
I do not plan to abort when there is only very little free space,
but I would like to always show how much free space there is
(basically the "df -h /" output) and print a nice WARNING
if there is only very little free space.
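
A minimal sketch of such a startup check (the 64 MiB threshold is made up here; note the df caveats discussed below):

# Hypothetical startup-script sketch; on some systems 'df /' reports
# zeros for the rootfs, so only warn when a real value is available:
free_kb=$( df -P / | awk 'NR==2 {print $4}' )
df -h /
if test "$free_kb" -gt 0 && test "$free_kb" -lt 65536 ; then
    echo "WARNING: less than 64 MiB free on '/' - 'rear recover' may fail in weird ways"
fi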

FYI:
I just did a recovery this way:

# export MIGRATION_MODE=false
# rear -D recover & for i in $( seq 120 ) ; do df -h / ; sleep 1 ; done

That shows me a "df -h /" output each second while "rear recover" is running,
to (hopefully) detect significant space usage by some tools
(e.g. temporary files that get deleted afterwards),
but I found none.
For me the usage increases steadily from 51% to 54% (which is 11 MiB).

gozora commented at 2018-12-05 15:55:

@jsmeix just a short heads-up on "df -h".
I am not sure why, but some of my test machines do not show the '/' utilization in the ReaR recovery system:

Arch

RESCUE arch-efi:~ # df -h /
Filesystem      Size  Used Avail Use% Mounted on
rootfs             0     0     0    - /
RESCUE arch-efi:~ # df -h
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        890M     0  890M   0% /dev
tmpfs           997M     0  997M   0% /dev/shm
tmpfs           997M  8.3M  989M   1% /run
tmpfs           997M     0  997M   0% /sys/fs/cgroup

Fedora

RESCUE fedora:~ # df -h /
Filesystem      Size  Used Avail Use% Mounted on
rootfs             0     0     0    - /
RESCUE fedora:~ # df 
Filesystem     1K-blocks  Used Available Use% Mounted on
devtmpfs          898160     0    898160   0% /dev
tmpfs            1020272     0   1020272   0% /dev/shm
tmpfs            1020272  8860   1011412   1% /run
tmpfs            1020272     0   1020272   0% /sys/fs/cgroup

So yet another complication I guess ...

V.

gozora commented at 2018-12-05 16:06:

Heh, it gets even funnier ...
CentOS

RESCUE centos69:~ # df -h
df: no file systems processed
RESCUE centos69:~ # mount
rootfs on / type rootfs (rw)
none on /proc type proc (rw,nosuid,nodev,noexec,relatime)
none on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
none on /dev/pts type devpts (rw,relatime,gid=5,mode=620,ptmxmode=000)

V.

jsmeix commented at 2018-12-06 08:24:

@gozora
thank you so much for sharing your experience!
You saved me from feeling like I was hallucinating, because I had also experienced
the same crazy df behaviour within the recovery system on my SLES12 KVM/QEMU
virtual machine, but somehow (on the same virtual machine with the same recovery
system, booted anew for my various attempts) it also "just worked" as shown in my
comments above, and I was wondering whether I was hallucinating that df had
"just failed" for me in some earlier attempts.

jsmeix commented at 2018-12-06 08:26:

Just a blind guess:
might those unexpected df failures within the recovery system
perhaps be avoided by using MODULES=( 'all_modules' )?
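
That guess would amount to this setting in /etc/rear/local.conf:

# Blind guess from above: include all kernel modules in the rescue system
MODULES=( 'all_modules' )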

gozora commented at 2018-12-06 08:45:

Hello @jsmeix,

This one looks better:

RESCUE arch:~ # df -hT
Filesystem     Type      Size  Used Avail Use% Mounted on
devtmpfs       devtmpfs  1.9G     0  1.9G   0% /dev
tmpfs          tmpfs     2.0G     0  2.0G   0% /dev/shm
tmpfs          tmpfs     2.0G  8.3M  2.0G   1% /run
tmpfs          tmpfs     2.0G     0  2.0G   0% /sys/fs/cgroup
RESCUE arch:~ # df -hTa
Filesystem     Type        Size  Used Avail Use% Mounted on
rootfs         rootfs         0     0     0    - /
sysfs          sysfs          0     0     0    - /sys
proc           proc           0     0     0    - /proc
devtmpfs       devtmpfs    1.9G     0  1.9G   0% /dev
securityfs     securityfs     0     0     0    - /sys/kernel/security
tmpfs          tmpfs       2.0G     0  2.0G   0% /dev/shm
devpts         -              -     -     -    - /dev/pts
tmpfs          tmpfs       2.0G  8.3M  2.0G   1% /run
tmpfs          tmpfs       2.0G     0  2.0G   0% /sys/fs/cgroup
cgroup2        cgroup2        0     0     0    - /sys/fs/cgroup/unified
cgroup         cgroup         0     0     0    - /sys/fs/cgroup/systemd
pstore         pstore         0     0     0    - /sys/fs/pstore
efivarfs       efivarfs       0     0     0    - /sys/firmware/efi/efivars
bpf            bpf            0     0     0    - /sys/fs/bpf
cgroup         cgroup         0     0     0    - /sys/fs/cgroup/blkio
cgroup         cgroup         0     0     0    - /sys/fs/cgroup/hugetlb
cgroup         cgroup         0     0     0    - /sys/fs/cgroup/memory
cgroup         cgroup         0     0     0    - /sys/fs/cgroup/pids
cgroup         cgroup         0     0     0    - /sys/fs/cgroup/perf_event
cgroup         cgroup         0     0     0    - /sys/fs/cgroup/net_cls,net_prio
cgroup         cgroup         0     0     0    - /sys/fs/cgroup/freezer
cgroup         cgroup         0     0     0    - /sys/fs/cgroup/devices
cgroup         cgroup         0     0     0    - /sys/fs/cgroup/cpu,cpuacct
cgroup         cgroup         0     0     0    - /sys/fs/cgroup/cpuset
cgroup         cgroup         0     0     0    - /sys/fs/cgroup/rdma
none           devpts         0     0     0    - /dev/pts

Excerpt from df --help:

Mandatory arguments to long options are mandatory for short options too.
  -a, --all             include pseudo, duplicate, inaccessible file systems

V.

