#3141 Issue closed: /sbin/mount.nfs Binary not available during disaster recovery
Labels: support / question, fixed / solved / done
StefanFromMunich opened issue at 2024-01-29 08:05:
We have created the rescue ISO rear-oralin1.iso.
However, it is not usable for disaster recovery
because unfortunately rear-oralin1.iso does not contain
the required nfs-utils package to carry out the mount
of the NAS in order to get to the backup.tar.gz.
We have already tried to generate a new ISO containing the package
(added the line PACKAGES+=" nfs-utils" in /etc/rear/local.conf),
but this does not seem to have worked because
the /sbin/mount.nfs binary is still missing.
What else do we need to configure so
that the /sbin/mount.nfs binary is available
during disaster recovery?
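A quick way to check whether the binary actually made it into the rescue system, without running a full recovery, is to inspect the rescue initrd on the ISO. This is a generic sketch; the exact location of initrd.cgz inside the ISO is an assumption and may differ with ReaR version and boot setup:
mkdir -p /mnt/iso
mount -o loop,ro rear-oralin1.iso /mnt/iso
find /mnt/iso -name initrd.cgz                                 # locate the rescue initrd (a gzipped cpio archive)
zcat /mnt/iso/isolinux/initrd.cgz | cpio -t | grep mount.nfs   # path taken from the find output above
umount /mnt/iso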
StefanFromMunich commented at 2024-01-29 13:38:
I am now sending the complete configuration from the /etc/rear/local.conf file.
OUTPUT=ISO
OUTPUT_URL=file:///mnt/rescue_system/
BACKUP=NETFS
BACKUP_URL=file:///rear/
PACKAGES+=" nfs-utils"
Perhaps this helps to find the missing entry in this file.
StefanFromMunich commented at 2024-01-30 10:20:
I am now sending additional information that should help solve the problem:
ReaR version ("/usr/sbin/rear -V"):
Relax-and-Recover 2.6 / 2020-06-17
OS version ("cat /etc/os-release" or "lsb_release -a" or "cat /etc/rear/os.conf"):
NAME="Oracle Linux Server"
VERSION="8.5"
ID="ol"
ID_LIKE="fedora"
VARIANT="Server"
VARIANT_ID="server"
VERSION_ID="8.5"
PLATFORM_ID="platform:el8"
PRETTY_NAME="Oracle Linux Server 8.5"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:oracle:linux:8:5:server"
HOME_URL="https://linux.oracle.com/"
BUG_REPORT_URL="https://bugzilla.oracle.com/"
ORACLE_BUGZILLA_PRODUCT="Oracle Linux 8"
ORACLE_BUGZILLA_PRODUCT_VERSION=8.5
ORACLE_SUPPORT_PRODUCT="Oracle Linux"
ORACLE_SUPPORT_PRODUCT_VERSION=8.5
ReaR configuration files ("cat /etc/rear/site.conf" and/or "cat /etc/rear/local.conf"):
# This file etc/rear/local.conf is intended for the user's
# manual configuration of Relax-and-Recover (ReaR).
# For configuration through packages and other automated means
# we recommend a separated file named site.conf next to this file
# and leave local.conf as is (ReaR upstream will never ship a site.conf).
# The default OUTPUT=ISO creates the ReaR rescue medium as ISO image.
# You need to specify your particular backup and restore method for your data
# as the default BACKUP=REQUESTRESTORE does not really do that (see "man rear").
# Configuration variables are documented in /usr/share/rear/conf/default.conf
# and the examples in /usr/share/rear/conf/examples/ can be used as templates.
# ReaR reads the configuration files via the bash builtin command 'source'
# so bash syntax like VARIABLE="value" (no spaces at '=') is mandatory.
# Because 'source' executes the content as bash scripts you can run commands
# within your configuration files, in particular commands to set different
# configuration values depending on certain conditions as you need like
# CONDITION_COMMAND && VARIABLE="special_value" || VARIABLE="usual_value"
# but that means CONDITION_COMMAND gets always executed when 'rear' is run
# so ensure nothing can go wrong if you run commands in configuration files.
OUTPUT=ISO
OUTPUT_URL=file:///mnt/rescue_system/
BACKUP=NETFS
BACKUP_URL=file:///rear/
PACKAGES+=" nfs-utils"
Hardware vendor/product (PC or PowerNV BareMetal or ARM) or VM (KVM guest or PowerVM LPAR):
PC
System architecture (x86 compatible or PPC64/PPC64LE or what exact ARM device):
x86
Firmware (BIOS or UEFI or Open Firmware) and bootloader (GRUB or ELILO or Petitboot):
UEFI
Storage (local disk or SSD) and/or SAN (FC or iSCSI or FCoE) and/or multipath (DM or NVMe):
SSD
Storage layout ("lsblk -ipo NAME,KNAME,PKNAME,TRAN,TYPE,FSTYPE,LABEL,SIZE,MOUNTPOINT"):
NAME KNAME PKNAME TRAN TYPE FSTYPE LABEL SIZE MOUNTPOINT
/dev/sda /dev/sda sata disk isw_raid_member 447,1G
|-/dev/sda1 /dev/sda1 /dev/sda part vfat 600M
|-/dev/sda2 /dev/sda2 /dev/sda part xfs 1G
|-/dev/sda3 /dev/sda3 /dev/sda part LVM2_member 154G
|-/dev/sda4 /dev/sda4 /dev/sda part LVM2_member 269,2G
`-/dev/md126 /dev/md126 /dev/sda raid1 424,8G
|-/dev/md126p1 /dev/md126p1 /dev/md126 md vfat 600M /boot/efi
|-/dev/md126p2 /dev/md126p2 /dev/md126 md xfs 1G /boot
|-/dev/md126p3 /dev/md126p3 /dev/md126 md LVM2_member 154G
| |-/dev/mapper/ol-root /dev/dm-0 /dev/md126p3 lvm xfs 30G /
| |-/dev/mapper/ol-swap /dev/dm-1 /dev/md126p3 lvm swap 24G [SWAP]
| `-/dev/mapper/ol-u01 /dev/dm-11 /dev/md126p3 lvm xfs 100G /u01
`-/dev/md126p4 /dev/md126p4 /dev/md126 md LVM2_member 269,2G
/dev/sdb /dev/sdb sata disk isw_raid_member 447,1G
|-/dev/sdb1 /dev/sdb1 /dev/sdb part vfat 600M
|-/dev/sdb2 /dev/sdb2 /dev/sdb part xfs 1G
|-/dev/sdb3 /dev/sdb3 /dev/sdb part LVM2_member 154G
|-/dev/sdb4 /dev/sdb4 /dev/sdb part LVM2_member 269,2G
`-/dev/md126 /dev/md126 /dev/sdb raid1 424,8G
|-/dev/md126p1 /dev/md126p1 /dev/md126 md vfat 600M /boot/efi
|-/dev/md126p2 /dev/md126p2 /dev/md126 md xfs 1G /boot
|-/dev/md126p3 /dev/md126p3 /dev/md126 md LVM2_member 154G
| |-/dev/mapper/ol-root /dev/dm-0 /dev/md126p3 lvm xfs 30G /
| |-/dev/mapper/ol-swap /dev/dm-1 /dev/md126p3 lvm swap 24G [SWAP]
| `-/dev/mapper/ol-u01 /dev/dm-11 /dev/md126p3 lvm xfs 100G /u01
`-/dev/md126p4 /dev/md126p4 /dev/md126 md LVM2_member 269,2G
/dev/nvme0n1 /dev/nvme0n1 nvme disk 5,8T
`-/dev/nvme0n1p1 /dev/nvme0n1p1 /dev/nvme0n1 nvme part LVM2_member 5,8T
|-/dev/mapper/vg_oracle-u02_zvk_muc /dev/dm-2 /dev/nvme0n1p1 lvm ext4 2,5T /u02/oradata/ZVK_MUC
|-/dev/mapper/vg_oracle-u02_zvk_da /dev/dm-3 /dev/nvme0n1p1 lvm ext4 1000G /u02/oradata/ZVK_DA
|-/dev/mapper/vg_oracle-u02_bsv /dev/dm-4 /dev/nvme0n1p1 lvm ext4 1000G /u02/oradata/BSV
|-/dev/mapper/vg_oracle-u02_kvv /dev/dm-5 /dev/nvme0n1p1 lvm ext4 500G /u02/oradata/KVV
`-/dev/mapper/vg_oracle-u02_daisy /dev/dm-6 /dev/nvme0n1p1 lvm ext4 500G /u02/oradata/DAISY
/dev/nvme1n1 /dev/nvme1n1 nvme disk 5,8T
`-/dev/nvme1n1p1 /dev/nvme1n1p1 /dev/nvme1n1 nvme part LVM2_member 5,8T
|-/dev/mapper/vg_oracle-u02_vip /dev/dm-7 /dev/nvme1n1p1 lvm ext4 500G /u02/oradata/VIP
|-/dev/mapper/vg_oracle-u02_vbo--portal /dev/dm-8 /dev/nvme1n1p1 lvm ext4 500G /u02/oradata/VBO-Portal
|-/dev/mapper/vg_oracle-u02_bsv--portal /dev/dm-9 /dev/nvme1n1p1 lvm ext4 500G /u02/oradata/BSV-Portal
`-/dev/mapper/vg_oracle-u02_vbo /dev/dm-10 /dev/nvme1n1p1 lvm ext4 500G /u02/oradata/VBO
/dev/nvme3n1 /dev/nvme3n1 nvme disk 2,9T
/dev/nvme4n1 /dev/nvme4n1 nvme disk 2,9T
/dev/nvme5n1 /dev/nvme5n1 nvme disk 2,9T
/dev/nvme2n1 /dev/nvme2n1 nvme disk 2,9T
`-/dev/nvme2n1p1 /dev/nvme2n1p1 /dev/nvme2n1 nvme part ext4 2,9T /u03/flashdata/ZVK_MUC
StefanFromMunich commented at 2024-01-31 13:29:
Now I am sending the configuration of the target system, i.e. the system on which the backup was supposed to be restored and where the recovery failed:
ReaR version ("/usr/sbin/rear -V"):
Relax-and-Recover 2.6 / 2020-06-17
OS version ("cat /etc/os-release" or "lsb_release -a" or "cat /etc/rear/os.conf"):
NAME="Oracle Linux Server"
VERSION="8.5"
ID="ol"
ID_LIKE="fedora"
VARIANT="Server"
VARIANT_ID="server"
VERSION_ID="8.5"
PLATFORM_ID="platform:el8"
PRETTY_NAME="Oracle Linux Server 8.5"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:oracle:linux:8:5:server"
HOME_URL="https://linux.oracle.com/"
BUG_REPORT_URL="https://bugzilla.oracle.com/"
ORACLE_BUGZILLA_PRODUCT="Oracle Linux 8"
ORACLE_BUGZILLA_PRODUCT_VERSION=8.5
ORACLE_SUPPORT_PRODUCT="Oracle Linux"
ORACLE_SUPPORT_PRODUCT_VERSION=8.5
ReaR configuration files ("cat /etc/rear/site.conf" and/or "cat /etc/rear/local.conf"):
OUTPUT=ISO
OUTPUT_URL=file:///mnt/rescue_system/
BACKUP=NETFS
BACKUP_URL=file:///rear/
Hardware vendor/product (PC or PowerNV BareMetal or ARM) or VM (KVM guest or PowerVM LPAR):
PC
System architecture (x86 compatible or PPC64/PPC64LE or what exact ARM device):
x86
Firmware (BIOS or UEFI or Open Firmware) and bootloader (GRUB or ELILO or Petitboot):
UEFI
Storage (local disk or SSD) and/or SAN (FC or iSCSI or FCoE) and/or multipath (DM or NVMe):
SAN
Storage layout ("lsblk -ipo NAME,KNAME,PKNAME,TRAN,TYPE,FSTYPE,LABEL,SIZE,MOUNTPOINT"):
NAME KNAME PKNAME TRAN TYPE FSTYPE LABEL SIZE MOUNTPOINT
/dev/sda /dev/sda disk 500G
|-/dev/sda1 /dev/sda1 /dev/sda part vfat 4G /boot/efi
|-/dev/sda2 /dev/sda2 /dev/sda part xfs 4G /boot
`-/dev/sda3 /dev/sda3 /dev/sda part LVM2_member 492G
|-/dev/mapper/ol_svl00048-root /dev/dm-0 /dev/sda3 lvm xfs 460G /
`-/dev/mapper/ol_svl00048-swap /dev/dm-1 /dev/sda3 lvm swap 32G [SWAP]
/dev/sr0 /dev/sr0 sata rom 1024M
/dev/sr1 /dev/sr1 sata rom 1024M
StefanFromMunich commented at 2024-02-29 07:52:
The error only occurs when BACKUP=NETFS is used. If we load the backup
file into memory, then disaster recovery works.
But this is not an alternative for us because it requires too much RAM.
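Presumably "load the backup file into memory" refers to putting the backup archive onto the rescue ISO itself, which ReaR supports via an iso:// backup URL. A sketch of that variant (not the configuration used here):
OUTPUT=ISO
OUTPUT_URL=file:///mnt/rescue_system/
BACKUP=NETFS
BACKUP_URL=iso:///backup
The resulting ISO then contains backup.tar.gz, so no NFS mount is needed during recovery, but the whole image has to fit into RAM when the rescue system boots.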
StefanFromMunich commented at 2024-04-09 09:52:
The backup contains manually created mount points. We need to check whether the manually created mount points are causing the disaster recovery to abort.
jsmeix commented at 2024-04-09 12:00:
I didn't look at the details but
PACKAGES+=" nfs-utils"
is not something that ReaR supports.
'PACKAGES' is not a ReaR config variable.
See usr/share/rear/conf/default.conf
for which config variables are supported by ReaR
and how to use them.
To add things to the ReaR recovery system
use COPY_AS_IS to "only copy as is".
To get libraries into the recovery system,
use the LIBS array.
To get non-mandatory programs into the recovery system,
use the PROGS array.
To get mandatory programs into the recovery system,
use the REQUIRED_PROGS array.
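For the NFS tools specifically, a minimal sketch of how those arrays could be used in /etc/rear/local.conf (the helper names besides mount.nfs are illustrative, not taken from this thread):
REQUIRED_PROGS+=( mount.nfs )      # make "rear mkrescue/mkbackup" fail if mount.nfs cannot be included
PROGS+=( showmount rpcinfo )       # optional helpers, copied only if they exist on the system
COPY_AS_IS+=( /etc/netconfig )     # plain files or directories are copied verbatim
# LIBS is usually not needed here because ReaR resolves the shared libraries of copied binaries itself
Note that when BACKUP_URL itself uses an nfs:// scheme, ReaR normally pulls in the NFS mount tools on its own; explicit entries like the above mainly matter when the NFS share is mounted manually and BACKUP_URL points to a local path like file:///rear/.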
StefanFromMunich commented at 2024-04-29 10:21:
The issue cannot be closed yet. I'm currently testing my solution
idea.
The difficulty seems to be that for a ReaR backup using the NFS method
on a UEFI system you need a tailor-made /etc/rear/local.conf.
In order for it to work with UEFI, you must in particular configure the following
parameters:
UEFI_BOOTLOADER=/boot/efi/EFI/redhat/grubx64.efi
SECURE_BOOT_BOOTLOADER=/boot/efi/EFI/redhat/shimx64.efi
For this purpose I have now changed /etc/rear/local.conf for the UEFI system as follows:
OUTPUT=ISO
OUTPUT_URL=nfs://197.178.76.1/volume1/Rear
BACKUP=NETFS
BACKUP_URL=nfs://197.178.76.1/volume1/Rear
BACKUP_PROG_EXCLUDE=("${BACKUP_PROG_EXCLUDE[@]}" '/media' '/var/tmp' '/var/crash')
NETFS_KEEP_OLD_BACKUP_COPY=
UEFI_BOOTLOADER=/boot/efi/EFI/redhat/grubx64.efi
SECURE_BOOT_BOOTLOADER=/boot/efi/EFI/redhat/shimx64.efi
EXCLUDE_MOUNTPOINTS=( '/u03/flashdata/ZVK_MUC' )
Additionally, I excluded the temporary mount point from the backup with
EXCLUDE_MOUNTPOINTS.
ReaR may have problems restoring temporary mount points.
Now I'm testing this solution. However, the disaster recovery test cannot take
place until next week, because the data center staff who run the test
are on vacation this week.
Please don't close the issue yet; I'll get back to you next week with the result.
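For reference, the standard ReaR commands to exercise such a configuration (generic usage, not quoted from this thread):
rear -v mkbackup      # on the original system: creates the rescue ISO and writes backup.tar.gz to BACKUP_URL
rear -v recover       # in the booted rescue system on the replacement hardware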
StefanFromMunich commented at 2024-05-06 05:14:
My solution works. The issue can be closed.
[Export of GitHub issue for rear/rear.]