#1798 Issue closed: ReaR post-recovery UEFI booting error (error: file '/normal.mod' not found)¶
Labels: support / question, fixed / solved / done
manums1983 opened issue at 2018-05-08 03:54:¶
Relax-and-Recover (ReaR) Issue Template¶
Fill in the following items before submitting a new issue
(quick response is not guaranteed with free support):
- ReaR version ("/usr/sbin/rear -V"):
  Relax-and-Recover 2.3 / 2017-12-20
- OS version ("cat /etc/rear/os.conf" or "lsb_release -a" or "cat /etc/os-release"):
  NAME="SLES"
  VERSION="12-SP2"
  VERSION_ID="12.2"
  PRETTY_NAME="SUSE Linux Enterprise Server 12 SP2"
  ID="sles"
  ANSI_COLOR="0;32"
  CPE_NAME="cpe:/o:suse:sles:12:sp2"
- ReaR configuration file:
# Begin example setup for SLE12-SP2 with default btrfs subvolumes.
# Since SLE12-SP1 what is mounted at '/' is a btrfs snapshot subvolume
# see https://github.com/rear/rear/issues/556
# and since SLE12-SP2 btrfs quota via "snapper setup-quota" is needed
# see https://github.com/rear/rear/issues/999
# You must adapt "your.NFS.server.IP/path/to/your/rear/backup" at BACKUP_URL.
# You must decide whether or not you want to have /home/* in the backup.
# It depends on the size of your harddisk whether or not /home is by default
# a btrfs subvolume or a separated xfs filesystem on a separated partition.
# You may activate SSH_ROOT_PASSWORD and adapt the "password_on_the_rear_recovery_system".
# For basic information see the SLE12-SP2 manuals.
# Also see the support database article "SDB:Disaster Recovery"
# at http://en.opensuse.org/SDB:Disaster_Recovery
# In particular note:
# There is no such thing as a disaster recovery solution that "just works".
# Regarding btrfs snapshots:
# Recovery of btrfs snapshot subvolumes is not possible.
# Only recovery of "normal" btrfs subvolumes is possible.
# On SLE12-SP1 and SP2 the only exception is the btrfs snapshot subvolume
# that is mounted at '/' but that one is not recreated but instead
# it is created anew from scratch during the recovery installation with the
# default first btrfs snapper snapshot subvolume path "@/.snapshots/1/snapshot"
# by the SUSE tool "installation-helper --step 1" (cf. below).
# Other snapshots like "@/.snapshots/234/snapshot" are not recreated.
# Create rear recovery system as ISO image:
OUTPUT=ISO
# Store the backup file via NFS on a NFS server:
BACKUP=NETFS
#BACKUP=DP
# BACKUP_OPTIONS variable contains the NFS mount options and
# with 'mount -o nolock' no rpc.statd (plus rpcbind) are needed:
BACKUP_OPTIONS="nfsvers=3,nolock"
# If the NFS server is not an IP address but a hostname,
# DNS must work in the rear recovery system when the backup is restored.
BACKUP_URL=file:///mnt/backup
# Keep an older copy of the backup in a HOSTNAME.old directory
# provided there is no '.lockfile' in the HOSTNAME directory:
NETFS_KEEP_OLD_BACKUP_COPY=yes
# Have all modules of the original system in the recovery system with the
# same module loading ordering as in the original system by using the output of
# lsmod | tail -n +2 | cut -d ' ' -f 1 | tac | tr -s '[:space:]' ' '
# as value for MODULES_LOAD (cf. https://github.com/rear/rear/issues/626):
#MODULES_LOAD=( )
# On SLE12-SP1 and SP2 with default btrfs subvolumes what is mounted at '/' is a btrfs snapshot subvolume
# that is controlled by snapper so that snapper is needed in the recovery system.
# In SLE12-SP1 and SP2 some btrfs subvolume directories (/var/lib/pgsql /var/lib/libvirt/images /var/lib/mariadb)
# have the "no copy on write (C)" file attribute set so that chattr is required in the recovery system
# and accordingly also lsattr is useful to have in the recovery system (but not strictly required):
REQUIRED_PROGS=( "${REQUIRED_PROGS[@]}" snapper chattr lsattr )
# Snapper setup by the recovery system uses /usr/lib/snapper/installation-helper
# that is linked to all libraries where snapper is linked to
# (except libdbus that is only needed by snapper).
# "installation-helper --step 1" creates a snapper config based on /etc/snapper/config-templates/default
COPY_AS_IS=( "${COPY_AS_IS[@]}" /usr/lib/snapper/installation-helper /etc/snapper/config-templates/default )
# Files in btrfs subvolumes are excluded by 'tar --one-file-system'
# so that such files must be explicitly included to be in the backup.
# Files in the following SLE12-SP1 and SP2 default btrfs subvolumes are
# in the below example not included to be in the backup
# /.snapshots/* /var/crash/*
# but files in /home/* are included to be in the backup.
# You may use a command like
# findmnt -n -r -t btrfs | cut -d ' ' -f 1 | grep -v '^/$' | egrep -v 'snapshots|crash' | sed -e "s/$/\/*'/" -e "s/^/'/" | tr '\n' ' '
# to generate the values:
BACKUP_PROG_INCLUDE=( '/var/cache/*' '/var/lib/mailman/*' '/var/tmp/*' '/var/lib/pgsql/*' '/usr/local/*' '/opt/*' '/var/lib/libvirt/images/*' '/boot/grub2/i386/*' '/var/opt/*' '/srv/*' '/boot/grub2/x86_64/*' '/var/lib/mariadb/*' '/var/spool/*' '/var/lib/mysql/*' '/tmp/*' '/home/*' '/var/log/*' '/var/lib/named/*' '/var/lib/machines/*' )
BACKUP_PROG_EXCLUDE=( "${BACKUP_PROG_EXCLUDE[@]}" "/mnt/backup" )
EXCLUDE_RECREATE=( "${EXCLUDE_RECREATE[@]}" "fs:/mnt/backup" )
# The following POST_RECOVERY_SCRIPT implements during "rear recover"
# btrfs quota setup for snapper if that is used in the original system:
POST_RECOVERY_SCRIPT=( 'if snapper --no-dbus -r $TARGET_FS_ROOT get-config | grep -q "^QGROUP.*[0-9]/[0-9]" ; then snapper --no-dbus -r $TARGET_FS_ROOT set-config QGROUP= ; snapper --no-dbus -r $TARGET_FS_ROOT setup-quota && echo snapper setup-quota done || echo snapper setup-quota failed ; else echo snapper setup-quota not used ; fi' )
# This option defines a root password to allow SSH connection
# without a public/private key pair
#SSH_ROOT_PASSWORD="password_on_the_rear_recovery_system"
# Let the rear recovery system run dhclient to get an IP address
# instead of using the same IP address as the original system:
#USE_DHCLIENT="yes"
# End example setup for SLE12-SP2 with default btrfs subvolumes.
- System architecture (x86 compatible or POWER and/or what kind of virtual machine):
  Physical server: DL 380 GEN9, x86_64
- Are you using BIOS or UEFI or another way to boot?
  UEFI
- Brief description of the issue:
  ReaR backup and recovery completed successfully, but the server does not boot up after the recovery; it drops into the "grub rescue>" prompt (a manual escape sketch follows the attached logs).
- Work-around, if any:
  Noticed that the server boots up after choosing to boot from the hard disk under the F11 option; afterwards it automatically creates the UEFI "SLES-Secure-boot" entry. Subsequent reboots do not show any boot issue.
  Not sure this is a work-around, because when I do fdisk -l it shows boundary errors:

gesprd1:~ # fdisk -l
Disk /dev/sda: 279.4 GiB, 299966445568 bytes, 585871964 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 262144 bytes / 262144 bytes
Disklabel type: gpt
Disk identifier: B5F1057F-8042-4918-8FFC-C9102B2F3ED5
Device         Start       End   Sectors   Size Type
/dev/sda1       4096    323583    319488   156M EFI System
/dev/sda2     323592   4532231   4208640     2G Microsoft basic data
/dev/sda3    4532240  35987471  31455232    15G Microsoft basic data
/dev/sda4   35987480 585871930 549884451 262.2G Microsoft basic data
Partition 2 does not start on physical sector boundary.
Partition 3 does not start on physical sector boundary.
Partition 4 does not start on physical sector boundary.

Disk /dev/sdb: 279.4 GiB, 299966445568 bytes, 585871964 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 262144 bytes / 262144 bytes
Disklabel type: gpt
Disk identifier: 064F38B8-C4E2-4C12-A349-362BA5618724
Device     Start       End   Sectors   Size Type
/dev/sdb1   2048 585871359 585869312 279.4G Microsoft basic data
rear-gesprd1.log
backup-restore.log
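For context, a manual escape from the grub rescue> prompt in this situation usually looks like the sketch below; the partition (hd0,gpt2) and the prefix path are hypothetical examples and must be determined with ls on the affected system:
grub rescue> ls                                  (list the disks and partitions GRUB can see)
grub rescue> set prefix=(hd0,gpt2)/boot/grub2    (hypothetical path to the GRUB modules)
grub rescue> set root=(hd0,gpt2)
grub rescue> insmod normal                       (load the module the error message complains about)
grub rescue> normal                              (start the regular GRUB menu)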
jsmeix commented at 2018-05-08 09:16:¶
@manums1983
where exactly do you get that "error: file '/normal.mod' not found" message?
Do you use UEFI with or without Secure Boot?
gozora commented at 2018-05-08 09:36:¶
@manums1983
I really don't like your UEFI boot order:
...
Boot0016* sles-secureboot
Boot0017* Red Hat Enterprise Linux 6
Boot0018* Windows Boot Manager
Boot0019* SUSE_LINUX 12.2
Boot001A* SUSE_LINUX 12.2
Boot001B* SUSE_LINUX 12.2
Boot001C* SUSE_LINUX 12.2
Boot001D* SUSE_LINUX 12.2
Boot001E* SUSE_LINUX 12.2
Boot001F* SUSE_LINUX 12
Boot0020* sles-secureboot
Boot0021* SUSE_LINUX 12
Boot0022* sles-secureboot
Boot0023* SUSE_LINUX 12
Boot0024* SUSE_LINUX 12
efibootmgr complains about this:
** Warning ** : Boot001F has same label SUSE_LINUX 12
** Warning ** : Boot0021 has same label SUSE_LINUX 12
** Warning ** : Boot0023 has same label SUSE_LINUX 12
IMHO, the UEFI boot entry created by ReaR is not honored by your ProLiant
firmware for whatever reason. Some Linux distros (SLES included) have
their own scripts that are launched during boot to set up the UEFI
boot entry, which is why you can boot without any trouble the second time.
Maybe you could try to clean up your UEFI boot entries before running
rear recover? Maybe it will help; a sketch follows below...
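A minimal cleanup sketch, assuming the stale entries are the duplicated ones flagged above (the entry numbers are taken from the listing above; verify them against your own efibootmgr -v output before deleting anything, and the final boot order choice is an assumption):
# efibootmgr -v            (list all entries with their device paths)
# efibootmgr -b 001F -B    (delete duplicate entry Boot001F)
# efibootmgr -b 0021 -B    (delete duplicate entry Boot0021)
# efibootmgr -b 0023 -B    (delete duplicate entry Boot0023)
# efibootmgr -o 0016       (for example, put Boot0016 "sles-secureboot" first in BootOrder)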
Unfortunately I currently don't have time to dig into this much deeper :-(.
Hope it helps
V.
jsmeix commented at 2018-05-08 09:54:¶
@manums1983
the fdisk -l "does not start on physical sector boundary" messages
are very likely not related to the UEFI boot issue here;
nevertheless:
Do you get those messages also on your original system
or only on the recreated system after "rear recover"?
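As a side note, partition alignment can be checked directly with parted's align-check command (a sketch; the partition numbers are taken from the fdisk output above):
# parted /dev/sda align-check optimal 2    (reports whether partition 2 is optimally aligned)
# parted /dev/sda align-check optimal 3
# parted /dev/sda align-check optimal 4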
manums1983 commented at 2018-05-08 10:40:¶
@jsmeix,
this "fdisk -l does not start on physical sector boundary" problem
appeared exactly after doing the recovery using ReaR. Before that it was
not there.
jsmeix commented at 2018-05-08 11:18:¶
@manums1983
because you use ReaR 2.3 this is expected:
it means you are hit by one of the issues that are described at
https://github.com/rear/rear/issues/1731
and
https://github.com/rear/rear/issues/1733
and in particular by
https://github.com/rear/rear/issues/102
I recommend trying out our current ReaR upstream GitHub master code,
because that is the only place where we at ReaR upstream fix bugs.
Bugs in released ReaR versions are not fixed by us (ReaR upstream).
Bugs in released ReaR versions that got fixed in the current ReaR upstream
GitHub master code might be fixed (if the fix can be backported with
reasonable effort) by the Linux distributor from which you got ReaR.
To use our current ReaR upstream GitHub master code
do the following:
Basically "git clone" it into a separated directory and then
configure and run ReaR from within that directory like:
# git clone https://github.com/rear/rear.git # mv rear rear.github.master # cd rear.github.master # vi etc/rear/local.conf # usr/sbin/rear -D mkbackup
Note the relative paths "etc/rear/" and "usr/sbin/".
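As a quick sanity check that those relative paths actually run the cloned code and not an installed package (a sketch; the exact version string of current master will differ from 2.3):
# usr/sbin/rear -V       (should report the upstream master version)
# usr/sbin/rear dump     (shows the configuration files and values actually in use)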
manums1983 commented at 2018-05-08 13:11:¶
Excellent, @jsmeix!
The git clone version worked very well. After recovery the server
boots up successfully and there are no boundary issues in the fdisk
output. My issue is resolved.
jsmeix commented at 2018-05-09 12:08:¶
Wow - it seems another unexpected bad side-effect of
https://github.com/rear/rear/issues/1731
and its related issues is that it may even cause
UEFI booting errors after "rear recover".
[Export of Github issue for rear/rear.]