#3081 Issue closed: Cannot create EFI Boot Manager entry (unable to find ESP /mnt/local/boot/efi among mounted devices)¶
Labels: support / question, fixed / solved / done, old version
lcascales opened issue at 2023-11-14 16:12:¶
- ReaR version ("/usr/sbin/rear -V"): Relax-and-Recover 2.4 / Git
- If your ReaR version is not the current version, explain why you can't upgrade: the backup package we received to test the recovery came with this version of ReaR
- OS version ("cat /etc/os-release" or "lsb_release -a" or "cat /etc/rear/os.conf"):
NAME="Oracle Linux Server"
VERSION="7.9"
ID="ol"
ID_LIKE="fedora"
VARIANT="Server"
VARIANT_ID="server"
VERSION_ID="7.9"
PRETTY_NAME="Oracle Linux Server 7.9"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:oracle:linux:7:9:server"
HOME_URL="https://linux.oracle.com/"
BUG_REPORT_URL="https://bugzilla.oracle.com/"
ORACLE_BUGZILLA_PRODUCT="Oracle Linux 7"
ORACLE_BUGZILLA_PRODUCT_VERSION=7.9
ORACLE_SUPPORT_PRODUCT="Oracle Linux"
ORACLE_SUPPORT_PRODUCT_VERSION=7.9
######
OS_VENDOR=OracleServer
OS_VERSION=7
# The following information was added automatically by the mkbackup workflow:
ARCH='Linux-i386'
OS='GNU/Linux'
OS_VERSION='7'
OS_VENDOR='OracleServer'
OS_VENDOR_VERSION='OracleServer/7'
OS_VENDOR_ARCH='OracleServer/i386'
# End of what was added automatically by the mkbackup workflow.
- ReaR configuration files ("cat /etc/rear/site.conf" and/or "cat /etc/rear/local.conf"):
OUTPUT=ISO
OUTPUT_URL=nfs://172.30.161.126/volume14/ol_backups_nfs
ISO_DEFAULT=manual
BACKUP=NETFS
BACKUP_URL=nfs://172.30.161.126/volume14/ol_backups_nfs
BACKUP_PROG_EXCLUDE=("${BACKUP_PROG_EXCLUDE[@]}" '/media' '/var/tmp' '/var/crash' '/prdgpodb2c_mysql' '/postgres' '/pg_wal_archive' '/mnt')
BACKUP_TYPE=differential
FULLBACKUPDAY="Sun"
USE_STATIC_NETWORKING=y
TIMESYNC=NTP
SSH_ROOT_PASSWORD=xxxxxxxxx
- Hardware vendor/product (PC or PowerNV BareMetal or ARM) or VM (KVM guest or PowerVM LPAR): VMware
- System architecture (x86 compatible or PPC64/PPC64LE or what exact ARM device):
# uname -p
x86_64
- Firmware (BIOS or UEFI or Open Firmware) and bootloader (GRUB or ELILO or Petitboot):
EFI and GRUB2 supposedly
RESCUE xxxxxxxx:/mnt/local # ls -ltrh /mnt/local/boot/
total 192M
drwx------. 3 root root 17 Jan 1 1970 efi
-rw-r--r--. 1 root root 150K Oct 2 2020 config-3.10.0-1160.el7.x86_64
-rw-------. 1 root root 3.5M Oct 2 2020 System.map-3.10.0-1160.el7.x86_64
-rwxr-xr-x. 1 root root 6.5M Oct 2 2020 vmlinuz-3.10.0-1160.el7.x86_64
-rw-r--r--. 1 root root 314K Oct 2 2020 symvers-3.10.0-1160.el7.x86_64.gz
-rw-r--r--. 1 root root 213K Apr 23 2021 config-5.4.17-2102.201.3.el7uek.x86_64
-rw-r--r--. 1 root root 4.1M Apr 23 2021 System.map-5.4.17-2102.201.3.el7uek.x86_64
-rwxr-xr-x. 1 root root 8.6M Apr 23 2021 vmlinuz-5.4.17-2102.201.3.el7uek.x86_64
-rw-r--r--. 1 root root 376K Apr 23 2021 symvers-5.4.17-2102.201.3.el7uek.x86_64.gz
-rw-r--r--. 1 root root 214K Apr 14 2023 config-5.4.17-2136.318.7.1.el7uek.x86_64
-rw-r--r--. 1 root root 4.2M Apr 14 2023 System.map-5.4.17-2136.318.7.1.el7uek.x86_64
-rwxr-xr-x. 1 root root 11M Apr 14 2023 vmlinuz-5.4.17-2136.318.7.1.el7uek.x86_64
-rw-r--r--. 1 root root 381K Apr 14 2023 symvers-5.4.17-2136.318.7.1.el7uek.x86_64.gz
-rw-------. 1 root root 60M Oct 24 19:21 initramfs-0-rescue-8a08aa20fb684869b0b7918c2b5bfccb.img
-rwxr-xr-x. 1 root root 6.5M Oct 24 19:21 vmlinuz-0-rescue-8a08aa20fb684869b0b7918c2b5bfccb
-rw-------. 1 root root 16M Oct 24 19:29 initramfs-5.4.17-2102.201.3.el7uek.x86_64kdump.img
-rw-------. 1 root root 17M Oct 25 19:19 initramfs-5.4.17-2136.318.7.1.el7uek.x86_64kdump.img
drwx------. 2 root root 21 Nov 8 12:10 grub2
-rw------- 1 root root 18M Nov 14 11:31 initramfs-3.10.0-1160.el7.x86_64.img
-rw------- 1 root root 18M Nov 14 11:32 initramfs-5.4.17-2102.201.3.el7uek.x86_64.img
-rw------- 1 root root 19M Nov 14 11:32 initramfs-5.4.17-2136.318.7.1.el7uek.x86_64.img
RESCUE prdmxedb2c:/mnt/local #
- Storage (local disk or SSD) and/or SAN (FC or iSCSI or FCoE) and/or multipath (DM or NVMe): local disk
- Storage layout ("lsblk -ipo NAME,KNAME,PKNAME,TRAN,TYPE,FSTYPE,LABEL,SIZE,MOUNTPOINT"):
RESCUE prdmxedb2c:/mnt/local # lsblk -ipo NAME,KNAME,PKNAME,TRAN,TYPE,FSTYPE,LABEL,SIZE,MOUNTPOINT
NAME KNAME PKNAME TRAN TYPE FSTYPE LABEL SIZE MOUNTPOINT
/dev/sr0 /dev/sr0 sata rom iso9660 RELAXRECOVER 527.1M
/dev/sda /dev/sda disk 80G
`-/dev/sda1 /dev/sda1 /dev/sda part xfs 80G /mnt/local
- Description of the issue (ideally so that others can reproduce it):
Creating EFI Boot Manager entries...
Unable to find ESP /boot/efi in layout
Trying to determine device currently mounted at /mnt/local/boot/efi as fallback
Cannot create EFI Boot Manager entry (unable to find ESP /mnt/local/boot/efi among mounted devices)
WARNING:
For this system
OracleServer/7 on Linux-i386 (based on Fedora/7/i386)
there is no code to install a boot loader on the recovered system
or the code that we have failed to install the boot loader correctly.
Please contribute appropriate code to the Relax-and-Recover project,
see http://relax-and-recover.org/development/
Take a look at the scripts in /usr/share/rear/finalize,
for example see the scripts
/usr/share/rear/finalize/Linux-i386/210_install_grub.sh
/usr/share/rear/finalize/Linux-i386/220_install_grub2.sh
---------------------------------------------------
| IF YOU DO NOT INSTALL A BOOT LOADER MANUALLY, |
| THEN YOUR SYSTEM WILL NOT BE ABLE TO BOOT. |
---------------------------------------------------
You can use 'chroot /mnt/local bash --login'
to change into the recovered system.
You should at least mount /proc in the recovered system
e.g. via 'mount -t proc none /mnt/local/proc'
before you change into the recovered system
and manually install a boot loader therein.
Finished recovering your system. You can explore it under '/mnt/local'.
Exiting rear recover (PID 2362) and its descendant processes
Running exit tasks
You should also rm -Rf --one-file-system /tmp/rear.2GCs8lruvbzdZUX
RESCUE prdmxedb2c:/mnt/local # mount --bind /dev /mnt/local/dev
RESCUE prdmxedb2c:/mnt/local #
RESCUE prdmxedb2c:/mnt/local # mount -t proc none /mnt/local/proc
RESCUE prdmxedb2c:/mnt/local #
RESCUE prdmxedb2c:/mnt/local # mount -t sysfs none /mnt/local/sys
RESCUE prdmxedb2c:/mnt/local #
RESCUE prdmxedb2c:/mnt/local # mount -o bind /dev /mnt/local/dev
RESCUE prdmxedb2c:/mnt/local #
RESCUE prdmxedb2c:/mnt/local # mount -o bind /dev/pts /mnt/local/dev/pts
RESCUE prdmxedb2c:/mnt/local #
RESCUE prdmxedb2c:/mnt/local # chroot /mnt/local/ bash --login
[root@prdmxedb2c /]# grub2-install /dev/sda
Installing for x86_64-efi platform.
[root@prdmxedb2c /]# grub2-install --efi-directory=/boot/efi/EFI/
Installing for x86_64-efi platform.
grub2-install: error: /boot/efi/EFI/ doesn't look like an EFI partition.
[root@prdmxedb2c /]# mount |grep /boot/efi
[root@prdmxedb2c /]#
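As a hedged sketch (not part of the original report): the grub2-install failure above has two visible causes, namely that nothing is mounted at /boot/efi inside the chroot, and that --efi-directory must point at the ESP mountpoint itself (/boot/efi), not at its EFI/ subdirectory. Assuming the ESP had actually been recreated as a vfat partition on the target disk (which was not the case in this run, and the device name /dev/sda1 is only an assumption), the manual steps would look roughly like:
# mount the recreated ESP below the recovered system before chrooting
mount /dev/sda1 /mnt/local/boot/efi
chroot /mnt/local bash --login
# inside the chroot: install GRUB2 for UEFI onto the mounted ESP
grub2-install --target=x86_64-efi --efi-directory=/boot/efi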
- Workaround, if any:
- Attachments, as applicable ("rear -D mkrescue/mkbackup/recover" debug log files):
rear-prdmxedb2c.log
pcahyna commented at 2023-11-14 18:54:¶
Something is strange here with the disk layout. The log file rear-prdmxedb2c.log from rear mkbackup that you have provided contains this:
2023-11-10 09:31:00.808093903 Including layout/save/GNU/Linux/200_partition_layout.sh
2023-11-10 09:31:00.816095728 Saving disk partitions.
2023-11-10 09:31:00.848321299 No partitions found on /dev/sda.
2023-11-10 09:31:00.853154007 Ignoring sdaa: it is a path of a multipath device
2023-11-10 09:31:00.857753511 Ignoring sdab: it is a path of a multipath device
2023-11-10 09:31:00.998488620 No partitions found on /dev/sdc.
blockdev: cannot open /dev/sdd: No medium found
Error: Error opening /dev/sdd: No medium found
2023-11-10 09:31:01.018947740 No partitions found on /dev/sdd.
2023-11-10 09:31:01.023260842 Ignoring sde: it is a path of a multipath device
2023-11-10 09:31:01.027483774 Ignoring sdf: it is a path of a multipath device
2023-11-10 09:31:01.031766858 Ignoring sdg: it is a path of a multipath device
2023-11-10 09:31:01.035916230 Ignoring sdh: it is a path of a multipath device
2023-11-10 09:31:01.040546538 Ignoring sdi: it is a path of a multipath device
2023-11-10 09:31:01.045318859 Ignoring sdj: it is a path of a multipath device
2023-11-10 09:31:01.049686416 Ignoring sdk: it is a path of a multipath device
2023-11-10 09:31:01.053950752 Ignoring sdl: it is a path of a multipath device
2023-11-10 09:31:01.058101938 Ignoring sdm: it is a path of a multipath device
2023-11-10 09:31:01.062242676 Ignoring sdn: it is a path of a multipath device
2023-11-10 09:31:01.066522726 Ignoring sdo: it is a path of a multipath device
2023-11-10 09:31:01.070624857 Ignoring sdp: it is a path of a multipath device
2023-11-10 09:31:01.074737913 Ignoring sdq: it is a path of a multipath device
2023-11-10 09:31:01.078826746 Ignoring sdr: it is a path of a multipath device
2023-11-10 09:31:01.082833970 Ignoring sds: it is a path of a multipath device
2023-11-10 09:31:01.086917017 Ignoring sdt: it is a path of a multipath device
2023-11-10 09:31:01.091054416 Ignoring sdu: it is a path of a multipath device
2023-11-10 09:31:01.095243958 Ignoring sdv: it is a path of a multipath device
2023-11-10 09:31:01.099458738 Ignoring sdw: it is a path of a multipath device
2023-11-10 09:31:01.103570195 Ignoring sdx: it is a path of a multipath device
2023-11-10 09:31:01.107793229 Ignoring sdy: it is a path of a multipath device
2023-11-10 09:31:01.111910121 Ignoring sdz: it is a path of a multipath device
2023-11-10 09:31:01.115785995 Including layout/save/GNU/Linux/210_raid_layout.sh
2023-11-10 09:31:01.120199989 Including layout/save/GNU/Linux/220_lvm_layout.sh
2023-11-10 09:31:01.121547523 Saving LVM layout.
2023-11-10 09:31:01.339994070 Including layout/save/GNU/Linux/230_filesystem_layout.sh
2023-11-10 09:31:01.341408633 Begin saving filesystem layout
2023-11-10 09:31:01.343919660 Saving filesystem layout (using the findmnt command).
2023-11-10 09:31:01.365768321 Processing filesystem 'xfs' on '/dev/mapper/3624a9370ed4226a2973a4882000113e6' mounted at '/pg_wal_archive'
2023-11-10 09:31:01.384817687 Processing filesystem 'xfs' on '/dev/mapper/3624a9370ed4226a2973a4882000113e8' mounted at '/postgres'
2023-11-10 09:31:01.403827442 Processing filesystem 'xfs' on '/dev/mapper/3624a9370ed4226a2973a4882000113e9' mounted at '/prdgpodb2c_mysql'
2023-11-10 09:31:01.425108659 Processing filesystem 'xfs' on '/dev/mapper/ol-opt' mounted at '/opt'
2023-11-10 09:31:01.460249459 Processing filesystem 'xfs' on '/dev/mapper/ol-root' mounted at '/'
2023-11-10 09:31:01.494324116 Processing filesystem 'xfs' on '/dev/mapper/ol-var' mounted at '/var'
2023-11-10 09:31:01.527943142 Processing filesystem 'vfat' on '/dev/sdb1' mounted at '/boot/efi'
2023-11-10 09:31:01.564461921 Processing filesystem 'xfs' on '/dev/sdb2' mounted at '/boot'
2023-11-10 09:31:01.591135727 End saving filesystem layout
I see several discrepancies with what you have in the recovered system: your system seems to be on /dev/sda while the original system was on /dev/sdb (ok, the disks may have changed order), and the root filesystem is XFS on the /dev/sda1 partition, while in the original system it was on LVM, on the /dev/mapper/ol-root LV, not on a partition (the first partition was in fact the missing EFI system partition). This should not happen. Can you please provide your disklayout.conf file under /var/lib/rear/layout? And is there anything relevant in the rear recover output before "Creating EFI Boot Manager entries..."?
Can you please provide also the recovery log?
lcascales commented at 2023-11-14 19:24:¶
Hi, as I noticed the mapping listed two disks, I was told to remove
/dev/sdb; the objective was to check whether the recovery would proceed,
since the second disk was 1.5 TB and there wasn't enough space in the test
environment storage. And when I chrooted into the recovered system and
looked at its /etc/fstab, it did show that something wasn't right. I can
provide those once I get to the office in the morning, and yes, I have
the full output of the "rear recover" command.
lcascales commented at 2023-11-15 08:10:¶
So I'll just paste the whole output of "rear recover":
RESCUE prdmxedb2c:~ # rear -d -v recover
Relax-and-Recover 2.4 / Git
Using log file: /var/log/rear/rear-prdmxedb2c.log
Running workflow recover within the ReaR rescue/recovery system
Starting required daemons for NFS: RPC portmapper (portmap or rpcbind) and rpc.statd if available.
Started RPC portmapper 'rpcbind'.
RPC portmapper 'rpcbind' available.
RPC status rpc.statd available.
Started rpc.idmapd.
For backup restore using 2023-11-08-1546-F.tar.gz 2023-11-10-0930-D.tar.gz
Will do driver migration (recreating initramfs/initrd)
Calculating backup archive size
Backup archive size is 2.8G /tmp/rear.GorR1ZzNvfZCrV8/outputfs/prdmxedb2c/2023-11-08-1546-F.tar.gz (compressed)
Calculating backup archive size
Backup archive size is 72M /tmp/rear.GorR1ZzNvfZCrV8/outputfs/prdmxedb2c/2023-11-10-0930-D.tar.gz (compressed)
Comparing disks
Device sdb does not exist (manual configuration needed)
Switching to manual disk layout configuration
Original disk /dev/sdb does not exist (with same size) in the target system
Using /dev/sda (the only appropriate) for recreating /dev/sdb
Current disk mapping table (source -> target):
/dev/sdb /dev/sda
UserInput -I LAYOUT_MIGRATION_CONFIRM_MAPPINGS needed in /usr/share/rear/layout/prepare/default/300_map_disks.sh line 211
Confirm or edit the disk mapping
1) Confirm disk mapping and continue 'rear recover'
2) Edit disk mapping (/var/lib/rear/layout/disk_mappings)
3) Use Relax-and-Recover shell and return back to here
4) Abort 'rear recover'
(default '1' timeout 300 seconds)
2
UserInput: Valid choice number result 'Edit disk mapping (/var/lib/rear/layout/disk_mappings)'
/dev/sda
~
~
:wq
"/var/lib/rear/layout/disk_mappings" 1 line, 9 characters written
Current disk mapping table (source -> target):
/dev/sda
UserInput -I LAYOUT_MIGRATION_CONFIRM_MAPPINGS needed in /usr/share/rear/layout/prepare/default/300_map_disks.sh line 211
Confirm or edit the disk mapping
1) Confirm disk mapping and continue 'rear recover'
2) Edit disk mapping (/var/lib/rear/layout/disk_mappings)
3) Use Relax-and-Recover shell and return back to here
4) Abort 'rear recover'
(default '1' timeout 300 seconds)
1
UserInput: Valid choice number result 'Confirm disk mapping and continue 'rear recover''
User confirmed disk mapping
Disk /dev/sdb and all dependant devices will not be recreated
ERROR:
====================
BUG in /usr/share/rear/lib/layout-functions.sh line 784:
'get_part_device_name_format function called without argument (device)'
--------------------
Please report this issue at https://github.com/rear/rear/issues
and include the relevant parts from /var/log/rear/rear-prdmxedb2c.log
preferably with full debug information via 'rear -d -D recover'
====================
Aborting due to an error, check /var/log/rear/rear-prdmxedb2c.log for details
Exiting rear recover (PID 1388) and its descendant processes
Running exit tasks
You should also rm -Rf --one-file-system /tmp/rear.GorR1ZzNvfZCrV8
Terminated
RESCUE prdmxedb2c:~ # rm -Rf --one-file-system /tmp/rear.GorR1ZzNvfZCrV8
RESCUE prdmxedb2c:~ # rear -d -v recover
Relax-and-Recover 2.4 / Git
Using log file: /var/log/rear/rear-prdmxedb2c.log
Running workflow recover within the ReaR rescue/recovery system
Starting required daemons for NFS: RPC portmapper (portmap or rpcbind) and rpc.statd if available.
Started RPC portmapper 'rpcbind'.
RPC portmapper 'rpcbind' available.
RPC status rpc.statd available.
Started rpc.idmapd.
For backup restore using 2023-11-08-1546-F.tar.gz 2023-11-10-0930-D.tar.gz
Will do driver migration (recreating initramfs/initrd)
Calculating backup archive size
Backup archive size is 2.8G /tmp/rear.2GCs8lruvbzdZUX/outputfs/prdmxedb2c/2023-11-08-1546-F.tar.gz (compressed)
Calculating backup archive size
Backup archive size is 72M /tmp/rear.2GCs8lruvbzdZUX/outputfs/prdmxedb2c/2023-11-10-0930-D.tar.gz (compressed)
Comparing disks
Disk configuration looks identical
UserInput -I DISK_LAYOUT_PROCEED_RECOVERY needed in /usr/share/rear/layout/prepare/default/250_compare_disks.sh line 146
Proceed with recovery (yes) otherwise manual disk layout configuration is enforced
(default 'yes' timeout 30 seconds)
UserInput: No real user input (empty or only spaces) - using default input
UserInput: No choices - result is 'yes'
User confirmed to proceed with recovery
Start system layout restoration.
Disk layout created.
Restoring from '/tmp/rear.2GCs8lruvbzdZUX/outputfs/prdmxedb2c/2023-11-08-1546-F.tar.gz' (restore log in /var/lib/rear/restore/recover.2023-11-08-1546-F.tar.gz.2362.restore.log) ...
Restoring var/log/rear/rear-prdmxedb2c.log OK
Restored 6739 MiB in 76 seconds [avg. 90809 KiB/sec]
Restoring from '/tmp/rear.2GCs8lruvbzdZUX/outputfs/prdmxedb2c/2023-11-10-0930-D.tar.gz' (restore log in /var/lib/rear/restore/recover.2023-11-10-0930-D.tar.gz.2362.restore.log) ...
Restored 181 MiB [avg. 92975 KiB/sec] OK
Restored 300 MiB in 3 seconds [avg. 102604 KiB/sec]
Restoring finished (verify backup restore log messages in /var/lib/rear/restore/recover.2023-11-10-0930-D.tar.gz.2362.restore.log)
Recreating directories (with permissions) from /var/lib/rear/recovery/directories_permissions_owner_group
Failed to 'chown UNKNOWN:UNKNOWN prdgpodb2c_mysql'
Running mkinitrd...
Updated initrd with new drivers for kernel 3.10.0-1160.el7.x86_64.
Running mkinitrd...
Updated initrd with new drivers for kernel 5.4.17-2102.201.3.el7uek.x86_64.
Running mkinitrd...
Updated initrd with new drivers for kernel 5.4.17-2136.318.7.1.el7uek.x86_64.
Creating EFI Boot Manager entries...
Unable to find ESP /boot/efi in layout
Trying to determine device currently mounted at /mnt/local/boot/efi as fallback
Cannot create EFI Boot Manager entry (unable to find ESP /mnt/local/boot/efi among mounted devices)
WARNING:
For this system
OracleServer/7 on Linux-i386 (based on Fedora/7/i386)
there is no code to install a boot loader on the recovered system
or the code that we have failed to install the boot loader correctly.
Please contribute appropriate code to the Relax-and-Recover project,
see http://relax-and-recover.org/development/
Take a look at the scripts in /usr/share/rear/finalize,
for example see the scripts
/usr/share/rear/finalize/Linux-i386/210_install_grub.sh
/usr/share/rear/finalize/Linux-i386/220_install_grub2.sh
---------------------------------------------------
| IF YOU DO NOT INSTALL A BOOT LOADER MANUALLY, |
| THEN YOUR SYSTEM WILL NOT BE ABLE TO BOOT. |
---------------------------------------------------
You can use 'chroot /mnt/local bash --login'
to change into the recovered system.
You should at least mount /proc in the recovered system
e.g. via 'mount -t proc none /mnt/local/proc'
before you change into the recovered system
and manually install a boot loader therein.
Finished recovering your system. You can explore it under '/mnt/local'.
Exiting rear recover (PID 2362) and its descendant processes
Running exit tasks
You should also rm -Rf --one-file-system /tmp/rear.2GCs8lruvbzdZUX
Listing the content of disklayout.conf:
:/var/lib/rear/layout # cat disklayout.conf
# Disk /dev/sda
# Format: disk <devname> <size(bytes)> <partition label type>
#disk /dev/sda 900151926784 gpt
# Partitions on /dev/sda
# Format: part <device> <partition size(bytes)> <partition start(bytes)> <partition type|name> <flags> /dev/<partition>
# Disk /dev/sdb
# Format: disk <devname> <size(bytes)> <partition label type>
disk /dev/sdb 1600287760384 gpt
# Partitions on /dev/sdb
# Format: part <device> <partition size(bytes)> <partition start(bytes)> <partition type|name> <flags> /dev/<partition>
part /dev/sdb 209715200 1048576 EFI%20System%20Partition boot /dev/sdb1
part /dev/sdb 1073741824 210763776 rear-noname none /dev/sdb2
part /dev/sdb 201867657216 1284505600 rear-noname lvm /dev/sdb3
part /dev/sdb 1397135580672 203152162816 rear-noname none /dev/sdb4
# Disk /dev/sdc
# Format: disk <devname> <size(bytes)> <partition label type>
#disk /dev/sdc 6001048248320 gpt
# Partitions on /dev/sdc
# Format: part <device> <partition size(bytes)> <partition start(bytes)> <partition type|name> <flags> /dev/<partition>
# Disk /dev/sdd
# Format: disk <devname> <size(bytes)> <partition label type>
#disk /dev/sdd 0
# Partitions on /dev/sdd
# Format: part <device> <partition size(bytes)> <partition start(bytes)> <partition type|name> <flags> /dev/<partition>
# Format for LVM PVs
# lvmdev <volume_group> <device> [<uuid>] [<size(bytes)>]
lvmdev /dev/ol /dev/sdb3 BwxFt3-SXAT-bC9N-KV8x-65DL-8MlK-GLYIaA 394272768
lvmdev /dev/ol /dev/sdb4 4RzhA8-igUP-tMZ8-kZt9-rL3G-LYIS-QqgeAy 2728780431
# Format for LVM VGs
# lvmgrp <volume_group> <extentsize> [<size(extents)>] [<size(bytes)>]
lvmgrp /dev/ol 4096 381230 1561518080
# Format for LVM LVs
# lvmvol <volume_group> <name> <size(bytes)> <layout> [key:value ...]
lvmvol /dev/ol opt 17179869184b linear
lvmvol /dev/ol root 53687091200b linear
lvmvol /dev/ol swap 21474836480b linear
lvmvol /dev/ol var 109517471744b linear
# Filesystems (only ext2,ext3,ext4,vfat,xfs,reiserfs,btrfs are supported).
# Format: fs <device> <mountpoint> <fstype> [uuid=<uuid>] [label=<label>] [<attributes>]
#fs /dev/mapper/3624a9370ed4226a2973a4882000113e6 /pg_wal_archive xfs uuid=feeaa4ba-3dec-48bc-bc6d-8faec6ae4b75 label= options=rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota
#fs /dev/mapper/3624a9370ed4226a2973a4882000113e8 /postgres xfs uuid=2187ba27-2a14-41f7-8522-f5f8f1ca919e label= options=rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota
#fs /dev/mapper/3624a9370ed4226a2973a4882000113e9 /prdgpodb2c_mysql xfs uuid=70a98bc4-b5f6-4e94-9d55-3e8b06404817 label= options=rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota
fs /dev/mapper/ol-opt /opt xfs uuid=c95e0a8b-ea09-44c1-8e02-8c28b0cafeb5 label= options=rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,sunit=512,swidth=512,noquota
fs /dev/mapper/ol-root / xfs uuid=499d59f6-3528-4a89-bce8-1ec2fa60de5f label= options=rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,sunit=512,swidth=512,noquota
fs /dev/mapper/ol-var /var xfs uuid=6b93e965-7753-4938-906a-760cddb6bded label= options=rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,sunit=512,swidth=512,noquota
fs /dev/sdb1 /boot/efi vfat uuid=92AA-9477 label= options=rw,relatime,fmask=0077,dmask=0077,codepage=437,iocharset=ascii,shortname=winnt,errors=remount-ro
fs /dev/sdb2 /boot xfs uuid=e416f8a3-52fc-42a2-a016-c25f4a9ce141 label= options=rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,sunit=512,swidth=512,noquota
# Swap partitions or swap files
# Format: swap <filename> uuid=<uuid> label=<label>
swap /dev/mapper/ol-swap uuid=c7a58ef2-c168-4e3f-aec1-4de0f1728539 label=
#multipath /dev/mapper/3624a9370ed4226a2973a4882000113e8 1288490188800 /dev/sdaa,/dev/sdf,/dev/sdi,/dev/sdl,/dev/sdo,/dev/sdr,/dev/sdu,/dev/sdx
#multipath /dev/mapper/3624a9370ed4226a2973a4882000113e6 858993459200 /dev/sdab,/dev/sdg,/dev/sdj,/dev/sdm,/dev/sdp,/dev/sds,/dev/sdv,/dev/sdy
#multipath /dev/mapper/3624a9370ed4226a2973a4882000113e9 214748364800 /dev/sde,/dev/sdh,/dev/sdk,/dev/sdn,/dev/sdq,/dev/sdt,/dev/sdw,/dev/sdz
As for the recovery log... I'm afraid I might have overwritten it when I mistakenly ran "rear -D mkrescue/mkbackup/recover" on the target VM, before it hit me that it was from the backup...
But from your initial comments, it seems that the target machine should have exactly the same resources as the original machine, as in devices?
jsmeix commented at 2023-11-15 08:17:¶
@lcascales
is your original system still up and running
so you could redo things like "rear mkbackup"
or is your original system destroyed now
and you need to recover with what you have?
If your original system is still up and running,
see the section
"Debugging issues with Relax-and-Recover" on
https://en.opensuse.org/SDB:Disaster_Recovery
for what information we usually need
to analyze and debug a "rear recover" failure.
In general:
Relax-and-Recover 2.4 is rather old.
It was released in June 2018, see
https://github.com/rear/rear/blob/master/doc/rear-release-notes.txt
Please test if it works for you when you use
our current ReaR GitHub master code.
See the section
"Testing current ReaR upstream GitHub master code" in
https://en.opensuse.org/SDB:Disaster_Recovery
how you can try out our current ReaR GitHub master code
without conflicts with your already installed ReaR version.
In general I recommend trying out our latest GitHub master code,
because the GitHub master code is the only place where we fix things,
and if there are issues it helps when you use exactly the code
where we could fix things.
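As a hedged illustration of the above (commands not taken from this issue): ReaR can be run directly from a Git checkout without replacing the installed package, because the rear script in the checkout uses the scripts under its own usr/share/rear directory.
git clone https://github.com/rear/rear.git
cd rear
# run the checked-out code instead of the installed /usr/sbin/rear
./usr/sbin/rear -V
./usr/sbin/rear -d -D mkbackup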
In general we at ReaR upstream do not support older ReaR versions.
We at ReaR upstream do not outright reject issues with older ReaR versions
(e.g. we may answer easy-to-solve questions also for older ReaR versions)
but we do not spend much time on issues with older ReaR versions
because we do not (and cannot) fix issues in released ReaR versions.
Issues in released ReaR versions are not fixed by us (by ReaR upstream).
Issues in released ReaR versions that got fixed in current ReaR upstream
GitHub master code might be fixed (if the fix can be backported with
reasonable effort) by the Linux distributor from whom you got ReaR.
In case of Oracle Linux you may contact Oracle directly
provided you have an appropriate support contract.
pcahyna commented at 2023-11-15 09:06:¶
Original disk /dev/sdb does not exist (with same size) in the target system
Using /dev/sda (the only appropriate) for recreating /dev/sdb
Current disk mapping table (source -> target):
/dev/sdb /dev/sda
UserInput -I LAYOUT_MIGRATION_CONFIRM_MAPPINGS needed in /usr/share/rear/layout/prepare/default/300_map_disks.sh line 211
Confirm or edit the disk mapping
1) Confirm disk mapping and continue 'rear recover'
2) Edit disk mapping (/var/lib/rear/layout/disk_mappings)
3) Use Relax-and-Recover shell and return back to here
4) Abort 'rear recover'
(default '1' timeout 300 seconds)
2
UserInput: Valid choice number result 'Edit disk mapping (/var/lib/rear/layout/disk_mappings)'
/dev/sda
~
~
:wq
"/var/lib/rear/layout/disk_mappings" 1 line, 9 characters written
Current disk mapping table (source -> target):
/dev/sda
This does not look right. Isn't every line in the mapping file supposed to contain two entries, source and target? What did the file contain before you started editing it? And why change it at all - if you are replacing /dev/sdb with /dev/sda, the mapping looked correct as it was in the beginning?
pcahyna commented at 2023-11-15 09:21:¶
TLDR: I suppose you should just redo the recovery and not touch the mapping file; it looks like ReaR determined the mapping correctly with its automatism.
lcascales commented at 2023-11-15 09:41:¶
Yes, I was only given a heads-up (after the fact) that /dev/sdb is in fact
the boot disk...
Well, I'm going to redo the process and will give feedback soon.
pcahyna commented at 2023-11-15 10:08:¶
Well, editing the mapping file this way would be wrong in any case; it is supposed to contain "source target" pairs.
Recent ReaR performs some validation of the mapping file entries, but ReaR 2.4 seems to take them as they are and possibly produce nonsense if they are invalid.
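As a hedged illustration (reconstructed from the output quoted above, not from the actual file): a valid /var/lib/rear/layout/disk_mappings file contains one "source target" pair per line, original device first and replacement device second, so for this recovery it would be the single line that ReaR had generated automatically:
/dev/sdb /dev/sda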
I also second the general suggestion to contact Oracle support about the version they ship.
jsmeix commented at 2023-11-15 10:32:¶
@lcascales
only to be on the safe side:
At
Original disk /dev/sdb does not exist (with same size) in the target system
Using /dev/sda (the only appropriate) for recreating /dev/sdb
Current disk mapping table (source -> target):
/dev/sdb /dev/sda
before you
Confirm disk mapping and continue 'rear recover'
ensure that your /dev/sda is really the right disk
where "rear recover" should recreate the system,
because "rear recover" will overwrite everything there is
on your /dev/sda disk.
I write this because in
https://github.com/rear/rear/issues/3081#issuecomment-1812113185
you talked about "/dev/sdb is in fact the boot disk".
Currently I cannot see what disks you have
on your original system for what purpose
versus
what disks there are on your replacement hardware
and for what purpose these are intended.
jsmeix commented at 2023-11-15 10:46:¶
I think in
https://github.com/rear/rear/issues/3081#issuecomment-1811984210
Current disk mapping table (source -> target):
/dev/sda
UserInput -I LAYOUT_MIGRATION_CONFIRM_MAPPINGS needed in /usr/share/rear/layout/prepare/default/300_map_disks.sh line 211
Confirm or edit the disk mapping
1) Confirm disk mapping and continue 'rear recover'
2) Edit disk mapping (/var/lib/rear/layout/disk_mappings)
3) Use Relax-and-Recover shell and return back to here
4) Abort 'rear recover'
(default '1' timeout 300 seconds)
1
UserInput: Valid choice number result 'Confirm disk mapping and continue 'rear recover''
User confirmed disk mapping
Disk /dev/sdb and all dependant devices will not be recreated
shows how that attempt failed.
The
Disk ... and all dependant devices will not be recreated
message comes in ReaR 2.4 from
https://github.com/rear/rear/blob/rear-2.4/usr/share/rear/layout/prepare/default/300_map_disks.sh#L237
lcascales commented at 2023-11-15 12:37:¶
Well, I added one more disk to the VM (so two disks of 80G), just to make it equal to the original machine.
So no go, with that size anyway...
RESCUE prdmxedb2c:~ # rear -d -v recover
Relax-and-Recover 2.4 / Git
Using log file: /var/log/rear/rear-prdmxedb2c.log
Running workflow recover within the ReaR rescue/recovery system
Starting required daemons for NFS: RPC portmapper (portmap or rpcbind) and rpc.statd if available.
Started RPC portmapper 'rpcbind'.
RPC portmapper 'rpcbind' available.
RPC status rpc.statd available.
Started rpc.idmapd.
For backup restore using 2023-11-08-1546-F.tar.gz 2023-11-10-0930-D.tar.gz
Will do driver migration (recreating initramfs/initrd)
Calculating backup archive size
Backup archive size is 2.8G /tmp/rear.ZkW5KNDPoku5qUE/outputfs/prdmxedb2c/2023-11-08-1546-F.tar.gz (compressed)
Calculating backup archive size
Backup archive size is 72M /tmp/rear.ZkW5KNDPoku5qUE/outputfs/prdmxedb2c/2023-11-10-0930-D.tar.gz (compressed)
Comparing disks
Device sdb has size 85899345920 but 1600287760384 is expected (needs manual configuration)
Switching to manual disk layout configuration
Original disk /dev/sdb does not exist (with same size) in the target system
UserInput -I LAYOUT_MIGRATION_REPLACEMENT_SDB needed in /usr/share/rear/layout/prepare/default/300_map_disks.sh line 176
Choose an appropriate replacement for /dev/sdb
1) /dev/sda
2) /dev/sdb
3) Do not map /dev/sdb
4) Use Relax-and-Recover shell and return back to here
(default '1' timeout 300 seconds)
1
UserInput: Valid choice number result '/dev/sda'
Using /dev/sda (chosen by user) for recreating /dev/sdb
Current disk mapping table (source -> target):
/dev/sdb /dev/sda
UserInput -I LAYOUT_MIGRATION_CONFIRM_MAPPINGS needed in /usr/share/rear/layout/prepare/default/300_map_disks.sh line 211
Confirm or edit the disk mapping
1) Confirm disk mapping and continue 'rear recover'
2) Edit disk mapping (/var/lib/rear/layout/disk_mappings)
3) Use Relax-and-Recover shell and return back to here
4) Abort 'rear recover'
(default '1' timeout 300 seconds)
UserInput: No real user input (empty or only spaces) - using default input
UserInput: Valid choice number result 'Confirm disk mapping and continue 'rear recover''
User confirmed disk mapping
Examining /dev/sda to automatically resize its last active partition
Checking /dev/sda1 if it is the last partition on /dev/sda
Checking /dev/sda2 if it is the last partition on /dev/sda
Checking /dev/sda3 if it is the last partition on /dev/sda
Checking /dev/sda4 if it is the last partition on /dev/sda
Found 'rear-noname' partition /dev/sda4 as last partition on /dev/sda
Determining if last partition /dev/sda4 is resizeable
Determining new size for last partition /dev/sda4
ERROR: No space for last partition /dev/sda4 on new disk (new last partition size would be less than 1 MiB)
Aborting due to an error, check /var/log/rear/rear-prdmxedb2c.log for details
Exiting rear recover (PID 1562) and its descendant processes
Running exit tasks
You should also rm -Rf --one-file-system /tmp/rear.ZkW5KNDPoku5qUE
Terminated
But other than that, it could work without an issue I guess?
lcascales commented at 2023-11-15 12:57:¶
I write this because in #3081 (comment) you talked about "/dev/sdb is in fact the boot disk". Currently I cannot see what disks you have on your original system for what purpose versus what disks there are on your replacement hardware and for what purpose these are intended.
# cat /var/lib/rear/layout/config/df.txt
Filesystem 1048576-blocks Used Available Capacity Mounted on
/dev/mapper/ol-root 51175M 8232M 42944M 17% /
/dev/sdb2 1014M 239M 776M 24% /boot
/dev/mapper/3624a9370ed4226a2973a4882000113e9 204700M 16064M 188637M 8% /prdgpodb2c_mysql
/dev/mapper/ol-opt 16374M 107M 16268M 1% /opt
/dev/mapper/ol-var 104394M 1055M 103339M 2% /var
/dev/sdb1 200M 8M 193M 4% /boot/efi
/dev/mapper/3624a9370ed4226a2973a4882000113e8 1228200M 33M 1228168M 1% /postgres
/dev/mapper/3624a9370ed4226a2973a4882000113e6 818800M 33M 818768M 1% /pg_wal_archive
jsmeix commented at 2023-11-15 13:01:¶
@lcascales
normally you need a sufficiently big replacement disk.
For details see the section about
# AUTORESIZE_PARTITIONS
# AUTORESIZE_EXCLUDE_PARTITIONS
# AUTOSHRINK_DISK_SIZE_LIMIT_PERCENTAGE
# AUTOINCREASE_DISK_SIZE_THRESHOLD_PERCENTAGE
in usr/share/rear/conf/default.conf
which is for ReaR 2.4 online at
https://github.com/rear/rear/blob/rear-2.4/usr/share/rear/conf/default.conf#L368
Because you use LVM, and this only autoshrinks the partitions
but does not resize the volumes on top of the affected partitions,
autoshrinking likely does not work in your case:
when partitions that are used as PVs for LVM are autoshrunk,
there is likely not sufficient space left
to create the logical volumes (unless you have sufficient
unused space so that there is still enough room
on the autoshrunk PVs).
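As a hedged example only (the variable names are the ones listed above from the ReaR 2.4 default.conf; the values are illustrative assumptions, not recommendations for this system), such settings would go into /etc/rear/local.conf:
# let ReaR automatically resize partitions to fit a differently sized target disk
AUTORESIZE_PARTITIONS=true
# partitions that should never be resized
AUTORESIZE_EXCLUDE_PARTITIONS=( boot swap efi )
# thresholds in percent for the automatic last-partition resizing,
# see the comments in usr/share/rear/conf/default.conf for their exact meaning
AUTOSHRINK_DISK_SIZE_LIMIT_PERCENTAGE=10
AUTOINCREASE_DISK_SIZE_THRESHOLD_PERCENTAGE=10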
jsmeix commented at 2023-11-15 13:33:¶
@lcascales
FYI:
Since ReaR version 2.7 we have
https://github.com/rear/rear/pull/2591
pcahyna commented at 2023-11-15 13:34:¶
Sorry, I had not realized at first that you are trying to replace a 1.5 TB disk with an 80 GB one.
lcascales commented at 2023-11-15 15:12:¶
Hmm, going to try assigning a disk with that capacity, using thin provisioning.
lcascales commented at 2023-11-15 16:10:¶
It worked, I should have thought of thin provisioning earlier... waiting on some credentials for the real machine to log on, but at least that part is sorted.
Thanks for your insights.
As for ReaR 2.4, it seems it is the latest version supported by
Oracle/CentOS/RHEL 7, and our client still doesn't feel the need to
upgrade...
lcascales commented at 2023-11-15 17:42:¶
Update: after detaching the ISO the system booted correctly, and we're going to do further tests on the machine in our lab environment to check the reliability of the backup.
jsmeix commented at 2023-11-16 08:30:¶
A generic side note FYI:
When an older ReaR version (like ReaR 2.4)
is the latest version that is supported
by this or that Linux distribution
or by another organization or vendor,
and the user of this ReaR version
prefers to use that older ReaR version,
then the user must ask for support from his
Linux distribution or organization or vendor
who supports that older ReaR version.
Cf. above
https://github.com/rear/rear/issues/3081#issuecomment-1811992064
[Export of Github issue for rear/rear.]