#3326 Issue closed: Oracle Linux Server with Red Hat Compatible Kernel (Fedora Linux distribution): UEFI booting from ReaR ISO image does not work¶
Labels: support / question, fixed / solved / done
StefanFromMunich opened issue at 2024-10-04 09:21:¶
- ReaR version ("/usr/sbin/rear -V"): Relax-and-Recover 2.7 / Git
- If your ReaR version is not the current version, explain why you can't upgrade: Does not apply
- OS version ("cat /etc/os-release" or "lsb_release -a" or "cat /etc/rear/os.conf"):
NAME="Oracle Linux Server"
VERSION="8.10"
ID="ol"
ID_LIKE="fedora"
VARIANT="Server"
VARIANT_ID="server"
VERSION_ID="8.10"
PLATFORM_ID="platform:el8"
PRETTY_NAME="Oracle Linux Server 8.10"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:oracle:linux:8:10:server"
HOME_URL="https://linux.oracle.com/"
BUG_REPORT_URL="https://github.com/oracle/oracle-linux"
ORACLE_BUGZILLA_PRODUCT="Oracle Linux 8"
ORACLE_BUGZILLA_PRODUCT_VERSION=8.10
ORACLE_SUPPORT_PRODUCT="Oracle Linux"
ORACLE_SUPPORT_PRODUCT_VERSION=8.10
- ReaR configuration files ("cat /etc/rear/site.conf" and/or "cat /etc/rear/local.conf"):
OUTPUT=ISO
OUTPUT_URL=nfs://193.14.236.134/volume1/Rear
BACKUP=NETFS
BACKUP_URL=nfs://193.14.236.134/volume1/Rear
BACKUP_PROG_EXCLUDE=("${BACKUP_PROG_EXCLUDE[@]}" '/media' '/var/tmp' '/var/crash' '/u02' '/dev/nvme0n1' '/dev/nvme1n1' '/dev/nvme2n1' '/dev/nvme3n1' '/dev/nvme4n1' '/dev/nvme5n1' )
EXCLUDE_COMPONENTS=( "${EXCLUDE_COMPONENTS[@]}" "/dev/nvme0n1" "/dev/nvme1n1" "/dev/nvme2n1" "/dev/nvme3n1" "/dev/nvme4n1" "/dev/nvme5n1" )
NETFS_KEEP_OLD_BACKUP_COPY=
UEFI_BOOTLOADER=/boot/efi/EFI/redhat/grubx64.efi
SECURE_BOOT_BOOTLOADER=/boot/efi/EFI/redhat/shimx64.efi
EXCLUDE_MOUNTPOINTS=( '/u03/flashdata/ZVK_MUC' '/u02/oradata/ZVK_MUC' )
AUTOEXCLUDE_MULTIPATH=n
AUTORESIZE_PARTITIONS=false
- Hardware vendor/product (PC or PowerNV BareMetal or ARM) or VM (KVM guest or PowerVM LPAR): PC
- System architecture (x86 compatible or PPC64/PPC64LE or what exact ARM device): x86_64
- Firmware (BIOS or UEFI or Open Firmware) and bootloader (GRUB or ELILO or Petitboot): UEFI and bootloader GRUB
- Storage (local disk or SSD) and/or SAN (FC or iSCSI or FCoE) and/or multipath (DM or NVMe): SSD (DM and NVMe)
- Storage layout ("lsblk -ipo NAME,KNAME,PKNAME,TRAN,TYPE,FSTYPE,LABEL,SIZE,MOUNTPOINT"):
NAME KNAME PKNAME TRAN TYPE FSTYPE LABEL SIZE MOUNTPOINT
/dev/sda /dev/sda sata disk isw_raid_member 447,1G
|-/dev/sda1 /dev/sda1 /dev/sda part vfat 600M
|-/dev/sda2 /dev/sda2 /dev/sda part xfs 1G
|-/dev/sda3 /dev/sda3 /dev/sda part LVM2_member 154G
|-/dev/sda4 /dev/sda4 /dev/sda part LVM2_member 269,2G
`-/dev/md126 /dev/md126 /dev/sda raid1 424,8G
|-/dev/md126p1 /dev/md126p1 /dev/md126 md vfat 600M /boot/efi
|-/dev/md126p2 /dev/md126p2 /dev/md126 md xfs 1G /boot
|-/dev/md126p3 /dev/md126p3 /dev/md126 md LVM2_member 154G
| |-/dev/mapper/ol-root /dev/dm-0 /dev/md126p3 lvm xfs 30G /
| |-/dev/mapper/ol-swap /dev/dm-1 /dev/md126p3 lvm swap 24G [SWAP]
| `-/dev/mapper/ol-u01 /dev/dm-11 /dev/md126p3 lvm xfs 100G /u01
`-/dev/md126p4 /dev/md126p4 /dev/md126 md LVM2_member 269,2G
/dev/sdb /dev/sdb sata disk isw_raid_member 447,1G
|-/dev/sdb1 /dev/sdb1 /dev/sdb part vfat 600M
|-/dev/sdb2 /dev/sdb2 /dev/sdb part xfs 1G
|-/dev/sdb3 /dev/sdb3 /dev/sdb part LVM2_member 154G
|-/dev/sdb4 /dev/sdb4 /dev/sdb part LVM2_member 269,2G
`-/dev/md126 /dev/md126 /dev/sdb raid1 424,8G
|-/dev/md126p1 /dev/md126p1 /dev/md126 md vfat 600M /boot/efi
|-/dev/md126p2 /dev/md126p2 /dev/md126 md xfs 1G /boot
|-/dev/md126p3 /dev/md126p3 /dev/md126 md LVM2_member 154G
| |-/dev/mapper/ol-root /dev/dm-0 /dev/md126p3 lvm xfs 30G /
| |-/dev/mapper/ol-swap /dev/dm-1 /dev/md126p3 lvm swap 24G [SWAP]
| `-/dev/mapper/ol-u01 /dev/dm-11 /dev/md126p3 lvm xfs 100G /u01
`-/dev/md126p4 /dev/md126p4 /dev/md126 md LVM2_member 269,2G
/dev/nvme0n1 /dev/nvme0n1 nvme disk 5,8T
`-/dev/nvme0n1p1 /dev/nvme0n1p1 /dev/nvme0n1 nvme part LVM2_member 5,8T
|-/dev/mapper/vg_oracle-u02_zvk_muc /dev/dm-2 /dev/nvme0n1p1 lvm ext4 2,5T /u02/oradata/ZVK_MUC
|-/dev/mapper/vg_oracle-u02_zvk_da /dev/dm-3 /dev/nvme0n1p1 lvm ext4 1000G /u02/oradata/ZVK_DA
|-/dev/mapper/vg_oracle-u02_bsv /dev/dm-4 /dev/nvme0n1p1 lvm ext4 1000G /u02/oradata/BSV
|-/dev/mapper/vg_oracle-u02_kvv /dev/dm-5 /dev/nvme0n1p1 lvm ext4 500G /u02/oradata/KVV
`-/dev/mapper/vg_oracle-u02_daisy /dev/dm-6 /dev/nvme0n1p1 lvm ext4 500G /u02/oradata/DAISY
/dev/nvme3n1 /dev/nvme3n1 nvme disk 2,9T
/dev/nvme1n1 /dev/nvme1n1 nvme disk 5,8T
`-/dev/nvme1n1p1 /dev/nvme1n1p1 /dev/nvme1n1 nvme part LVM2_member 5,8T
|-/dev/mapper/vg_oracle-u02_vip /dev/dm-7 /dev/nvme1n1p1 lvm ext4 500G /u02/oradata/VIP
|-/dev/mapper/vg_oracle-u02_vbo--portal /dev/dm-8 /dev/nvme1n1p1 lvm ext4 500G /u02/oradata/VBO-Portal
|-/dev/mapper/vg_oracle-u02_bsv--portal /dev/dm-9 /dev/nvme1n1p1 lvm ext4 500G /u02/oradata/BSV-Portal
`-/dev/mapper/vg_oracle-u02_vbo /dev/dm-10 /dev/nvme1n1p1 lvm ext4 500G /u02/oradata/VBO
/dev/nvme4n1 /dev/nvme4n1 nvme disk 2,9T
/dev/nvme2n1 /dev/nvme2n1 nvme disk 2,9T
`-/dev/nvme2n1p1 /dev/nvme2n1p1 /dev/nvme2n1 nvme part ext4 2,9T /u03/flashdata/ZVK_MUC
/dev/nvme5n1 /dev/nvme5n1 nvme disk 2,9T
- Description of the issue (ideally so that others can reproduce it):
There seems to be something wrong with the ISO image.
The virtual machine cannot be booted with it.
Attached you will find three screenshots with the screen messages.
The numbering corresponds to the order in which the screen messages
appear.
After "Press any key to continue..." nothing happens.
What else do I have to do to make the machine bootable?
I also noticed that the two files xzio and fshelp,
which were not found during the boot process,
are listed in the backup log file:
block 7609484: /usr/lib/grub/x86_64-efi/xzio.mod
block 7600337: /usr/lib/grub/x86_64-efi/fshelp.mod
Only kernel.exec is listed in the ISO log file:
/usr/lib/grub/x86_64-efi/kernel.exec
The two files xzio and fshelp are not listed in the ISO log file
(a way to cross-check what actually ended up in the ISO is sketched
below, after this report).
The initrd.cgz file is included in the ISO log file:
2024-09-04 11:35:00.755448505 Created initrd.cgz with gzip default
compression (778 MiB) in 56 seconds
- Workaround, if any: No
- Attachments, as applicable ("rear -D mkrescue/mkbackup/recover" debug log files):
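As a cross-check of the module observation above, one can loop-mount the
generated ISO and look for the GRUB modules in question. This is only a
sketch: the ISO path below is a placeholder, and depending on how the
rescue medium is built the modules may sit inside the embedded EFI boot
image or the rescue initrd rather than directly on the ISO filesystem.
# Mount the rescue ISO read-only (the ISO path is a placeholder, adjust it):
sudo mkdir -p /mnt/rear-iso
sudo mount -o loop,ro /path/to/rear-rescue.iso /mnt/rear-iso
# Look for the GRUB modules that were reported as missing at boot:
sudo find /mnt/rear-iso -name 'xzio.mod' -o -name 'fshelp.mod'
# They may also be inside the rescue initrd; locate it and list its contents:
initrd=$(sudo find /mnt/rear-iso -name 'initrd.cgz' | head -n1)
sudo zcat "$initrd" | cpio -it 2>/dev/null | grep -E 'xzio\.mod|fshelp\.mod'
sudo umount /mnt/rear-iso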
StefanFromMunich commented at 2024-10-04 09:30:¶
A first analysis was performed by jsmeix in issue #3230:
according to your
#3230 (comment)
(excerpt)
There are no more problems with the multipath devices.
However, there seems to be something wrong with the ISO image.
The virtual machine cannot be booted with it.
this issue has now changed its topic/subject
from something about NVMe multipathing devices
to something about UEFI booting from ReaR's ISO image,
which is somewhat problematic for us because
we at ReaR upstream (and in particular I)
prefer separate issues for separate topics,
so that the right people who know a certain area best
can focus on a particular issue.
So ideally I wish you could re-report your
"UEFI booting from ReaR's ISO image" issue
as a new, separate issue here at GitHub.
I am not at all an expert in UEFI booting, and
I am also in general not an expert in booting,
so personally I cannot help much with booting issues.
In particular, I am not a Red Hat / Fedora / Oracle Linux user,
so I cannot reproduce issues that are specific to
those Linux distributions (and booting issues in particular
are often specific to a particular Linux distribution).
For my personal experience with UEFI booting
from ReaR's ISO image, you may read through
#3084
in particular see therein
#3084 (comment)
(and follow the links therein as needed)
and see
#3084 (comment)
and finally see
#3084 (comment)
By the way:
I think setting both
UEFI_BOOTLOADER=/boot/efi/EFI/redhat/grubx64.efi
SECURE_BOOT_BOOTLOADER=/boot/efi/EFI/redhat/shimx64.efi
is currently not supported in ReaR, because
setting SECURE_BOOT_BOOTLOADER overwrites UEFI_BOOTLOADER.
When SECURE_BOOT_BOOTLOADER is set, it specifies the
first-stage bootloader (which is shim), and shim then
runs as second-stage bootloader something that is
basically hardcoded inside shim, so what ReaR does
is copy all possible second-stage bootloaders
(basically all GRUB EFI binaries, as far as I remember)
into the ISO to make shim's second-stage bootloader
available via the ISO, cf. the comments in
usr/share/rear/output/USB/Linux-i386/100_create_efiboot.sh
starting at
# Copy UEFI bootloader:
if test -f "$SECURE_BOOT_BOOTLOADER" ; then
currently online at
https://github.com/rear/rear/blob/master/usr/share/rear/output/USB/Linux-i386/100_create_efiboot.sh#L65
This is for OUTPUT=USB, but since
#3031
it is similar to what is done for OUTPUT=ISO.
For background information on what I had done in the past,
you may have a look at
#3031
(and follow the links therein as needed).
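To make the precedence described above concrete, here is a minimal
hypothetical sketch in plain bash (not the actual code of
100_create_efiboot.sh; it assumes SECURE_BOOT_BOOTLOADER and
UEFI_BOOTLOADER are set as in the local.conf above, and EFI_BOOT_DIR
is a made-up name for the EFI directory on the rescue medium):
# Hypothetical sketch of the precedence described above, not ReaR's real code.
EFI_BOOT_DIR=/tmp/efi_boot_example
mkdir -p "$EFI_BOOT_DIR"
if test -f "$SECURE_BOOT_BOOTLOADER" ; then
    # Secure Boot case: shim becomes the first-stage bootloader that the
    # firmware starts, so a separately set UEFI_BOOTLOADER is effectively ignored,
    cp "$SECURE_BOOT_BOOTLOADER" "$EFI_BOOT_DIR/BOOTX64.efi"
    # and the GRUB EFI binaries next to shim are copied as well, so that
    # shim's hardcoded second-stage bootloader can be found on the medium:
    cp "$(dirname "$SECURE_BOOT_BOOTLOADER")"/grub*.efi "$EFI_BOOT_DIR/"
else
    # Without Secure Boot the GRUB EFI binary itself is booted directly:
    cp "$UEFI_BOOTLOADER" "$EFI_BOOT_DIR/BOOTX64.efi"
fi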
StefanFromMunich commented at 2024-10-04 09:46:¶
@jsmeix
Thank you for the detailed initial analysis.
The Red Hat Customer Portal recommends the following entries in the
local.conf configuration file:
UEFI_BOOTLOADER=/boot/efi/EFI/redhat/grubx64.efi
SECURE_BOOT_BOOTLOADER=/boot/efi/EFI/redhat/shimx64.efi
The recommendation is marked as SOLUTION VERIFIED and dated August 2,
2024. The recommendation has the following link:
https://access.redhat.com/solutions/2115051
Based on this recommendation, should I keep the two entries in the configuration file or delete the entries and then try to create a bootable ISO image again?
jsmeix commented at 2024-10-07 15:02:¶
@StefanFromMunich
I am not an Oracle Linux user, so I cannot actually help
with issues that seem to be specific to Oracle Linux.
In general, support here at ReaR upstream
is on a voluntary basis, as time permits,
as best effort without guarantee/warranty/liability
(in particular when people work on other things
that have higher priority for their employer ;-)
jsmeix commented at 2024-10-07 15:06:¶
Whoops!
I mistakenly thought this is about Red Hat,
but actually it is Oracle Linux Server.
So @StefanFromMunich,
if you have an Oracle Linux Server support contract,
it may be better to contact them directly for further help.
@pcahyna
I am sorry for causing confusion.
I edited my above comment accordingly.
StefanFromMunich commented at 2024-10-09 10:05:¶
@pcahyna
I do not have an Oracle Linux Server support contract or a Red Hat Linux
support contract.
Therefore, I need the solution from ReaR support.
I have chosen the Oracle Linux Server with the following operating
system option:
Red Hat Compatible Kernel
This means that the kernel of the Oracle Linux Server is identical to
the kernel that is supplied with Red Hat Enterprise Linux.
I am working with an x86_64 client. Red Hat Enterprise Linux equips
these clients with the Fedora Linux distribution.
The development of the Fedora Linux distribution is controlled by the
Fedora project.
The manufacturer Red Hat is the leader of the Fedora project.
Since you are the expert on Red Hat Linux distributions, it makes
sense that you are handling this topic.
So could you please tell me what steps I have to take to get a
bootable ISO image?
StefanFromMunich commented at 2024-10-10 11:06:¶
@pcahyna
The Red Hat Customer Portal recommends the following entries in the
local.conf configuration file:
UEFI_BOOTLOADER=/boot/efi/EFI/redhat/grubx64.efi
SECURE_BOOT_BOOTLOADER=/boot/efi/EFI/redhat/shimx64.efi
The recommendation is marked as SOLUTION VERIFIED and dated August 2,
2024. The recommendation has the following link:
https://access.redhat.com/solutions/2115051
Now I checked the availability of the two files on the server in the path /boot/efi/EFI/redhat/:
$ sudo ls -al /boot/efi/EFI/redhat/
drwx------. 3 root root 4096 4. Sep 09:26 .
drwx------. 5 root root 4096 12. Apr 2021 ..
-rwx------. 1 root root 134 23. Apr 19:37 BOOTX64.CSV
drwx------. 2 root root 4096 7. Jun 08:34 fonts
-rwx------. 1 root root 6511 11. Dez 2023 grub.cfg
-rwx------. 1 root root 1024 4. Sep 09:26 grubenv
-rwx------. 1 root root 2367664 7. Jun 08:34 grubx64.efi
-rwx------. 1 root root 861512 23. Apr 19:37 mmx64.efi
-rwx------. 1 root root 960272 23. Apr 19:37 shimx64.efi
-rwx------. 1 root root 963088 23. Apr 19:37 shimx64-oracle.efi
Both files are located in the path that needs to be referenced in the
configuration file.
What do you suggest next?
gdha commented at 2024-11-09 16:05:¶
@StefanFromMunich Perhaps you could include the 2 missing modules:
block 7609484: /usr/lib/grub/x86_64-efi/xzio.mod
block 7600337: /usr/lib/grub/x86_64-efi/fshelp.mod
by adding the following variable to the /etc/rear/local.conf file:
COPY_AS_IS+=( /usr/lib/grub/x86_64-efi )
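For example, the relevant part of /etc/rear/local.conf would then
look like this (a sketch based on the configuration posted at the
top of this issue):
OUTPUT=ISO
OUTPUT_URL=nfs://193.14.236.134/volume1/Rear
BACKUP=NETFS
BACKUP_URL=nfs://193.14.236.134/volume1/Rear
# Copy the whole GRUB x86_64-efi module directory (including xzio.mod
# and fshelp.mod) into the rescue system:
COPY_AS_IS+=( /usr/lib/grub/x86_64-efi )
After changing the configuration, the rescue ISO needs to be recreated,
e.g. with "rear -D mkrescue" (or "rear -D mkbackup" to also refresh the
backup), before retrying the boot.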
StefanFromMunich commented at 2024-11-22 09:21:¶
Unfortunately, the computer still cannot be booted with the new
COPY_AS_IS configuration parameter.
I have now found out that the problem is due to ReaR version 2.7:
the virtual machine can be booted with ReaR version 2.6.
So there is no alternative to using ReaR version 2.6.
We have also found that it is difficult to perform a disaster
recovery with a virtual machine.
You have to configure a lot in ReaR, and even if it works, we think
the test is not very meaningful.
We are therefore planning to replace the two SATA disks from the
original computer with two new SATA disks.
We then want to perform a disaster recovery on the new disks in the
original computer. After that, the original computer should work
perfectly again.
We will then test that.
If there are any problems with this test, I will let you know.
In any case, this incident can be closed due to the available
workaround.
jsmeix commented at 2024-11-22 13:49:¶
@StefanFromMunich
FYI regarding
We are therefore planning to replace the two SATA disks
from the original computer with two new SATA disks.
We then want to perform a disaster recovery on the new disks
in the original computer.
When the new disks do not have exactly the same size and type
as the old ones, you are doing a migration onto different
(physical or virtual) hardware.
See in conf/default.conf the description of
MIGRATION_MODE
and also the description of
DISKS_TO_BE_WIPED
and see the sections
"Prepare replacement hardware for disaster recovery"
and
"Fully compatible replacement hardware is needed"
in
https://en.opensuse.org/SDB:Disaster_Recovery
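For illustration, a migration-oriented /etc/rear/local.conf could
contain something like the following (a sketch with illustrative
values; the exact syntax and semantics of these variables are
described in conf/default.conf of your ReaR version):
# Force migration mode so that "rear recover" asks for confirmation
# before recreating the disk layout:
MIGRATION_MODE='true'
# Illustrative: explicitly name the disks that "rear recover" may wipe
# (here the two RAID member disks from the layout above); check the
# exact syntax in conf/default.conf before using this:
DISKS_TO_BE_WIPED='/dev/sda /dev/sdb'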
In particular when you use virtual disks,
I would recommend making the new ones
exactly the same size and type as the old ones
(usually one can set up various kinds of virtual disks)
to avoid "migration onto different hardware" issues,
i.e. when the old one was e.g. a
123456.0 MiB /dev/sda virtual disk, the new one
should also be a 123456.0 MiB /dev/sda virtual disk.
Regarding migration to a system with a somewhat smaller or bigger
disk, see in conf/default.conf the description of the config variables
AUTORESIZE_PARTITIONS
AUTORESIZE_EXCLUDE_PARTITIONS
AUTOSHRINK_DISK_SIZE_LIMIT_PERCENTAGE
AUTOINCREASE_DISK_SIZE_THRESHOLD_PERCENTAGE
I recommend not using AUTORESIZE_PARTITIONS="yes"
with layout/prepare/default/430_autoresize_all_partitions.sh
because that may result in badly aligned partitions, in particular
badly aligned for what flash-memory-based disks (i.e. SSDs) need,
which is usually a 4 MiB or 8 MiB alignment (a too small value
results in lower speed and a shorter lifetime of flash memory devices);
see the comment at USB_PARTITION_ALIGN_BLOCK_SIZE
in default.conf.
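As an illustration, the corresponding settings in /etc/rear/local.conf
might look like this (a sketch; AUTORESIZE_PARTITIONS=false matches the
configuration posted at the top of this issue, while the commented
percentage values are only illustrative and should be checked against
conf/default.conf):
# Keep the recorded partition layout exactly as it was (no automatic resizing):
AUTORESIZE_PARTITIONS=false
# Alternatively, leave AUTORESIZE_PARTITIONS empty so that only the last
# partition gets adapted, within percentage limits of the disk size, e.g.:
# AUTOSHRINK_DISK_SIZE_LIMIT_PERCENTAGE=2
# AUTOINCREASE_DISK_SIZE_THRESHOLD_PERCENTAGE=10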
In general, regarding system migration with ReaR
(e.g. to a system with a substantially different disk size):
migrating a system onto different hardware
(where "hardware" could also be a virtual machine)
does not "just work", cf. "Inappropriate expectations" at
https://en.opensuse.org/SDB:Disaster_Recovery
In sufficiently simple cases it may "just work", but in general
do not expect too much built-in intelligence from a program
(written in plain bash, which is not a programming language
that is primarily meant for artificial intelligence ;-)
to automatically do the annoying legwork for you.
ReaR is first and foremost meant to recreate
a system as exactly as possible as it was before.
Additionally, ReaR supports migrating a system,
but here "supports" means that ReaR provides a lot
that should help you get such a task done;
it does not mean that it will "just work" without
possibly laborious manual settings and adaptations,
with trial-and-error legwork until you have made things work
in your particular case.
[Export of Github issue for rear/rear.]