#2617 Issue closed: ReaR image does not boot in UEFI (CentOS 7)

Labels: support / question, no-issue-activity

cvijayvinoth opened issue at 2021-05-21 07:30:

Relax-and-Recover (ReaR) Issue Template

Fill in the following items before submitting a new issue
(quick response is not guaranteed with free support):

  • ReaR version ("/usr/sbin/rear -V"):
    Relax-and-Recover 2.6 / Git

  • OS version ("cat /etc/os-release" or "lsb_release -a" or "cat /etc/rear/os.conf"):

NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"

CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"
  • ReaR configuration files ("cat /etc/rear/site.conf" and/or "cat /etc/rear/local.conf"):
OUTPUT=ISO
BACKUP=RSYNC
RSYNC_PREFIX="yuvaraj1_${HOSTNAME}"
BACKUP_PROG="/var/www/html/imageBackup/rsync"
OUTPUT_URL=rsync://yuvaraj1@192.168.1.123::rsync_backup
BACKUP_URL=rsync://yuvaraj1@192.168.1.123::rsync_backup
BACKUP_RSYNC_OPTIONS+=(-z --progress --password-file=/var/www/html/xxxxx/xxxx)
ISO_DIR="/var/www/html/imageBackup/iso/$HOSTNAME"
MESSAGE_PREFIX="$$: "
PROGRESS_MODE="plain"
AUTOEXCLUDE_PATH=( /tmp )
PROGRESS_WAIT_SECONDS="1"
#export TMPDIR="$(</etc/rear/path.txt)/imageBackup/iso/"
export TMPDIR="/var/www/html/imageBackup/iso/"
PXE_RECOVER_MODE=automatic
ISO_FILES=("/var/www/html/imageBackup/rsync")
ISO_PREFIX="${HOSTNAME}"
ISO_DEFAULT="automatic"
  • Hardware (PC or PowerNV BareMetal or ARM) or virtual machine (KVM guest or PowerVM LPAR):
    virtual machine

  • System architecture (x86 compatible or PPC64/PPC64LE or what exact ARM device):
    x86 compatible

  • Firmware (BIOS or UEFI or Open Firmware) and bootloader (GRUB or ELILO or Petitboot):
    UEFI & GRUB

  • Storage (local disk or SSD) and/or SAN (FC or iSCSI or FCoE) and/or multipath (DM or NVMe):
    local

  • Storage layout ("lsblk -ipo NAME,KNAME,PKNAME,TRAN,TYPE,FSTYPE,SIZE,MOUNTPOINT" or "lsblk" as makeshift):

NAME              MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
sda                 8:0    0   20G  0 disk
├─sda1              8:1    0 18.1G  0 part
│ └─md127           9:127  0   18G  0 raid1
│   ├─centos-root 253:0    0 16.2G  0 lvm   /
│   └─centos-swap 253:1    0  1.9G  0 lvm   [SWAP]
├─sda2              8:2    0  977M  0 part
│ └─md125           9:125  0  977M  0 raid1 /boot/efi
└─sda3              8:3    0  977M  0 part
  └─md126           9:126  0  976M  0 raid1 /boot
sdb                 8:16   0   20G  0 disk
├─sdb1              8:17   0 18.1G  0 part
│ └─md127           9:127  0   18G  0 raid1
│   ├─centos-root 253:0    0 16.2G  0 lvm   /
│   └─centos-swap 253:1    0  1.9G  0 lvm   [SWAP]
├─sdb2              8:18   0  977M  0 part
│ └─md125           9:125  0  977M  0 raid1 /boot/efi
└─sdb3              8:19   0  977M  0 part
  └─md126           9:126  0  976M  0 raid1 /boot
sdc                 8:32   0   20G  0 disk
└─sdc1              8:33   0   20G  0 part
  └─md0             9:0    0   20G  0 raid1 /mnt/raid1
sdd                 8:48   0   20G  0 disk
└─sdd1              8:49   0   20G  0 part
  └─md0             9:0    0   20G  0 raid1 /mnt/raid1
sr0                11:0    1 1024M  0 rom
  • Description of the issue (ideally so that others can reproduce it):
    The OS does not boot after recovery.

  • Workaround, if any:
    Attached the recover log and the disklayout.conf file.

  • Attachments, as applicable ("rear -D mkrescue/mkbackup/recover" debug log files): rear -D recover
    VirtualBox_centos-uefi-yuvi_21_05_2021_12_22_40
    disklayout.conf.txt
    rear-cent7-uefi.log

pcahyna commented at 2021-05-21 08:15:

Do you have some logs from the emergency mode? As the message suggests, journalctl -xb can be used to view them.

If the system has booted up to this point, I think that the problem is not in EFI support.
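
For example, from the emergency shell something along these lines could be used to capture the logs to a file for attaching here (a rough sketch; <unit> is a placeholder for whatever unit shows up as failed):

journalctl -xb > /root/emergency-boot.log   # journal of the current boot, written to a file
systemctl --failed                          # list units that failed to start
journalctl -xb -u <unit>                    # details for one failed unit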

pcahyna commented at 2021-05-21 09:27:

I can see the following suspect message in the log:

863: 2021-05-21 02:41:40.400940301 Warning: rsync --fake-super not possible on system (192.168.1.123) (please upgrade rsync to 3.x)

What is the version of rsync on your rsync server? What OS/version is the rsync server running?

Also, there are some other errors:

863: 2021-05-21 02:41:47.041860918 Source function: 'source /usr/share/rear/verify/RSYNC/GNU/Linux/600_check_rsync_xattr.sh' returns 1
(...)
+ source /usr/share/rear/restore/RSYNC/default/800_copy_restore_log.sh
gzip: /mnt/local//root/restore-20210521.*.log: No such file or directory
863: 2021-05-21 02:45:51.460176691 Source function: 'source /usr/share/rear/restore/RSYNC/default/800_copy_restore_log.sh' returns 1
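
The --fake-super warning appears to be raised because the remote rsync is older than 3.x (as the message itself says). Assuming shell access to the backup host, the versions on both ends could be checked roughly like this:

rsync --version | head -n1                                # client / rescue system
ssh yuvaraj1@192.168.1.123 'rsync --version | head -n1'   # rsync server, if SSH access is available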

cvijayvinoth commented at 2021-05-25 04:38:

Do you have some logs from the emergency mode? As the message suggests, journalctl -xb can be used to view them.

If the system has booted up to this point, I think that the problem is not in EFI support.

I was unable to get this log. After entering the root password, the system boots up.

cvijayvinoth commented at 2021-05-25 05:23:

I can see the following suspect message in the log:

863: 2021-05-21 02:41:40.400940301 Warning: rsync --fake-super not possible on system (192.168.1.123) (please upgrade rsync to 3.x)

What is the version of rsync on your rsync server? What OS/version is the rsync server running?

Also, there are some other errors:

863: 2021-05-21 02:41:47.041860918 Source function: 'source /usr/share/rear/verify/RSYNC/GNU/Linux/600_check_rsync_xattr.sh' returns 1
(...)
+ source /usr/share/rear/restore/RSYNC/default/800_copy_restore_log.sh
gzip: /mnt/local//root/restore-20210521.*.log: No such file or directory
863: 2021-05-21 02:45:51.460176691 Source function: 'source /usr/share/rear/restore/RSYNC/default/800_copy_restore_log.sh' returns 1

I am using rsync version 2.6.9 (protocol version 29) on both the server and the local machine.

pcahyna commented at 2021-05-25 08:20:

2.6.9 is surprisingly old for CentOS 7; where does the package come from? I think you should upgrade it: such an old version probably won't work very well with an unprivileged remote user (the Warning above should probably be turned into an Error).
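
On CentOS 7 the distribution package already provides rsync 3.x, so one option (a suggestion, assuming the backup uses the system rsync) would be to install or update the packaged version and verify it:

yum install rsync            # or 'yum update rsync' if the package is already installed
rsync --version | head -n1   # should now report 3.x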

pcahyna commented at 2021-05-25 08:23:

Even RHEL 6 has rsync 3.0.6 protocol version 30.

cvijayvinoth commented at 2021-05-25 08:27:

We are using our own custom rsync binary. I have also noticed a dependency failure while rebooting the machine (attached below).
VirtualBox_centos-uefi11_25_05_2021_13_51_56

cvijayvinoth commented at 2021-05-25 08:27:

Let me try with the latest rsync too.

cvijayvinoth commented at 2021-05-25 12:51:

client -- rsync version 3.1.2 protocol version 31
server -- rsync version 3.1.1 protocol version 31

rear-cent7-uefi.log

pcahyna commented at 2021-05-25 13:19:

@cvijayvinoth And does the restored system work OK now?

cvijayvinoth commented at 2021-05-25 13:35:

@pcahyna: No, I am facing the same issue.

pcahyna commented at 2021-05-25 13:37:

Looks like an issue with /dev/md0 then, according to the screenshot? Can you mount it manually?
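
A few commands that may help narrow it down on the restored system (a sketch; adjust the device name if the array was renumbered):

cat /proc/mdstat                      # which md arrays were assembled and under which device names
mdadm --detail /dev/md0               # state, Name and UUID of the array
mount /dev/md0 /mnt/raid1             # try the mount by hand
journalctl -b | grep -iE 'md|raid'    # boot messages about array assembly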

cvijayvinoth commented at 2021-06-09 04:20:

@pcahyna: Sorry for the delayed response. Yes, it works fine if I unmount it and manually mount it. The issue is that it shows /dev/md0 only right after recovery, but /dev/md0 changed to /dev/md128 after rebooting the machine.

pcahyna commented at 2021-06-10 14:24:

/dev/md0 changed to /dev/md128 after rebooting the machine.

Interesting. Before backup it was /dev/md0, right? What does /etc/fstab in the restored system contain? /dev/md0 or /dev/md128? (I suppose the former?) So the problem is that /dev/md0 got renamed to /dev/md128? If you mount /dev/md128 manually, does it contain the right content?

I also found this strange line in disklayout. Not sure whether it is related to the problem, but it does not sound right.

raid /dev/md0 metadata=1.2 level=raid1 raid-devices=2 uuid=4ce022fa:8f7182df:079c9db2:46522e6a name= name=boot_efi devices=/dev/sdc1,/dev/sdd1

Note the empty name= followed by name=boot_efi. What is the correct name of the array?
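
To check, the name stored in the array's superblock can be compared with what ReaR recorded (a sketch, using the member devices from the disklayout line above):

mdadm --detail /dev/md0 | grep -E 'Name|UUID'       # name/UUID of the assembled array
mdadm --examine /dev/sdc1 | grep -E 'Name|UUID'     # the same information read from a member device
grep '^raid' /var/lib/rear/layout/disklayout.conf   # the raid entries ReaR saved for recovery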

pcahyna commented at 2021-06-10 14:37:

Also, you pasted the lsblk output from the original system above (Storage layout). Is there a difference in the output on the restored system?

cvijayvinoth commented at 2021-06-11 04:35:

/dev/md0 changed to /dev/md128 after rebooting the machine.

Interesting. Before backup it was /dev/md0, right? What does /etc/fstab in the restored system contain? /dev/md0 or /dev/md128? (I suppose the former?) So the problem is that /dev/md0 got renamed to /dev/md128? If you mount /dev/md128 manually, does it contain the right content?

I also found this strange line in disklayout. Not sure whether it is related to the problem, but it does not sound right.

raid /dev/md0 metadata=1.2 level=raid1 raid-devices=2 uuid=4ce022fa:8f7182df:079c9db2:46522e6a name= name=boot_efi devices=/dev/sdc1,/dev/sdd1

Note the empty name= followed by name=boot_efi. What is the correct name of the array?

Yes, before the backup it was /dev/md0.
/etc/fstab in the restored system contained /dev/md0 only (I manually removed that entry after recovery).
Yes, if I manually mount /dev/md128 it contains the right content.

cvijayvinoth commented at 2021-06-11 05:27:

Also, you pasted the lsblk output from the original system above (Storage layout). Is there a difference in the output on the restored system?

NAME              MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda                 8:0    0    20G  0 disk  
├─sda1              8:1    0  18.1G  0 part  
│ └─md127           9:127  0    18G  0 raid1 
│   ├─centos-root 253:0    0  16.2G  0 lvm   /
│   └─centos-swap 253:1    0   1.9G  0 lvm   [SWAP]
├─sda2              8:2    0   977M  0 part  
│ └─md124           9:124  0   977M  0 raid1 /boot/efi
└─sda3              8:3    0   977M  0 part  
  └─md126           9:126  0   976M  0 raid1 /boot
sdb                 8:16   0    20G  0 disk  
├─sdb1              8:17   0  18.1G  0 part  
│ └─md127           9:127  0    18G  0 raid1 
│   ├─centos-root 253:0    0  16.2G  0 lvm   /
│   └─centos-swap 253:1    0   1.9G  0 lvm   [SWAP]
├─sdb2              8:18   0   977M  0 part  
│ └─md124           9:124  0   977M  0 raid1 /boot/efi
└─sdb3              8:19   0   977M  0 part  
  └─md126           9:126  0   976M  0 raid1 /boot
sdc                 8:32   0    20G  0 disk  
└─sdc1              8:33   0    20G  0 part  
  └─md125           9:125  0    20G  0 raid1 /mnt/raid1
sdd                 8:48   0    20G  0 disk  
└─sdd1              8:49   0    20G  0 part  
  └─md125           9:125  0    20G  0 raid1 /mnt/raid1
sr0                11:0    1 401.9M  0 rom   
sr1                11:1    1  1024M  0 rom

pcahyna commented at 2021-06-11 07:54:

The storage layout is strange, because you said that /mnt/raid1 got renamed from /dev/md0 to /dev/md128, but the listing shows it as /dev/md125, not 128.

Also, /dev/md125 (the EFI system partition) got renamed to /dev/md124.
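
If the arrays keep coming up under different md numbers on the restored system, two common workarounds (generic mdraid practice, not something ReaR does for you) are to mount the filesystem by UUID instead of the /dev/mdX name, or to record the arrays in /etc/mdadm.conf and rebuild the initramfs, roughly:

blkid /dev/md128                            # note the filesystem UUID, then use UUID=... in /etc/fstab
mdadm --detail --scan >> /etc/mdadm.conf    # pin the array names for the next boot
dracut -f                                   # regenerate the initramfs on CentOS 7 so the change is picked up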

cvijayvinoth commented at 2021-06-14 06:44:

I lost access to the recovered machine. When I tried to perform the recovery once again on a new machine, I got this output.

github-actions commented at 2021-08-14 02:08:

Stale issue message


[Export of Github issue for rear/rear.]