#1952 Issue closed: Rear recovery does not start due to "No code has been generated to recreate pv:/dev/sda2 (lvmdev)."
Labels: support / question, fixed / solved / done

chrismorgan240 opened issue at 2018-11-05 14:17:
Relax-and-Recover (ReaR) Issue Template

Fill in the following items before submitting a new issue
(quick response is not guaranteed with free support):

- ReaR version ("/usr/sbin/rear -V"):

Relax-and-Recover 2.3-git.3007.056bfdb.master.changed / 2018-06-05

- OS version ("cat /etc/rear/os.conf" or "lsb_release -a" or "cat /etc/os-release"):

OS_VENDOR=RedHatEnterpriseServer OS_VERSION=7
ARCH='Linux-i386'
OS='GNU/Linux'
OS_VERSION='7'
OS_VENDOR='RedHatEnterpriseServer'
OS_VENDOR_VERSION='RedHatEnterpriseServer/7'
OS_VENDOR_ARCH='RedHatEnterpriseServer/i386'
- ReaR configuration files ("cat /etc/rear/site.conf" and/or "cat /etc/rear/local.conf"):

RESCUE dc1dsydb106:~ # cat /etc/rear/site.conf
ONLY_INCLUDE_VG=( "vg_root" )
GRUB_RESCUE=
OUTPUT=ISO
AUTOEXCLUDE_MULTIPATH=y
BACKUP=NETFS
BACKUP_URL=nfs://10.100.11.61/backups
BACKUP_PROG=tar
BACKUP_OPTIONS="nfsvers=3,nolock"
ONLY_INCLUDE_VG=( "vg_root" )
#BACKUP_PROG_EXCLUDE=("${BACKUP_PROG_EXCLUDE[@]}" '/u01*' '/oraexp/*' '/oralogs/*' '/opt/oem/*' '/syncscp/*' )
- Hardware (PC or PowerNV BareMetal or ARM) or virtual machine (KVM guest or PowerVM LPAR):

PC

- System architecture (x86 compatible or PPC64/PPC64LE or what exact ARM device):

x86

- Firmware (BIOS or UEFI or Open Firmware) and bootloader (GRUB or ELILO or Petitboot):

BIOS / GRUB

- Storage (local disk or SSD) and/or SAN (FC or iSCSI or FCoE) and/or multipath (DM or NVMe):

root VG is local disk
- Description of the issue (ideally so that others can reproduce it):

"rear recover" does not start; it stops with the "No code has been generated" message.
Selecting option 4 to continue then displays:

No code has been generated to recreate /dev/vg_root (lvmgrp). To recreate it manually add code to /var/lib/rear/layout/diskrestore.sh or abort. Manually add code that recreates /dev/vg_root (lvmgrp)

Selecting 4 again then shows the same message for the vg_root LVs.
disklayout.conf does have entries for the above.
- Workaround, if any:

None - we have had a fatal root filesystem failure on this host and are trying to recover it using rear.
- Attachments, as applicable ("rear -D mkrescue/mkbackup/recover" debug log files):

RESCUE dc1dsydb106:~ # rear -D recover
Relax-and-Recover 2.3-git.3007.056bfdb.master.changed / 2018-06-05
Using log file: /var/log/rear/rear-dc1dsydb106.log
Running workflow recover within the ReaR rescue/recovery system
Starting required daemons for NFS: RPC portmapper (portmap or rpcbind) and rpc.statd if available.
Started RPC portmapper 'rpcbind'.
RPC portmapper 'rpcbind' available.
RPC status rpc.statd available.
Started rpc.idmapd.
Using backup archive '/tmp/rear.0pJQ4ASaV5jqpO6/outputfs/dc1dsydb106/backup.tar.gz'
Will do driver migration (recreating initramfs/initrd)
Calculating backup archive size
Backup archive size is 3.4G /tmp/rear.0pJQ4ASaV5jqpO6/outputfs/dc1dsydb106/backup.tar.gz (compressed)
Comparing disks
Disk configuration looks identical
UserInput -I DISK_LAYOUT_PROCEED_RECOVERY needed in /usr/share/rear/layout/prepare/default/250_compare_disks.sh line 146
Proceed with recovery (yes) otherwise manual disk layout configuration is enforced
(default 'yes' timeout 30 seconds)
yes
UserInput: No choices - result is 'yes'
User confirmed to proceed with recovery
No code has been generated to recreate pv:/dev/sda2 (lvmdev).
To recreate it manually add code to /var/lib/rear/layout/diskrestore.sh or abort.
UserInput -I ADD_CODE_TO_RECREATE_MISSING_PVDEVSDA2LVMDEV needed in /usr/share/rear/layout/prepare/default/600_show_unprocessed.sh line 33
Manually add code that recreates pv:/dev/sda2 (lvmdev)
1) View /var/lib/rear/layout/diskrestore.sh
2) Edit /var/lib/rear/layout/diskrestore.sh
3) Go to Relax-and-Recover shell
4) Continue 'rear recover'
5) Abort 'rear recover'
(default '4' timeout 300 seconds)
5
UserInput: Valid choice number result 'Abort 'rear recover''
ERROR: User chose to abort 'rear recover' in /usr/share/rear/layout/prepare/default/600_show_unprocessed.sh
Aborting due to an error, check /var/log/rear/rear-dc1dsydb106.log for details
Exiting rear recover (PID 6209) and its descendant processes
Running exit tasks
You should also rm -Rf /tmp/rear.0pJQ4ASaV5jqpO6
Terminated
jsmeix commented at 2018-11-05 14:39:

@chrismorgan240
please attach your original disklayout.conf
(I mean the one from before you ran "rear -D recover")
and the created /var/lib/rear/layout/diskrestore.sh
from after "rear -D recover" aborted,
and your whole debug log file for "rear -D recover",
cf. the section "Debugging issues with Relax-and-Recover" in
https://en.opensuse.org/SDB:Disaster_Recovery
gdha commented at 2018-11-05 14:42:

@chrismorgan240 perhaps also add the output of df
chrismorgan240 commented at 2018-11-05 16:23:

This is a df from a very similar machine (just IP / hostname differences):

[root@dc1dsydb206 ~]# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/vg_root-lv_root 10225664 3796740 6428924 38% /
devtmpfs 264025896 156 264025740 1% /dev
tmpfs 264036996 1302528 262734468 1% /dev/shm
tmpfs 264036996 2096 264034900 1% /run
tmpfs 264036996 0 264036996 0% /sys/fs/cgroup
/dev/sda1 503040 421928 81112 84% /boot
/dev/mapper/vg_ora-lv_adm 19912448 17478176 2434272 88% /syncscp
/dev/mapper/vg_ora-lv_exp 52402944 5370688 47032256 11% /oraexp
/dev/mapper/vg_ora-lv_log 31441664 4771876 26669788 16% /oralogs
/dev/mapper/vg_ora-lv_u01 157235200 127819172 29416028 82% /u01
/dev/mapper/vg_root-lv_usrlocal 1015040 49876 965164 5% /usr/local
/dev/mapper/vg_root-lv_home 1015040 117740 897300 12% /home
/dev/mapper/vg_root-lv_opt 1015040 498632 516408 50% /opt
/dev/mapper/vg_root-lv_tmp 10471424 57972 10413452 1% /tmp
/dev/mapper/vg_root-lv_var 5134336 1023916 4110420 20% /var
/dev/mapper/vg_root-lv_varcrash 62879744 33328 62846416 1% /var/crash
/dev/mapper/vg_ora-lv_oem 26201344 1980064 24221280 8% /opt/oem
/dev/mapper/vg_root-lv_varlog 6259968 1127840 5132128 19% /var/log
/dev/mapper/vg_root-lv_varlogaudit 1560576 884308 676268 57% /var/log/audit
/dev/asm/mysqldev-459 10485760 721720 9764040 7% /mysqldev
/dev/asm/backup_test-459 68157440 52844856 15312584 78% /backup_test
/dev/asm/mysqltest-459 10485760 728324 9757436 7% /mysqltest
/dev/asm/mysqluat-459 10485760 722000 9763760 7% /mysqluat
/dev/asm/syncd01-459 209715200 188641768 21073432 90% /syncd01
/dev/asm/synct01-459 367001600 136423028 230578572 38% /synct01
dc1dsydbcl-tacfs-vip:/synct01 367001600 136422400 230579200 38% /synctnfs01
dc1dsydbcl-dacfs-vip:/syncd01 209715200 188641280 21073920 90% /syncdnfs01
/dev/asm/acfsreptest-79 157286400 4535864 152750536 3% /acfsreptest
/dev/asm/syncu01-289 1022361600 2278556 1020083044 1% /syncu01
chrismorgan240 commented at 2018-11-05 16:24:
jsmeix commented at 2018-11-05 16:57:

@chrismorgan240
your rear-dc1dsydb106.log was made with "/bin/rear -d recover",
but -d is not -D - the latter is the needed full debugging mode
(the so-called debugscript mode).
Your disklayout.conf.txt contains for sda only

lvmdev /dev/vg_root /dev/sda2 tqugIk-a3nI-mS8g-mz8P-JJ4X-CMT8-0Buqe3 584843860
fs /dev/sda1 /boot xfs uuid=cd3460e8-d7fb-4a3c-a993-aafe3ae91860 label= options=rw,relatime,attr2,inode64,logbsize=256k,sunit=512,swidth=512,noquota
logicaldrive /dev/sda 0|A|1 raid=1 drives=1I:1:1,1I:1:2, spares= sectors=32 stripesize=256
where an XFS filesystem should be created on '/dev/sda1',
but there is no entry for how to create '/dev/sda1' - usually such an entry
would be a 'part' entry that describes the '/dev/sda1' partition on '/dev/sda'.
And there is also nothing that describes how to create '/dev/sda2' - usually
that would be another 'part' entry that describes the '/dev/sda2' partition
on '/dev/sda'.
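For comparison, the missing pieces would normally be one 'disk' line plus one 'part' line per partition. A minimal sketch of the expected format (the sizes and byte offsets below are placeholders for illustration, not values from this system):

```
disk /dev/sda <disk_size_in_bytes> msdos
part /dev/sda <partition_size_in_bytes> <start_byte> primary boot /dev/sda1
part /dev/sda <partition_size_in_bytes> <start_byte> primary lvm /dev/sda2
```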
Because you use HP SmartArray and multipath,
I am afraid I cannot really help further here because
I have no experience with HP SmartArray or multipath
(I use neither of them).
In particular regarding AUTOEXCLUDE_MULTIPATH=y
you may have a look at
https://github.com/rear/rear/issues/1925#issuecomment-428463362
which might be somehow related.
In your initial description
https://github.com/rear/rear/issues/1952#issue-377419158
you wrote "root VG is local disk",
but your disklayout.conf.txt looks as if your root VG is perhaps
not on a usual local disk (but I am no expert here)?
See
https://github.com/rear/rear/blob/master/doc/user-guide/06-layout-configuration.adoc
for how disklayout.conf looks when LVM is used on usual local disks.
gdha commented at 2018-11-06 08:24:

@chrismorgan240 You should check which HP RAID tool you are using:

2018-11-05 16:12:52.917074921 Including layout/prepare/GNU/Linux/170_include_hpraid_code.sh
/usr/share/rear/lib/_input-output-functions.sh: line 328: type: hpacucli: not found

You need to dig into that script to find out what could have gone wrong (use debug mode) to get a clear view.
jsmeix commented at 2018-11-06 14:34:

@gdha
I think "hpacucli: not found" alone does not indicate a real problem,
because I think that happens as a result of the function call
define_HPSSACLI at the beginning of
layout/prepare/GNU/Linux/170_include_hpraid_code.sh,
where define_HPSSACLI is in lib/hp_raid-functions.sh:

function define_HPSSACLI() {
    # HP Smart Storage Administrator CLI is either hpacucli, hpssacli or ssacli
    if has_binary hpacucli ; then
        HPSSACLI=hpacucli
    elif has_binary hpssacli ; then
        HPSSACLI=hpssacli
    elif has_binary ssacli ; then
        HPSSACLI=ssacli
    fi
}
The "type: hpacucli: not found" comes from "has_binary hpacucli",
but because there is no subsequent "type: hpssacli: not found"
from the subsequent "has_binary hpssacli",
I conclude that the result is HPSSACLI=hpssacli
and that hpssacli is the right HP RAID tool
that @chrismorgan240 actually uses.
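That first-match-wins behavior can be sketched with a self-contained stand-in (this is an illustration, not ReaR's actual code: has_binary is re-implemented here via 'type', and 'nonexistent_tool_x' / 'sh' / 'ls' stand in for hpacucli / hpssacli / ssacli):

```shell
#!/bin/sh
# Stand-in for ReaR's has_binary: succeeds if the command exists in PATH.
has_binary() { type "$1" >/dev/null 2>&1 ; }

# Same if/elif chain as define_HPSSACLI, with placeholder tool names:
# the first probe fails (which is what produces the harmless "not found"
# line in the debug log), the second succeeds, so later probes never run.
if has_binary nonexistent_tool_x ; then
    TOOL=nonexistent_tool_x
elif has_binary sh ; then
    TOOL=sh
elif has_binary ls ; then
    TOOL=ls
fi
echo "$TOOL"    # prints: sh
```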
chrismorgan240 commented at 2018-11-07 13:52:

I added entries for /dev/sda from another similar server:

disk /dev/sda 299966445568 msdos
part /dev/sda 524288000 2097152 primary boot /dev/sda1
part /dev/sda 299440056320 526389248 primary lvm /dev/sda2

This allows the restore to proceed:

RESCUE dc1dsydb106:/var/lib/rear/layout # rear -D recover
Relax-and-Recover 2.3-git.3007.056bfdb.master.changed / 2018-06-05
Using log file: /var/log/rear/rear-dc1dsydb106.log
Running workflow recover within the ReaR rescue/recovery system
Starting required daemons for NFS: RPC portmapper (portmap or rpcbind) and rpc.statd if available.
Started RPC portmapper 'rpcbind'.
RPC portmapper 'rpcbind' available.
RPC status rpc.statd available.
Started rpc.idmapd.
Using backup archive '/tmp/rear.yWwNxSWLLaYl7K0/outputfs/dc1dsydb106/backup.tar.gz'
Will do driver migration (recreating initramfs/initrd)
Calculating backup archive size
Backup archive size is 3.4G /tmp/rear.yWwNxSWLLaYl7K0/outputfs/dc1dsydb106/backup.tar.gz (compressed)
Comparing disks
Device sda has expected (same) size 299966445568 (will be used for recovery)
Disk configuration looks identical
UserInput -I DISK_LAYOUT_PROCEED_RECOVERY needed in /usr/share/rear/layout/prepare/default/250_compare_disks.sh line 146
Proceed with recovery (yes) otherwise manual disk layout configuration is enforced
(default 'yes' timeout 30 seconds)
yes
UserInput: No choices - result is 'yes'
User confirmed to proceed with recovery
Start system layout restoration.
Creating partitions for disk /dev/sda (msdos)
Creating LVM PV /dev/sda2
Restoring LVM VG 'vg_root'
Sleeping 3 seconds to let udev or systemd-udevd create their devices...
Creating filesystem of type xfs with mount point / on /dev/mapper/vg_root-lv_root.
Mounting filesystem /
Creating filesystem of type xfs with mount point /home on /dev/mapper/vg_root-lv_home.
Mounting filesystem /home
Creating filesystem of type xfs with mount point /opt on /dev/mapper/vg_root-lv_opt.
Mounting filesystem /opt
Creating filesystem of type xfs with mount point /tmp on /dev/mapper/vg_root-lv_tmp.
Mounting filesystem /tmp
Creating filesystem of type xfs with mount point /usr/local on /dev/mapper/vg_root-lv_usrlocal.
Mounting filesystem /usr/local
Creating filesystem of type xfs with mount point /var on /dev/mapper/vg_root-lv_var.
Mounting filesystem /var
Creating filesystem of type xfs with mount point /var/crash on /dev/mapper/vg_root-lv_varcrash.
Mounting filesystem /var/crash
Creating filesystem of type xfs with mount point /var/log on /dev/mapper/vg_root-lv_varlog.
Mounting filesystem /var/log
Creating filesystem of type xfs with mount point /var/log/audit on /dev/mapper/vg_root-lv_varlogaudit.
Mounting filesystem /var/log/audit
Creating filesystem of type xfs with mount point /boot on /dev/sda1.
Mounting filesystem /boot
Creating swap on /dev/mapper/vg_root-lv_swap
Disk layout created.
Restoring from '/tmp/rear.yWwNxSWLLaYl7K0/outputfs/dc1dsydb106/backup.tar.gz' (restore log in /var/lib/rear/restore/recover.backup.tar.gz.9126.restore.log) ...
Restored 9106 MiB [avg. 158059 KiB/sec] OK
Restored 9243 MiB in 60 seconds [avg. 157757 KiB/sec]
Restoring finished (verify backup restore log messages in /var/lib/rear/restore/recover.backup.tar.gz.9126.restore.log)
Recreating directories (with permissions) from /var/lib/rear/recovery/directories_permissions_owner_group
Running mkinitrd...
Updated initrd with new drivers for kernel 3.10.0-123.el7.x86_64.
Running mkinitrd...
Updated initrd with new drivers for kernel 3.10.0-514.26.2.el7.x86_64.
Skip installing GRUB Legacy boot loader because GRUB 2 is installed (grub-probe or grub2-probe exist).
Installing GRUB2 boot loader
Finished recovering your system. You can explore it under '/mnt/local'.
Exiting rear recover (PID 9126) and its descendant processes
Running exit tasks
You should also rm -Rf /tmp/rear.yWwNxSWLLaYl7K0
Exploring /mnt/local shows all LVs / filesystems in vg_root.
/dev/mapper looks OK:

RESCUE dc1dsydb106:/mnt/local # ls -l /dev/mapper
total 0
crw------- 1 root root 10, 236 Nov 7 13:30 control
lrwxrwxrwx 1 root root 7 Nov 7 13:30 vg_root-lv_home -> ../dm-1
lrwxrwxrwx 1 root root 7 Nov 7 13:30 vg_root-lv_opt -> ../dm-6
lrwxrwxrwx 1 root root 7 Nov 7 13:30 vg_root-lv_root -> ../dm-0
lrwxrwxrwx 1 root root 7 Nov 7 13:30 vg_root-lv_swap -> ../dm-2
lrwxrwxrwx 1 root root 8 Nov 7 13:30 vg_root-lv_temp -> ../dm-10
lrwxrwxrwx 1 root root 7 Nov 7 13:30 vg_root-lv_tmp -> ../dm-5
lrwxrwxrwx 1 root root 7 Nov 7 13:30 vg_root-lv_usrlocal -> ../dm-3
lrwxrwxrwx 1 root root 7 Nov 7 13:30 vg_root-lv_var -> ../dm-4
lrwxrwxrwx 1 root root 7 Nov 7 13:30 vg_root-lv_varcrash -> ../dm-9
lrwxrwxrwx 1 root root 7 Nov 7 13:30 vg_root-lv_varlog -> ../dm-7
lrwxrwxrwx 1 root root 7 Nov 7 13:30 vg_root-lv_varlogaudit -> ../dm-8
After a reboot the server goes into rescue mode.
No vg_root is visible via vgs / lsblk.
The /dev entries are missing.
pvscan / partprobe don't help.
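When a freshly restored VG is invisible like this, the generic LVM rescan-and-activate sequence is usually worth trying from the rescue shell. A dry-run sketch (it only prints the commands, which are standard LVM diagnostics rather than anything specific to this thread; drop the echoes to execute them for real):

```shell
#!/bin/sh
# Dry-run: print the usual LVM reactivation steps for a given VG.
# Intended to be run from the ReaR rescue system.
reactivate_vg() {
    vg="$1"
    echo "pvscan"                  # rescan block devices for PV labels
    echo "vgscan --mknodes"        # rescan for VGs and recreate /dev nodes
    echo "vgchange -a y $vg"       # activate all LVs in the VG
}

reactivate_vg vg_root
```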
jsmeix commented at 2018-11-07 16:17:

@chrismorgan240
what exactly do you mean by "Reboot and server goes to rescue mode"?
Do you mean you end up in GRUB rescue mode,
or did GRUB succeed in loading kernel and initrd and the kernel started,
but then the booting system goes into some "rescue mode",
e.g. the systemd rescue mode (i.e. systemd's rescue.target)
or runlevel 1 with SysVinit or anything else like that?
chrismorgan240 commented at 2018-11-07 16:32:

@jsmeix
It is the systemd rescue mode.
chrismorgan240 commented at 2018-11-07 17:16:

Here is a df after the restore:

RESCUE dc1dsydb106:/var/lib/rear/layout # df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 252G 0 252G 0% /dev
tmpfs 252G 0 252G 0% /dev/shm
tmpfs 252G 9.5M 252G 1% /run
tmpfs 252G 0 252G 0% /sys/fs/cgroup
/dev/mapper/vg_root-lv_root 9.8G 4.4G 5.4G 45% /mnt/local
/dev/mapper/vg_root-lv_home 994M 116M 878M 12% /mnt/local/home
/dev/mapper/vg_root-lv_opt 994M 499M 495M 51% /mnt/local/opt
/dev/mapper/vg_root-lv_tmp 10G 33M 10G 1% /mnt/local/tmp
/dev/mapper/vg_root-lv_usrlocal 994M 49M 945M 5% /mnt/local/usr/local
/dev/mapper/vg_root-lv_var 4.9G 2.0G 3.0G 39% /mnt/local/var
/dev/mapper/vg_root-lv_varcrash 60G 33M 60G 1% /mnt/local/var/crash
/dev/mapper/vg_root-lv_varlog 6.0G 1.2G 4.9G 20% /mnt/local/var/log
/dev/mapper/vg_root-lv_varlogaudit 1.5G 919M 608M 61% /mnt/local/var/log/audit
/dev/sda1 494M 413M 81M 84% /mnt/local/boot
chrismorgan240 commented at 2018-11-07 17:20:

RESCUE dc1dsydb106:/mnt/local/dev/vg_root # find /mnt/local/dev/vg_root
/mnt/local/dev/vg_root
/mnt/local/dev/vg_root/lv_temp
/mnt/local/dev/vg_root/lv_varcrash
/mnt/local/dev/vg_root/lv_tmp
/mnt/local/dev/vg_root/lv_opt
/mnt/local/dev/vg_root/lv_varlogaudit
/mnt/local/dev/vg_root/lv_varlog
/mnt/local/dev/vg_root/lv_home
/mnt/local/dev/vg_root/lv_usrlocal
/mnt/local/dev/vg_root/lv_var
/mnt/local/dev/vg_root/lv_root
/mnt/local/dev/vg_root/lv_swap
VG #PV #LV #SN Attr VSize VFree
vg_ora 1 5 0 wz--n- 324.98g 50.98g
vg_reptest 1 2 0 wzx-n- 1008.00m 8.00m
vg_root 1 11 0 wz--n- 278.87g 32.82g
vg_swap 1 1 0 wz--n- 99.98g 1008.00m
chrismorgan240 commented at 2018-11-07 17:41:

rear -D recover:

chrismorgan240 commented at 2018-11-07 17:41:

chrismorgan240 commented at 2018-11-07 18:25:

chrismorgan240 commented at 2018-11-07 18:26:

The above are boot messages - this looks to me like it could be the issue? I would appreciate advice.

chrismorgan240 commented at 2018-11-07 18:38:
gdha commented at 2018-11-08 08:12:

@chrismorgan240 try a manual relabel?

++ touch /mnt/local/.autorelabel
++ Log 'Created /.autorelabel file : after reboot SELinux will relabel all files'

It seems that the SELinux relabel was not performed for one reason or another?
I've seen something similar on CentOS 7
(https://github.com/gdha/rear-automated-testing/issues/72)
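A minimal sketch of the manual relabel trigger (assumptions: the restored system is still mounted at /mnt/local, ReaR's default mount point, and the helper function name here is made up for this example):

```shell
#!/bin/sh
# Create the .autorelabel flag file on the restored root filesystem so
# that SELinux relabels every file on the next boot of the recovered
# system. Pass the mount point of the restored root as the argument.
schedule_selinux_relabel() {
    touch "$1/.autorelabel"
}

# From the ReaR rescue shell one would typically call:
#   schedule_selinux_relabel /mnt/local
```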
gdha commented at 2018-11-14 10:10:

@chrismorgan240 Have a look at https://access.redhat.com/solutions/24845 to fix your situation.

chrismorgan240 commented at 2018-11-19 14:04:
Thanks for your help.
[Export of Github issue for rear/rear.]