#2339 Issue closed: Recreating two disks with RAID1 onto a single disk does not work automatically¶
Labels: support / question, fixed / solved / done
oscarkcho opened issue at 2020-03-06 01:45:¶
Relax-and-Recover (ReaR) Issue Template¶
Fill in the following items before submitting a new issue
(quick response is not guaranteed with free support):
- ReaR version ("/usr/sbin/rear -V"):
  Relax-and-Recover 2.4 / Git
- OS version ("cat /etc/rear/os.conf" or "lsb_release -a" or "cat /etc/os-release"):
  Red Hat Enterprise Linux Server release 7.6 (Maipo)
- ReaR configuration files ("cat /etc/rear/site.conf" and/or "cat /etc/rear/local.conf"):
  OUTPUT=ISO
  AUTOEXCLUDE_MULTIPATH=n
  OUTPUT_URL=file:///xxxxxxxx/rear/output
  BACKUP_PROG_EXCLUDE=("${BACKUP_PROG_EXCLUDE[@]}" '/var/tmp/*' '/var/crash' .......)
  BACKUP=NETFS
  BACKUP_URL=nfs://xxxxxxxxxx/rear/restore/
- Hardware (PC or PowerNV BareMetal or ARM) or virtual machine (KVM guest or PowerVM LPAR):
  BareMetal
- System architecture (x86 compatible or PPC64/PPC64LE or what exact ARM device):
  PPC64LE
- Firmware (BIOS or UEFI or Open Firmware) and bootloader (GRUB or ELILO or Petitboot):
  Petitboot
- Storage (local disk or SSD) and/or SAN (FC or iSCSI or FCoE) and/or multipath (DM or NVMe):
  Local Disk
- Storage layout ("lsblk -ipo NAME,KNAME,PKNAME,TRAN,TYPE,FSTYPE,SIZE,MOUNTPOINT" or "lsblk" as makeshift):
- Description of the issue (ideally so that others can reproduce it):
  2020-03-05 16:08:13.365061298 Including layout/recreate/default/250_verify_mount.sh
  2020-03-05 16:08:13.370633827 ERROR: No filesystem mounted on '/mnt/local'. Stopping.
- Workaround, if any:
- Attachments, as applicable ("rear -D mkrescue/mkbackup/recover" debug log files):
2020-03-05 16:08:03.589310675 ======================
2020-03-05 16:08:03.590236710 Running 'layout/recreate' stage
2020-03-05 16:08:03.591226308 ======================
2020-03-05 16:08:03.598771118 Including layout/recreate/default/100_confirm_layout_code.sh
2020-03-05 16:08:03.602661583 UserInput: called in /usr/share/rear/layout/recreate/default/100_confirm_layout_code.sh line 26
2020-03-05 16:08:03.605929286 UserInput: Default input in choices - using choice number 1 as default input
2020-03-05 16:08:03.607439595 Confirm or edit the disk recreation script
2020-03-05 16:08:03.608852281 1) Confirm disk recreation script and continue 'rear recover'
2020-03-05 16:08:03.610338150 2) Edit disk recreation script (/var/lib/rear/layout/diskrestore.sh)
2020-03-05 16:08:03.611673007 3) View disk recreation script (/var/lib/rear/layout/diskrestore.sh)
2020-03-05 16:08:03.613022148 4) View original disk space usage (/var/lib/rear/layout/config/df.txt)
2020-03-05 16:08:03.614367454 5) Use Relax-and-Recover shell and return back to here
2020-03-05 16:08:03.615724239 6) Abort 'rear recover'
2020-03-05 16:08:03.617091099 (default '1' timeout 300 seconds)
2020-03-05 16:08:09.142830136 UserInput: 'read' got as user input '1'
2020-03-05 16:08:09.147376933 User confirmed disk recreation script
2020-03-05 16:08:09.152590581 Including layout/recreate/default/200_run_layout_code.sh
2020-03-05 16:08:09.156798777 Start system layout restoration.
WARNING: Failed to connect to lvmetad. Falling back to device scanning.
+++ set +x
2020-03-05 16:08:09.272299878 Disk layout created.
2020-03-05 16:08:09.275389294 UserInput: called in /usr/share/rear/layout/recreate/default/200_run_layout_code.sh line 98
2020-03-05 16:08:09.279827540 UserInput: Default input in choices - using choice number 1 as default input
2020-03-05 16:08:09.281442423 Confirm the recreated disk layout or go back one step
2020-03-05 16:08:09.282972556 1) Confirm recreated disk layout and continue 'rear recover'
2020-03-05 16:08:09.284362761 2) Go back one step to redo disk layout recreation
2020-03-05 16:08:09.285767085 3) Use Relax-and-Recover shell and return back to here
2020-03-05 16:08:09.287119489 4) Abort 'rear recover'
2020-03-05 16:08:09.288496568 (default '1' timeout 300 seconds)
2020-03-05 16:08:13.349940417 UserInput: 'read' got as user input '1'
2020-03-05 16:08:13.358961775 User confirmed recreated disk layout
2020-03-05 16:08:13.365061298 Including layout/recreate/default/250_verify_mount.sh
2020-03-05 16:08:13.370633827 ERROR: No filesystem mounted on '/mnt/local'. Stopping.
==== Stack trace ====
Trace 0: /bin/rear:543 main
Trace 1: /usr/share/rear/lib/recover-workflow.sh:33 WORKFLOW_recover
Trace 2: /usr/share/rear/lib/framework-functions.sh:101 SourceStage
Trace 3: /usr/share/rear/lib/framework-functions.sh:49 Source
Trace 4: /usr/share/rear/layout/recreate/default/250_verify_mount.sh:5 source
Message: No filesystem mounted on '/mnt/local'. Stopping.
== End stack trace ==
jsmeix commented at 2020-03-06 13:37:¶
What about
- Storage layout ("lsblk -ipo NAME,KNAME,PKNAME,TRAN,TYPE,FSTYPE,SIZE,MOUNTPOINT" or "lsblk" as makeshift)
- Attachments, as applicable ("rear -D mkrescue/mkbackup/recover" debug log files)
For the latter see "Debugging issues with Relax-and-Recover" at
https://en.opensuse.org/SDB:Disaster_Recovery
oscarkcho commented at 2020-03-09 04:52:¶
Storage layout ("lsblk -ipo NAME,KNAME,PKNAME,TRAN,TYPE,FSTYPE,SIZE,MOUNTPOINT" or "lsblk" as makeshift)
Attachments, as applicable ("rear -D mkrescue/mkbackup/recover" debug log files)
jsmeix commented at 2020-03-09 15:08:¶
@oscarkcho
at least on first glance
https://github.com/rear/rear/files/4304699/storage.layout.txt
looks normal and
https://github.com/rear/rear/files/4304703/rear-test01.log
does not show something that is obviously going wrong,
so we need to dig deeper step by step:
Can you also attach your
/var/lib/rear/layout/disklayout.conf
file?
Careful with possible secrets therein (e.g. password values)
if you use encryption, cf. "Disk layout file syntax" in
https://github.com/rear/rear/blob/master/doc/user-guide/06-layout-configuration.adoc
See the section "Debugging issues with Relax-and-Recover" at
https://en.opensuse.org/SDB:Disaster_Recovery
that reads in particular (excerpt):
Additionally the files in the /var/lib/rear/ directory
and in its sub-directories in the ReaR recovery system
(in particular /var/lib/rear/layout/disklayout.conf
and /var/lib/rear/layout/diskrestore.sh)
are needed to analyze a "rear -d -D recover" failure.
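For orientation, a disklayout.conf for two disks in a RAID1 setup typically contains entries along the following lines. This is only a made-up sketch (device names, sizes, offsets and flags are placeholders, not values from this system):
  # disk <devname> <size(bytes)> <partition label type>
  disk /dev/sda 1000204886016 msdos
  disk /dev/sdb 1000204886016 msdos
  # part <disk> <size(bytes)> <start(bytes)> <partition type/name> <flags> <partition device>
  part /dev/sda 8388608 1048576 primary boot,prep /dev/sda1
  part /dev/sda 1000195448832 9437184 primary raid /dev/sda2
  part /dev/sdb 8388608 1048576 primary boot,prep /dev/sdb1
  part /dev/sdb 1000195448832 9437184 primary raid /dev/sdb2
  # raid <raiddevice> <options> - the mirror both second partitions belong to
  raid /dev/md0 metadata=1.2 level=raid1 raid-devices=2 devices=/dev/sda2,/dev/sdb2
  # followed by lvmdev/lvmgrp/lvmvol lines for the volume group on /dev/md0
  # and fs/swap lines for the filesystems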
oscarkcho commented at 2020-03-10 06:43:¶
Try to run "rear -D recover again", found stopped in same "ERROR: No filesystem mounted on '/mnt/local'. Stopping.". Then tried to ignore and rename of /usr/share/rear/layout/recreate/default/250_verify_mount.sh to 250_verify_mount.sh.bak to let recover program ignored this step, and data recover can go on and restored
After the data restore, another problem occurred: the boot loader could not be installed successfully. I followed the instructions to install the boot loader manually, but that failed as well.
2020-03-10 12:09:26.352433794 Including finalize/default/890_finish_checks.sh
2020-03-10 12:09:26.353435593 Entering debugscripts mode via 'set -x'.
+ source /usr/share/rear/finalize/default/890_finish_checks.sh
++ ls -l /sys/block/sda/ /sys/block/sdb/
++ grep -q xen
++ test 1
++ LogPrint 'WARNING:
For this system
RedHatEnterpriseServer/7 on Linux-ppc64le (based on Fedora/7/ppc64le)
there is no code to install a boot loader on the recovered system
or the code that we have failed to install the boot loader correctly.
Please contribute appropriate code to the Relax-and-Recover project,
see http://relax-and-recover.org/development/
Take a look at the scripts in /usr/share/rear/finalize,
for example see the scripts
/usr/share/rear/finalize/Linux-i386/210_install_grub.sh
/usr/share/rear/finalize/Linux-i386/220_install_grub2.sh
---------------------------------------------------
| IF YOU DO NOT INSTALL A BOOT LOADER MANUALLY, |
| THEN YOUR SYSTEM WILL NOT BE ABLE TO BOOT. |
---------------------------------------------------
You can use '\''chroot /mnt/local bash --login'\''
to change into the recovered system.
You should at least mount /proc in the recovered system
e.g. via '\''mount -t proc none /mnt/local/proc'\''
before you change into the recovered system
and manually install a boot loader therein.
disklayout.conf
disklayout.conf.txt
rear -D recover log
rear-test01_recover.log
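For reference, the manual steps that the warning above points at would look roughly like this on a ppc64le system (that /dev/sda1 really is the PReP boot partition is an assumption here and must be verified first, e.g. with 'parted /dev/sda print'):
  mount -t proc none /mnt/local/proc
  mount --bind /sys /mnt/local/sys
  mount --bind /dev /mnt/local/dev
  chroot /mnt/local bash --login
  # inside the chroot, install GRUB2 onto the (assumed) PReP boot partition:
  grub2-install /dev/sda1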
jsmeix commented at 2020-03-10 10:15:¶
@oscarkcho
a first look at your
https://github.com/rear/rear/files/4310685/rear-test01_recover.log
shows that while
/usr/share/rear/layout/recreate/default/200_run_layout_code.sh
is running (i.e. while the diskrestore.sh script is run)
there are in particular only those component_created messages
+++ component_created vgchange rear
+++ component_created /dev/sda disk
+++ component_created /dev/sda1 part
+++ component_created /dev/sda2 part
+++ component_created /dev/sda3 part
so you only got sda with its plain partitions recreated,
but I don't see messages about creating partitions on sdb
and then the RAID setup and LVM setup and creating the filesystems.
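One quick way to check what the generated recreation script would actually attempt is to search it for the relevant tool calls while still in the recovery system, for example:
  grep -E 'parted|mdadm|pvcreate|vgcreate|lvcreate|mkfs' /var/lib/rear/layout/diskrestore.sh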
oscarkcho commented at 2020-03-11 02:08:¶
We are using the same hardware for the ReaR backup and for the restore test. The hardware originally had two hard disks in a RAID1 setup; we took one hard disk out to keep as a bootable image in case the system crashes, and the other hard disk is used for the ReaR restore. So there is only one hard disk present during "rear recover".
We would like to know: if two hard disks with RAID1 were used for the ReaR backup, must there also be two hard disks for "rear recover"? We would like to use only one hard disk; did "rear recover" fail because the hardware configuration differs from the original one?
jsmeix commented at 2020-03-12 08:36:¶
In general ReaR is first and foremost meant to recreate
a system as much as possible exactly as it was before, see
"Goal: Recreate a destroyed system" and
"Fully compatible replacement hardware is needed" in
https://en.opensuse.org/SDB:Disaster_Recovery
and in most cases ReaR is used in this way.
When the replacement hardware ("hardware" could be also virtual
hardware)
where "rear recover" will be done is different compared to the original
hardware
where "rear mkbackup" had been done it is no longer plain disaster
recovery
but then it is a system migration what we call "migration" in short.
System migration is not done so often so that it is neither as well
tested
nor as well supported as the "recreating exactly as before" case.
For example ReaR does not fully support to migrate the bootloader, cf.
https://github.com/rear/rear/issues/1437
From my personal perception "system migration with ReaR"
is more and more asked for by ReaR users.
In particular migration from one kind of hardware to another
kind of hardware and here mainly migration from physical
hardware to virtual hardware i.e. "P2V"
https://en.wikipedia.org/wiki/Physical-to-Virtual
but that also does not (and often cannot) "just work", cf.
http://lists.relax-and-recover.org/pipermail/rear-users/2018-November/003626.html
In your case it seems you would like to recreate a RAID1 system
on replacement hardware that cannot do RAID1, which is
certainly something that cannot work "automagically".
In general see the section
"Inappropriate expectations" in
https://en.opensuse.org/SDB:Disaster_Recovery
When the disk layout on your replacement hardware is different
from the original system, you must at least adapt your disklayout.conf
in the ReaR recovery system that is running on your replacement hardware, cf.
https://github.com/rear/rear/blob/master/doc/user-guide/06-layout-configuration.adoc
before you run "rear recover" on your replacement hardware.
Usually you also have to adapt /mnt/local/etc/fstab afterwards
because your /mnt/local/etc/fstab is the restored one from
your original system, which usually no longer matches the
changed disk layout on your recreated system.
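As a rough sketch of the kind of adaptation this means for the case at hand (a two-disk RAID1 layout recreated on a single disk); the exact lines depend on the saved layout and are not taken from this system's disklayout.conf:
  # in the running ReaR recovery system, before 'rear recover':
  vi /var/lib/rear/layout/disklayout.conf
  #  - delete or comment out the 'disk /dev/sdb ...' and 'part /dev/sdb ...' lines
  #  - delete the 'raid /dev/md0 ...' line
  #  - let the lines that referred to /dev/md0 (e.g. 'lvmdev ...' or 'fs ...')
  #    point to the corresponding plain /dev/sda partition instead
  rear -D recover
  # after the restore, align the restored fstab with the recreated layout:
  vi /mnt/local/etc/fstab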
jsmeix commented at 2020-03-12 08:40:¶
I closed the issue because I think it is sufficiently answered by
https://github.com/rear/rear/issues/2339#issuecomment-598067616
@oscarkcho
if you need to, you can still add further comments
here in GitHub even though the issue is closed.
[Export of Github issue for rear/rear.]