#1829 Issue closed: 'rear recover' fails when migrating on Debian 9.4 from KVM to XEN VM and vice versa¶
Labels: support / question, fixed / solved / done
ledj opened issue at 2018-06-14 08:27:¶
rear recover detects Debian 64 bit as 32 bit
rear recover doesn't install the bootloader
jsmeix commented at 2018-06-14 09:03:¶
@ledj
please provide the very basic information as plain text as requested in
https://github.com/rear/rear/blob/master/.github/ISSUE_TEMPLATE.md
I won't reverse-engineer the very basic information out of a debug log file.
jsmeix commented at 2018-06-14 13:15:¶
@ledj
in your rear-debian.log I notice (excerpts):
+ source /usr/share/rear/finalize/Linux-i386/620_install_grub2.sh
...
2018-06-14 07:55:15.608787265 Installing GRUB2 boot loader
/dev/xvda3' /dev/xvda3 ]]
Installing for i386-pc platform.
grub-install: warning: cannot open directory `/usr/share/locale': No such file or directory.
grub-install: warning: this GPT partition label contains no BIOS Boot Partition; embedding won't be possible.
grub-install: warning: Embedding is not possible. GRUB can only be installed in this setup by using blocklists. However, blocklists are UNRELIABLE and their use is discouraged..
grub-install: error: will not proceed with blocklists.
I guess grub-install is right and you need to adapt the partitioning
on the original system to something that grub-install supports.
Or do you perhaps not use GRUB2 (which is used as fallback by ReaR)
as bootloader?
In this case see the BOOTLOADER variable in default.conf.
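For illustration, a minimal local.conf sketch (whether GRUB2 is the appropriate value
for your setup is an assumption; see the comments on BOOTLOADER in default.conf
for what is supported):
# in etc/rear/local.conf (illustrative value, verify against default.conf)
BOOTLOADER=GRUB2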
ledj commented at 2018-06-14 19:37:¶
@jsmeix Ok, sorry for not reporting correctly. I was told to supply the log and thought this was enough.
jsmeix commented at 2018-06-15 08:09:¶
@ledj
I could extract most (but not all) of the basic information from the log,
but that is tedious work (which is not really what one can expect with free support ;-)
so the easier you make it for us to correctly imagine what goes on
on your particular system (for example I am not a Debian user),
the better things will work out in the end.
In this case I would like to know in particular what the partitioning
of the (main) system disk of your original system is,
i.e. what is the (full) output of the command
# parted -s /dev/sda unit MiB print
where you may have to replace /dev/sda with the actual device node
of your (main) system disk (probably /dev/xvda).
FYI:
In your rear-debian.log I noticed right now (excerpts):
Relax-and-Recover 2.3 / 2017-12-20
...
Partition rear-noname on /dev/xvda: size reduced to fit on disk.
...
End changed from 107374190592 to 107374165504.
which indicates that on your replacement "hardware"
(in your case probably a replacement virtual machine)
you use a replacement (virtual) disk with different size.
Recreation on a bit different hardware is actually a migration.
In case of migration with ReaR I would really recommend
to use our current ReaR upstream GitHub master code.
To use our current ReaR upstream GitHub master code
do the following:
Basically "git clone" it into a separated directory and then
configure and run ReaR from within that directory like:
# git clone https://github.com/rear/rear.git
# mv rear rear.github.master
# cd rear.github.master
# vi etc/rear/local.conf
# usr/sbin/rear -D mkbackup
Note the relative paths "etc/rear/" and "usr/sbin/".
For ReaR's so-called "migration mode" I intentionally made major changes
to the partition resizing code in the current ReaR upstream GitHub master code
to avoid the really bad automated resizing results that had happened before, see
https://github.com/rear/rear/issues/1731
https://github.com/rear/rear/pull/1733
https://github.com/rear/rear/issues/102
For a summary of those changes see the ReaR 2.4 release notes
https://github.com/rear/rear.github.com/blob/master/documentation/release-notes-2-4.md
therein read the section
Version 2.4 (June 2018)
...
New features, bigger enhancements, and possibly backward incompatible changes:
Major rework and changed default behaviour how ReaR behaves in migration mode
when partitions can or must be resized to fit on replacement disks with different size.
...
see also
https://github.com/rear/rear/issues/1822#issuecomment-396492635
and regarding size reduced to fit on disk
see in particular
https://github.com/rear/rear/pull/1733#issuecomment-367680598
That End changed from 107374190592 to 107374165504 here
looks like another "fine" example of how ReaR, before
https://github.com/rear/rear/commit/6414936ba30d6c13020eee8313e93a4e29debc54
badly changed partitioning alignment, because
107374190592 = 13107201 * 512 * 16 (i.e. originally aligned to an 8 KiB unit)
while in contrast
107374165504 = 209715167 * 512 (i.e. re-aligned by ReaR to only a 512 B unit).
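To verify such alignment claims, plain shell arithmetic is enough
(a minimal sketch using the byte values from the log excerpt above):
# echo $(( 107374190592 % 8192 ))
0
# echo $(( 107374165504 % 8192 ))
7680
The original end value is an exact multiple of 8192 (8 KiB) while the value
that ReaR wrote is only a multiple of 512.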
ledj commented at 2018-06-15 12:35:¶
@jsmeix
Super, thanks! Generally, and in this case, I'm trying to find a way to convert from a KVM based (Proxmox) VM to a XenServer (xcp-ng.org) VM (back and forth actually, as we use both). And you are correct: even though the disk is set to the same size on the original and the migrated VM, the sizes seem to differ slightly, as noticed by ReaR.
Output of "parted -s /dev/sda unit MiB print" on original system is:
root@debian:~# parted -s /dev/sda unit MiB print
Model: QEMU QEMU HARDDISK (scsi)
Disk /dev/sda: 102400MiB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start    End        Size       File system     Name  Flags
 1      1,00MiB  954MiB     953MiB     ext2
 2      954MiB   1908MiB    954MiB     linux-swap(v1)
 3      1908MiB  102399MiB  100491MiB                        lvm
jsmeix commented at 2018-06-15 13:33:¶
@ledj
for comparison what I have on one of my
GPT partitioned systems with BIOS booting
(it is a SLES15 default partitioning):
# parted -s /dev/sda unit MiB print
Model: ATA QEMU HARDDISK (scsi)
Disk /dev/sda: 20480MiB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags: pmbr_boot

Number  Start     End       Size      File system     Name  Flags
 1      1.00MiB   9.00MiB   8.00MiB                         bios_grub
 2      9.00MiB   12906MiB  12897MiB  btrfs                 legacy_boot
 4      12906MiB  18431MiB  5525MiB   xfs
 3      18431MiB  20480MiB  2049MiB   linux-swap(v1)        swap
There is an 8 MiB bios_grub partition for GRUB2,
so that GRUB2 can put its second stage there, cf.
https://en.wikipedia.org/wiki/BIOS_boot_partition
I think without such a BIOS boot partition you may run into
booting problems all of a sudden at an arbitrary later time.
Probably it may work for some time when GRUB2 "blindly" squeezes
its second stage somehow somewhere into the hopefully empty
"somewhat (depending on the GPT size) less than 1 MiB" gap
at the beginning of your disk, i.e. after the GPT (where exactly
does a GPT end?) and before the first partition starts - but as GRUB2 reports
... this GPT partition label contains no BIOS Boot Partition; embedding won't be possible.
Embedding is not possible. GRUB can only be installed in this setup by using blocklists.
However, blocklists are UNRELIABLE and their use is discouraged.
grub-install: error: will not proceed with blocklists.
I still think the actual solution is to have such a BIOS boot partition.
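For illustration only, a sketch of how such a partition could be created
with parted on the original system (the device name, the partition number
for the 'set' command, and the MiB values are assumptions, and they presume
8 MiB of unpartitioned free space at that position):
# parted -s /dev/sda mkpart biosboot 954MiB 962MiB
# parted -s /dev/sda set 4 bios_grub on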
In general migrating a system onto different hardware
does not "just work", cf. "Inappropriate expectations" at
https://en.opensuse.org/SDB:Disaster_Recovery
In sufficiently simple cases it may "just work", but in general
do not expect too much built-in intelligence from a program
(written in plain bash, which is not a programming language
that is primarily meant for artificial intelligence ;-)
to do the annoying legwork for you.
For an example you may have a look at the
"P2V HP microserver to VmWare" issue
https://github.com/rear/rear/issues/943
But migrating a system onto same hardware only with changed
partition sizes should be more or less straightforward.
In this case it should be sufficient to edit disklayout.conf
before you run "rear recover".
When you migrate with ReaR you could manually modify
the disklayout.conf file and insert such a BIOS boot partition
to get one on the replacement hardware when there is none
on the original system.
For an example of how a disklayout.conf file with a BIOS boot partition
looks on my above system (excerpts: only the disk, part, fs, and swap entries,
i.e. leaving out all the btrfs related stuff that should not matter here):
# Disk /dev/sda
# Format: disk <devname> <size(bytes)> <partition label type>
disk /dev/sda 21474836480 gpt
# Partitions on /dev/sda
# Format: part <device> <partition size(bytes)> <partition start(bytes)> <partition type|name> <flags> /dev/<partition>
part /dev/sda 8388608 1048576 rear-noname bios_grub /dev/sda1
part /dev/sda 13523484672 9437184 rear-noname legacy_boot /dev/sda2
part /dev/sda 2148515328 19326304256 rear-noname swap /dev/sda3
part /dev/sda 5793382400 13532921856 rear-noname none /dev/sda4
# Filesystems (only ext2,ext3,ext4,vfat,xfs,reiserfs,btrfs are supported).
# Format: fs <device> <mountpoint> <fstype> [uuid=<uuid>] [label=<label>] ...
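Transferred to the layout of the original system above, a hypothetical sketch
of the part entries with an additional BIOS boot partition could shrink the
LVM partition by 8 MiB at its end and append a bios_grub partition there
(all byte values are derived from the parted output above but are illustrative
assumptions, and the LVM contents must still fit into the shrunken partition):
part /dev/sda 999292928 1048576 rear-noname none /dev/sda1
part /dev/sda 1000341504 1000341504 rear-noname swap /dev/sda2
part /dev/sda 105364062208 2000683008 rear-noname lvm /dev/sda3
part /dev/sda 8388608 107364745216 rear-noname bios_grub /dev/sda4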
When you migrate with ReaR you would need to manually modify
the disklayout.conf file in the already running ReaR recovery system,
e.g. before you run "rear recover", or while "rear recover" is running
via the dialog that asks for confirmation of the disklayout.conf file.
It could be laborious and unhandy to manually edit
disklayout.conf within the ReaR recovery system,
in particular because you need to provide the values as byte values.
In this case have a look at RECOVERY_UPDATE_URL
in usr/share/rear/conf/default.conf.
For an example how RECOVERY_UPDATE_URL works see
https://github.com/rear/rear/issues/943#issuecomment-236547810
When you use the ReaR master code as 'git clone/checkout' see
https://github.com/rear/rear/issues/943#issuecomment-237544630
what is special about the disklayout.conf file location
in the ReaR recovery system that you must consider
to make RECOVERY_UPDATE_URL work.
jsmeix commented at 2018-06-15 13:38:¶
@ledj
as a dirty hack you could of course modify ReaR's
usr/share/rear/finalize/Linux-i386/620_install_grub2.sh
script as you like.
Perhaps there is a grub-install command line option for your
version of GRUB2 to enforce installing it by using blocklists.
My grub-install man page shows a --force command line option,
but I don't know whether that helps in this case.
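As an untested manual sketch in this direction: after "rear recover" has
recreated the system (mounted at /mnt/local in the ReaR recovery system)
one could try something like
# chroot /mnt/local grub-install --force /dev/xvda
where the device name is an assumption and --force may or may not make
grub-install accept the blocklist setup.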
jsmeix commented at 2018-06-15 13:45:¶
@ledj
a note regarding migrating to a "XenServer (xcp-ng.org) VM":
I don't know about XEN (I only use KVM) but perhaps
the section "Paravirtualization" in
https://en.opensuse.org/SDB:Disaster_Recovery
could be of interest for you.
jsmeix commented at 2018-06-15 14:21:¶
@ledj
a note regarding migrating from "this hardware" to "that hardware"
and vice versa, where "this hardware" and "that hardware" have
"basically same" disk size:
In general I would recommend leaving some reasonable amount
of unused disk space at the end of the disk, where "reasonable amount"
is the maximum expected variation of the "basically same" disk sizes.
In other words:
I would recommend to use the disk only up to the minimum
of the various slightly different available disk space values
of the various disks of "basically same" size that you have.
Reason:
This way - provided you use our current ReaR upstream GitHub master code -
you could re-create your system on each of your disks of "basically same" size
with byte-by-byte identical partitioning - provided you specify the right values for
AUTORESIZE_PARTITIONS
AUTORESIZE_EXCLUDE_PARTITIONS
AUTOSHRINK_DISK_SIZE_LIMIT_PERCENTAGE
AUTOINCREASE_DISK_SIZE_THRESHOLD_PERCENTAGE
as appropriate for your particular set of disks of "basically same"
size.
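As a starting point, a hedged local.conf sketch (the values are what I see as
the defaults in default.conf and are assumptions to be adapted, not tested
recommendations; with AUTORESIZE_PARTITIONS set to false no partition gets
resized at all, which yields byte-identical partitioning provided everything fits):
AUTORESIZE_PARTITIONS=false
AUTORESIZE_EXCLUDE_PARTITIONS=( boot swap efi )
AUTOSHRINK_DISK_SIZE_LIMIT_PERCENTAGE=2
AUTOINCREASE_DISK_SIZE_THRESHOLD_PERCENTAGE=10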
jsmeix commented at 2018-06-15 14:49:¶
Right now I made RECOVERY_UPDATE_URL hopefully easier to use via
https://github.com/rear/rear/commit/803db236edd4f081bfd580b6177579e43752544a
The idea is that one is no longer limited to specifying it in
etc/rear/local.conf
(where one may have to manually edit that file in the recovery system).
Now it should also work to individually specify it as needed via
something like
export RECOVERY_UPDATE_URL="http://my_internal_server/host123_rear_config.tgz"
directly before one calls "rear recover", cf.
https://github.com/rear/rear/issues/1595#issuecomment-354804481
A precondition for specifying it this way is that it is not also specified
in etc/rear/local.conf, because that would overwrite the exported setting.
jsmeix commented at 2018-07-18 10:07:¶
Because there are no further comments
I assume this issue is sufficiently answered,
so that I can close it hereby.
[Export of Github issue for rear/rear.]