#3162 Issue open: RFC: Why does "rear recover" regenerate the original system's initrd in all cases?

Labels: enhancement, cleanup, discuss / RFC

jsmeix opened issue at 2024-02-22 10:58:

Triggered by
https://github.com/rear/rear/issues/3152
and
https://github.com/rear/rear/pull/3155
I am wondering why "rear recover" regenerates
the original system's initrd in all cases.

For the particular reasons why I am wondering, see my
https://github.com/rear/rear/issues/3152#issuecomment-1941510047

Cannot create initrd (found no mkinitrd in the recreated system).
...
Nevertheless, the recreated system boots well for me,
as expected, because I recreated on 100% compatible
replacement "hardware" (on the same VM), so the same initrd
from the original system should still work.

and see my
https://github.com/rear/rear/pull/3155#issue-2138456399

Additionally improved the user messages
(in particular the warning messages)
to make it clearer that the point is
to decide whether the recreated system will boot
with the initrd 'as is' from the backup restore.

The default use case of "rear recover" is to recreate
on "Fully compatible replacement hardware"
(where "hardware" could be also virtual hardware), cf.
https://en.opensuse.org/SDB:Disaster_Recovery#Fully_compatible_replacement_hardware_is_needed

On "fully compatible replacement hardware"
the initrd 'as is' from the original system
together with the kernel 'as is' from the original system
(that both together are restored 'as is' from the backup)
must still work
(otherwise it is no "fully compatible replacement hardware").

So for the default use case of "rear recover"
there is no need to regenerate the initrd.
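
For illustration, skipping could be as simple as
a user-configurable guard in the finalize stage.
This is only a minimal sketch: the variable name
SKIP_INITRD_REGENERATION is an invented example,
not an existing ReaR config option
(is_true and LogPrint are ReaR's usual helper functions):

```bash
# Hypothetical guard at the top of a finalize script that regenerates
# the initrd (SKIP_INITRD_REGENERATION is an invented name, not an
# actual ReaR config variable):
if is_true "${SKIP_INITRD_REGENERATION:-no}" ; then
    LogPrint "Keeping the initrd 'as is' from the backup restore (SKIP_INITRD_REGENERATION is set)"
    return 0
fi
```

With such a guard the default behavior stays unchanged,
and users on fully compatible replacement hardware
could opt out of the slow regeneration.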

But regenerating the initrd takes a relatively long time,
so "rear recover" is slower than actually needed,
AND
regenerating the original system's initrd
during "rear recover" may generate a new initrd
with (possibly subtle but severe) differences
compared to the original system's initrd,
so regenerating the initrd in all cases
may do more harm than good in practice
for the default use case of "rear recover".

pcahyna commented at 2024-02-22 15:19:

I would check with SW RAID: I suspect some details of the recreated array may differ from the original (like UUIDs), and regenerating the initrd may be needed to pick up such changes.
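
For example, on a dracut-based system one could compare the UUIDs of the recreated arrays with the mdadm.conf that got embedded in the restored initrd. A rough sketch (it assumes the image actually contains etc/mdadm.conf and that lsinitrd is available):

```bash
#!/bin/bash
# Compare the UUIDs of the recreated RAID arrays with the UUIDs in the
# mdadm.conf that is embedded in the restored initrd (if any).
initrd="/boot/initramfs-$(uname -r).img"  # adjust to the actual image name

echo "UUIDs of the recreated arrays:"
mdadm --detail --scan | grep -o 'UUID=[^ ]*' | sort

echo "UUIDs in the mdadm.conf inside the initrd:"
lsinitrd -f etc/mdadm.conf "$initrd" | grep -o 'UUID=[^ ]*' | sort
```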

pcahyna commented at 2024-02-22 15:21:

This may also be the case for other IDs in system config files that may change, if those files are embedded in the ramdisk. We have a script that runs at the end and updates the IDs in known config files.
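
As an illustration of the kind of rewrite such a script performs, here is a simplified sketch (the mapping file and its "OLD NEW" line format are invented for this example; /mnt/local is ReaR's default mount point for the recreated system):

```bash
#!/bin/bash
# Rewrite old IDs (e.g. filesystem or RAID UUIDs) to the new ones in
# known config files of the recreated system under /mnt/local.
# The mapping file format "OLD_UUID NEW_UUID" per line is hypothetical.
uuid_mapping="/tmp/uuid_mapping"

while read -r old new ; do
    for config_file in /mnt/local/etc/fstab /mnt/local/etc/mdadm.conf ; do
        test -f "$config_file" && sed -i "s/$old/$new/g" "$config_file"
    done
done < "$uuid_mapping"
```

Note that such a rewrite only fixes the files on disk: a copy of those files that is already embedded inside the restored initrd is not touched, which is exactly why regenerating the initrd can be needed in such cases.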

jsmeix commented at 2024-02-23 14:24:

@pcahyna
thank you for the reasons why regenerating the initrd
could be needed in all cases, to be on the safe side
against (possibly subtle but severe) differences
between the original system and the recreated system.

abbbi commented at 2024-03-14 08:07:

Just as a side note, I am also experiencing this, mostly on systems with multipath setups.
I am using QEMU to simulate a multipath SAN environment and noticed in my automated tests that the initrd is always recreated, even if the system recovery is done into a VM that is started with equal QEMU options.
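
For such multipath setups, one quick consistency check is whether the WWIDs embedded in the restored initrd still match the recreated system. A sketch assuming a dracut initrd that embeds etc/multipath/wwids (paths are illustrative):

```bash
#!/bin/bash
# Compare the multipath WWIDs of the recreated system with the wwids
# file embedded in the restored initrd.
initrd="/boot/initramfs-$(uname -r).img"  # adjust to the actual image name

diff <(grep -v '^#' /etc/multipath/wwids | sort) \
     <(lsinitrd -f etc/multipath/wwids "$initrd" | grep -v '^#' | sort) \
    && echo "WWIDs match, the restored initrd should still find its disks" \
    || echo "WWIDs differ, regenerating the initrd is likely needed"
```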


[Export of GitHub issue for rear/rear.]