#3462 PR merged: Tell if 'use_lvmlockd = 1' in /etc/lvm/lvm.conf

Labels: enhancement

jsmeix opened issue at 2025-04-25 12:47:

On my test VM with LVM and
'use_lvmlockd = 1' in /etc/lvm/lvm.conf

# /usr/sbin/rear -D mkrescue
...
Running 'layout/save' stage ======================
Creating disk layout
Overwriting existing disk layout file /root/rear.github.master/var/lib/rear/layout/disklayout.conf
Recreating LVM needs 'use_lvmlockd = 0' (there is 'use_lvmlockd = 1' in /etc/lvm/lvm.conf)
...
  • Description of the changes in this pull request:

In layout/save/GNU/Linux/220_lvm_layout.sh
show a LogPrintError message to inform the user
when 'use_lvmlockd = 1' is found in /etc/lvm/lvm.conf,
because recreating LVM requires 'use_lvmlockd = 0'.
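
A minimal sketch of such a check, assuming ReaR's LogPrintError function
and a grep pattern that tolerates whitespace (illustrative, not necessarily
the exact merged code):

# In layout/save/GNU/Linux/220_lvm_layout.sh (sketch):
# Match 'use_lvmlockd = 1' while tolerating surrounding whitespace:
if grep -Eq '^[[:space:]]*use_lvmlockd[[:space:]]*=[[:space:]]*1' /etc/lvm/lvm.conf ; then
    LogPrintError "Recreating LVM needs 'use_lvmlockd = 0' (there is 'use_lvmlockd = 1' in /etc/lvm/lvm.conf)"
fi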

jsmeix commented at 2025-04-25 12:53:

This is a quick first attempt to implement
https://github.com/rear/rear/issues/3461#issuecomment-2830111347

Currently it only checks the hardcoded file /etc/lvm/lvm.conf
for a 'use_lvmlockd = 1' line (ignoring whitespace).
I am not an LVM expert, so I don't know
whether there are other ways to enable lvmlockd.

gdha commented at 2025-04-28 08:02:

@jsmeix if only the root VG is included, do we error out as well?

jsmeix commented at 2025-04-28 08:11:

@gdha
on my test system which I used for
https://github.com/rear/rear/issues/3461
I have only a single VG "system" as in
https://github.com/rear/rear/wiki/Test-Matrix-rear-2.6#sles-12-sp-5-with-default-lvm-and-btrfs-structure

With 'use_lvmlockd = 1' in /etc/lvm/lvm.conf
diskrestore.sh fails (because "lvm pvcreate" fails)
so "rear recover" errors out as in
https://github.com/rear/rear/issues/3461#issue-3017020804

jsmeix commented at 2025-04-28 08:29:

@rear/contributors
I would like to merge it tomorrow afternoon
provided there are no severe objections.

jsmeix commented at 2025-04-29 10:10:

Via
https://github.com/rear/rear/pull/3462/commits/045e28ed9d2755fce2d83e1396d25210a7b01ead
I added the same check also to
layout/prepare/GNU/Linux/110_include_lvm_code.sh
so that the user is also informed during "rear recover".
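
Since the same test now lives in two scripts, it could also be factored
into a small shared helper; 'lvmlockd_is_enabled' below is a hypothetical
name, not an existing ReaR function:

# Hypothetical shared helper (not in ReaR):
function lvmlockd_is_enabled () {
    # Returns 0 when an uncommented 'use_lvmlockd = 1' line exists:
    grep -Eq '^[[:space:]]*use_lvmlockd[[:space:]]*=[[:space:]]*1' /etc/lvm/lvm.conf
}

# In layout/prepare/GNU/Linux/110_include_lvm_code.sh (sketch):
lvmlockd_is_enabled && LogPrintError "Recreating LVM needs 'use_lvmlockd = 0' (there is 'use_lvmlockd = 1' in /etc/lvm/lvm.conf)"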

How it looks during "rear recover" on my test VM:

RESCUE localhost:~ # rear -D recover
...
Running 'layout/prepare' stage ======================
Recreating LVM needs 'use_lvmlockd = 0' (there is 'use_lvmlockd = 1' in /etc/lvm/lvm.conf)
Comparing disks
...

and then "rear recover" fails as in
https://github.com/rear/rear/issues/3461#issue-3017020804

After manually editing /etc/lvm/lvm.conf in the ReaR recovery system
to 'use_lvmlockd = 0', a subsequent run of "rear recover" works.
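
For example, a sed one-liner along these lines could flip the setting
in the recovery system (a sketch assuming the usual 'use_lvmlockd = 1'
line format in /etc/lvm/lvm.conf):

# Flip 'use_lvmlockd = 1' to 'use_lvmlockd = 0' in place (sketch):
sed -i -E 's/^([[:space:]]*use_lvmlockd[[:space:]]*=[[:space:]]*)1/\10/' /etc/lvm/lvm.conf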

jsmeix commented at 2025-04-29 10:14:

I wonder if it would be better to error out
during "rear recover" in its early 'layout/prepare' stage
when there is 'use_lvmlockd = 1' in /etc/lvm/lvm.conf
in the ReaR recovery system.
As far as I can see, recreating LVM cannot work
with 'use_lvmlockd = 1' because there is no lvmlockd
in the ReaR recovery system, so it would be more
user-friendly to error out with

Recreating LVM needs 'use_lvmlockd = 0' (there is 'use_lvmlockd = 1' in /etc/lvm/lvm.conf)

instead of proceeding and letting diskrestore.sh fail later.
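
A minimal sketch of erroring out, assuming ReaR's Error function
(illustrative, not necessarily the exact merged code):

# In layout/prepare/GNU/Linux/110_include_lvm_code.sh (sketch):
# Abort "rear recover" early when lvmlockd is enabled in the recovery system:
grep -Eq '^[[:space:]]*use_lvmlockd[[:space:]]*=[[:space:]]*1' /etc/lvm/lvm.conf \
    && Error "Recreating LVM requires 'use_lvmlockd = 0' (there is 'use_lvmlockd = 1' in /etc/lvm/lvm.conf)"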

jsmeix commented at 2025-04-29 10:39:

I implemented erroring out during "rear recover"
when there is 'use_lvmlockd = 1' in /etc/lvm/lvm.conf.

How that looks during "rear recover" on my test VM:

RESCUE localhost:~ # rear -D recover
...
Running 'layout/prepare' stage ======================
ERROR: Recreating LVM requires 'use_lvmlockd = 0' (there is 'use_lvmlockd = 1' in /etc/lvm/lvm.conf)
...

After manually editing /etc/lvm/lvm.conf in the ReaR recovery system
to 'use_lvmlockd = 0', a subsequent run of "rear recover" works.


[Export of Github issue for rear/rear.]