#3461 Issue closed: LVM: 'use_lvmlockd = 1' in /etc/lvm/lvm.conf makes "rear recover" fail¶
Labels: enhancement
, fixed / solved / done
jsmeix opened issue at 2025-04-24 11:59:¶
Platform¶
Linux x64
Is your feature request related to a problem? Please describe.¶
When the user has set
'use_lvmlockd = 1' in /etc/lvm/lvm.conf
on the original system
(by default it is 'use_lvmlockd = 0'),
'use_lvmlockd = 1' is also set in the ReaR recovery system
because the ReaR recovery system includes a copy
of /etc/lvm/lvm.conf.
Because lvmlockd is not running during "rear recover",
the LVM setup in diskrestore.sh fails with something like
+++ lvm pvcreate -ff --yes -v --uuid ... --restorefile ... /dev/vda2
WARNING: lvmlockd process is not running.
WARNING: Couldn't find device with uuid ...
Global lock failed: check that lvmlockd is running.
++ (( 5 == 0 ))
(excerpt from /var/log/rear/rear-localhost.log in the ReaR recovery system)
Describe the solution you'd like¶
I will check what could be done in the ReaR recovery system
to avoid this.
Perhaps the 'pvcreate' option '--nolocking' may help?
Or perhaps /etc/lvm/lvm.conf in the ReaR recovery system
needs to be changed to contain 'use_lvmlockd = 0'?
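For reference, this is how one could check inside the booted ReaR recovery system
which value is actually in effect (an illustrative sketch, not ReaR code;
'lvmconfig' is the standard LVM tool to query the running configuration):
# Show the lvmlockd setting that LVM actually uses
# (built-in defaults merged with /etc/lvm/lvm.conf):
lvmconfig global/use_lvmlockd
# on an affected system this prints something like: use_lvmlockd=1
# a plain grep of the copied config file also works:
grep '^[[:space:]]*use_lvmlockd' /etc/lvm/lvm.conf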
gdha commented at 2025-04-24 13:47:¶
@jsmeix is the use case a cluster member?
I think lvmlockd is only used in such cases.
I think we can force it to 0 in ReaR rescue mode.
jsmeix commented at 2025-04-24 14:28:¶
I don't know if the use case is a cluster member.
From our SUSE internal customer issue report:
use_lvmlockd = 1 was set in /etc/lvm/lvm.conf
when they configured shared storage in their system
I also don't know if shared storage is used.
In general regarding lvmlockd:
It seems ReaR does not support lvmlockd because
'lvmlockd' appears nowhere in the ReaR code as far as I can see.
I know nothing about cluster setup.
Is ReaR useful to recreate a cluster member?
If yes, would lvmlockd be needed during "rear recover"
when a cluster member should be recreated with ReaR?
jsmeix commented at 2025-04-24 14:38:¶
Using the LVM '--nolocking' option helps
but it would have to be specified
for several different LVM commands in
layout/prepare/GNU/Linux/110_include_lvm_code.sh
to keep the LVM commands from failing with
Global lock failed: check that lvmlockd is running.
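For illustration only, a hedged sketch of what that would mean for the
'pvcreate' call quoted above (the variable names here are just placeholders,
the actual code in 110_include_lvm_code.sh looks different):
# hypothetical pvcreate call with '--nolocking' added
# so that the global lock via lvmlockd is skipped:
lvm pvcreate -ff --yes -v --nolocking --uuid "$uuid" --restorefile "$restorefile" "$device"
# and the same option would also have to be added to the other
# 'lvm' calls (vgcreate, lvcreate, ...) generated for diskrestore.sh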
So I think it is better to enforce 'use_lvmlockd = 0'
in /etc/lvm/lvm.conf in the ReaR recovery system.
For now the latter can be done by the user via
PRE_RECOVERY_SCRIPT=( 'sed -i -e "s/use_lvmlockd = 1/use_lvmlockd = 0/" /etc/lvm/lvm.conf' )
in his etc/rear/local.conf
gdha commented at 2025-04-24 14:46:¶
Cluster members use shared storage and these days use
lvmlockd, as only one member can mount the shared devices
read-write (the others could mount them read-only).
ReaR rescue does not need lvmlockd in recovery mode. However, if it
is a cluster member it would probably mean that the cluster is degraded
and then the administrator must be careful not to overwrite any shared
disk.
The question that pops up in my mind is: should we not just exit in these
cases? ReaR cannot know what the administrator wants to do or destroy
in such cases! We could give him a hint.
jsmeix commented at 2025-04-24 15:02:¶
@gdha
thank you for explaining why enforcing 'use_lvmlockd = 0'
in the ReaR recovery system could lead to an ultimate disaster
(i.e. "rear recover" may wrongly overwrite a shared disk)
in cases where 'use_lvmlockd = 1' is actually needed as protection.
So as far as I currently understand things
it is good that during "rear recover" LVM commands fail
when 'use_lvmlockd = 1' is set on the original system.
When the user knows that during "rear recover"
'use_lvmlockd = 0' is really the right thing, then he can
manually set this in his running ReaR recovery system
before he starts "rear recover".
Alternatively (at his own risk) he may even automate this
via something like
PRE_RECOVERY_SCRIPT=( 'sed -i -e "s/use_lvmlockd = 1/use_lvmlockd = 0/" /etc/lvm/lvm.conf' )
in his ReaR configuration.
jsmeix commented at 2025-04-25 11:07:¶
Regarding
https://github.com/rear/rear/issues/3461#issuecomment-2827914738
The question ... is should we not just exit in these cases?
ReaR cannot know what the administrator wants to do
or to destroy in such cases!
We could give him a hint.
During "rear recover" we exit already implicitly
(via various lvm setup failures in diskrestore.sh)
when 'use_lvmlockd = 1' is set in the ReaR recovery system.
On the original system during "rear mkrescue/mkbackup"
when LVM is used and 'use_lvmlockd = 1' set
we should show a LogPrintError message
to inform the user that
'use_lvmlockd = 1' was found in /etc/lvm/lvm.conf
but recreating LVM things needs 'use_lvmlockd = 0'
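A minimal sketch of such a check (assuming it runs in a layout/save script
during "rear mkrescue/mkbackup"; LogPrintError is the usual ReaR function
for user-visible error messages; the exact wording is only an example):
# hedged sketch, not actual ReaR code:
if grep -q '^[[:space:]]*use_lvmlockd[[:space:]]*=[[:space:]]*1' /etc/lvm/lvm.conf ; then
    LogPrintError "Found 'use_lvmlockd = 1' in /etc/lvm/lvm.conf but recreating LVM things needs 'use_lvmlockd = 0'"
fi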
jsmeix commented at 2025-04-25 12:49:¶
https://github.com/rear/rear/pull/3462
is a quick first attempt to implement
https://github.com/rear/rear/issues/3461#issuecomment-2830111347
jsmeix commented at 2025-04-29 12:10:¶
With https://github.com/rear/rear/pull/3462 merged
this issue should be sufficiently solved
at least for now, cf.
https://github.com/rear/rear/pull/3462#issuecomment-2830351430
jsmeix commented at 2025-04-29 12:21:¶
For what I implemented see
https://github.com/rear/rear/commit/8eb9ea8cbc806d025c32cba640eb0d1c3c8f6093
In layout/save/GNU/Linux/220_lvm_layout.sh and
in layout/prepare/GNU/Linux/110_include_lvm_code.sh
check whether 'use_lvmlockd = 1' is found in /etc/lvm/lvm.conf,
because recreating LVM things requires 'use_lvmlockd = 0'
as there is no lvmlockd support in the ReaR recovery system:
during "rear mkrescue/mkbackup" show a LogPrintError to inform the user, see
https://github.com/rear/rear/issues/3461#issuecomment-2830111347
and during "rear recover" Error out because 'lvm' commands
in diskrestore.sh fail with 'use_lvmlockd = 1' in /etc/lvm/lvm.conf, see
https://github.com/rear/rear/pull/3462#issuecomment-2838222011
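Analogous to the mkrescue-side sketch above, a hedged sketch of the
recover-side behaviour (assuming the check sits in
layout/prepare/GNU/Linux/110_include_lvm_code.sh; Error is the usual ReaR
function to abort with a message; the merged code may be worded differently):
# hedged sketch, not a verbatim copy of the merged code:
# abort early during "rear recover" with a clear message instead of
# letting the 'lvm' commands in diskrestore.sh fail later:
if grep -q '^[[:space:]]*use_lvmlockd[[:space:]]*=[[:space:]]*1' /etc/lvm/lvm.conf ; then
    Error "Cannot recreate LVM: 'use_lvmlockd = 1' in /etc/lvm/lvm.conf but there is no lvmlockd support in the ReaR recovery system"
fi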
[Export of Github issue for rear/rear.]