#2647 Issue closed: LVM PV recreation fails: excluded by a filter.

Labels: support / question, fixed / solved / done

abbbi opened issue at 2021-07-05 14:31:

  • ReaR version ("/usr/sbin/rear -V"): 2.6
  • OS version ("cat /etc/os-release" or "lsb_release -a" or "cat /etc/rear/os.conf"):

SLES15

  • Hardware (PC or PowerNV BareMetal or ARM) or virtual machine (KVM guest or PowerVM LPAR):

Power LPAR

  • System architecture (x86 compatible or PPC64/PPC64LE or what exact ARM device):

PPC64

Hi,

today I came across the following situation: ReaR recovery on PPC64 with a multipath SAN setup.
Disk layout recreation failed with the following error message:

 +++ lvm pvcreate -ff --yes -v --uuid R7iUAc-3rHe-sLi8-4eb4-PVHM-gwOj-D76cKT --norestorefile /dev/mapper/360050768108183224800000000000246-part1
     Wiping internal VG cache
     Wiping cache of LVM-capable devices
   Device /dev/mapper/360050768108183224800000000000246-part1 excluded by a filter.

Investigation showed that the partition still contained some "data", to the extent that it included parts of a UEFI bootloader,
old data, etc.
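
For reference, this kind of leftover can be inspected with commands like the following (illustrative; the device name is the one from this report, and wipefs without -a only lists signatures, it does not remove them):

 wipefs /dev/mapper/360050768108183224800000000000246-part1
 blkid -p /dev/mapper/360050768108183224800000000000246-part1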

So I attempted to wipe any data on it via:

wipefs -a /dev/mapper/360050768108183224800000000000246-part1

which didn't help. I carried on and cleared the partition's data using dd, after which everything worked smoothly:

 RESCUE sapdb:~ # dd if=/dev/zero of=/dev/mapper/360050768108183224800000000000246-part1 bs=1M count=10
 10+0 records in
 10+0 records out
 10485760 bytes (10 MB, 10 MiB) copied, 0.0393749 s, 266 MB/s
 RESCUE sapdb:~ # lvm pvcreate -ff --yes -v --uuid "R7iUAc-3rHe-sLi8-4eb4-PVHM-gwOj-D76cKT" --norestorefile /dev/mapper/360050768108183224800000000000246-part1
     Wiping internal VG cache
     Wiping cache of LVM-capable devices
     Wiping signatures on new PV /dev/mapper/360050768108183224800000000000246-part1.
     Set up physical volume for "/dev/mapper/360050768108183224800000000000246-part1" with 41940959 available sectors.
     Zeroing start of device /dev/mapper/360050768108183224800000000000246-part1.
     Writing physical volume data to disk "/dev/mapper/360050768108183224800000000000246-part1".
   Physical volume "/dev/mapper/360050768108183224800000000000246-part1" successfully created.

As far as I can see, ReaR only calls wipefs or dd on partitions before creating the filesystems; maybe it makes sense to do so before attempting to create logical volumes, too?
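
A minimal sketch of the manual wipe-then-recreate sequence used above (this is not ReaR's actual code; the device name and PV UUID are the ones from this report):

 partition=/dev/mapper/360050768108183224800000000000246-part1
 # remove known signatures, then zero the start of the partition as a fallback
 wipefs -a "$partition"
 dd if=/dev/zero of="$partition" bs=1M count=10
 # recreate the PV with its original UUID, as ReaR does during layout recreation
 lvm pvcreate -ff --yes -v --uuid "R7iUAc-3rHe-sLi8-4eb4-PVHM-gwOj-D76cKT" --norestorefile "$partition"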

abbbi commented at 2021-07-05 19:18:

I don't have access to the original disk's data anymore, so it is hard to tell what data on the disk may have caused the situation.
Usually lvm pvcreate -ff should wipe things like an existing partition table, as reproduced via:

sles15ppclefix:~ # truncate -s 30M testfile
sles15ppclefix:~ # losetup /dev/loop0 testfile
sles15ppclefix:~ # parted /dev/loop0 mklabel gpt
lvm pvcreate -ff --yes -v --uuid R7iUAc-3rHe-sLi8-4eb4-PVHM-gwOj-D76cKT --norestorefile /dev/loop0 
    Scanning all devices to update lvmetad.
    No PV label found on /dev/loop0.
    No PV label found on /dev/build/build.
    No PV label found on /dev/sda1.
    No PV label found on /dev/sda2.
    No PV label found on /dev/sda3.
    No PV label found on /dev/sda4.
    Wiping internal VG cache
    Wiping cache of LVM-capable devices
    Wiping signatures on new PV /dev/loop0.
    Found existing signature on /dev/loop0 at offset 512: LABEL="(null)" UUID="(null)" TYPE="gpt" USAGE="partition table"
  Wiping gpt signature on /dev/loop0.
    Found existing signature on /dev/loop0 at offset 31456768: LABEL="(null)" UUID="(null)" TYPE="gpt" USAGE="partition table"
  Wiping gpt signature on /dev/loop0.
    Found existing signature on /dev/loop0 at offset 510: LABEL="(null)" UUID="(null)" TYPE="PMBR" USAGE="partition table"
  Wiping PMBR signature on /dev/loop0.
    Set up physical volume for "/dev/loop0" with 61440 available sectors.
    Zeroing start of device /dev/loop0.
    Writing physical volume data to disk "/dev/loop0".
  Physical volume "/dev/loop0" successfully created.

abbbi commented at 2021-07-06 09:22:

Never mind, I just found out about #799.

jsmeix commented at 2021-07-06 12:15:

Yes, https://github.com/rear/rear/issues/799 is the matching issue.

See also the comments in https://github.com/rear/rear/pull/2514
and the comments in the code for some background information
about the current state of the DISKS_TO_BE_WIPED functionality.

In particular regarding removing LVs and VGs see
https://github.com/rear/rear/pull/2514#issuecomment-729009274

jsmeix commented at 2021-07-07 06:27:

In general see also my reply
https://github.com/rear/rear/issues/2638#issuecomment-868223949

jsmeix commented at 2021-07-13 13:53:

@abbbi FYI:
I am neither an LVM nor a multipath expert, but I think the message
Device /dev/mapper/360050768108183224800000000000246-part1 excluded by a filter.
could be only an informational message and not an actual error message.
I think such a message could be caused by a filter in /etc/lvm/lvm.conf,
e.g. something similar to https://github.com/rear/rear/issues/2651
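
For illustration only (not taken from the affected system), a filter in /etc/lvm/lvm.conf that rejects all but plain SCSI disks would also reject the /dev/mapper/* multipath partition and produce exactly such a message:

 devices {
     # illustrative values: accept /dev/sd* devices, reject everything else
     filter = [ "a|^/dev/sd.*|", "r|.*|" ]
 }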

abbbi commented at 2021-07-13 14:37:

> @abbbi FYI:
> I am neither an LVM nor a multipath expert, but I think the message
> Device /dev/mapper/360050768108183224800000000000246-part1 excluded by a filter.
> could be only an informational message and not an actual error message.
> I think such a message could be caused by a filter in /etc/lvm/lvm.conf,
> e.g. something similar to #2651

No, it wasn't informational: the "excluded by a filter" message was gone after nulling the device!
In /etc/lvm/lvm.conf no filter settings were configured.

jsmeix commented at 2021-07-13 15:07:

Only an offhand blind guess:
Perhaps there was some kind of LVM metadata left on that device
and the (as far as I know udev-triggered) automated LVM detection
detected something and reported the "excluded by a filter" message.
In contrast, after nulling that device the automated LVM detection
could no longer detect anything LVM related, so there was no message.
So the actual root cause could have been LVM metadata remainders
rather than some LVM things actually being "excluded by a filter".
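
If the situation reappears, a guess like this might be checked (illustrative command, run in the rescue system before wiping) by letting LVM report its device filtering decisions verbosely:

 lvm pvs -vvvv 2>&1 | grep -i filter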

On my SLES15-SP3 test VM /etc/lvm/lvm.conf contains

# This configuration option has an automatic default value.
# global_filter = [ "a|.*/|" ]

I don't know what that automatic default value actually is,
but perhaps such an automatic default value might result in
an automatic "excluded by a filter" message in some cases?
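
If needed, the effective and the built-in default values could presumably be inspected with lvmconfig (assuming lvmconfig is available on the system in question):

 # effective (merged) configuration
 lvmconfig --typeconfig full devices/filter devices/global_filter
 # built-in default values
 lvmconfig --typeconfig default devices/filter devices/global_filter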


[Export of Github issue for rear/rear.]