#1057 Issue closed: Restore via ISO swaps disks /dev/sda <==> /dev/sdb

Labels: enhancement, bug, support / question, fixed / solved / done

Adrian987654321 opened issue at 2016-11-01 14:30:

  • rear version (/usr/sbin/rear -V):
    Relax-and-Recover 1.17.2 / Git

  • OS version (cat /etc/rear/os.conf or lsb_release -a):
    OS_VENDOR=SUSE_LINUX
    OS_VERSION=12

  • rear configuration files (cat /etc/rear/site.conf or cat /etc/rear/local.conf):

cat /etc/rear/local.conf
# Create Relax-and-Recover rescue media as ISO image
OUTPUT=ISO
BACKUP=TSM
  • Brief description of the issue
    Running on a VM with two disks, one 50GiB and the other 80GiB. When running through the restore menus the disks are swapped: /dev/sda becomes /dev/sdb and /dev/sdb becomes /dev/sda. This is booting the ISO image, so I don't have a log file to paste into this message. I do, however, have a couple of screenshots but can't paste them into the message.

  • Work-around, if any
    No workaround

Adrian987654321 commented at 2016-11-04 13:32:

This is the first part of the output after booting from the ISO image

Comparing disks.

Device sda has size 85899345920, 53687091200 expected
Device sdb has size 53687091200, 85899345920 expected
Switching to manual disk layout configuration.
This is the disk mapping table
    /dev/sda /dev/sdb
    /dev/sdb /dev/sda
Please confirm that '/var/lib/rear/layout/disklayout.conf' is as you expect.

The disks have been switched!

Can anyone shed any light on this and how to resolve the issue?

gozora commented at 2016-11-04 13:44:

I've run through the restore process several times, but did not have the chance to see something like this.
@Adrian987654321 maybe you could give me a hint, how to reproduce this?

Adrian987654321 commented at 2016-11-04 15:28:

I'm at a bit of a loss to explain what has happened.

All our SLES11 servers use rear 1.14 and work perfectly. I took the
opportunity to upgrade as part of the move to SLES12, so I am now using
rear 1.17. Here are two screenshots of the error message.

Any help you can offer is gratefully appreciated.

[image: Inline images 1]

[image: Inline images 2]

gdha commented at 2016-11-04 15:39:

@Adrian987654321 perhaps pasting the content of /var/lib/rear/layout/disklayout.conf could help us as well

gozora commented at 2016-11-04 15:46:

Is it only me, or are the screenshots really missing?

Adrian987654321 commented at 2016-11-04 15:50:

@gdha

Thanks for the help. This is the disklayout.conf file content:

disk /dev/sda 53687091200 msdos
part /dev/sda 279659520 1048576 primary boot /dev/sda1
part /dev/sda 53406383104 280708096 primary lvm /dev/sda2
lvmdev /dev/system /dev/sda2 lDaxch-gQmT-aEzw-CPnO-gNhl-vNA7-QmJZO7 104309342
lvmgrp /dev/system 4096 12732 52150272
lvmvol /dev/system lvhome 768 6291456
lvmvol /dev/system lvopt 384 3145728
lvmvol /dev/system lvperflog 128 1048576
lvmvol /dev/system lvroot 381 3121152
lvmvol /dev/system lvsysmon 64 524288
lvmvol /dev/system lvtmp 512 4194304
lvmvol /dev/system lvusr 768 6291456
lvmvol /dev/system lvvar 512 4194304
lvmvol /dev/system lvvarlog 512 4194304
lvmvol /dev/system lvvarlogaudit 512 4194304
lvmvol /dev/system lvswap 4096 33554432
lvmvol /dev/system lvsplunk 512 4194304

# Filesystems (only ext2,ext3,ext4,vfat,xfs,reiserfs,btrfs are supported).
# Format: fs <device> <mountpoint> <fstype> [uuid=<uuid>] [label=<label>] [<attributes>]
fs /dev/mapper/system-lvhome /home btrfs uuid=eb6aceaf-3f18-44ab-8f6d-e09fb53c23a5 label= options=rw,nodev,relatime,space_cache,subvolid=256,subvol=/@
fs /dev/mapper/system-lvopt /opt btrfs uuid=d9d5c837-a203-4429-9cd2-60d56c066636 label= options=rw,relatime,space_cache,subvolid=256,subvol=/@
fs /dev/mapper/system-lvperflog /perflog btrfs uuid=109d9b2c-75b1-4974-8b72-d63e6af5dff7 label= options=rw,relatime,space_cache,subvolid=256,subvol=/@
fs /dev/mapper/system-lvroot / btrfs uuid=464b2ed0-f4f1-4b21-b96d-88c378c6596f label= options=rw,relatime,space_cache,subvolid=256,subvol=/@
fs /dev/mapper/system-lvsplunk /opt/splunk btrfs uuid=b98df3f8-5182-45ff-957e-ad5c586188f8 label= options=rw,relatime,space_cache,subvolid=5,subvol=/
fs /dev/mapper/system-lvsysmon /sysmon btrfs uuid=796d681c-5e05-4839-9f52-b4f992196533 label= options=rw,relatime,space_cache,subvolid=256,subvol=/@
fs /dev/mapper/system-lvtmp /tmp btrfs uuid=7cc20f81-c6e6-4026-adce-d45cc00c1add label= options=rw,nosuid,nodev,noexec,relatime,space_cache,subvolid=257,subvol=/@
fs /dev/mapper/system-lvusr /usr btrfs uuid=21c14351-fb17-4f75-906f-4a105c26f042 label= options=rw,relatime,space_cache,subvolid=257,subvol=/@
fs /dev/mapper/system-lvvar /var btrfs uuid=452dee04-198b-49d2-b13a-6e8c746da033 label= options=rw,relatime,space_cache,subvolid=257,subvol=/@
fs /dev/mapper/system-lvvarlog /var/log btrfs uuid=ffaa67d3-9b07-4445-853b-ba9b04ad98e8 label= options=rw,relatime,space_cache,subvolid=5,subvol=/
fs /dev/mapper/system-lvvarlogaudit /var/log/audit btrfs uuid=b9da829e-89fd-497c-9cc1-21a54c7a912f label= options=rw,relatime,space_cache,subvolid=5,subvol=/
fs /dev/sda1 /boot xfs uuid=e365109f-a007-4382-a736-02d9ed8c33dc label= options=rw,relatime,attr2,inode64,noquota

# Btrfs default subvolume for /dev/mapper/system-lvhome at /home
# Format: btrfsdefaultsubvol <device> <mountpoint> <btrfs_subvolume_ID> <btrfs_subvolume_path>
btrfsdefaultsubvol /dev/mapper/system-lvhome /home 256 @

# Btrfs normal subvolumes for /dev/mapper/system-lvhome at /home
# Format: btrfsnormalsubvol <device> <mountpoint> <btrfs_subvolume_ID> <btrfs_subvolume_path>
btrfsnormalsubvol /dev/mapper/system-lvhome /home 256 @

# Btrfs default subvolume for /dev/mapper/system-lvopt at /opt
# Format: btrfsdefaultsubvol <device> <mountpoint> <btrfs_subvolume_ID> <btrfs_subvolume_path>
btrfsdefaultsubvol /dev/mapper/system-lvopt /opt 256 @

# Btrfs normal subvolumes for /dev/mapper/system-lvopt at /opt
# Format: btrfsnormalsubvol <device> <mountpoint> <btrfs_subvolume_ID> <btrfs_subvolume_path>
btrfsnormalsubvol /dev/mapper/system-lvopt /opt 256 @

# Btrfs default subvolume for /dev/mapper/system-lvperflog at /perflog
# Format: btrfsdefaultsubvol <device> <mountpoint> <btrfs_subvolume_ID> <btrfs_subvolume_path>
btrfsdefaultsubvol /dev/mapper/system-lvperflog /perflog 256 @

# Btrfs normal subvolumes for /dev/mapper/system-lvperflog at /perflog
# Format: btrfsnormalsubvol <device> <mountpoint> <btrfs_subvolume_ID> <btrfs_subvolume_path>
btrfsnormalsubvol /dev/mapper/system-lvperflog /perflog 256 @

# Btrfs default subvolume for /dev/mapper/system-lvroot at /
# Format: btrfsdefaultsubvol <device> <mountpoint> <btrfs_subvolume_ID> <btrfs_subvolume_path>
btrfsdefaultsubvol /dev/mapper/system-lvroot / 256 @

# Btrfs normal subvolumes for /dev/mapper/system-lvroot at /
# Format: btrfsnormalsubvol <device> <mountpoint> <btrfs_subvolume_ID> <btrfs_subvolume_path>
btrfsnormalsubvol /dev/mapper/system-lvroot / 256 @

# Btrfs default subvolume for /dev/mapper/system-lvsplunk at /opt/splunk
# Format: btrfsdefaultsubvol <device> <mountpoint> <btrfs_subvolume_ID> <btrfs_subvolume_path>
btrfsdefaultsubvol /dev/mapper/system-lvsplunk /opt/splunk 5 /

# Btrfs default subvolume for /dev/mapper/system-lvsysmon at /sysmon
# Format: btrfsdefaultsubvol <device> <mountpoint> <btrfs_subvolume_ID> <btrfs_subvolume_path>
btrfsdefaultsubvol /dev/mapper/system-lvsysmon /sysmon 256 @

# Btrfs normal subvolumes for /dev/mapper/system-lvsysmon at /sysmon
# Format: btrfsnormalsubvol <device> <mountpoint> <btrfs_subvolume_ID> <btrfs_subvolume_path>
btrfsnormalsubvol /dev/mapper/system-lvsysmon /sysmon 256 @

# Btrfs default subvolume for /dev/mapper/system-lvtmp at /tmp
# Format: btrfsdefaultsubvol <device> <mountpoint> <btrfs_subvolume_ID> <btrfs_subvolume_path>
btrfsdefaultsubvol /dev/mapper/system-lvtmp /tmp 257 @

# Btrfs normal subvolumes for /dev/mapper/system-lvtmp at /tmp
# Format: btrfsnormalsubvol <device> <mountpoint> <btrfs_subvolume_ID> <btrfs_subvolume_path>
btrfsnormalsubvol /dev/mapper/system-lvtmp /tmp 257 @

# Btrfs default subvolume for /dev/mapper/system-lvusr at /usr
# Format: btrfsdefaultsubvol <device> <mountpoint> <btrfs_subvolume_ID> <btrfs_subvolume_path>
btrfsdefaultsubvol /dev/mapper/system-lvusr /usr 257 @

# Btrfs normal subvolumes for /dev/mapper/system-lvusr at /usr
# Format: btrfsnormalsubvol <device> <mountpoint> <btrfs_subvolume_ID> <btrfs_subvolume_path>
btrfsnormalsubvol /dev/mapper/system-lvusr /usr 257 @

# Btrfs default subvolume for /dev/mapper/system-lvvar at /var
# Format: btrfsdefaultsubvol <device> <mountpoint> <btrfs_subvolume_ID> <btrfs_subvolume_path>
btrfsdefaultsubvol /dev/mapper/system-lvvar /var 257 @

# Btrfs normal subvolumes for /dev/mapper/system-lvvar at /var
# Format: btrfsnormalsubvol <device> <mountpoint> <btrfs_subvolume_ID> <btrfs_subvolume_path>
btrfsnormalsubvol /dev/mapper/system-lvvar /var 257 @

# Btrfs default subvolume for /dev/mapper/system-lvvarlog at /var/log
# Format: btrfsdefaultsubvol <device> <mountpoint> <btrfs_subvolume_ID> <btrfs_subvolume_path>
btrfsdefaultsubvol /dev/mapper/system-lvvarlog /var/log 5 /

# Btrfs default subvolume for /dev/mapper/system-lvvarlogaudit at /var/log/audit
# Format: btrfsdefaultsubvol <device> <mountpoint> <btrfs_subvolume_ID> <btrfs_subvolume_path>
btrfsdefaultsubvol /dev/mapper/system-lvvarlogaudit /var/log/audit 5 /

# All mounted btrfs subvolumes (including mounted btrfs default subvolumes and mounted btrfs snapshot subvolumes).
# Determined by the findmnt command that shows the mounted btrfs_subvolume_path.
# Format: btrfsmountedsubvol <device> <subvolume_mountpoint> <mount_options> <btrfs_subvolume_path>
btrfsmountedsubvol /dev/mapper/system-lvroot / rw,relatime,space_cache,subvolid=256,subvol=/@ @
btrfsmountedsubvol /dev/mapper/system-lvusr /usr rw,relatime,space_cache,subvolid=257,subvol=/@ @
btrfsmountedsubvol /dev/mapper/system-lvsysmon /sysmon rw,relatime,space_cache,subvolid=256,subvol=/@ @
btrfsmountedsubvol /dev/mapper/system-lvhome /home rw,nodev,relatime,space_cache,subvolid=256,subvol=/@ @
btrfsmountedsubvol /dev/mapper/system-lvopt /opt rw,relatime,space_cache,subvolid=256,subvol=/@ @
btrfsmountedsubvol /dev/mapper/system-lvtmp /tmp rw,nosuid,nodev,noexec,relatime,space_cache,subvolid=257,subvol=/@ @
btrfsmountedsubvol /dev/mapper/system-lvvar /var rw,relatime,space_cache,subvolid=257,subvol=/@ @
btrfsmountedsubvol /dev/mapper/system-lvperflog /perflog rw,relatime,space_cache,subvolid=256,subvol=/@ @
btrfsmountedsubvol /dev/mapper/system-lvtmp /var/tmp rw,nosuid,nodev,noexec,relatime,space_cache,subvolid=257,subvol=/@ @
btrfsmountedsubvol /dev/mapper/system-lvsplunk /opt/splunk rw,relatime,space_cache,subvolid=5,subvol=/ /
btrfsmountedsubvol /dev/mapper/system-lvvarlog /var/log rw,relatime,space_cache,subvolid=5,subvol=/ /
btrfsmountedsubvol /dev/mapper/system-lvvarlogaudit /var/log/audit rw,relatime,space_cache,subvolid=5,subvol=/ /
swap /dev/mapper/system-lvswap uuid=c20e6d9a-4280-4bd6-acf9-04a1f4f45107 label=

Adrian987654321 commented at 2016-11-04 15:54:

Sorry about that, I pasted them into the email last time.

I've attached them instead this time.

Thanks in advance.
Adrian

gozora commented at 2016-11-04 16:04:

:-/ not sure if GitHub can extract images from mail ...
Never mind, I will try a couple of test restores on SLES12 with two disks and see how it goes ...

Adrian987654321 commented at 2016-11-04 16:05:

01-screen
02-screen
03-screen

Adrian987654321 commented at 2016-11-04 16:05:

I can now attach images to the call as per above.

gozora commented at 2016-11-04 16:06:

nice 👍

gozora commented at 2016-11-04 16:11:

oh, at first glance I can see some BTRFS messages (not really my favorite FS), but as far as I remember @jsmeix did some btrfs improvements recently.
@Adrian987654321 would it be possible from your side to give ReaR 1.19 a try?

gozora commented at 2016-11-04 17:10:

@Adrian987654321 reading your disklayout.conf, maybe I got it all wrong, but are you using btrfs on top of each logical volume?
If not, could you share your filesystem layout/setup with me?

Like outputs from:
mount
btrfs subvolume list -a /
vgs
lvs
pvs
df -h
...

rpasche commented at 2016-11-05 13:10:

@Adrian987654321 For me, this looks like the disks are "detected" in the "wrong" way. On hardware, I would suspect that the two disks are connected to different controllers using different driver modules, and that these modules are loaded in a different order when booting the recovery system, resulting in this fault.

But on a VM....hmm..

Are the disks using different controllers within the VM? Can you give us the SCSI IDs of the disks?

Adrian987654321 commented at 2016-11-07 11:34:

Please find the output from the commands you requested.

sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
devtmpfs on /dev type devtmpfs (rw,nosuid,size=8223172k,nr_inodes=2055793,mode=755)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev,noexec)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,nodev,mode=755)
tmpfs on /sys/fs/cgroup type tmpfs (rw,nosuid,nodev,noexec,mode=755)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpuacct,cpu)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
/dev/mapper/system-lvroot on / type btrfs (rw,relatime,space_cache,subvolid=256,subvol=/@)
/dev/mapper/system-lvusr on /usr type btrfs (rw,relatime,space_cache,subvolid=257,subvol=/@)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=31,pgrp=1,timeout=300,minproto=5,maxproto=5,direct)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime)
mqueue on /dev/mqueue type mqueue (rw,relatime)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
/dev/sda1 on /boot type xfs (rw,relatime,attr2,inode64,noquota)
/dev/mapper/system-lvsysmon on /sysmon type btrfs (rw,relatime,space_cache,subvolid=256,subvol=/@)
/dev/mapper/system-lvhome on /home type btrfs (rw,nodev,relatime,space_cache,subvolid=256,subvol=/@)
/dev/mapper/system-lvopt on /opt type btrfs (rw,relatime,space_cache,subvolid=256,subvol=/@)
/dev/mapper/system-lvtmp on /tmp type btrfs (rw,nosuid,nodev,noexec,relatime,space_cache,subvolid=257,subvol=/@)
/dev/mapper/system-lvvar on /var type btrfs (rw,relatime,space_cache,subvolid=257,subvol=/@)
/dev/mapper/system-lvperflog on /perflog type btrfs (rw,relatime,space_cache,subvolid=256,subvol=/@)
/dev/mapper/system-lvtmp on /var/tmp type btrfs (rw,nosuid,nodev,noexec,relatime,space_cache,subvolid=257,subvol=/@)
/dev/mapper/system-lvvarlog on /var/log type btrfs (rw,relatime,space_cache,subvolid=5,subvol=/)
/dev/mapper/system-lvvarlogaudit on /var/log/audit type btrfs (rw,relatime,space_cache,subvolid=5,subvol=/)
binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,relatime)
/dev/mapper/datavg-splunklv on /opt/splunk type xfs (rw,relatime,attr2,inode64,noquota)
/dev/mapper/datavg-worklv on /workspace type xfs (rw,relatime,attr2,inode64,noquota)

utvlfidj12:~ # btrfs subvolume list -a /
ID 256 gen 5188 top level 5 path <FS_TREE>/@

utvlfidj12:~ # vgs
  VG     #PV #LV #SN Attr   VSize  VFree
  datavg   1   2   0 wz--n- 80.00g 40.00g
  system   1  11   0 wz--n- 49.73g 16.00g

utvlfidj12:~ # lvs
  LV            VG     Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  splunklv      datavg -wi-ao----  35.00g
  worklv        datavg -wi-ao----   5.00g
  lvhome        system -wi-ao----   3.00g
  lvopt         system -wi-ao----   1.50g
  lvperflog     system -wi-ao---- 512.00m
  lvroot        system -wi-ao----   1.49g
  lvswap        system -wi-ao----  16.00g
  lvsysmon      system -wi-ao---- 256.00m
  lvtmp         system -wi-ao----   2.00g
  lvusr         system -wi-ao----   3.00g
  lvvar         system -wi-ao----   2.00g
  lvvarlog      system -wi-ao----   2.00g
  lvvarlogaudit system -wi-ao----   2.00g

utvlfidj12:~ # pvs
  PV         VG     Fmt  Attr PSize  PFree
  /dev/sda2  system lvm2 a--  49.73g 16.00g
  /dev/sdb1  datavg lvm2 a--  80.00g 40.00g

utvlfidj12:~ # df -h
Filesystem                        Size  Used Avail Use% Mounted on
/dev/mapper/system-lvroot         1.5G  353M  974M  27% /
devtmpfs                          7.9G     0  7.9G   0% /dev
tmpfs                             7.9G     0  7.9G   0% /dev/shm
tmpfs                             7.9G   41M  7.9G   1% /run
tmpfs                             7.9G     0  7.9G   0% /sys/fs/cgroup
/dev/mapper/system-lvusr          3.0G  1.3G  1.6G  44% /usr
/dev/sda1                         264M   50M  214M  19% /boot
/dev/mapper/system-lvsysmon       256M  4.3M  252M   2% /sysmon
/dev/mapper/system-lvhome         3.0G   17M  2.9G   1% /home
/dev/mapper/system-lvopt          1.5G  368M  1.2G  24% /opt
/dev/mapper/system-lvtmp          2.0G   17M  1.8G   1% /tmp
/dev/mapper/system-lvvar          2.0G  426M  1.4G  24% /var
/dev/mapper/system-lvperflog      512M   65M  448M  13% /perflog
/dev/mapper/system-lvvarlog       2.0G   26M  1.8G   2% /var/log
/dev/mapper/system-lvvarlogaudit  2.0G   27M  1.8G   2% /var/log/audit
/dev/mapper/datavg-splunklv        35G   33M   35G   1% /opt/splunk
/dev/mapper/datavg-worklv         5.0G   33M  5.0G   1% /workspace

gozora commented at 2016-11-07 11:57:

Hmm, so btrfs on top of LVM indeed.
I've never worked with such a setup before.
@Adrian987654321 can you tell me if you were able to successfully restore your OS?

jsmeix commented at 2016-11-07 13:03:

Only a note FYI:
Last week I was not in the office.
I will have a look here hopefully tomorrow...

jsmeix commented at 2016-11-07 13:18:

In general kernel device names like /dev/sda and /dev/sdb
can appear in any ordering.

Basically it is just luck which of two disks gets
the device node /dev/sda versus /dev/sdb.

A solution might be to (optionally) no longer use kernel device nodes
but instead (or additionally) use higher-level names like the symlinks in
/dev/disk/by-id
/dev/disk/by-label
/dev/disk/by-partlabel
/dev/disk/by-partuuid
/dev/disk/by-path
/dev/disk/by-uuid

Some drawbacks of using disk/by-* symlinks are:

On basically the same replacement hardware some of them are
not the same; in particular, disk WWNs or WWIDs in disk/by-id
should be different for any replacement disk.

On a new replacement disk there are neither partitions nor filesystems,
so there are neither partition labels nor partition UUIDs nor
filesystem labels nor filesystem UUIDs, which means that neither
disk/by-partlabel nor disk/by-partuuid nor disk/by-label nor
disk/by-uuid can be used.

Therefore - as far as I can imagine right now - using
kernel device nodes like /dev/sda and /dev/sdb
works sufficiently well in most cases - except
exceptional cases like this one.

As far as I can see, the only thing that could actually be useful is
disk/by-path, because on basically the same replacement hardware one
could assume that the replacement disks are connected via the same
physical paths as on the original system; "basically same replacement
hardware" means in particular that the same physical paths are used
on the original system and on the replacement hardware.

Currently ReaR uses only the disk size to find out
which disk kernel device node on the replacement hardware
matches which disk kernel device node on the original system
and if more than one matches ReaR goes into migration mode,
see the "MIGRATION_MODE" description in disklayout.conf.

If, in addition to the disk size, disk/by-path were also used, and
there is more than one disk with the same size, then ReaR could match
according to the physical paths; only if there is no match by path
would ReaR have to go into migration mode,
cf. https://github.com/rear/rear/issues/2050
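
For illustration, a quick way to see how the current kernel device
nodes map to persistent by-path names is (plain shell, not something
ReaR does today):

# Show which kernel device node each by-path symlink points to.
# On a system without such symlinks the glob simply matches nothing.
for symlink in /dev/disk/by-path/* ; do
    echo "$symlink -> $(readlink -e "$symlink")"
done

On "basically same replacement hardware" the by-path names on the
left-hand side should match up even when the sdX names on the
right-hand side come up in a different order.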

gozora commented at 2016-11-07 13:23:

@jsmeix I'm afraid that despite the description, swapped disks are not the main problem here (see screenshot).
I'm waiting for reply to my comment ...

gozora commented at 2016-11-07 13:24:

@jsmeix did you ever test ReaR with btrfs on LVM?

jsmeix commented at 2016-11-07 13:27:

Good grief!
"btrfs on LVM"
I never ever used that.
Currently I have no experience with such complicated setups.

Adrian987654321 commented at 2016-11-07 16:34:

Yes, this looks to be the case.

The two disks are using two different driver modules. I don't have
access to copy what the modules are, but both are different.

Adrian987654321 commented at 2016-11-07 16:37:

I've asked a colleague to install version 1.19, which he will do in the
morning, and then test to see if it fixes the issue of the mount failure
of /dev/mapper/system-lvroot.

Thanks for all of your help.

Adrian​

Adrian987654321 commented at 2016-11-07 16:39:

So far no. The restore fails when mounting /dev/mapper/system-lvroot.

jsmeix commented at 2016-11-08 13:09:

I am sneaking up on the issue step by step:

For me with SLE12-SP2 on a QEMU/KVM virtual machine
with a single 20GB virtual harddisk /dev/sda
with btrfs on top of LVM both "rear mkbackup" and
then on a second same virtual machine "rear recover"
just work with the current ReaR GitHub master code.

The next step will be with two virtual harddisks...

I am using
usr/share/rear/conf/examples/SLE12-SP2-btrfs-example.conf
as template for my etc/rear/local.conf

During initial system installation with YaST
I clicked in the YaST "Suggested Partitioning" dialog
on "Edit Proposal Settings" and then I selected the
"LVM-based Proposal" which results LVM with
the default SLE12-SP2 btrfs structure on top of it.
I do no manual special LVM or btrfs configuration.
I use the YaST proposal as is.

Some details:

d108:~/rear # cat /etc/issue      
Welcome to SUSE Linux Enterprise Server 12 SP2  (x86_64) ...
d108:~/rear # findmnt -t btrfs -o TARGET,SOURCE
TARGET                    SOURCE
/                         /dev/mapper/system-root[/@/.snapshots/1/snapshot]
|-/var/lib/pgsql          /dev/mapper/system-root[/@/var/lib/pgsql]
|-/var/lib/machines       /dev/mapper/system-root[/@/var/lib/machines]
|-/.snapshots             /dev/mapper/system-root[/@/.snapshots]
|-/srv                    /dev/mapper/system-root[/@/srv]
|-/var/lib/mysql          /dev/mapper/system-root[/@/var/lib/mysql]
|-/var/opt                /dev/mapper/system-root[/@/var/opt]
|-/var/tmp                /dev/mapper/system-root[/@/var/tmp]
|-/var/lib/named          /dev/mapper/system-root[/@/var/lib/named]
|-/var/lib/mailman        /dev/mapper/system-root[/@/var/lib/mailman]
|-/opt                    /dev/mapper/system-root[/@/opt]
|-/var/lib/libvirt/images /dev/mapper/system-root[/@/var/lib/libvirt/images]
|-/boot/grub2/i386-pc     /dev/mapper/system-root[/@/boot/grub2/i386-pc]
|-/var/lib/mariadb        /dev/mapper/system-root[/@/var/lib/mariadb]
|-/var/cache              /dev/mapper/system-root[/@/var/cache]
|-/tmp                    /dev/mapper/system-root[/@/tmp]
|-/usr/local              /dev/mapper/system-root[/@/usr/local]
|-/var/crash              /dev/mapper/system-root[/@/var/crash]
|-/var/log                /dev/mapper/system-root[/@/var/log]
|-/home                   /dev/mapper/system-root[/@/home]
|-/var/spool              /dev/mapper/system-root[/@/var/spool]
`-/boot/grub2/x86_64-efi  /dev/mapper/system-root[/@/boot/grub2/x86_64-efi]
d108:~/rear # grep -v ^# etc/rear/local.conf 
OUTPUT=ISO
BACKUP=NETFS
BACKUP_OPTIONS="nfsvers=3,nolock"
BACKUP_URL=nfs://10.160.4.244/nfs
NETFS_KEEP_OLD_BACKUP_COPY=yes
REQUIRED_PROGS=( "${REQUIRED_PROGS[@]}" snapper chattr lsattr )
COPY_AS_IS=( "${COPY_AS_IS[@]}" /usr/lib/snapper/installation-helper /etc/snapper/config-templates/default )
BACKUP_PROG_INCLUDE=( '/var/cache/*' '/var/lib/mailman/*' '/var/tmp/*' '/var/lib/pgsql/*' '/usr/local/*' '/opt/*' '/var/lib/libvirt/images/*' '/boot/grub2/i386/*' '/var/opt/*' '/srv/*' '/boot/grub2/x86_64/*' '/var/lib/mariadb/*' '/var/spool/*' '/var/lib/mysql/*' '/tmp/*' '/home/*' '/var/log/*' '/var/lib/named/*' '/var/lib/machines/*' )
POST_RECOVERY_SCRIPT=( 'if snapper --no-dbus -r $TARGET_FS_ROOT get-config | grep -q "^QGROUP.*[0-9]/[0-9]" ; then snapper --no-dbus -r $TARGET_FS_ROOT set-config QGROUP= ; snapper --no-dbus -r $TARGET_FS_ROOT setup-quota && echo snapper setup-quota done || echo snapper setup-quota failed ; else echo snapper setup-quota not used ; fi' )
SSH_ROOT_PASSWORD="rear"
USE_DHCLIENT="yes"
KEEP_BUILD_DIR=""
d108:~/rear # usr/sbin/rear -d -D mkbackup
Relax-and-Recover 1.19 / Git
Using log file: /root/rear/var/log/rear/rear-d108.log
mkdir: created directory '/root/rear/var/lib'
mkdir: created directory '/root/rear/var/lib/rear'
mkdir: created directory '/root/rear/var/lib/rear/output'
Creating disk layout
Creating root filesystem layout
Copying files and directories
Copying binaries and libraries
Copying kernel modules
Creating initramfs
Making ISO image
Wrote ISO image: /root/rear/var/lib/rear/output/rear-d108.iso (150M)
Copying resulting files to nfs location
Encrypting disabled
Creating tar archive '/tmp/rear.9pxiJl1HGxwcHjf/outputfs/d108/backup.tar.gz'
Archived 823 MiB [avg 7600 KiB/sec]OK
Archived 823 MiB in 112 seconds [avg 7532 KiB/sec]
d108:~/rear # cat var/lib/rear/layout/disklayout.conf
# Disk /dev/sda
# Format: disk <devname> <size(bytes)> <partition label type>
disk /dev/sda 21474836480 msdos
# Partitions on /dev/sda
# Format: part <device> <partition size(bytes)> <partition start(bytes)> <partition type|name> <flags> /dev/<partition>
part /dev/sda 21473787904 1048576 primary boot,lvm /dev/sda1
lvmdev /dev/system /dev/sda1 Klj7cB-Hhw4-h2RX-MYoI-fh17-NhkB-a5CfxZ 41940992
lvmgrp /dev/system 4096 5119 20967424
lvmvol /dev/system root 4746 38879232 
lvmvol /dev/system swap 371 3039232 
# Filesystems (only ext2,ext3,ext4,vfat,xfs,reiserfs,btrfs are supported).
# Format: fs <device> <mountpoint> <fstype> [uuid=<uuid>] [label=<label>] [<attributes>]

jsmeix commented at 2016-11-08 13:37:

@Adrian987654321
in your initial comment
https://github.com/rear/rear/issues/1057#issue-186547940
you wrote

cat /etc/rear/local.conf
# Create Relax-and-Recover rescue media as ISO image
OUTPUT=ISO
BACKUP=TSM

Is that really your whole /etc/rear/local.conf file content?

If that is really your whole /etc/rear/local.conf
it cannot work.

For the special SUSE btrfs default structure use one of the
usr/share/rear/conf/examples/SLE*example.conf
files as template.

Adrian987654321 commented at 2016-11-08 14:38:

I'm not in a position where I can test today, but tomorrow I'm hoping
to have someone who can test using this version of rear:

rear-1.19-22.git201611071733.x86_64.rpm
http://download.opensuse.org/repositories/Archiving:/Backup:/Rear:/Snapshot/SLE_12_SP1/x86_64/rear-1.19-22.git201611071733.x86_64.rpm

I'll let you know as soon as I do if it runs successfully.

Adrian987654321 commented at 2016-11-08 14:41:

Just double-checked, and yes, that is all there is in the file.

cat /etc/rear/local.conf

# Create Relax-and-Recover rescue media as ISO image

OUTPUT=ISO
BACKUP=TSM

I'll get one of the files you suggest tested tomorrow with version
1.19-22 and let you know the outcome.

Thanks for your help.

jsmeix commented at 2016-11-08 16:23:

Also with two virtual harddisks
using SLE12-SP2 on a QEMU/KVM virtual machine
with one 10GB virtual harddisk /dev/sda and
a second 15GB virtual harddisk /dev/sdb
with btrfs on top of LVM both "rear mkbackup" and
then on a second same virtual machine "rear recover"
just work with the current ReaR GitHub master code.

But during "rear recover" I got no message about
interchanged harddisk devices.

My next step is to do "rear recover" on another virtual machine
where the first disk is 15GB and the second one 10GB
(i.e. with interchanged virtual harddisks).

During initial system installation with YaST
I used the YaST "expert partitioner" to
first make on each whole disk a single partition
(i.e. I got /dev/sda1 and /dev/sdb1)
then make a volume group of those two partitions
and in that volume group I created three logical volumes:
1.
a 15GB logical "root" volume with
the SLE12-SP2 default btrfs structure
2.
a 2GB logical "swap" volume
3.
a 5GB logical "home" volume with XFS.

jsmeix commented at 2016-11-08 16:40:

For me "rear recover" also works well
on another virtual machine where the
first disk is 15GB and the second one 10GB
(i.e. with interchanged virtual harddisks).

I got two times a question what to do and in both cases
I simply replied with a "5" (i.e. "Continue recovery").

RESCUE d44:~ # rear -d -D recover
Relax-and-Recover 1.19 / Git
Using log file: /var/log/rear/rear-d44.log
Starting required daemons for NFS: RPC portmapper (portmap or rpcbind) and rpc.statd if available.
Started RPC portmapper 'rpcbind'.
RPC portmapper 'rpcbind' available.
Started rpc.statd.
RPC status rpc.statd available.
NOTICE: Will do driver migration
Calculating backup archive size
Backup archive size is 825M     /tmp/rear.D4Tf1ZaNWHcMJ4c/outputfs/d44/backup.tar.gz (compressed)
Comparing disks.
Device sda has size 16106127360, 10737418240 expected
Device sdb has size 10737418240, 16106127360 expected
Switching to manual disk layout configuration.
This is the disk mapping table:
    /dev/sda /dev/sdb
    /dev/sdb /dev/sda
Please confirm that '/var/lib/rear/layout/disklayout.conf' is as you expect.
++ select choice in '"${choices[@]}"'
1) View disk layout (disklayout.conf)  3) View original disk space usage      5) Continue recovery
2) Edit disk layout (disklayout.conf)  4) Go to Relax-and-Recover shell       6) Abort Relax-and-Recover
#? 5
++ case "$REPLY" in
++ break
Partition primary on /dev/sdb: size reduced to fit on disk.
Partition primary on /dev/sda: size reduced to fit on disk.
Doing SLES12 special btrfs subvolumes setup because the default subvolume path contains '@/.snapshots/'
Please confirm that '/var/lib/rear/layout/diskrestore.sh' is as you expect.
++ select choice in '"${choices[@]}"'
1) View restore script (diskrestore.sh)  3) View original disk space usage        5) Continue recovery
2) Edit restore script (diskrestore.sh)  4) Go to Relax-and-Recover shell         6) Abort Relax-and-Recover
#? 5
++ case "$REPLY" in
++ break
Start system layout restoration.
Creating partitions for disk /dev/sdb (msdos)
Creating partitions for disk /dev/sda (msdos)
Creating LVM PV /dev/sdb1
Creating LVM PV /dev/sda1
Creating LVM VG myvolumegroup
Creating LVM volume myvolumegroup/myhomelogicalvolume
  Logical volume "myhomelogicalvolume" created.
Creating LVM volume myvolumegroup/myrootlogicalvolume
  Logical volume "myrootlogicalvolume" created.
Creating LVM volume myvolumegroup/myswaplogicalvolume
  Logical volume "myswaplogicalvolume" created.
Creating filesystem of type btrfs with mount point / on /dev/mapper/myvolumegroup-myrootlogicalvolume.
btrfs-progs v4.5.3+20160729
See http://btrfs.wiki.kernel.org for more information.
Performing full device TRIM (15.00GiB) ...
Label:              (null)
UUID:               92db603c-2653-47bd-b517-54025cb60306
Node size:          16384
Sector size:        4096
Filesystem size:    15.00GiB
Block group profiles:
  Data:             single            8.00MiB
  Metadata:         DUP               1.01GiB
  System:           DUP              12.00MiB
SSD detected:       no
Incompat features:  extref, skinny-metadata
Number of devices:  1
Devices:
   ID        SIZE  PATH
    1    15.00GiB  /dev/mapper/myvolumegroup-myrootlogicalvolume
Mounting filesystem /
Create subvolume '/mnt/local/@'
Create subvolume '/mnt/local//@/boot/grub2/i386-pc'
Create subvolume '/mnt/local//@/boot/grub2/x86_64-efi'
Create subvolume '/mnt/local//@/opt'
Create subvolume '/mnt/local//@/srv'
Create subvolume '/mnt/local//@/tmp'
Create subvolume '/mnt/local//@/usr/local'
Create subvolume '/mnt/local//@/var/cache'
Create subvolume '/mnt/local//@/var/crash'
Create subvolume '/mnt/local//@/var/lib/libvirt/images'
Create subvolume '/mnt/local//@/var/lib/machines'
Create subvolume '/mnt/local//@/var/lib/mailman'
Create subvolume '/mnt/local//@/var/lib/mariadb'
Create subvolume '/mnt/local//@/var/lib/mysql'
Create subvolume '/mnt/local//@/var/lib/named'
Create subvolume '/mnt/local//@/var/lib/pgsql'
Create subvolume '/mnt/local//@/var/log'
Create subvolume '/mnt/local//@/var/opt'
Create subvolume '/mnt/local//@/var/spool'
Create subvolume '/mnt/local//@/var/tmp'
Running snapper/installation-helper:
step 1 device:/dev/mapper/myvolumegroup-myrootlogicalvolume
temporarily mounting device
copying/modifying config-file
creating filesystem config
creating snapshot
setting default subvolume
done
Creating filesystem of type xfs with mount point /home on /dev/mapper/myvolumegroup-myhomelogicalvolume.
meta-data=/dev/mapper/myvolumegroup-myhomelogicalvolume isize=256    agcount=4, agsize=327680 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=0        finobt=0, sparse=0
data     =                       bsize=4096   blocks=1310720, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
Mounting filesystem /home
Creating swap on /dev/mapper/myvolumegroup-myswaplogicalvolume
Disk layout created.
Decrypting disabled
Restoring from '/tmp/rear.D4Tf1ZaNWHcMJ4c/outputfs/d44/backup.tar.gz'
Restored 2191 MiB [avg 72395 KiB/sec]OK
Restored 2191 MiB in 32 seconds [avg 70133 KiB/sec]
Restore the Mountpoints (with permissions) from /var/lib/rear/recovery/mountpoint_permissions
Patching file 'boot/grub2/grub.cfg'
Patching file 'boot/grub2/device.map'
Patching file 'etc/sysconfig/bootloader'
Patching file 'etc/fstab'
Patching file 'etc/mtools.conf'
Patching file 'etc/smartd.conf'
Patching file 'etc/sysconfig/smartmontools'
Patching file 'etc/security/pam_mount.conf.xml'
Installing GRUB2 boot loader
snapper setup-quota done
Finished recovering your system. You can explore it under '/mnt/local'.

The result is as expected.

On the original system I have

# parted -l 2>/dev/null | grep '^Disk /'            
Disk /dev/sda: 10.7GB
Disk /dev/sdb: 16.1GB
Disk /dev/mapper/myvolumegroup-myhomelogicalvolume: 5369MB
Disk /dev/mapper/myvolumegroup-myswaplogicalvolume: 2147MB
Disk /dev/mapper/myvolumegroup-myrootlogicalvolume: 16.1GB

On the recovered system (with interchanged disks) I have:

# parted -l 2>/dev/null | grep '^Disk /'
Disk /dev/sda: 16.1GB
Disk /dev/sdb: 10.7GB
Disk /dev/mapper/myvolumegroup-myhomelogicalvolume: 5369MB
Disk /dev/mapper/myvolumegroup-myswaplogicalvolume: 2147MB
Disk /dev/mapper/myvolumegroup-myrootlogicalvolume: 16.1GB

For me everything works well.

schabrolles commented at 2016-11-09 11:13:

This works as soon as the disks have different sizes ... But you can still end up with swapped disks during restoration if you have several disks of the same size.
What about using the lsblk command to store disk name, size and serial? It could help to identify the REAL matching disk based on serial instead of only size.

# lsblk -ro TYPE,NAME,SIZE,SERIAL | grep disk
disk sdcr 500G 600507680180851458000000000017a4
disk sddh 500G 60050768018085145800000000001797
disk sdcs 500G 60050768018085145800000000001796
disk sddi 500G 60050768018085145800000000001798
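
A sketch of what could be saved at "rear mkbackup" time so that the
rescue system has serial numbers to compare against (the file path
below is hypothetical, not something ReaR writes today):

# Save an inventory of whole disks while the original system is running
# (-d: disks only, -b: sizes in bytes, -r: raw output, -n: no heading)
lsblk -rbdno NAME,SIZE,SERIAL > /var/lib/rear/layout/disk_serials

In the rescue system the same command could be re-run and the two
tables compared to pair up disks by serial instead of by size alone.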

Adrian987654321 commented at 2016-11-09 14:33:

I have installed version 1.19

rear-1.19-22.git201611071733.x86_64.rpm

and have used the SLES12 SP1 config file as suggested from

/usr/share/rear/conf/examples/

and ran the tests again. The ISO image is created correctly, but the recovery fails on the same step with the same error message.

Incompat features: extref, skinny-metadata

Mounting filesystem /
An error occured during the layout recreation

This web page may explain the issue:

https://btrfs.wiki.kernel.org/index.php/Manpage/mkfs.btrfs

Search for
Incompat features: extref, skinny-metadata

Under "SMALL FILESYSTEMS AND LARGE NODESIZE" it says:

The combination of small filesystem size and large nodesize is not recommended in general and can lead to various ENOSPC-related issues during mount time or runtime.

Since mixed block group creation is optional, we allow small filesystem instances with differing values for sectorsize and nodesize to be created and could end up in the following situation:

So the output of mkfs.btrfs -O list-all is

Filesystem features available:
mixed-bg - mixed data and metadata block groups (0x4)
extref - increased hardlink limit per file to 65536 (0x40, default)
raid56 - raid56 extended format (0x80)
skinny-metadata - reduced-size metadata extent refs (0x100, default)
no-holes - no explicit hole extents for files (0x200)

The question seems to be how to turn off these two features (extref and skinny-metadata) for the recovery.

Adrian987654321 commented at 2016-11-09 14:38:

Sorry, forgot to add:

mkfs.btrfs -O ^extref,^skinny-metadata

should work when the LV is created.
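
For example, applied to the root LV from the disklayout.conf above
(illustrative only; note that the -O feature list must be
comma-separated, and -f forces re-creation over an existing filesystem):

mkfs.btrfs -f -O ^extref,^skinny-metadata /dev/mapper/system-lvroot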

jsmeix commented at 2016-11-09 15:37:

@schabrolles
regarding using any hardware-specific IDs:
Keep in mind that "rear recover" must work on replacement hardware
where any hardware-specific IDs could be different compared to
the original system where "rear mkbackup" was run.

Currently I have no good idea how one could automatically
find out on replacement hardware which disks
are actually the right ones.

jsmeix commented at 2016-11-09 15:45:

@Adrian987654321
in general regarding how to do special hacks during recovery
for recreating the disk layout:
You can edit the diskrestore.sh script during "rear recover",
see my
https://github.com/rear/rear/issues/1057#issuecomment-259188698

Please confirm that '/var/lib/rear/layout/diskrestore.sh' is as you expect.
++ select choice in '"${choices[@]}"'
1) View restore script (diskrestore.sh)  3) View original disk space usage        5) Continue recovery
2) Edit restore script (diskrestore.sh)  4) Go to Relax-and-Recover shell         6) Abort Relax-and-Recover
#?

I.e. when you get this question reply with "2" to adapt
the diskrestore.sh script - therein you can change the mkfs
command for btrfs as you need it in your particular case.

For some general documentation about how to play around
with the diskrestore.sh script during "rear recover" see
https://github.com/rear/rear/blob/master/doc/user-guide/06-layout-configuration.adoc
and
https://github.com/rear/rear/blob/master/doc/user-guide/08-troubleshooting.adoc
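
Schematically, such an edit inside diskrestore.sh could look like this
(the two lines below are illustrative; the generated script wraps the
mkfs call in more surrounding code):

# as generated (schematic):
mkfs.btrfs -f /dev/mapper/system-lvroot
# after editing, with the two default features disabled:
mkfs.btrfs -f -O ^extref,^skinny-metadata /dev/mapper/system-lvroot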

schabrolles commented at 2016-11-09 16:37:

@jsmeix
I think we should first check if the same disk exists based on HW ID.
If we can't find the same HW ID, use the size to get the best candidate disk (like rear is doing today).

When restoring on the same hardware, we should keep exactly the same disks.
When restoring on different hardware, find the best disk to map based on size.
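
A minimal sketch of that decision logic in shell (hypothetical helper,
not existing ReaR code - exact serial match first, size as fallback):

# map_disk <orig_serial> <orig_size_bytes> -> prints the matching device name
map_disk() {
    local orig_serial=$1 orig_size=$2 type name size serial
    # 1st pass: an exact HW ID (serial) match wins
    while read -r type name size serial ; do
        test "$type" = disk -a "$serial" = "$orig_serial" && { echo "$name" ; return 0 ; }
    done < <(lsblk -rbno TYPE,NAME,SIZE,SERIAL)
    # 2nd pass: fall back to matching by size (what ReaR does today)
    while read -r type name size serial ; do
        test "$type" = disk -a "$size" = "$orig_size" && { echo "$name" ; return 0 ; }
    done < <(lsblk -rbno TYPE,NAME,SIZE,SERIAL)
    return 1
}

lsblk -b reports sizes in bytes, so they can be compared directly
against the sizes stored in disklayout.conf.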

rpasche commented at 2016-11-10 08:32:

@schabrolles
As you said... on the same hardware, you should keep the same disks. And the disks should also be recognized the same way. For me, the main problem right now is the setup of @Adrian987654321's currently running system. It looks somehow broken to me.

@Adrian987654321
Can you please provide the output of lsscsi, dmesg and lspci -v on the running system (before rear mkbackup)?

I just want to know the SCSI IDs of the disks and whether they are on different controllers using different modules.

Adrian987654321 commented at 2016-11-10 08:43:

Thanks for the info.

In this case I am trying to run a test restore on the same VM as the ISO
came from.

jsmeix commented at 2016-11-10 08:50:

@schabrolles
I fully agree with your proposal
https://github.com/rear/rear/issues/1057#issuecomment-259460160
and therefore I created
https://github.com/rear/rear/issues/1063

rpasche commented at 2016-11-10 08:56:

@schabrolles @jsmeix
I also agree. Checking the HW IDs is a good idea.

jsmeix commented at 2016-11-10 09:21:

@Adrian987654321
I have a general question:
In https://github.com/rear/rear/issues/1057#issue-186547940
you wrote

Running on a VM with two disks,
one 50GiB and the other 80GiB.

I wonder why you have two virtual harddisks on that virtual machine.
Why can't you simply use a single virtual 130GiB harddisk?
Or more generally:
Why LVM with virtual harddisks?
Perhaps my question is stupid because I have no experience
with LVM (except my few testing attempts above).
I think LVM is mainly useful to combine several smaller physical
disks into one bigger pool (a volume group) which can then
be used as if it was a single big disk.

rpasche commented at 2016-11-10 09:34:

@jsmeix
We also use a second disk if someone requires bigger "data" that has nothing to do with the "OS". We use LVM to be able to resize the filesystems. Moving data from one filesystem on LVM to another (via pvmove, on a running production system) is also pretty cool.
A second disk could also be placed on another datastore with - possibly - better performance. Just one option.
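
For reference, a minimal pvmove invocation (the PV names are examples):
it moves all allocated extents off one physical volume onto another
while the volume group stays online.

pvmove /dev/sdb1 /dev/sdc1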

Adrian987654321 commented at 2016-11-10 09:57:

The requested info is as follows:

lsscsi
[0:0:0:0] disk VMware Virtual disk 1.0 /dev/sda
[0:0:1:0] disk VMware Virtual disk 1.0 /dev/sdb
[2:0:0:0] cd/dvd NECVMWar VMware IDE CDR10 1.00 /dev/sr0

dmesg - see attached file

lspci -- see file attached:

00:00.0 Host bridge: Intel Corporation 440BX/ZX/DX - 82443BX/ZX/DX Host bridge (rev 01)
Subsystem: VMware Virtual Machine Chipset
Flags: bus master, medium devsel, latency 0
Kernel driver in use: agpgart-intel

00:01.0 PCI bridge: Intel Corporation 440BX/ZX/DX - 82443BX/ZX/DX AGP bridge (rev 01) (prog-if 00 [Normal decode])
Flags: bus master, 66MHz, medium devsel, latency 0
Bus: primary=00, secondary=01, subordinate=01, sec-latency=64
Kernel modules: shpchp

00:07.0 ISA bridge: Intel Corporation 82371AB/EB/MB PIIX4 ISA (rev 08)
Subsystem: VMware Virtual Machine Chipset
Flags: bus master, medium devsel, latency 0

00:07.1 IDE interface: Intel Corporation 82371AB/EB/MB PIIX4 IDE (rev 01) (prog-if 8a [Master SecP PriP])
Subsystem: VMware Virtual Machine Chipset
Flags: bus master, medium devsel, latency 64
[virtual] Memory at 000001f0 (32-bit, non-prefetchable) [size=8]
[virtual] Memory at 000003f0 (type 3, non-prefetchable)
[virtual] Memory at 00000170 (32-bit, non-prefetchable) [size=8]
[virtual] Memory at 00000370 (type 3, non-prefetchable)
I/O ports at 1060 [size=16]
Kernel driver in use: ata_piix
Kernel modules: ata_piix, pata_acpi, ata_generic

00:07.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 08)
Subsystem: VMware Virtual Machine Chipset
Flags: medium devsel, IRQ 9
Kernel modules: i2c_piix4

00:07.7 System peripheral: VMware Virtual Machine Communication Interface (rev 10)
Subsystem: VMware Virtual Machine Communication Interface
Flags: bus master, medium devsel, latency 64, IRQ 16
I/O ports at 1080 [size=64]
Memory at febfe000 (64-bit, non-prefetchable) [size=8K]
Capabilities: [40] MSI: Enable- Count=1/1 Maskable- 64bit+
Capabilities: [58] MSI-X: Enable+ Count=2 Masked-
Kernel driver in use: vmw_vmci
Kernel modules: vmw_vmci

00:0f.0 VGA compatible controller: VMware SVGA II Adapter (prog-if 00 [VGA controller])
Subsystem: VMware SVGA II Adapter
Flags: bus master, medium devsel, latency 64, IRQ 16
I/O ports at 1070 [size=16]
Memory at ec000000 (32-bit, prefetchable) [size=64M]
Memory at fe000000 (32-bit, non-prefetchable) [size=8M]
[virtual] Expansion ROM at c0000000 [disabled] [size=32K]
Capabilities: [40] Vendor Specific Information: Len=00 <?>
Kernel driver in use: vmwgfx
Kernel modules: vmwgfx

00:10.0 SCSI storage controller: LSI Logic / Symbios Logic 53c1030 PCI-X Fusion-MPT Dual Ultra320 SCSI (rev 01)
Subsystem: VMware LSI Logic Parallel SCSI Controller
Flags: bus master, medium devsel, latency 64, IRQ 17
I/O ports at 1400 [size=256]
Memory at feba0000 (64-bit, non-prefetchable) [size=128K]
Memory at febc0000 (64-bit, non-prefetchable) [size=128K]
[virtual] Expansion ROM at c0008000 [disabled] [size=16K]
Kernel driver in use: mptspi
Kernel modules: mptspi

00:11.0 PCI bridge: VMware PCI bridge (rev 02) (prog-if 01 [Subtractive decode])
Flags: bus master, medium devsel, latency 64
Bus: primary=00, secondary=02, subordinate=02, sec-latency=68
I/O behind bridge: 00002000-00003fff
Memory behind bridge: fd600000-fdffffff
Prefetchable memory behind bridge: 00000000ebb00000-00000000ebffffff
Capabilities: [40] Subsystem: VMware PCI bridge

00:15.0 PCI bridge: VMware PCI Express Root Port (rev 01) (prog-if 00 [Normal decode])
Flags: bus master, fast devsel, latency 0
Bus: primary=00, secondary=03, subordinate=03, sec-latency=0
I/O behind bridge: 00004000-00004fff
Memory behind bridge: fd500000-fd5fffff
Prefetchable memory behind bridge: 00000000eba00000-00000000ebafffff
Capabilities: [40] Subsystem: VMware PCI Express Root Port
Capabilities: [48] Power Management version 3
Capabilities: [50] Express Root Port (Slot+), MSI 00
Capabilities: [8c] MSI: Enable+ Count=1/1 Maskable+ 64bit+
Kernel driver in use: pcieport
Kernel modules: shpchp

00:15.1 PCI bridge: VMware PCI Express Root Port (rev 01) (prog-if 00 [Normal decode])
Flags: bus master, fast devsel, latency 0
Bus: primary=00, secondary=04, subordinate=04, sec-latency=0
I/O behind bridge: 00008000-00008fff
Memory behind bridge: fd100000-fd1fffff
Prefetchable memory behind bridge: 00000000eb600000-00000000eb6fffff
Capabilities: [40] Subsystem: VMware PCI Express Root Port
Capabilities: [48] Power Management version 3
Capabilities: [50] Express Root Port (Slot+), MSI 00
Capabilities: [8c] MSI: Enable+ Count=1/1 Maskable+ 64bit+
Kernel driver in use: pcieport
Kernel modules: shpchp

00:15.2 PCI bridge: VMware PCI Express Root Port (rev 01) (prog-if 00 [Normal decode])
Flags: bus master, fast devsel, latency 0
Bus: primary=00, secondary=05, subordinate=05, sec-latency=0
I/O behind bridge: 0000c000-0000cfff
Memory behind bridge: fcd00000-fcdfffff
Prefetchable memory behind bridge: 00000000eb200000-00000000eb2fffff
Capabilities: [40] Subsystem: VMware PCI Express Root Port
Capabilities: [48] Power Management version 3
Capabilities: [50] Express Root Port (Slot+), MSI 00
Capabilities: [8c] MSI: Enable+ Count=1/1 Maskable+ 64bit+
Kernel driver in use: pcieport
Kernel modules: shpchp

00:15.3 PCI bridge: VMware PCI Express Root Port (rev 01) (prog-if 00 [Normal decode])
Flags: bus master, fast devsel, latency 0
Bus: primary=00, secondary=06, subordinate=06, sec-latency=0
Memory behind bridge: fc900000-fc9fffff
Prefetchable memory behind bridge: 00000000eae00000-00000000eaefffff
Capabilities: [40] Subsystem: VMware PCI Express Root Port
Capabilities: [48] Power Management version 3
Capabilities: [50] Express Root Port (Slot+), MSI 00
Capabilities: [8c] MSI: Enable+ Count=1/1 Maskable+ 64bit+
Kernel driver in use: pcieport
Kernel modules: shpchp

00:15.4 PCI bridge: VMware PCI Express Root Port (rev 01) (prog-if 00 [Normal decode])
Flags: bus master, fast devsel, latency 0
Bus: primary=00, secondary=07, subordinate=07, sec-latency=0
Memory behind bridge: fc500000-fc5fffff
Prefetchable memory behind bridge: 00000000eaa00000-00000000eaafffff
Capabilities: [40] Subsystem: VMware PCI Express Root Port
Capabilities: [48] Power Management version 3
Capabilities: [50] Express Root Port (Slot+), MSI 00
Capabilities: [8c] MSI: Enable+ Count=1/1 Maskable+ 64bit+
Kernel driver in use: pcieport
Kernel modules: shpchp

00:15.5 PCI bridge: VMware PCI Express Root Port (rev 01) (prog-if 00 [Normal decode])
Flags: bus master, fast devsel, latency 0
Bus: primary=00, secondary=08, subordinate=08, sec-latency=0
Memory behind bridge: fc100000-fc1fffff
Prefetchable memory behind bridge: 00000000ea600000-00000000ea6fffff
Capabilities: [40] Subsystem: VMware PCI Express Root Port
Capabilities: [48] Power Management version 3
Capabilities: [50] Express Root Port (Slot+), MSI 00
Capabilities: [8c] MSI: Enable+ Count=1/1 Maskable+ 64bit+
Kernel driver in use: pcieport
Kernel modules: shpchp

00:15.6 PCI bridge: VMware PCI Express Root Port (rev 01) (prog-if 00 [Normal decode])
Flags: bus master, fast devsel, latency 0
Bus: primary=00, secondary=09, subordinate=09, sec-latency=0
Memory behind bridge: fbd00000-fbdfffff
Prefetchable memory behind bridge: 00000000ea200000-00000000ea2fffff
Capabilities: [40] Subsystem: VMware PCI Express Root Port
Capabilities: [48] Power Management version 3
Capabilities: [50] Express Root Port (Slot+), MSI 00
Capabilities: [8c] MSI: Enable+ Count=1/1 Maskable+ 64bit+
Kernel driver in use: pcieport
Kernel modules: shpchp

00:15.7 PCI bridge: VMware PCI Express Root Port (rev 01) (prog-if 00 [Normal decode])
Flags: bus master, fast devsel, latency 0
Bus: primary=00, secondary=0a, subordinate=0a, sec-latency=0
Memory behind bridge: fb900000-fb9fffff
Prefetchable memory behind bridge: 00000000e9e00000-00000000e9efffff
Capabilities: [40] Subsystem: VMware PCI Express Root Port
Capabilities: [48] Power Management version 3
Capabilities: [50] Express Root Port (Slot+), MSI 00
Capabilities: [8c] MSI: Enable+ Count=1/1 Maskable+ 64bit+
Kernel driver in use: pcieport
Kernel modules: shpchp

00:16.0 PCI bridge: VMware PCI Express Root Port (rev 01) (prog-if 00 [Normal decode])
Flags: bus master, fast devsel, latency 0
Bus: primary=00, secondary=0b, subordinate=0b, sec-latency=0
I/O behind bridge: 00005000-00005fff
Memory behind bridge: fd400000-fd4fffff
Prefetchable memory behind bridge: 00000000eb900000-00000000eb9fffff
Capabilities: [40] Subsystem: VMware PCI Express Root Port
Capabilities: [48] Power Management version 3
Capabilities: [50] Express Root Port (Slot+), MSI 00
Capabilities: [8c] MSI: Enable+ Count=1/1 Maskable+ 64bit+
Kernel driver in use: pcieport
Kernel modules: shpchp

00:16.1 PCI bridge: VMware PCI Express Root Port (rev 01) (prog-if 00 [Normal decode])
Flags: bus master, fast devsel, latency 0
Bus: primary=00, secondary=0c, subordinate=0c, sec-latency=0
I/O behind bridge: 00009000-00009fff
Memory behind bridge: fd000000-fd0fffff
Prefetchable memory behind bridge: 00000000eb500000-00000000eb5fffff
Capabilities: [40] Subsystem: VMware PCI Express Root Port
Capabilities: [48] Power Management version 3
Capabilities: [50] Express Root Port (Slot+), MSI 00
Capabilities: [8c] MSI: Enable+ Count=1/1 Maskable+ 64bit+
Kernel driver in use: pcieport
Kernel modules: shpchp

00:16.2 PCI bridge: VMware PCI Express Root Port (rev 01) (prog-if 00 [Normal decode])
Flags: bus master, fast devsel, latency 0
Bus: primary=00, secondary=0d, subordinate=0d, sec-latency=0
I/O behind bridge: 0000d000-0000dfff
Memory behind bridge: fcc00000-fccfffff
Prefetchable memory behind bridge: 00000000eb100000-00000000eb1fffff
Capabilities: [40] Subsystem: VMware PCI Express Root Port
Capabilities: [48] Power Management version 3
Capabilities: [50] Express Root Port (Slot+), MSI 00
Capabilities: [8c] MSI: Enable+ Count=1/1 Maskable+ 64bit+
Kernel driver in use: pcieport
Kernel modules: shpchp

00:16.3 PCI bridge: VMware PCI Express Root Port (rev 01) (prog-if 00 [Normal decode])
Flags: bus master, fast devsel, latency 0
Bus: primary=00, secondary=0e, subordinate=0e, sec-latency=0
Memory behind bridge: fc800000-fc8fffff
Prefetchable memory behind bridge: 00000000ead00000-00000000eadfffff
Capabilities: [40] Subsystem: VMware PCI Express Root Port
Capabilities: [48] Power Management version 3
Capabilities: [50] Express Root Port (Slot+), MSI 00
Capabilities: [8c] MSI: Enable+ Count=1/1 Maskable+ 64bit+
Kernel driver in use: pcieport
Kernel modules: shpchp

00:16.4 PCI bridge: VMware PCI Express Root Port (rev 01) (prog-if 00 [Normal decode])
Flags: bus master, fast devsel, latency 0
Bus: primary=00, secondary=0f, subordinate=0f, sec-latency=0
Memory behind bridge: fc400000-fc4fffff
Prefetchable memory behind bridge: 00000000ea900000-00000000ea9fffff
Capabilities: [40] Subsystem: VMware PCI Express Root Port
Capabilities: [48] Power Management version 3
Capabilities: [50] Express Root Port (Slot+), MSI 00
Capabilities: [8c] MSI: Enable+ Count=1/1 Maskable+ 64bit+
Kernel driver in use: pcieport
Kernel modules: shpchp

00:16.5 PCI bridge: VMware PCI Express Root Port (rev 01) (prog-if 00 [Normal decode])
Flags: bus master, fast devsel, latency 0
Bus: primary=00, secondary=10, subordinate=10, sec-latency=0
Memory behind bridge: fc000000-fc0fffff
Prefetchable memory behind bridge: 00000000ea500000-00000000ea5fffff
Capabilities: [40] Subsystem: VMware PCI Express Root Port
Capabilities: [48] Power Management version 3
Capabilities: [50] Express Root Port (Slot+), MSI 00
Capabilities: [8c] MSI: Enable+ Count=1/1 Maskable+ 64bit+
Kernel driver in use: pcieport
Kernel modules: shpchp

00:16.6 PCI bridge: VMware PCI Express Root Port (rev 01) (prog-if 00 [Normal decode])
Flags: bus master, fast devsel, latency 0
Bus: primary=00, secondary=11, subordinate=11, sec-latency=0
Memory behind bridge: fbc00000-fbcfffff
Prefetchable memory behind bridge: 00000000ea100000-00000000ea1fffff
Capabilities: [40] Subsystem: VMware PCI Express Root Port
Capabilities: [48] Power Management version 3
Capabilities: [50] Express Root Port (Slot+), MSI 00
Capabilities: [8c] MSI: Enable+ Count=1/1 Maskable+ 64bit+
Kernel driver in use: pcieport
Kernel modules: shpchp

00:16.7 PCI bridge: VMware PCI Express Root Port (rev 01) (prog-if 00 [Normal decode])
Flags: bus master, fast devsel, latency 0
Bus: primary=00, secondary=12, subordinate=12, sec-latency=0
Memory behind bridge: fb800000-fb8fffff
Prefetchable memory behind bridge: 00000000e9d00000-00000000e9dfffff
Capabilities: [40] Subsystem: VMware PCI Express Root Port
Capabilities: [48] Power Management version 3
Capabilities: [50] Express Root Port (Slot+), MSI 00
Capabilities: [8c] MSI: Enable+ Count=1/1 Maskable+ 64bit+
Kernel driver in use: pcieport
Kernel modules: shpchp

00:17.0 PCI bridge: VMware PCI Express Root Port (rev 01) (prog-if 00 [Normal decode])
Flags: bus master, fast devsel, latency 0
Bus: primary=00, secondary=13, subordinate=13, sec-latency=0
I/O behind bridge: 00006000-00006fff
Memory behind bridge: fd300000-fd3fffff
Prefetchable memory behind bridge: 00000000eb800000-00000000eb8fffff
Capabilities: [40] Subsystem: VMware PCI Express Root Port
Capabilities: [48] Power Management version 3
Capabilities: [50] Express Root Port (Slot+), MSI 00
Capabilities: [8c] MSI: Enable+ Count=1/1 Maskable+ 64bit+
Kernel driver in use: pcieport
Kernel modules: shpchp

00:17.1 PCI bridge: VMware PCI Express Root Port (rev 01) (prog-if 00 [Normal decode])
Flags: bus master, fast devsel, latency 0
Bus: primary=00, secondary=14, subordinate=14, sec-latency=0
I/O behind bridge: 0000a000-0000afff
Memory behind bridge: fcf00000-fcffffff
Prefetchable memory behind bridge: 00000000eb400000-00000000eb4fffff
Capabilities: [40] Subsystem: VMware PCI Express Root Port
Capabilities: [48] Power Management version 3
Capabilities: [50] Express Root Port (Slot+), MSI 00
Capabilities: [8c] MSI: Enable+ Count=1/1 Maskable+ 64bit+
Kernel driver in use: pcieport
Kernel modules: shpchp

00:17.2 PCI bridge: VMware PCI Express Root Port (rev 01) (prog-if 00 [Normal decode])
Flags: bus master, fast devsel, latency 0
Bus: primary=00, secondary=15, subordinate=15, sec-latency=0
I/O behind bridge: 0000e000-0000efff
Memory behind bridge: fcb00000-fcbfffff
Prefetchable memory behind bridge: 00000000eb000000-00000000eb0fffff
Capabilities: [40] Subsystem: VMware PCI Express Root Port
Capabilities: [48] Power Management version 3
Capabilities: [50] Express Root Port (Slot+), MSI 00
Capabilities: [8c] MSI: Enable+ Count=1/1 Maskable+ 64bit+
Kernel driver in use: pcieport
Kernel modules: shpchp

00:17.3 PCI bridge: VMware PCI Express Root Port (rev 01) (prog-if 00 [Normal decode])
Flags: bus master, fast devsel, latency 0
Bus: primary=00, secondary=16, subordinate=16, sec-latency=0
Memory behind bridge: fc700000-fc7fffff
Prefetchable memory behind bridge: 00000000eac00000-00000000eacfffff
Capabilities: [40] Subsystem: VMware PCI Express Root Port
Capabilities: [48] Power Management version 3
Capabilities: [50] Express Root Port (Slot+), MSI 00
Capabilities: [8c] MSI: Enable+ Count=1/1 Maskable+ 64bit+
Kernel driver in use: pcieport
Kernel modules: shpchp

00:17.4 PCI bridge: VMware PCI Express Root Port (rev 01) (prog-if 00 [Normal decode])
Flags: bus master, fast devsel, latency 0
Bus: primary=00, secondary=17, subordinate=17, sec-latency=0
Memory behind bridge: fc300000-fc3fffff
Prefetchable memory behind bridge: 00000000ea800000-00000000ea8fffff
Capabilities: [40] Subsystem: VMware PCI Express Root Port
Capabilities: [48] Power Management version 3
Capabilities: [50] Express Root Port (Slot+), MSI 00
Capabilities: [8c] MSI: Enable+ Count=1/1 Maskable+ 64bit+
Kernel driver in use: pcieport
Kernel modules: shpchp

00:17.5 PCI bridge: VMware PCI Express Root Port (rev 01) (prog-if 00 [Normal decode])
Flags: bus master, fast devsel, latency 0
Bus: primary=00, secondary=18, subordinate=18, sec-latency=0
Memory behind bridge: fbf00000-fbffffff
Prefetchable memory behind bridge: 00000000ea400000-00000000ea4fffff
Capabilities: [40] Subsystem: VMware PCI Express Root Port
Capabilities: [48] Power Management version 3
Capabilities: [50] Express Root Port (Slot+), MSI 00
Capabilities: [8c] MSI: Enable+ Count=1/1 Maskable+ 64bit+
Kernel driver in use: pcieport
Kernel modules: shpchp

00:17.6 PCI bridge: VMware PCI Express Root Port (rev 01) (prog-if 00 [Normal decode])
Flags: bus master, fast devsel, latency 0
Bus: primary=00, secondary=19, subordinate=19, sec-latency=0
Memory behind bridge: fbb00000-fbbfffff
Prefetchable memory behind bridge: 00000000ea000000-00000000ea0fffff
Capabilities: [40] Subsystem: VMware PCI Express Root Port
Capabilities: [48] Power Management version 3
Capabilities: [50] Express Root Port (Slot+), MSI 00
Capabilities: [8c] MSI: Enable+ Count=1/1 Maskable+ 64bit+
Kernel driver in use: pcieport
Kernel modules: shpchp

00:17.7 PCI bridge: VMware PCI Express Root Port (rev 01) (prog-if 00 [Normal decode])
Flags: bus master, fast devsel, latency 0
Bus: primary=00, secondary=1a, subordinate=1a, sec-latency=0
Memory behind bridge: fb700000-fb7fffff
Prefetchable memory behind bridge: 00000000e9c00000-00000000e9cfffff
Capabilities: [40] Subsystem: VMware PCI Express Root Port
Capabilities: [48] Power Management version 3
Capabilities: [50] Express Root Port (Slot+), MSI 00
Capabilities: [8c] MSI: Enable+ Count=1/1 Maskable+ 64bit+
Kernel driver in use: pcieport
Kernel modules: shpchp

00:18.0 PCI bridge: VMware PCI Express Root Port (rev 01) (prog-if 00 [Normal decode])
Flags: bus master, fast devsel, latency 0
Bus: primary=00, secondary=1b, subordinate=1b, sec-latency=0
I/O behind bridge: 00007000-00007fff
Memory behind bridge: fd200000-fd2fffff
Prefetchable memory behind bridge: 00000000eb700000-00000000eb7fffff
Capabilities: [40] Subsystem: VMware PCI Express Root Port
Capabilities: [48] Power Management version 3
Capabilities: [50] Express Root Port (Slot+), MSI 00
Capabilities: [8c] MSI: Enable+ Count=1/1 Maskable+ 64bit+
Kernel driver in use: pcieport
Kernel modules: shpchp

00:18.1 PCI bridge: VMware PCI Express Root Port (rev 01) (prog-if 00 [Normal decode])
Flags: bus master, fast devsel, latency 0
Bus: primary=00, secondary=1c, subordinate=1c, sec-latency=0
I/O behind bridge: 0000b000-0000bfff
Memory behind bridge: fce00000-fcefffff
Prefetchable memory behind bridge: 00000000eb300000-00000000eb3fffff
Capabilities: [40] Subsystem: VMware PCI Express Root Port
Capabilities: [48] Power Management version 3
Capabilities: [50] Express Root Port (Slot+), MSI 00
Capabilities: [8c] MSI: Enable+ Count=1/1 Maskable+ 64bit+
Kernel driver in use: pcieport
Kernel modules: shpchp

00:18.2 PCI bridge: VMware PCI Express Root Port (rev 01) (prog-if 00 [Normal decode])
Flags: bus master, fast devsel, latency 0
Bus: primary=00, secondary=1d, subordinate=1d, sec-latency=0
Memory behind bridge: fca00000-fcafffff
Prefetchable memory behind bridge: 00000000eaf00000-00000000eaffffff
Capabilities: [40] Subsystem: VMware PCI Express Root Port
Capabilities: [48] Power Management version 3
Capabilities: [50] Express Root Port (Slot+), MSI 00
Capabilities: [8c] MSI: Enable+ Count=1/1 Maskable+ 64bit+
Kernel driver in use: pcieport
Kernel modules: shpchp

00:18.3 PCI bridge: VMware PCI Express Root Port (rev 01) (prog-if 00 [Normal decode])
Flags: bus master, fast devsel, latency 0
Bus: primary=00, secondary=1e, subordinate=1e, sec-latency=0
Memory behind bridge: fc600000-fc6fffff
Prefetchable memory behind bridge: 00000000eab00000-00000000eabfffff
Capabilities: [40] Subsystem: VMware PCI Express Root Port
Capabilities: [48] Power Management version 3
Capabilities: [50] Express Root Port (Slot+), MSI 00
Capabilities: [8c] MSI: Enable+ Count=1/1 Maskable+ 64bit+
Kernel driver in use: pcieport
Kernel modules: shpchp

00:18.4 PCI bridge: VMware PCI Express Root Port (rev 01) (prog-if 00 [Normal decode])
Flags: bus master, fast devsel, latency 0
Bus: primary=00, secondary=1f, subordinate=1f, sec-latency=0
Memory behind bridge: fc200000-fc2fffff
Prefetchable memory behind bridge: 00000000ea700000-00000000ea7fffff
Capabilities: [40] Subsystem: VMware PCI Express Root Port
Capabilities: [48] Power Management version 3
Capabilities: [50] Express Root Port (Slot+), MSI 00
Capabilities: [8c] MSI: Enable+ Count=1/1 Maskable+ 64bit+
Kernel driver in use: pcieport
Kernel modules: shpchp

00:18.5 PCI bridge: VMware PCI Express Root Port (rev 01) (prog-if 00 [Normal decode])
Flags: bus master, fast devsel, latency 0
Bus: primary=00, secondary=20, subordinate=20, sec-latency=0
Memory behind bridge: fbe00000-fbefffff
Prefetchable memory behind bridge: 00000000ea300000-00000000ea3fffff
Capabilities: [40] Subsystem: VMware PCI Express Root Port
Capabilities: [48] Power Management version 3
Capabilities: [50] Express Root Port (Slot+), MSI 00
Capabilities: [8c] MSI: Enable+ Count=1/1 Maskable+ 64bit+
Kernel driver in use: pcieport
Kernel modules: shpchp

00:18.6 PCI bridge: VMware PCI Express Root Port (rev 01) (prog-if 00 [Normal decode])
Flags: bus master, fast devsel, latency 0
Bus: primary=00, secondary=21, subordinate=21, sec-latency=0
Memory behind bridge: fba00000-fbafffff
Prefetchable memory behind bridge: 00000000e9f00000-00000000e9ffffff
Capabilities: [40] Subsystem: VMware PCI Express Root Port
Capabilities: [48] Power Management version 3
Capabilities: [50] Express Root Port (Slot+), MSI 00
Capabilities: [8c] MSI: Enable+ Count=1/1 Maskable+ 64bit+
Kernel driver in use: pcieport
Kernel modules: shpchp

00:18.7 PCI bridge: VMware PCI Express Root Port (rev 01) (prog-if 00 [Normal decode])
Flags: bus master, fast devsel, latency 0
Bus: primary=00, secondary=22, subordinate=22, sec-latency=0
Memory behind bridge: fb600000-fb6fffff
Prefetchable memory behind bridge: 00000000e9b00000-00000000e9bfffff
Capabilities: [40] Subsystem: VMware PCI Express Root Port
Capabilities: [48] Power Management version 3
Capabilities: [50] Express Root Port (Slot+), MSI 00
Capabilities: [8c] MSI: Enable+ Count=1/1 Maskable+ 64bit+
Kernel driver in use: pcieport
Kernel modules: shpchp

03:00.0 Ethernet controller: VMware VMXNET3 Ethernet Controller (rev 01)
Subsystem: VMware VMXNET3 Ethernet Controller
Physical Slot: 160
Flags: bus master, fast devsel, latency 0, IRQ 18
Memory at fd5fb000 (32-bit, non-prefetchable) [size=4K]
Memory at fd5fc000 (32-bit, non-prefetchable) [size=4K]
Memory at fd5fe000 (32-bit, non-prefetchable) [size=8K]
I/O ports at 4000 [size=16]
[virtual] Expansion ROM at fd500000 [disabled] [size=64K]
Capabilities: [40] Power Management version 3
Capabilities: [48] Express Endpoint, MSI 00
Capabilities: [84] MSI: Enable- Count=1/1 Maskable- 64bit+
Capabilities: [9c] MSI-X: Enable+ Count=25 Masked-
Capabilities: [100] Device Serial Number ff-56-50-00-15-25-94-fe
Kernel driver in use: vmxnet3
Kernel modules: vmxnet3

schabrolles commented at 2016-11-10 10:21:

@Adrian987654321

Could you please also run the following command:
lsblk -ro TYPE,NAME,SIZE,SERIAL | grep disk
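
On hardware where the disks expose serial numbers, that command prints one line per disk including its serial, which lets you match devices independent of the sda/sdb naming. A minimal illustration (the serial values below are made up, not from this system):

# same command as above; expected shape of the output shown as comments
lsblk -ro TYPE,NAME,SIZE,SERIAL | grep disk
# disk sda 50G 6000c29b1a2b3c4d
# disk sdb 70G 6000c29e5f6a7b8c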

rpasche commented at 2016-11-10 10:27:

@Adrian987654321
I don't see dmesg output right now, but from lsscsi this looks just normal. 2 disks on one controller. Nothing special.
Confused.

Adrian987654321 commented at 2016-11-10 10:36:

Good questions. I'm new to Linux (lots of experience with other OSes) so I don't know all the answers.

The two disks are normally on two datastores in VMware, and possibly on two or more SANs. There are lots of things you can do with LVM, which makes it very useful.

Adrian

On 10 November 2016 at 09:21, Johannes Meixner notifications@github.com
wrote:

@Adrian987654321 https://github.com/Adrian987654321
I have a general question:
In #1057 (comment)
https://github.com/rear/rear/issues/1057#issue-186547940
you wrote

Running on a VM with two disks,
one 50GiB and the other 80GiB.

I wonder why you have two virtual harddisks on that virtual machine.
Why can't you simply use a single virtual 130GiB harddisk?
Or more generally:
Why LVM with virtual harddisks?
Perhaps my question is stupid because I have no experience
with LVM (except my few testing attempts above).
I think LVM is mainly useful to combine several smaller physical
disks into one bigger pool (a volume group) which can then
be used as if it was a single big disk.

Adrian987654321 commented at 2016-11-10 12:39:

dmesg.txt

Adrian987654321 commented at 2016-11-10 12:40:

I've added the dmesg file. GitHub doesn't accept a file without an extension, so it's now dmesg.txt rather than just dmesg.

Adrian987654321 commented at 2016-11-10 12:40:

lsblk -ro TYPE,NAME,SIZE,SERIAL | grep disk
disk fd0 4K
disk sda 50G
disk sdb 70G

rpasche commented at 2016-11-10 13:17:

@Adrian987654321
I'm still confused. One more thing, just to be sure: please show the output of lsscsi within the recovery system. Just log in as "root" but don't perform a "rear recover".

This should list

0:0:1:0   /dev/sda
0:0:0:0   /dev/sdb

Adrian987654321 commented at 2016-11-10 14:44:

OK - will do but it isn't likely to happen until tomorrow.

rpasche commented at 2016-11-10 14:58:

@Adrian987654321
No problem. I just want to understand what is happening on your system. Your setup looks just like mine (VM disk setup), so this "problem" might hit me too one day.

schabrolles commented at 2016-11-10 15:14:

@Adrian987654321
Thanks for running the lsblk command; it seems the SERIAL column doesn't get filled for virtual devices.

Adrian987654321 commented at 2016-11-11 11:39:

The output from the lsscsi command on boot is attached in the image.

Note I've installed SLES12 SP2 in the hope that it might fix the issue. Sadly it hasn't.
lsscsi_on_boot.png

rpasche commented at 2016-11-11 15:41:

@Adrian987654321
Now here is some kind of problem.
You earlier wrote

lsscsi
[0:0:0:0] disk VMware Virtual disk 1.0 /dev/sda
[0:0:1:0] disk VMware Virtual disk 1.0 /dev/sdb
[2:0:0:0] cd/dvd NECVMWar VMware IDE CDR10 1.00 /dev/sr0

This means, your /dev/sda in your running system is the 50 GB disk and /dev/sdb must be your 70 GB disk.
Now, in recovery, disk [0:0:0:0] should still be your 50 GB disk and [0:0:1:0] must be your 70 GB disk. The 0:0 and 0:1 are the SCSI IDs. You can see these within the properties of the VM disks.

There is no way this can be switched by rear. The only way this can be switched is within the VM properties (the assignment of the disks and their SCSI IDs).

Are you using VMware snapshots to test this rear backup and recovery?
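
One way to double-check that mapping from inside the rescue system is to read the SCSI address of each disk straight from sysfs. A minimal sketch (a manual check, not rear code):

# Print each sd* block device together with its SCSI H:C:T:L address
# and size; the device symlink under /sys/block/<dev>/device ends in
# the address, e.g. .../0:0:0:0 for the first disk on the controller.
for dev in /sys/block/sd*; do
    scsi_addr=$(basename "$(readlink -f "$dev/device")")
    sectors=$(cat "$dev/size")    # 512-byte sectors
    printf '%s  %s  %s GiB\n' "$(basename "$dev")" "$scsi_addr" "$((sectors / 2097152))"
done

If the 50 GB disk does not show up at 0:0:0:0 here, the swap happened below rear, at the virtual-hardware level.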

Adrian987654321 commented at 2016-11-11 17:02:

Ah, I think i know why that is.

I have to ask someone in another team to recreate my test VM from a template and then I run the rear process to create the ISO. I then attach it to the VM reboot and go through the rear restore process which fails. I couldn't restart the VM from the ISO image after the restore failure. I spoke to our VM team this morning and they suggest changing the boot options the be CD-ROM followed by hard disk.

Lesson learnt, don't change the boot options after creating the ISO image. When everything is working normally that would never happen.

I'll change the boot options back and try again later.

Adrian987654321 commented at 2016-11-11 17:09:

I've rebooted and set the boot order back to what it was.
10-lsscsi_on_boot

Adrian987654321 commented at 2016-11-11 17:21:

I've rerun the restore with the disks in the correct order. Looking through the log, the logical volumes are created correctly; it seems to be the last stage, where the / filesystem is mounted, that is causing the problem.

There is a WARNING that /dev/btrfs-control failed to open, and at the end: "btrfs subvolume set-default: too few arguments".

Not sure now what I should try.
12_bootlog
11_bootlog

rpasche commented at 2016-11-11 17:51:

hmm...where the hell is the mail I sent as answer?

rpasche commented at 2016-11-11 17:52:

No. Boot order "should" not be the problem.

My thinking was that you "had" a setup with the first disk at 70 GB and the second disk at 50 GB, and snapshotted that state. Then you switched the disks (for whatever reason) and installed the system (first disk now 50 GB and second 70 GB). Then you ran rear mkbackup. But before you recover, you revert back to the snapshot and boot from the ISO. This would result in this problem.

But as you can imagine, this is quite ugly testing.

But the template thing might also be the problem. As long as the VM setup does not change between backup and restore, though, this problem should not occur.

Adrian987654321 commented at 2016-11-14 08:51:

Thanks for the info. The two disks stay in the same order; it was just the boot order that was changed. Now that I've worked out how to get VMware to boot from the ISO multiple times, the disks are back in the same order as when the rear backup was taken.

Adrian987654321 commented at 2016-11-16 10:47:

This is the full log of the last restore run. It seems the issue is with mounting the / filesystem, but I'm not sure where to look. Any ideas?

Adrian987654321 commented at 2016-11-16 10:48:

rear-ulvwasaw01.txt

Adrian987654321 commented at 2016-11-16 10:58:

This looks like it might be a candidate for the problem, near the bottom of the log file.

The file /var/lib/rear/layout/fs_uuid_mapping doesn't exist.

+++ new_uuid=ca8d3ee8-ffa8-4f0c-89a7-e2c340c79f2a
+++ '[' 464b2ed0-f4f1-4b21-b96d-88c378c6596f '!=' ca8d3ee8-ffa8-4f0c-89a7-e2c340c79f2a ']'
+++ grep -q 464b2ed0-f4f1-4b21-b96d-88c378c6596f /var/lib/rear/layout/fs_uuid_mapping
grep: /var/lib/rear/layout/fs_uuid_mapping: No such file or directory
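
That grep failure is easy to guard against. A minimal sketch of a defensive version of this step, reusing the names and UUID values from the log excerpt above (this is an illustration, not rear's actual code):

# Record an old->new filesystem UUID mapping without tripping over a
# missing mapping file.
fs_uuid_mapping=/var/lib/rear/layout/fs_uuid_mapping
old_uuid=464b2ed0-f4f1-4b21-b96d-88c378c6596f
new_uuid=ca8d3ee8-ffa8-4f0c-89a7-e2c340c79f2a
if [ "$old_uuid" != "$new_uuid" ] ; then
    # grep on a nonexistent file prints "No such file or directory",
    # so only grep when the file is already there
    if [ ! -f "$fs_uuid_mapping" ] || ! grep -q "$old_uuid" "$fs_uuid_mapping" ; then
        echo "$old_uuid $new_uuid" >>"$fs_uuid_mapping"
    fi
fi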

Adrian987654321 commented at 2016-11-16 11:11:

One other thing that might be an issue: /tmp is mounted with noexec. From /etc/fstab:

/dev/mapper/system-lvtmp /tmp btrfs defaults,nodev,nosuid,noexec 1 2

Perhaps this is stopping the scripts from running correctly; I don't know.
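
Whether noexec is actually in effect on /tmp can be checked directly; a minimal sketch using findmnt from util-linux:

# Report whether the filesystem behind /tmp is mounted noexec.
# (If /tmp is not a separate mount point, findmnt prints nothing.)
if findmnt -no OPTIONS /tmp | grep -qw noexec ; then
    echo "/tmp is mounted noexec - scripts placed there cannot be executed"
else
    echo "/tmp allows execution"
fi

Note that rear's restore scripts run inside the rescue system, so a noexec /tmp in the restored system's fstab should not affect the rescue environment itself.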

jsmeix commented at 2016-11-16 11:17:

@Adrian987654321
your
https://github.com/rear/rear/files/594363/rear-ulvwasaw01.txt
ends with

+++ subvolumeID=
+++ btrfs subvolume set-default /mnt/local/
btrfs subvolume set-default: too few arguments
usage: btrfs subvolume set-default <subvolid> <path>

You do not have a btrfs default subvolume set
where one is usually expected.

The matching code is in
layout/prepare/GNU/Linux/130_include_mount_filesystem_code.sh
which generates the mount code in the diskrestore.sh script,
and that generated code fails in your particular case.

I assume you did a manual btrfs setup and not
a standard SUSE btrfs setup as it comes out of YaST?

Adapting
/usr/share/rear/layout/prepare/GNU/Linux/130_include_mount_filesystem_code.sh
as you need it in your particular case should help.
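
For illustration, the failing step boils down to looking up the ID of the subvolume that should become the default and re-applying it; roughly along these lines (a simplified sketch of the idea, not the literal generated script, and the subvolume path "@" is an assumption based on the SUSE-style subvol=/@ entries in your disklayout.conf):

# Look up the ID of the expected default subvolume on the restored
# filesystem mounted at /mnt/local and re-apply it. When the subvolume
# does not exist, subvolumeID stays empty and "btrfs subvolume
# set-default" fails with "too few arguments", as in the log above.
target=/mnt/local
subvolume_path=@
subvolumeID=$(btrfs subvolume list "$target" | awk -v p="$subvolume_path" '$NF == p {print $2}')
if [ -n "$subvolumeID" ] ; then
    btrfs subvolume set-default "$subvolumeID" "$target"
else
    echo "no btrfs subvolume '$subvolume_path' found for $target" >&2
fi

Setting a default subvolume by hand is the same operation: pick an ID from "btrfs subvolume list <mountpoint>" and pass it to "btrfs subvolume set-default".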

Adrian987654321 commented at 2016-11-16 11:57:

Do you know how I can create a btrfs default subvolume?

jsmeix commented at 2017-01-18 13:06:

The error message in the above
https://github.com/rear/rear/issues/1057#issuecomment-260920515
is the same as in
https://github.com/rear/rear/issues/1036#issuecomment-273049971
so that this issue is basically a duplicate of
https://github.com/rear/rear/issues/1036

jsmeix commented at 2017-11-28 12:45:

With https://github.com/rear/rear/pull/1593 merged,
"swaps disks /dev/sda <==> /dev/sdb"
issues can and should be avoided.


[Export of Github issue for rear/rear.]