#390 Issue closed: Failed Restore

gary-schiltz opened issue at 2014-04-16 20:26:

See thread http://pikachu.3ti.be/pipermail/rear-users/2014-April/002890.html

gary-schiltz commented at 2014-04-16 20:45:

https://drive.google.com/file/d/0B6762AWv6lxAcXFQaS1GSldBVGM/edit?usp=sharing

The link above contains a tarball of files from the failed restore, including /var/lib/rear, /var/log/rear, and the output of `df` and `fdisk -l`.

gdha commented at 2014-04-17 15:01:

The generated diskrestore.sh script contains:

```
# Create /dev/md1 (raid)
LogPrint "Creating software RAID /dev/md1"
test -b /dev/md1 && mdadm --stop /dev/md1

echo "Y" | mdadm --create /dev/md1 --force --level=raid1 --raid-devices=2 --uuid=46d7ae8d:7a025204:ed331226:d6a736d2 --name=1 /dev/sda2 /dev/sdb2  >&2
```

and execution stops at this point (before running `mdadm --create`):

```
+++ echo -e 'Creating software RAID /dev/md1'
+++ test -b /dev/md1
+++ mdadm --stop /dev/md1
mdadm: Cannot get exclusive access to /dev/md1:Perhaps a running process, mounted filesystem or active volume group?
2014-04-16 13:09:16 An error occured during layout recreation.
```

That means the check `test -b /dev/md1 && mdadm --stop /dev/md1` is too simplistic and should be rewritten somehow.
@rear/owners Any ideas?
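
One possible direction, sketched here purely as an illustration (this is not the actual ReaR code, and no fix was settled in this thread): release whatever is holding the array busy before calling `mdadm --stop`:

```
# Hypothetical, more defensive stop sequence - a sketch, not the ReaR fix.
if test -b /dev/md1 ; then
    # Unmount anything mounted from the array.
    while read -r dev mnt rest ; do
        [ "$dev" = "/dev/md1" ] && umount "$mnt"
    done < /proc/mounts

    # Deactivate the LVM volume group backed by the array, if any.
    vg=$(pvs --noheadings -o vg_name /dev/md1 2>/dev/null | tr -d '[:space:]')
    [ -n "$vg" ] && vgchange -a n "$vg"

    mdadm --stop /dev/md1
fi
```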

jhoekx commented at 2014-04-17 18:34:

Sometimes there are udev rules that start the array.

I would be interested in the output of `cat /proc/mounts` and `pvs`.

If there's nothing in there, try to get `lsof` onto the rescue image and add a call to it in the diskrestore.sh script right before we stop the array.
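
For reference, the information requested above could be gathered from the rescue shell along these lines (assuming the tools are available on the rescue image):

```
# What is holding /dev/md1 busy?
cat /proc/mdstat    # is the array already assembled (e.g. by a udev rule)?
cat /proc/mounts    # is a filesystem from it mounted?
pvs                 # is LVM using it as a physical volume?
lsof /dev/md1       # which processes have it open?
```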

gary-schiltz commented at 2014-04-18 03:56:

I had a couple of extra drives, so I restored to them (it was a RAID 1 array) and it worked. I then paired one of the working drives with each of the original two in turn to find out which one was not working properly. Now the array is rebuilding. So it appears this issue was not a bug but faulty hardware. Should I close it?
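
For anyone retracing these steps: rebuilding a degraded RAID 1 with a replacement drive and watching the resync can be done along these lines (the device names below are examples, not taken from this report):

```
# Example only - device names are placeholders.
# Add the replacement partition to the degraded array...
mdadm --manage /dev/md1 --add /dev/sdb2

# ...then watch the rebuild progress.
cat /proc/mdstat
```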


[Export of Github issue for rear/rear.]