#377 Issue closed: Root file system is 100% full after recovery

Labels: waiting for info, support / question

gbeckett opened issue at 2014-03-08 00:43:

I'm using Rear 1.15 on a RHEL 5.8 server. It has internal disks for /boot and the vg00 LVM volume group, plus SAN storage from an EMC VNX array; we are using multipath for the 6 SAN storage devices.
Using the following local.conf variables I have been able to back up the server successfully.
OUTPUT=ISO
BACKUP=NETFS
BACKUP_URL="nfs://#.#.#.#/admin/rear/"
NETFS_KEEP_OLD_BACKUP_COPY=2

When I attempt to recover the server it complains with the following error for all the devices.


No code has been generated to restore device pv:/dev/mpath/mpath3p1 (lvmdev).
Please add code to /var/lib/rear/layout/diskrestore.sh to manually install it or choose abort.


I found issue #228 and edited the disklayout.sh file, changing mpath to mapper as directed for each of my multipath devices. But when I run the recover command I get the same errors.
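(A minimal sketch of that kind of edit, assuming the layout file referenced in #228 lives at /var/lib/rear/layout/disklayout.conf; the path and device names on this system may differ:)

    # Keep a copy of the generated layout file before changing it
    cp /var/lib/rear/layout/disklayout.conf /var/lib/rear/layout/disklayout.conf.orig
    # Rewrite every /dev/mpath/... reference to its /dev/mapper/... equivalent
    sed -i 's|/dev/mpath/|/dev/mapper/|g' /var/lib/rear/layout/disklayout.conf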
I've now tried this a number of times, both with those lines in disklayout.sh edited and with them left unchanged, but I get the same result.
That is: the server is recovered, I can boot it, the LVM volumes are created and my multipath devices are there, BUT the root file system in vg00 is 100% FULL. I can increase the size of root with an lvresize and an fsadm resize and all works. It was initially 4GB and I had to increase it to 5GB.
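(A minimal sketch of that resize step, assuming the root logical volume is /dev/vg00/lv_root; the LV name here is only a placeholder:)

    # Grow the root LV, then grow the file system inside it to match
    lvresize -L 5G /dev/vg00/lv_root
    fsadm resize /dev/vg00/lv_root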
The next time I attempted the recovery, root was again 100% FULL, so I had to increase the size from 5GB to 6GB. I cannot find where the extra data is being stored. I thought it was the inodes, but df -i / indicates that only 1% of the inodes are in use. I've scanned the root file system and compared it against an identically built server (one that has not been Rear-recovered), but I cannot find the problem.
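(One way to hunt for the extra data, as a sketch; the -x option keeps du on the root file system itself and out of the mounted SAN file systems:)

    # Per-directory usage in MB, limited to the root file system
    du -x -m --max-depth=2 / | sort -n | tail -20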
Can anyone make some suggestions with respect to the mpath errors and the root file system being 100% full?
Thank you very much.
Gary

gdha commented at 2014-03-10 10:12:

Did you change the lvm.conf file? Was the multipath.conf file modified? Can we see your local.conf file and the files under /var/lib/rear/layout/?
Could it be that the recovery process restores data to / instead of to a SAN disk file system?
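(One way to check for that, as a sketch: a bind mount of / makes files visible that are hidden underneath the currently mounted SAN file systems, which an ordinary du cannot see:)

    # Bind-mount the root file system to a scratch directory so that any data
    # restored under the SAN mount points before they were mounted shows up
    mkdir -p /mnt/root-only
    mount --bind / /mnt/root-only
    du -sm /mnt/root-only/* | sort -n
    umount /mnt/root-only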

gbeckett commented at 2014-03-20 23:57:

Hi, sorry for the delay.
No, I have not modified the "lvm.conf" file. Though I have read through the RH Multipath doc, I'm still not sure what the correct set of filter parameters/switches would be.
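(For illustration only, one common shape of an lvm.conf filter on a host with multipath SAN LUNs plus an internal disk, assuming the internal disk is /dev/sda; the patterns must be adapted to the actual devices:)

    # /etc/lvm/lvm.conf, devices section: accept the multipath maps and the
    # internal disk, reject the underlying /dev/sd* paths of the SAN LUNs
    filter = [ "a|/dev/mapper/mpath.*|", "a|/dev/sda.*|", "r|.*|" ]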

My local.conf file has the following.
OUTPUT=ISO
BACKUP=NETFS
BACKUP_URL="nfs://10.67.138.34/admin/rear1/"
NETFS_KEEP_OLD_BACKUP_COPY=2

The files under /var/lib/rear/layout are fairly verbose. Is there a way to upload the files rather than dumping their contents into this log?

As for your last question, yes, I guess it's possible that something is being recovered into the root file system, but like I said, I've searched through it and cannot seem to find where or what it would be.

Thanks for your help. It's very much appreciated.
Please advise.

gdha commented at 2014-04-01 10:40:

@gbeckett you can drop them in gist.github.com and reference them in this issue.

gdha commented at 2015-06-03 11:09:

@gbeckett I guess you gave up on rear? SAN disks are normally not touched during recovery, but as I never saw any evidence from your side, I cannot say much more about it.
I'll close this issue - if you have some new input you may re-open this request.


[Export of Github issue for rear/rear.]