#648 Issue closed: NETFS backup --> ISO_MAX_SIZE¶
Labels: support / question
multipathmaster opened issue at 2015-08-31 17:51:¶
So, I have been fiddling around a bit more with the NETFS backup, since we have a deliverable for backing up everything to an ISO. I was using an ISO_MAX_SIZE of 3500 to keep the backup.tar.gz file under the 4 GiB limit. I was under the impression that it would only split the backup.tar.gz and then dd those pieces (it uses /usr/share/rear/NETFS/default/50*), but what I found out is that it actually creates separate ISO files.
Can I then merge these ISO images into one by doing the following?
mount -o loop image1.iso /path/to/mountpoint/1
mount -o loop image2.iso /path/to/mountpoint/2
mount -o loop image3.iso /path/to/mountpoint/3
and then finally, once the images are mounted do the following:
mkisofs -o image.iso /path/to/mountpoint/1 /path/to/mountpoint/2 /path/to/mountpoint/3
thus merging the 3 ISO images into one big ISO. I guess my question is: have y'all been playing around with this concept? Or is there another flag to tell it to split the backup.tar.gz but DO NOT SPLIT THE ISO IMAGE (avoiding the whole "cannot copy a file larger than 4GiB" error, which shows up on machines where mkisofs is the genisoimage symlink managed under /etc/alternatives)?
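To spell out the merge idea end to end, something like the following is what I have in mind (image names, mountpoints and the volume label are just placeholders):
# Sketch of the merge idea (image names, mountpoints and label are placeholders).
mkdir -p /mnt/iso1 /mnt/iso2 /mnt/iso3
mount -o loop image1.iso /mnt/iso1
mount -o loop image2.iso /mnt/iso2
mount -o loop image3.iso /mnt/iso3
# Merge the contents of all three trees into one ISO root; every individual
# file must still be under 4GiB, which the split backup pieces are.
mkisofs -r -J -V REAR_MERGED -o merged.iso /mnt/iso1 /mnt/iso2 /mnt/iso3
umount /mnt/iso1 /mnt/iso2 /mnt/iso3
# Note: the merged image would not be bootable unless the El Torito options
# (e.g. -b isolinux/isolinux.bin -c isolinux/boot.cat) are added back as well.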
I do know that mondo supports such behaviour, but I did not know whether this is actually built into ReaR, or if y'all simply do the steps necessary to utilize ISO_MAX_SIZE.
THANKS! -- MpathMaster
schlomo commented at 2015-08-31 19:50:¶
What happens if you set ISO_MAX_SIZE to something large?
multipathmaster commented at 2015-09-01 12:24:¶
It doesn't matter what I set ISO_MAX_SIZE to: if the backup.tar.gz exceeds 4GiB, it cannot be written into the ISO image.
What I have done so far, in /usr/share/rear/backup/NETFS/default/50*, is to define a new variable called SPLIT_COMMAND2, together with a new variable in /etc/rear/site.conf called BKUP_MAX_SIZE=3500MB which SPLIT_COMMAND2 uses; this replaces the original SPLIT_COMMAND as shown below.
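For reference, the site.conf addition itself is just this (BKUP_MAX_SIZE is a variable I made up, not a stock ReaR option):
# /etc/rear/site.conf (custom addition; BKUP_MAX_SIZE is not a stock ReaR variable)
BKUP_MAX_SIZE=3500MB    # fed to "split --bytes=" so each piece stays under the 4GiB ISO limit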
head of 50_make_backup.sh -->
50_make_backup.sh¶
function set_tar_features {
    # Default tar options
    TAR_OPTIONS=
    # Test for features in tar
    # true if tar supports the --warning option (v1.23+)
    FEATURE_TAR_WARNINGS=
    local tar_version=$(get_version tar --version)
    if version_newer "$tar_version" 1.23; then
        FEATURE_TAR_WARNINGS="y"
        TAR_OPTIONS="$TAR_OPTIONS --warning=no-xdev"
    fi
    FEATURE_TAR_IS_SET=1
    SPLIT_COMMAND2="split --bytes=${BKUP_MAX_SIZE} - ${backuparchive}."
}
See where I have SPLIT_COMMAND2 set?
Then, within the case statement for ${BACKUP_PROG}, I have wired it in so that the pipeline ends in $SPLIT_COMMAND2:
case "$(basename ${BACKUP_PROG})" in
# tar compatible programs here
(tar)
set_tar_features
Log $BACKUP_PROG $TAR_OPTIONS --sparse --block-number --totals
--verbose
--no-wildcards-match-slash --one-file-system
--ignore-failed-read $BACKUP_PROG_OPTIONS
$BACKUP_PROG_X_OPTIONS
${BACKUP_PROG_BLOCKS:+-b $BACKUP_PROG_BLOCKS}
$BACKUP_PROG_COMPRESS_OPTIONS
-X $TMP_DIR/backup-exclude.txt -C / -c -f -
$(cat $TMP_DIR/backup-include.txt) $LOGFILE |
$BACKUP_PROG_CRYPT_OPTIONS BACKUP_PROG_CRYPT_KEY |
$SPLIT_COMMAND2
$BACKUP_PROG $TAR_OPTIONS --sparse --block-number --totals --verbose
--no-wildcards-match-slash --one-file-system
--ignore-failed-read $BACKUP_PROG_OPTIONS
$BACKUP_PROG_X_OPTIONS
${BACKUP_PROG_BLOCKS:+-b $BACKUP_PROG_BLOCKS}
$BACKUP_PROG_COMPRESS_OPTIONS
-X $TMP_DIR/backup-exclude.txt -C / -c -f -
$(cat $TMP_DIR/backup-include.txt) $LOGFILE |
$BACKUP_PROG_CRYPT_OPTIONS $BACKUP_PROG_CRYPT_KEY |
$SPLIT_COMMAND2
;;
With $SPLIT_COMMAND2 in place, this now splits the backup.tar.gz properly and includes all of the pieces in the ISO.
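To sanity-check that splitting behaviour outside of ReaR, a quick standalone test along these lines works (the input directory, output prefix and size are arbitrary examples):
# Standalone sanity check of the split behaviour (example paths only).
BKUP_MAX_SIZE=3500MB
tar -czf - /etc | split --bytes=${BKUP_MAX_SIZE} - /tmp/backup.tar.gz.
# Anything larger than the limit ends up as backup.tar.gz.aa, .ab, .ac, ...
ls -lh /tmp/backup.tar.gz.*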
However, that being said, I was really hoping y'all had a better/easier way to accomplish 8-15 GiB sized ISOs that contain backup.tar.gz.aa/ab/ac/ad/etc... for a much larger image (similar to mondo).
So by doing what I have done above, I no longer need to worry about multiple ISO files; it is just one large ISO image. I'm banking on the fact that during the 40_restore piece for NETFS, you guys have the for i in ls ${BUILDDIR}/blah/blah/isofs/backup/*tar.gz loop, which should grab every single backup.tar.gz.XX, and because of globbing within the shell, it should restore those via tar in the same order as they were created. However, I haven't tested this out yet, and it may be that I need to modify the 40_restore script as well.
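The idea I am counting on is that the pieces only need to be concatenated in glob order before tar sees them; roughly this (placeholder path, not the actual 40_restore code):
# Restore sketch (placeholder path, not the real 40_restore script):
# split names the pieces .aa, .ab, .ac, ... so a plain shell glob returns
# them in creation order and cat reassembles the original tar stream.
cd /
cat /path/to/isofs/backup/backup.tar.gz.* | tar -xzf -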
Let me know Schlomo! -- MpathMaster
gdha commented at 2015-09-01 13:21:¶
@multipathmaster @schlomo See also issue #581
multipathmaster commented at 2015-09-01 14:10:¶
Thanks for the info GDHA, I went ahead and added my 2 cents to that post as well. It really comes down to whether we can add the BKUP_MAX_SIZE option in the conf file and utilize that other way of doing the split. Again, I really need to test the restore ability; I am currently awaiting an end user to give me the okay to reboot a machine and at least do a dry run of this concept. I believe it will work, but I really need to test it out.
schlomo commented at 2015-09-01 16:15:¶
@multipathmaster from an architectural point of view I think that forcing ISOs bigger than 4GB is an uphill battle. Splitting and merging could be a viable option if you get it to work in a stable manner. I will comment on your code after it works and when you send us a pull request :-)
IMHO it would be better to try to avoid using ISO altogether (as it is the "wrong" tool for your job and you might find it difficult to replace OS components that are in the way) and take a step back to look at the big picture, to maybe find another way where and how to store the data. But that might be a problem depending on your environment; feel free to continue this topic with more details privately over my direct email.
multipathmaster commented at 2015-09-01 16:31:¶
Schlomo, we are not going down this pathway without a solid reason. The reason being: these systems are isolated on the network with only a 1.5 meg WAN connection back to the main network. Because of this, we cannot do our normal TSM restore. That being said, we ARE taking advantage of the TSM backups/restores, especially within our DC environment, as well as utilizing multipath and boot from SAN. These systems all check out so far with testing, but this particular issue that I am bringing up is, simply put, working with the hand that you were dealt. Ideally this isn't poker, but again, restoring 4+ gigs of data from TSM over a 1.5mbit connection = failure, and it isn't an option. However, creating an ISO with all of the backup components that are necessary, while using a locally stored datastore, will actually benefit us greatly. Just a thought. After we test this thing out a bit later on today, I will definitely put in the scripts that I have changed; really it is just a bit more functionality on top of the $SPLIT_COMMAND variable that was already available. I will keep you guys in the loop for sure! Thanks for the response Schlomo! -- MpathMaster
schlomo commented at 2015-09-01 17:15:¶
@multipathmaster I understand your situation. What I meant is that if you never plan to burn the ISO onto physical media then IMHO you don't need an ISO.
For example, if you use VMware and you have some automation to transfer the ISO from within a VM to the VMware ESX datastore then you could probably also manage to attach a VMDK hard disk to the VM and use that with the USB support in ReaR.
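To sketch that variant (the device label assumes the disk was prepared with "rear format", which labels it REAR-000; treat this as an assumption about your setup):
# /etc/rear/local.conf -- attached-disk/USB variant (sketch; label assumes "rear format")
OUTPUT=USB
BACKUP=NETFS
BACKUP_URL=usb:///dev/disk/by-label/REAR-000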
Or, why not simply set up a local machine/VM to act as an NFS server for backups? The amount of storage that this needs will be the same (or even less, since ReaR will write backups there directly without needing buffer space on the source VM) and your backup/restore times will be much faster.
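A minimal configuration for that NFS setup could look like this (server name and export path are placeholders, of course):
# /etc/rear/local.conf -- minimal NFS backup target (hostname and path are placeholders)
OUTPUT=ISO
BACKUP=NETFS
BACKUP_URL=nfs://backup-host/export/rear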
If you provide more details about your environment, maybe we can think of even more ways to solve your objective (local disaster recovery backups).
multipathmaster commented at 2015-09-15 14:21:¶
So restoring with this methodology is a no-go. When you boot from the rescue media after creation, the rear program itself doesn't know where the backup.tar.gz is. It keeps looking somewhere in /tmp, even though my site.conf file specifically has the BACKUP_URL specified.
The specific error is the following: ERROR: Backup archive '/tmp/rear/<random_name>/outputfs/isolinux/backup.tar.gz' not found !
I know this is not right, because that is not where I configured the backup.tar.gz to live, and I cannot seem to find where it is getting this path from. This is very frustrating, because I otherwise have this working: I can split the backup.tar.gz into smaller files that do not exceed 3500MB, and I can create an ISO image much larger than 4 gigs.
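For the record, this is how I plan to dig into where that path comes from (just a debugging approach, nothing official):
# Debugging sketch: run the recovery in debug mode and check which archive
# path the restore scripts compute (standard ReaR log location assumed).
rear -d -D recover
grep -i backuparchive /var/log/rear/rear-$(hostname).log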
I am about to give up the ghost on this one.
@schlomo,
We cannot use VMware and have a datastore to share all of these backups. We literally have one VMware server for every place of business that we support. Spinning up new datastores, or firing up new machines and/or new technology to support that, would literally cost us millions.
The NFS server is in the same boat; it sounds great until you lose power on the VMware host, and then where do those backups go (single point of failure)?
I am willing to sit down with one of you guys and attempt to show you how we are trying to get this one use case to work. Or whatever you guys want to do; at this point, though, I'm stuck between a rock and a hard place, because I can't understand why, when I boot into the recovery image, all of the variables would all of a sudden not work anymore (if I define my backup_url and every other variable in my site.conf or my local.conf or my os.conf, then shouldn't these variables carry forward when you boot from this same image and simply run rear recover?).
Thanks -- MultipathMaster
schlomo commented at 2015-09-24 14:00:¶
@multipathmaster I guess that we cannot really help you debug the scripts from afar.
With regard to having a single point of failure: As far as I understand your environment with the single VMware server (really "VMware server" and not "ESX server"???) I think that you will have a SPOF in any kind of design.
With regard to continuity planning I think that I would need to see the risks you want to cover in order to be able to help. However, this kind of help really exceeds the "community support" which we can offer you through this GitHub issue. If you want to sit down with one of us then please contact us privately. I am sure that we can help you to achieve your goals, both through design work and through remote debugging & coding.
multipathmaster commented at 2015-10-06 13:41:¶
Yes, you are correct schlomo, this extends well beyond community help. At this point, we are going to stop development on the single nodes that we need to back up, and we are most likely going to stick with Mondo. For our DC systems, we have multipath/boot from SAN working great with RHEL7 (which is super awesome). I am currently focusing on retrofitting RHEL6 with the same setup as our RHEL7 instances, so at this point you can go ahead and close this issue. Thanks for your continued support!!! -- MpathMaster
[Export of Github issue for rear/rear.]