#721 Issue closed: excessive delay with mkbackup on oracle ASM server

towster opened issue at 2015-11-30 19:00:

I ran into an issue backing up a server with Oracle ASM. The backups were taking excessively long (8+ hours). I tracked it down by watching the logs to see what was taking so long: there were nearly 100,000 files in the "/dev/oracleasm/iid" directory, and every one of them was being appended to the copy list within rear.

I was able to work around the issue by doing the following.

  1. add the "iid" directory to the exclude list - I didn't exclude anything before, so I took the default and simply added the "iid" directory (see the sketch after this list)
    COPY_AS_IS_EXCLUDE=( dev/shm dev/shm/* dev/.udev $VAR_DIR/output/* dev/oracleasm/iid/* )
  2. manually tar the "iid" directory to the server (it was tiny)
  3. ran mkbackup (this took about 5 minutes)
  4. restore the OS
  5. while still booted into the rescue ISO, I extracted the "iid" directory back to its original location
  6. reboot server
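
For reference, a minimal sketch of the step-1 change as a complete configuration snippet, assuming the setting goes in /etc/rear/local.conf (the usual place for site-local ReaR settings) and that the default exclude entries shown above match your ReaR version:

    # /etc/rear/local.conf (sketch)
    # Keep the stock excludes and add the ASM "iid" directory, which held
    # the ~100,000 files that slowed down the rescue-system copy step.
    COPY_AS_IS_EXCLUDE=( dev/shm dev/shm/* dev/.udev $VAR_DIR/output/* dev/oracleasm/iid/* )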

Everything worked fine. We actually only back up rootvg with rear. The actual database volumes are replicated by the storage team. Once the data was presented to the server, the DBA had no issues starting the databases.

schlomo commented at 2015-11-30 21:48:

I don't think that the ReaR rescue system needs the /dev/oracleasm devices whatsoever.

Can you try whether your use case also works if you exclude all of /dev/oracleasm and do not manually restore it?

Rendanic commented at 2015-12-01 04:59:

This is a known bug in oracleasm. Please exclude the directory from the backup, as it contains runtime information for ASM. The files are removed after a restart of the oracleasm service or a reboot of the host.
Solution: exclude the whole /dev/oracleasm directory.
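
A minimal sketch of what that could look like in /etc/rear/local.conf, assuming ReaR config files are sourced as bash (so the += array append keeps the default excludes) and that entries are written without a leading slash, as in the original report:

    # /etc/rear/local.conf (sketch)
    # Skip the whole oracleasm device tree; per the comment above it holds
    # runtime state that is cleaned up when the oracleasm service restarts
    # or the host reboots.
    COPY_AS_IS_EXCLUDE+=( dev/oracleasm dev/oracleasm/* )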

schlomo commented at 2015-12-01 08:08:

Can you please test and confirm that it works? Thanks a lot!

towster commented at 2015-12-04 18:27:

I can test it the next time we do our DR testing, but that won't be until March/April.


[Export of Github issue for rear/rear.]