#2143 Issue closed
: No "Saving RUNTIME_LOGFILE as LOGFILE" when Error() in SIMULTANEOUS_RUNNABLE_WORKFLOWS¶
Labels: enhancement
, cleanup
, minor bug
, no-issue-activity
dcz01 opened issue at 2019-05-10 06:35:¶
Relax-and-Recover (ReaR) Issue Template¶
Fill in the following items before submitting a new issue
(quick response is not guaranteed with free support):
- ReaR version ("/usr/sbin/rear -V"):
Relax-and-Recover 2.4 / 2018-06-21
- OS version ("cat /etc/rear/os.conf" or "lsb_release -a" or "cat /etc/os-release"):
OS_VENDOR=RedHatEnterpriseServer
OS_VERSION=6.10
# The following information was added automatically by the mkrescue workflow:
ARCH='Linux-i386'
OS='GNU/Linux'
OS_VERSION='6.10'
OS_VENDOR='RedHatEnterpriseServer'
OS_VENDOR_VERSION='RedHatEnterpriseServer/6.10'
OS_VENDOR_ARCH='RedHatEnterpriseServer/i386'
# End of what was added automatically by the mkrescue workflow.
- ReaR configuration files ("cat /etc/rear/site.conf" and/or "cat /etc/rear/local.conf"):
OUTPUT=ISO
OUTPUT_URL=file:///root/rear
BACKUP=NETFS
#BACKUP=TSM
BACKUP_PROG=tar
BACKUP_PROG_CRYPT_ENABLED=1
BACKUP_PROG_CRYPT_KEY=""
BACKUP_PROG_CRYPT_OPTIONS="/usr/bin/openssl aes256 -salt -k"
BACKUP_PROG_DECRYPT_OPTIONS="/usr/bin/openssl aes256 -d -k"
#BACKUP_URL=nfs://<IP address or DNS name>/<share path>
BACKUP_URL=cifs://NotesRechte/BRP-Backup
BACKUP_OPTIONS="cred=/etc/rear/cifs,sec=ntlmsspi"
BACKUP_TYPE=incremental
FULLBACKUPDAY="Sat"
BACKUP_PROG_EXCLUDE=( '/tmp/*' '/dev/shm/*' $VAR_DIR/output/\* '/opt/tivoli/tsm/rear/*' '/mnt/*' '/media/*' '/var/lib/pgsql/*/data/base/*' '/var/lib/pgsql/*/data/global/*' '/var/lib/pgsql/*/data/pg*/*' )
SSH_ROOT_PASSWORD='$1$HGjk3XUV$lid3Nd3k01Kht1mpMscLw1'
NON_FATAL_BINARIES_WITH_MISSING_LIBRARY='/opt/tivoli/tsm/client/ba/bin/libvixMntapi.so.1.1.0'
- Hardware (PC or PowerNV BareMetal or ARM) or virtual machine (KVM
guest or PowerVM LPAR):
Test virtual machine on VMware
- System architecture (x86 compatible or PPC64/PPC64LE or what exact
ARM device):
x86_64
- Firmware (BIOS or UEFI or Open Firmware) and bootloader (GRUB or
ELILO or Petitboot):
BIOS and GRUB
- Storage (local disk or SSD) and/or SAN (FC or iSCSI or FCoE) and/or
multipath (DM or NVMe):
Local disk (one disk)
- Description of the issue (ideally so that others can reproduce
it):
When an error occurs during a backup with ReaR (NETFS), or possibly while creating the ISO image, the newly created log file (/var/log/rear/rear-FBD01PSS.<PID>.log) is not saved at the end as the normal log file (/var/log/rear/rear-FBD01PSS.log).
This step should also be done when an error occurs: Saving /var/log/rear/rear-FBD01PSS.<PID>.log as /var/log/rear/rear-FBD01PSS.log (see the illustration after this template).
- Workaround, if any:
none
- Attachments, as applicable ("rear -D mkrescue/mkbackup/recover" debug log files):
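A rough illustration of how the missing save shows up (the failing step is a placeholder; any Error() abort during one of the SIMULTANEOUS_RUNNABLE_WORKFLOWS, e.g. mkbackuponly, triggers it):
rear -D mkbackuponly
# ... the run aborts with an Error() somewhere during the backup ...
ls -l /var/log/rear/
# rear-FBD01PSS.<PID>.log   <- runtime log that contains the error
# rear-FBD01PSS.log         <- still the previous run's log, not updated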
jsmeix commented at 2019-05-10 07:11:¶
This is a known limitation because currently the LOGFILE
and RUNTIME_LOGFILE handling happens in usr/sbin/rear,
in particular via these code parts:
At the beginning
# Use RUNTIME_LOGFILE for the logfile that is actually used during runtime
# so that the user can specify a different LOGFILE in his local.conf file.
# During runtime use only the actually used RUNTIME_LOGFILE value.
# By default RUNTIME_LOGFILE is the LOGFILE from default.conf:
RUNTIME_LOGFILE="$LOGFILE"
# Set the right RUNTIME_LOGFILE value depending on whether or not
# this currently running instance can run simultaneously with another instance:
can_run_simultaneously=""
# LOCKLESS_WORKFLOWS can run simultaneously with another instance by using LOGFILE.lockless.
# FIXME: Only one lockless workflow can run otherwise the same LOGFILE.lockless
# would be used by more than one simultaneously running lockless workflows:
for lockless_workflow in "${LOCKLESS_WORKFLOWS[@]}" ; do
if test "$WORKFLOW" = "$lockless_workflow" ; then
RUNTIME_LOGFILE="$LOGFILE.lockless"
can_run_simultaneously="yes"
break
fi
done
# SIMULTANEOUS_RUNNABLE_WORKFLOWS are allowed to run simultaneously
# but cannot use LOGFILE.lockless instead they get a LOGFILE with PID
# see https://github.com/rear/rear/issues/1102
# When a workflow is both in LOCKLESS_WORKFLOWS and in SIMULTANEOUS_RUNNABLE_WORKFLOWS
# its right LOGFILE value is the one for SIMULTANEOUS_RUNNABLE_WORKFLOWS:
for simultaneous_runnable_workflow in "${SIMULTANEOUS_RUNNABLE_WORKFLOWS[@]}" ; do
if test "$WORKFLOW" = "$simultaneous_runnable_workflow" ; then
# Simultaneously runnable workflows require unique logfile names
# so that the PID is interposed in the LOGFILE value from default.conf
# which is used as RUNTIME_LOGFILE while the workflow is running
# and at the end it gets copied to a possibly user-defined LOGFILE.
# The logfile_suffix also works for logfile names without '.*' suffix
# (in this case ${LOGFILE##*.} returns the whole $LOGFILE value):
logfile_suffix=$( test "${LOGFILE##*.}" = "$LOGFILE" && echo 'log' || echo "${LOGFILE##*.}" )
RUNTIME_LOGFILE="${LOGFILE%.*}.$$.$logfile_suffix"
can_run_simultaneously="yes"
break
fi
done
...
LogPrint "Using log file: $RUNTIME_LOGFILE"
And at the end, after the requested workflow has run:
# Usually RUNTIME_LOGFILE=/var/log/rear/rear-$HOSTNAME.log
# The RUNTIME_LOGFILE name is set above from LOGFILE in default.conf
# but later user config files are sourced where LOGFILE can be set different
# so that the user config LOGFILE is used as final logfile name:
if test "$RUNTIME_LOGFILE" != "$LOGFILE" ; then
test "$WORKFLOW" != "help" && LogPrint "Saving $RUNTIME_LOGFILE as $LOGFILE"
cat "$RUNTIME_LOGFILE" > "$LOGFILE"
fi
Therefore the "Saving $RUNTIME_LOGFILE as $LOGFILE" part
is not run in case of an Error exit from within the running workflow
but usually this does not matter because usually
the RUNTIME_LOGFILE value is the same as the LOGFILE value.
This issue here happens only in case of an Error exit while
running one of the SIMULTANEOUS_RUNNABLE_WORKFLOWS,
which are defined in usr/share/rear/conf/default.conf:
# SIMULTANEOUS_RUNNABLE_WORKFLOWS are also allowed to run simultaneously
# but cannot use LOGFILE.lockless as the LOCKLESS_WORKFLOWS.
# Instead the SIMULTANEOUS_RUNNABLE_WORKFLOWS get a LOGFILE with PID
# because simultaneously runnable workflows require unique logfile names
# so that the PID is interposed in the LOGFILE value from default.conf above,
# i.e. by default SIMULTANEOUS_RUNNABLE_WORKFLOWS use during runtime
# a logfile named /var/log/rear/rear-$HOSTNAME.$$.log that gets
# at the end copied to a final possibly user-defined LOGFILE,
# see /usr/sbin/rear and https://github.com/rear/rear/issues/1102
SIMULTANEOUS_RUNNABLE_WORKFLOWS=( 'mkbackuponly' 'restoreonly' )
I will have a look whether it is possible with reasonable effort to also do
the "Saving $RUNTIME_LOGFILE as $LOGFILE" part in case of an
Error exit from one of the SIMULTANEOUS_RUNNABLE_WORKFLOWS.
My offhand idea is to do the "Saving $RUNTIME_LOGFILE as $LOGFILE" part
as one of the EXIT_TASKS, but I need to check the details...
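A minimal sketch of that idea (hypothetical, not actual ReaR code; it assumes ReaR's AddExitTask helper and that exit tasks are evaluated at exit time):
# Hypothetical sketch: register the copy as an exit task so that the EXIT trap
# also runs it when Error() aborts the workflow; the single quotes delay the
# variable expansion until exit time, after user config may have changed LOGFILE:
AddExitTask 'test "$RUNTIME_LOGFILE" != "$LOGFILE" && cat "$RUNTIME_LOGFILE" > "$LOGFILE"'
Whether that is sufficient (for example its ordering relative to the other exit tasks and the special-cased "help" workflow) is exactly the kind of detail that would need checking.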
jsmeix commented at 2019-05-10 07:40:¶
I had a quick look at the logfile handling code parts
in various places in various scripts, and I remember from the past,
when I implemented things like RUNTIME_LOGFILE
and SIMULTANEOUS_RUNNABLE_WORKFLOWS because of
https://github.com/rear/rear/blob/master/doc/user-guide/11-multiple-backups.adoc
that the logfile handling code in ReaR became an overcomplicated pile
of awkward convoluted "worms" (you may just call it a "can of worms" ;-)
because I wanted to keep things behaving backward compatible
as far as possible.
So from my current point of view I will not add
more worms into that can for ReaR 2.x, where things
must stay backward compatible.
Instead I will completely clean up the logfile handling code
for ReaR 3.x, cf.
https://github.com/rear/rear/issues/1390
where I can do things right from the beginning
to support SIMULTANEOUS_RUNNABLE_WORKFLOWS.
In the end this means no special code for
SIMULTANEOUS_RUNNABLE_WORKFLOWS;
instead the new logfile handling code works generically,
also for SIMULTANEOUS_RUNNABLE_WORKFLOWS.
Simply put:
the new logfile name will contain the PID in any case
(basically what RUNTIME_LOGFILE currently is
in case of SIMULTANEOUS_RUNNABLE_WORKFLOWS).
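As a hypothetical sketch of that direction (not actual ReaR 3.x code), the two workflow-specific loops quoted above would collapse into one unconditional assignment plus a generic copy on exit:
# Hypothetical sketch only: every run gets a PID-unique runtime logfile,
# regardless of the workflow, and it is copied to the final LOGFILE on any exit:
RUNTIME_LOGFILE="${LOGFILE%.*}.$$.log"
AddExitTask 'cat "$RUNTIME_LOGFILE" > "$LOGFILE"'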
github-actions commented at 2020-06-27 01:33:¶
Stale issue message
dcz01 commented at 2021-08-12 08:06:¶
@gdha @jsmeix Please reopen: this bug matters because when a workflow
has failed and you look at /var/log/rear/rear-$HOSTNAME.log, only the
old log is visible and not the current log with the error.
The bug is also present in ReaR 2.6.
Maybe the fix could be part of ReaR 2.7?
github-actions commented at 2021-10-12 02:15:¶
Stale issue message
[Export of Github issue for rear/rear.]