#2250 Issue closed: ReaR Restore Error: rpcbind unavailable¶
Labels: support / question, fixed / solved / done
Ronjr21 opened issue at 2019-10-08 06:57:¶
Fill in the following items before submitting a new issue
(quick response is not guaranteed with free support):

- ReaR version ("/usr/sbin/rear -V"): Relax-and-Recover 2.4
- OS version ("cat /etc/rear/os.conf" or "lsb_release -a" or "cat /etc/os-release"): Debian 10
- ReaR configuration files ("cat /etc/rear/site.conf" and/or "cat /etc/rear/local.conf"):
  OUTPUT=ISO
  BACKUP=NETFS
  BACKUP_URL=nfs://192.168.1.37/test34
- Hardware (PC or PowerNV BareMetal or ARM) or virtual machine (KVM guest or PowerVM LPAR): KVM
- System architecture (x86 compatible or PPC64/PPC64LE or what exact ARM device): x86
- Firmware (BIOS or UEFI or Open Firmware) and bootloader (GRUB or ELILO or Petitboot): BIOS and GRUB
- Storage (local disk or SSD) and/or SAN (FC or iSCSI or FCoE) and/or multipath (DM or NVMe): local disk
- Description of the issue (ideally so that others can reproduce it):
  Similar to #1575. An error occurs when running "rear -vd restore"; the error log /var/log/rear-prx6.log is attached below. Creating the folder /run/rpcbind did not solve the issue.
- Workaround, if any:
- Attachments, as applicable ("rear -D mkrescue/mkbackup/recover" debug log files):
jsmeix commented at 2019-10-09 15:04:¶
@Ronjr21
please attach both a "rear -D mkrescue/mkbackup"
and a rear -D recover
debug log file,
in particular regarding the latter see
"Debugging issues with Relax-and-Recover" at
https://en.opensuse.org/SDB:Disaster_Recovery
See also "Testing current ReaR upstream GitHub master code" at
https://en.opensuse.org/SDB:Disaster_Recovery
Ronjr21 commented at 2019-10-10 04:09:¶
Hi, attached the debug log files:
backup log: rear-prx6.log
restore log: rear-prx6.log
jsmeix commented at 2019-10-10 10:12:¶
The code that fails in the ReaR recovery system is
https://github.com/rear/rear/blob/master/usr/share/rear/verify/NETFS/default/050_start_required_nfs_daemons.sh#L50
```
# check that RPC portmapper service is available and wait for it as needed
# on some systems portmap/rpcbind can take some time to be accessible
# hence 5 attempts each second to check that RPC portmapper service is available
for attempt in $( seq 5 ) ; do
    # on SLES11 and on openSUSE Leap 42.1 'rpcinfo -p' lists the RPC portmapper as
    #   program vers proto  port service
    #   100000    2   udp   111  portmapper
    #   100000    4   tcp   111  portmapper
    rpcinfo -p 2>/dev/null | grep -q 'portmapper' && { attempt="ok" ; break ; }
    sleep 1
done
test "ok" = $attempt || Error "RPC portmapper '$portmapper_program' unavailable."
```
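To see how that wait loop behaves in isolation, here is a self-contained sketch that replaces `rpcinfo` with a mock shell function which only starts answering on its third invocation. The mock function, the counter file, and the `Error` stand-in are assumptions for this demo, not ReaR code; only the retry loop itself mirrors the script above:

```shell
# Stand-in for ReaR's Error function (an assumption for this demo, not ReaR code).
Error() { echo "ERROR: $*" >&2 ; exit 1 ; }

# Mock rpcinfo that only answers on the third call; the call counter lives
# in a file because each pipeline stage runs in its own subshell.
count_file=$(mktemp)
echo 0 > "$count_file"
rpcinfo() {
    calls=$(( $(cat "$count_file") + 1 ))
    echo "$calls" > "$count_file"
    if [ "$calls" -ge 3 ] ; then
        echo '100000 2 udp 111 portmapper'
    else
        echo "rpcinfo: can't contact portmapper: Connection refused" >&2
        return 1
    fi
}

# Same retry pattern as in 050_start_required_nfs_daemons.sh:
# up to 5 attempts, one second apart, until 'portmapper' shows up.
for attempt in $( seq 5 ) ; do
    rpcinfo -p 2>/dev/null | grep -q 'portmapper' && { attempt="ok" ; break ; }
    sleep 1
done
test "ok" = "$attempt" || Error "RPC portmapper unavailable."
echo "portmapper detected after $(cat "$count_file") attempts"
```

The loop exits early via `attempt="ok"` as soon as grep matches, so on a healthy system only one `rpcinfo` call is made; the failure path is only reached after all five attempts come up empty.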
@Ronjr21
therefore I would like to know what the following command
outputs on your original system:

```
rpcinfo -p
```

Perhaps the above code that checks whether the RPC portmapper
service is available no longer works on Debian 10 and needs
to be adapted.
I do not use Debian so I cannot try out or verify things on Debian.
You could also try out whether "rear recover" works when you
disable the error exit in that code
by replacing in your
usr/share/rear/verify/NETFS/default/050_start_required_nfs_daemons.sh
the line

```
test "ok" = $attempt || Error "RPC portmapper '$portmapper_program' unavailable."
```

with

```
test "ok" = $attempt || LogPrint "RPC portmapper '$portmapper_program' unavailable."
```

so that it no longer errors out here.
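A quick way to apply that one-line change is with sed. This is only a sketch, demonstrated on a stand-in copy of the file so it is self-contained; GNU sed's `-i` option is assumed to be available:

```shell
# Create a stand-in copy of the script containing just the relevant line
# (the real path in ReaR 2.4 is
# usr/share/rear/verify/NETFS/default/050_start_required_nfs_daemons.sh).
script=$(mktemp -d)/050_start_required_nfs_daemons.sh
cat > "$script" <<'EOF'
test "ok" = $attempt || Error "RPC portmapper '$portmapper_program' unavailable."
EOF

# Downgrade the fatal Error to a non-fatal LogPrint so "rear recover" continues:
sed -i 's/|| Error "RPC portmapper/|| LogPrint "RPC portmapper/' "$script"
cat "$script"
```

The quoted heredoc delimiter (`'EOF'`) keeps `$attempt` and `$portmapper_program` literal, so the stand-in file matches the original line exactly.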
Ronjr21 commented at 2019-10-11 07:39:¶
Changes made to 050_start_required_nfs_daemons.sh as suggested, but
now there is an error when creating the LVM disk layout; debug log attached.
rear-prx6-lvm.log
pcahyna commented at 2019-10-11 08:33:¶
I think the LVM problem may be related to #2222 (you are using a thin pool).
jsmeix commented at 2019-10-11 09:44:¶
@Ronjr21
thank you for your prompt reply and
@pcahyna
thank you for having a look regarding LVM
(I am basically an LVM noob).
@Ronjr21
regarding the LVM thin pool, something is already fixed
in the current ReaR GitHub master code.
Perhaps those fixes are already sufficient in your case,
so you should try out whether things work for you
with the current ReaR GitHub master code,
see "Testing current ReaR upstream GitHub master code" at
https://en.opensuse.org/SDB:Disaster_Recovery
jsmeix commented at 2019-10-11 10:07:¶
Currently I have no idea why

```
rpcinfo -p 2>/dev/null | grep -q 'portmapper'
```

does not succeed with a zero exit code,
while the same seems to work on the original system
according to the "rpcinfo -p" output there:
https://github.com/rear/rear/issues/2250#issuecomment-540952665
@Ronjr21
could you additionally change in your
usr/share/rear/verify/NETFS/default/050_start_required_nfs_daemons.sh
the line

```
rpcinfo -p 2>/dev/null | grep -q 'portmapper' && { attempt="ok" ; break ; }
```

to

```
rpcinfo -p | tee -a $RUNTIME_LOGFILE | grep -q 'portmapper' && { attempt="ok" ; break ; }
```

and re-run "rear -D recover"
and attach its new debug log file here.
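The point of inserting tee is that rpcinfo's stdout still flows on to grep, but a copy is also appended to the debug log, and dropping `2>/dev/null` means stderr is no longer discarded either. A self-contained illustration of the tee part, where `RUNTIME_LOGFILE` is just a temp file standing in for ReaR's real log variable:

```shell
RUNTIME_LOGFILE=$(mktemp)

# Simulated 'rpcinfo -p' output flowing through tee into both grep and the log:
printf '100000 2 udp 111 portmapper\n' |
    tee -a "$RUNTIME_LOGFILE" |
    grep -q 'portmapper' && echo "check passed"

# The same line is now preserved in the log for later inspection:
grep 'portmapper' "$RUNTIME_LOGFILE"
```

With the original `2>/dev/null` variant, a failing rpcinfo would leave no trace at all; with this variant, whatever rpcinfo actually printed ends up in the debug log.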
Ronjr21 commented at 2019-10-11 10:57:¶
As attached, restore succeeded.
rear-prx6-done.log
jsmeix commented at 2019-10-11 12:21:¶
@Ronjr21
thanks for your prompt
https://github.com/rear/rear/files/3717145/rear-prx6-done.log
Therein is now (excerpts):

```
+ source /usr/share/rear/verify/NETFS/default/050_start_required_nfs_daemons.sh
...
++ for attempt in $( seq 5 )
++ rpcinfo -p
++ tee -a /var/log/rear/rear-prx6.log
++ grep -q portmapper
rpcinfo: can't contact portmapper: RPC: Remote system error - Connection refused
++ sleep 1
```

which nicely shows why, in general, 2>/dev/null is unhelpful,
cf.
https://github.com/rear/rear/pull/2142#discussion_r282784504
because it needlessly suppresses error messages
that would help to see in the log why something fails.
Currently I do not know why in this particular case
rpcinfo
fails in the ReaR recovery system with
rpcinfo: can't contact portmapper: RPC: Remote system error - Connection refused
jsmeix commented at 2019-10-11 12:34:¶
Via
https://github.com/rear/rear/commit/af14e15db75bacd554d53dd1041d7852ceb8d9b9
all kinds of '2>/dev/null' (i.e. also '&>/dev/null') were removed
(with '&>/dev/null' replaced by '1>/dev/null')
so that we now get error messages in the log,
cf.
https://github.com/rear/rear/issues/1395
therein in particular
https://github.com/rear/rear/issues/1395#issuecomment-311916095
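The effect that commit addresses can be shown with any command that reports its failure reason on stderr. A minimal sketch with a hypothetical failing command (`fail` is invented for this demo, not rpcinfo itself):

```shell
logfile=$(mktemp)
# Hypothetical failing command that explains itself on stderr:
fail() { echo 'simulated: Connection refused' >&2 ; return 1 ; }

# With 2>/dev/null the reason for the failure is silently discarded:
fail 2>/dev/null | grep -q 'portmapper' || echo "check failed, no clue why" >> "$logfile"

# Without the suppression the stderr message is captured in the log:
fail 2>>"$logfile" | grep -q 'portmapper' || echo "check failed" >> "$logfile"

cat "$logfile"
```

In the first run the log records only that the check failed; in the second, the "Connection refused" diagnostic survives next to it, which is exactly what made the tee-based debugging step above useful.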
Ronjr21 commented at 2019-10-14 02:02:¶
Thanks @jsmeix and @pcahyna for helping out, kudos!
jsmeix commented at 2019-10-14 10:18:¶
@Ronjr21
thank you for the feedback that things now work for you.
Could you describe in more detail what you actually changed
to make it work for you?
I would be interested to see if we could improve things in ReaR
so that such issues could be avoided in general in the future.
What did you do regarding the LVM thin pool issue?
Did you perhaps find out why in your particular case
rpcinfo
fails in the ReaR recovery system or do you
just ignore that?
Ronjr21 commented at 2019-10-14 10:33:¶
@jsmeix The LVM thin partition was removed from the source server as we do not intend to use it. FYI, we are trying a ReaR bare metal backup on Proxmox 6. As for rpcinfo, I just ignore it; the server is able to obtain the backup files from NFS during recovery.
[Export of Github issue for rear/rear.]