#1368 Issue closed: Suse 12 on power. ISO setting seems to be ignored

Labels: support / question, fixed / solved / done

nicovd737 opened issue at 2017-05-17 15:37:

Hello Community :-)

Sorry if I ask a dummy question, but I'm a ReaR beginner.
I have installed the RPM provided on the ReaR website for SUSE (the noarch one).
It's working but not creating any ISO file.

Please find my config:

# rear -V
Relax-and-Recover 1.17.1 / Git
# cat os.conf
OS_VENDOR=SUSE_LINUX
OS_VERSION=12

The SUSE system is on an IBM Power server.

OUTPUT=ISO
BACKUP=NETFS
BACKUP_URL=file:///nfs/mksysb
AUTOEXCLUDE_MULTIPATH=y
BOOT_OVER_SAN=y
BACKUP_PROG_INCLUDE=( '/' )
BACKUP_PROG_EXCLUDE=( '/nfs' )

Result of rear -v mkbackup command:

# rear -v mkbackup
Relax-and-Recover 1.17.1 / Git
Using log file: /var/log/rear/rear-dtcgemasd07.log
Creating disk layout
Excluding component fs:/nfs/mksysb
Creating root filesystem layout
TIP: To login as root via ssh you need to set up /root/.ssh/authorized_keys or SSH_ROOT_PASSWORD in your configuration file
Copying files and directories
Copying binaries and libraries
Copying kernel modules
Creating initramfs
Copying resulting files to file location
Encrypting disabled
Creating tar archive '/nfs/mksysb/dtcgemasd07/backup.tar.gz'
Archived 1262 MiB [avg 8451 KiB/sec]OK
Archived 1262 MiB in 154 seconds [avg 8396 KiB/sec]

But no ISO image was created.

I have tried to install release 2.00 with make install but it fails immediately with a BUG about Grub2.

Any help will be very much appreciated for this tool, which seems to be very helpful for us.

Many thanks
Nico

gozora commented at 2017-05-17 16:05:

Hi @nicovd737,

Your ISO is most probably located in /var/lib/rear/output, but that is not the main problem ...
I'm guessing that your /nfs is an NFS share; if so, let ReaR handle mounting/unmounting of the network share by using a config similar to this one:

BACKUP_URL=nfs://<NFS_server>/mnt/rear
OUTPUT_URL=nfs://<NFS_server>/mnt/rear/iso
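For instance, a minimal complete local.conf along those lines could look like this (the NFS server name and export path are placeholders to adjust; with OUTPUT_URL set, the ISO ends up on the NFS share as well):

OUTPUT=ISO
BACKUP=NETFS
BACKUP_URL=nfs://<NFS_server>/mnt/rear
OUTPUT_URL=nfs://<NFS_server>/mnt/rear/iso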

With your current config you will most probably face some missing binaries during the recover phase.

For more information read the documentation, especially 04-scenarios.adoc.

V.

nicovd737 commented at 2017-05-17 16:23:

Hi @gozora

Thanks for your reply. I tried the OUTPUT_URL setting but it's the same issue. ReaR seems to completely skip creating the ISO file :-)

# rear -v mkbackup
Relax-and-Recover 1.17.1 / Git
Using log file: /var/log/rear/rear-dtcgemasd07.log
Creating disk layout
Creating root filesystem layout
TIP: To login as root via ssh you need to set up /root/.ssh/authorized_keys or SSH_ROOT_PASSWORD in your configuration file
Copying files and directories
Copying binaries and libraries
Copying kernel modules
Creating initramfs
Copying resulting files to nfs location
Encrypting disabled
Creating tar archive '/tmp/rear.dOLa2SeknUaI7iM/outputfs/dtcgemasd07/backup.tar.gz'
Archived 1195 MiB [avg 8218 KiB/sec]OK
Archived 1195 MiB in 150 seconds [avg 8164 KiB/sec]

I will try to use the tar files provided on GitHub instead of this RPM (the release is quite old).
Thanks again for your quick reply.
Nico

gozora commented at 2017-05-17 16:27:

@nicovd737 no problem ;-)

If you don't manage to find a solution by yourself, please provide the ReaR log file from a rear mkrescue session, located in /var/log/rear/

V.

nicovd737 commented at 2017-05-17 18:46:

Do you want the full log ? It's quite huge ;-)

gozora commented at 2017-05-17 18:47:

Yes,

Attach the file by dragging & dropping it, selecting it via "Choose Files", or pasting it from the clipboard.

nicovd737 commented at 2017-05-17 18:49:

rear-dtcgemasd07.txt

gozora commented at 2017-05-17 19:17:

On my systems (all of them x86_64) the ISO image is created by output/ISO/Linux-i386/820_create_iso_image.sh.
This part of ReaR is not executed on your Power HW.
To be honest I have no hands-on experience with such HW :-( hence I have no idea how boot works there.
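A quick way to check which output scripts were actually run is to search the log for the script-inclusion lines, for example (assuming your ReaR version writes the usual "Including <script>" lines into its log, which may require running rear with -d):

grep -i 'Including output/' /var/log/rear/rear-dtcgemasd07.log
grep -i 'create_iso_image' /var/log/rear/rear-dtcgemasd07.log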

You can have a look at issue https://github.com/rear/rear/issues/663, which is similar to yours and includes patch https://github.com/rear/rear/pull/672 introduced in ReaR 1.18 (however I can't evaluate whether it will help or not). I guess you can try to download ReaR 2.0 and check if it solves your problem ...

V.

gdha commented at 2017-05-18 07:37:

@nicovd737 you will find the best support for PowerPC in the latest snapshot - see http://download.opensuse.org/repositories/Archiving:/Backup:/Rear:/Snapshot/SLE_12/

nicovd737 commented at 2017-05-18 07:51:

Issue solved ! Many thanks @gdha :-)

jsmeix commented at 2017-05-18 07:59:

Simply put:
ReaR 1.17.1 is too old to be used on IBM Power.

Recently there have been many adaptations and enhancements
from @schabrolles regarding ReaR on Power (ppc64/ppc64le),
and there have been further adaptations and enhancements,
also from @schabrolles, regarding ReaR with MULTIPATH
and I think also regarding ReaR with SAN.

@nicovd737
because you use ReaR on Power with MULTIPATH and SAN
I think you basically must use the latest ReaR code.

FYI:

Regarding your BACKUP_URL=file:///nfs/mksysb:
as far as I know this does not work.
I always use

OUTPUT=ISO
BACKUP=NETFS
BACKUP_OPTIONS="nfsvers=3,nolock"
BACKUP_URL=nfs://10.160.4.244/nfs

where 10.160.4.244 is the IP of my NFS server
so that ReaR mounts the NFS share.
Letting ReaR mount the NFS share is particularly useful
during "rear recover" because then you do not need to manually
mount your NFS share from within the ReaR recovery system
before you can run "rear recover".
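If you additionally want the ISO on the NFS share in its own directory (as @gozora suggested above), an OUTPUT_URL line can be added alongside, for example with the same placeholder IP:

OUTPUT_URL=nfs://10.160.4.244/nfs/iso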

In general have a look at
https://en.opensuse.org/SDB:Disaster_Recovery
in particular all the sections from the start up to
"Disaster recovery with Relax-and-Recover (ReaR)"
plus all the later sections from "Virtual machines"
until the end.

jsmeix commented at 2017-05-18 08:05:

@nicovd737
thanks for your prompt feedback
https://github.com/rear/rear/issues/1368#issuecomment-302328892
that it works with the current ReaR code.
Such feedback helps us a lot to know that it really works in practice!

@schabrolles
once more many thanks for your various adaptations and enhancements
regarding the Power architecture, MULTIPATH, and SAN.
It helps so much when experts in those areas contribute!

jsmeix commented at 2017-05-18 08:15:

@nicovd737
one more FYI:
In the latest ReaR there are some example configs in
usr/share/rear/conf/examples/; in particular there is
a RHEL7-PPC64LE-Mulitpath-PXE-GRUB.conf
from @schabrolles which is probably also helpful
on SLES12. Of course you cannot just copy it; the
example configs are meant as hints about which config
variables are of interest for a particular setup.
For example, when you use the default SLES12 btrfs structure
you should additionally have a look at
SLE12-SP1-btrfs-example.conf or SLE12-SP2-btrfs-example.conf
(depending on whether you have SP1 or SP2, which are
incompatible in their default btrfs structures), and
if you use e.g. SAP HANA or UEFI you should
check some more example configs...
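For instance, to see which example configs your installed ReaR ships (this path is for an RPM install; in a source checkout it is usr/share/rear/conf/examples/ inside the checkout):

ls /usr/share/rear/conf/examples/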

nicovd737 commented at 2017-05-18 10:28:

Thanks a lot for your help. Very appreciated.

schabrolles commented at 2017-05-18 10:37:

@nicovd737 Here is an example of a local.conf for SLES12 on POWER that I use.

AUTOEXCLUDE_MULTIPATH=n
BOOT_OVER_SAN=y

### write the rescue initramfs to USB and update the USB bootloader 
OUTPUT=ISO
BACKUP=NETFS
BACKUP_URL="nfs://10.7.19.177/rear"

## SLES12 SPECIFIC options for BTRFS
BACKUP_OPTIONS="nfsvers=3,nolock" 
REQUIRED_PROGS=( "${REQUIRED_PROGS[@]}" snapper chattr lsattr )
COPY_AS_IS=( "${COPY_AS_IS[@]}" /usr/lib/snapper/installation-helper /etc/snapper/config-templates/default )
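# Collect the mountpoints of all mounted btrfs subvolumes except the root
# subvolume and the snapshots/crash ones, and turn each into a quoted
# '<mountpoint>/*' glob for BACKUP_PROG_INCLUDE below: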

btrfs_subvolume_without_snap=$(findmnt -n -r -t btrfs | cut -d ' ' -f 1 | grep -v '^/$' | egrep -v 'snapshots|crash' | sed -e "s/$/\/*'/" -e "s/^/'/" | tr '\n' ' ')
BACKUP_PROG_INCLUDE=( $btrfs_subvolume_without_snap )

POST_RECOVERY_SCRIPT=( 'if snapper --no-dbus -r $TARGET_FS_ROOT get-config | grep -q "^QGROUP.*[0-9]/[0-9]" ; then snapper --no-dbus -r $TARGET_FS_ROOT set-config QGROUP= ; snapper --no-dbus -r $TARGET_FS_ROOT setup-quota && echo snapper setup-quota done || echo snapper setup-quota failed ; else echo snapper setup-quota not used ; fi' )
Tu trouveras également plus d’infos et d’exemples dans /usr/share/rear/conf/examples
(in English: "You will also find more info and examples in /usr/share/rear/conf/examples")

gdha commented at 2017-05-18 11:01:

@schabrolles it seems to me that an extra prep script would benefit ReaR users, especially for snapper

jsmeix commented at 2017-05-18 12:16:

@gdha
I guess such extra prep script stuff is more meant for me
because the usage of snapper together with btrfs on SLE12
is a very SUSE specific thing.

Because SUSE's default btrfs setup in SLE12 is so complicated
(or at least it looks so complicated to me) I have currently
no good idea how to automate that stuff so that
it works reliably with reasonable effort.

Therefore, at least for now, I must leave it to the (poor) user
to manually "do the right settings" in local.conf, to avoid
giving users the wrong and misleading expectation
that "ReaR just works with SLE12 and btrfs".

Or in other and more general words:
Unless I can make an automatism working reliably
I won't implement an automatism.

My hope is that for the upcoming SLE15 ( 12 + 1 = 15 ;-)
the SUSE default btrfs setup stabilizes and then
I may try another attempt to make its usage easier in ReaR.

But when the SUSE default btrfs setup still changes
in incompatible ways I cannot do much because I will not
implement overcomplicated and oversophisticated code
that tries to "do things automatically right" for all those
different and incompatible SUSE default btrfs setups.

@schabrolles
thanks for your example!
By the way when reading it I wonder if the comment at

### write the rescue initramfs to USB and update the USB bootloader 
OUTPUT=ISO
BACKUP=NETFS
BACKUP_URL="nfs://10.7.19.177/rear" 

is by now a stale leftover from a former version?
I also wonder what special stuff your final command

Tu trouveras ...

actually does on your SLES12 on POWER system ;-)
cf. https://github.com/rear/rear/issues/700

schabrolles commented at 2017-05-18 13:00:

@jsmeix : :D .... copy/paste from email ....
I will maybe propose the same example with POWER + PXE for SUSE 12 with clean comments.

Regarding the specific setup for SLE12 with btrfs, I agree with @gdha. It should be part of a script somewhere so it can be automated...
Why not start with a script specific to SLE12 that just sets the variables:

REQUIRED_PROGS=( "${REQUIRED_PROGS[@]}" snapper chattr lsattr )
COPY_AS_IS=( "${COPY_AS_IS[@]}" /usr/lib/snapper/installation-helper /etc/snapper/config-templates/default )

btrfs_subvolume_without_snap=$(findmnt -n -r -t btrfs | cut -d ' ' -f 1 | grep -v '^/$' | egrep -v 'snapshots|crash' | sed -e "s/$/\/*'/" -e "s/^/'/" | tr '\n' ' ')
BACKUP_PROG_INCLUDE=("${BACKUP_PROG_INCLUDE[@]}" $btrfs_subvolume_without_snap )

POST_RECOVERY_SCRIPT=( 'if snapper --no-dbus -r $TARGET_FS_ROOT get-config | grep -q "^QGROUP.*[0-9]/[0-9]" ; then snapper --no-dbus -r $TARGET_FS_ROOT set-config QGROUP= ; snapper --no-dbus -r $TARGET_FS_ROOT setup-quota && echo snapper setup-quota done || echo snapper setup-quota failed ; else echo snapper setup-quota not used ; fi' )

jsmeix commented at 2017-05-18 13:54:

There cannot be a "script specific to SLE12"
because for SLE12 there are several different
incompatible btrfs default setups:

SLE12-SP2 btrfs default setup is incompatible compared to
SLE12-SP1 btrfs default setup which is incompatible compared to
SLE12-GA (a.k.a. SLE12-SP0) btrfs default setup.

Therefore there are three different example configs:
For SLE12-SP2: conf/examples/SLE12-SP2-btrfs-example.conf
for SLE12-SP1: conf/examples/SLE12-SP1-btrfs-example.conf
for SLE12-GA/SP0: conf/examples/SLE12-btrfs-example.conf

Also there cannot be three scripts, each one specific
to SLE12-SP2, SLE12-SP1, and SLE12-GA/SP0
(or one script with three different cases in it).

Reason:

When a user has initially installed SLE12-GA/SP0 from scratch
(then initially he needs a SLE12-GA/SP0 specific ReaR setup)
and updated to SLE12-SP1 and to SLE12-SP2,
then - as far as I know - the usual update
only replaces RPMs with newer versions
but it leaves the btrfs structure unchanged
so that the user still needs a SLE12-GA/SP0 specific ReaR setup
(only what is in his backup is now newer files)
but - as far as I know - the updated system shows itself
as a SLE12-SP2 system.

When a user has initially installed SLE12-SP1 from scratch
(then initially he needs a SLE12-SP1 specific ReaR setup)
and updated to SLE12-SP2,
then - as far as I know - the usual update
only replaces RPMs with newer versions
but it leaves the btrfs structure unchanged
so that the user still needs a SLE12-SP1 specific ReaR setup
(only what is in his backup is now newer files)
but - as far as I know - the updated system shows itself
as a SLE12-SP2 system.

When a user has initially installed SLE12-SP2 from scratch,
then he needs a SLE12-SP2 specific ReaR setup
and the system also shows itself as a SLE12-SP2 system.

This means we cannot rely on what version strings
the system tells about itself.

We must inspect "the real thing",
i.e. the really existing btrfs stuff.

When we inspect the really existing btrfs stuff
it means the btrfs stuff in ReaR works generically.

And this is what I have currently implemented:
The btrfs stuff in ReaR works generically.

It works independently of whatever version strings
a particular system uses to present itself
to users who look at those strings.

The current btrfs stuff in ReaR works only on real hard facts
(at least I tried to implement it in this way).
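For illustration, "the real thing" can be inspected directly with standard tools, for example (a sketch; the exact output of course differs per system):

# Mounted btrfs subvolumes with their subvol= mount options:
findmnt -t btrfs -o TARGET,SOURCE,OPTIONS
# All subvolumes of the root filesystem (needs btrfs-progs):
btrfs subvolume list -a /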

This is what I can maintain - at least to some extent - see below.

This way the current generically working btrfs stuff in ReaR
also works on Red Hat and on whatever other systems
without having specific scripts for each one.

But the current generically working btrfs stuff in ReaR
has some limitations.

It does not and cannot "just work" for any possible
btrfs structure one can set up manually.

In particular it fails - as far as I can imagine - for
interwoven btrfs subvolumes, see
https://github.com/rear/rear/issues/497#issuecomment-71650367
and the subsequent comment
https://github.com/rear/rear/issues/497#issuecomment-71652667

And I think somehow I messed up something
while I implemented support for the incompatible
SLE12-SP1 btrfs stuff (compared to SLE12-GA/SP0)
(since SLE12-SP1 snapper is 'in between')
cf. https://github.com/rear/rear/issues/556
so that there is currently this bug
https://github.com/rear/rear/issues/1036

I do appreciate any fixes, adaptations, and enhancements
that make the generically working btrfs support in ReaR
work better and more reliably in general.

In contrast I think Linux distribution specific scripts or
even special Linux distribution version specific scripts
will not really help because in the end any user on any
recent Linux system can manually create any btrfs structure
so that in the end only generically working btrfs scripts
could properly implement btrfs support in ReaR.

schlomo commented at 2017-05-18 14:09:

@jsmeix if SLES12 is incompatible between service packs then it seems to me that we should start to treat "12SP1" as the "version" for ReaR and not just "12". That way we can have different scripts for that.

Maybe this is also a very good moment to go back to actually having different OS_VERSION and OS_MASTER_VERSION for SLES, where we for example use OS_MASTER_VENDOR=SUSE and OS_MASTER_VERSION=12 for the stuff that is generic enough for all SLE12 and openSUSE 12, while we use OS_VENDOR=SLE and OS_VERSION=12SP1 for SLE12-SP1 specifically. openSUSE specials can then go to OS_VENDOR=openSUSE or something like that.
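As a sketch of what such a split could look like in os.conf (purely illustrative values, not what current ReaR writes there):

# Hypothetical split along the lines suggested above:
OS_MASTER_VENDOR=SUSE
OS_MASTER_VERSION=12
OS_VENDOR=SLE
OS_VERSION=12SP1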

jsmeix commented at 2017-05-18 14:13:

@schlomo
didn't you read my reasoning why
Linux distribution version specific scripts
cannot work in general?

A SLES12-SP2 user can manually set up a btrfs structure
that matches the one in SLES12-GA/SP0 or in SLES12-SP1,
or in Red Hat or in Ubuntu, or anything else whatever he likes.

Linux distribution version specific scripts will "just fail".

schlomo commented at 2017-05-18 14:14:

Erwischt ("got me") :-(

jsmeix commented at 2017-05-18 14:15:

OK :-)

jsmeix commented at 2017-05-18 14:44:

To avoid misunderstanding:

I think what could of course be done is to automate things like

REQUIRED_PROGS=( "${REQUIRED_PROGS[@]}" snapper chattr lsattr )
COPY_AS_IS=( "${COPY_AS_IS[@]}" /usr/lib/snapper/installation-helper /etc/snapper/config-templates/default )

because I assume it is possible to autodetect sufficiently reliably
whether or not that stuff is needed during "rear recover", and
if that stuff is copied into the recovery system even though
it is not actually needed there, it should cause no failure
(the recovery system just becomes bigger than actually needed).
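As a rough illustration of such autodetection, a minimal sketch (my own sketch, not existing ReaR code; has_binary is ReaR's own helper for checking that a program exists, and the /etc/snapper test is only an assumption about where snapper keeps its configuration):

# Rough sketch only: pull in the snapper bits only when snapper is actually
# present on the system that gets backed up.
if has_binary snapper && test -d /etc/snapper ; then
    REQUIRED_PROGS=( "${REQUIRED_PROGS[@]}" snapper chattr lsattr )
    COPY_AS_IS=( "${COPY_AS_IS[@]}" /usr/lib/snapper/installation-helper /etc/snapper/config-templates/default )
fi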

jsmeix commented at 2017-07-10 14:51:

An addendum regarding whether or not
a SLE12-SP1 system that was upgraded to SLE12-SP2
shows itself as a SLE12-SP2 system or still as SLE12-SP1:

When upgrading with plain "zypper dup" the /etc/os-release file
is left unchanged so that afterwards it still shows itself as SLE12-SP1.

One needs to do a so-called "service pack migration"; see
the "Service Pack Migration" section in the SUSE documentation,
in particular "Migrating with Zypper" at
https://www.suse.com/documentation/sles-12/book_sle_deployment/data/sec_update_migr_zypper_onlinemigr.html

This is another reason why scripts won't work reliably
if they only check version strings to behave differently
(the better way is to check "the real thing" directly).
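As a hedged illustration with plain standard commands (nothing ReaR-specific), compare what the system claims to be with the btrfs structure that actually exists; only the latter matters for recreating the system:

# What the system claims to be - after a plain "zypper dup" this may still show the old service pack:
cat /etc/os-release
# What actually exists and has to be recreated:
findmnt -t btrfs -o TARGET,OPTIONS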


[Export of Github issue for rear/rear.]