#2429 Issue closed: Implement iSCSI support (currently iSCSI needs special manual setup)

Labels: enhancement, documentation, no-issue-activity

jsmeix opened issue at 2020-06-17 09:56:

For a first idea of how to implement iSCSI support see
https://github.com/rear/rear/issues/2429#issuecomment-646500447

If that is not possible, "rear mkrescue/mkbackup"
should at least tell the user when an iSCSI device is used,
i.e. when an iSCSI device appears in disklayout.conf, because then
"rear recover" may not "just work": iSCSI is not (yet) fully supported
but needs some manual setup, cf.
https://github.com/rear/rear/issues/2429#issuecomment-645626221

  • ReaR version ("/usr/sbin/rear -V"): ReaR 2.6

  • Storage (local disk or SSD) and/or SAN (FC or iSCSI or FCoE) and/or multipath (DM or NVMe): iSCSI

  • Storage layout ("lsblk -ipo NAME,KNAME,PKNAME,TRAN,TYPE,FSTYPE,SIZE,MOUNTPOINT" or "lsblk" as makeshift):
    see https://github.com/rear/rear/issues/2428#issuecomment-645236747
    (excerpt):

# lsblk -ipo NAME,KNAME,PKNAME,TRAN,TYPE,FSTYPE,SIZE,MOUNTPOINT
NAME                      KNAME     PKNAME    TRAN   TYPE  FSTYPE             SIZE MOUNTPOINT
/dev/sda                  /dev/sda            sata   disk                      50G 
|-/dev/sda1               /dev/sda1 /dev/sda         part  swap               995M [SWAP]
`-/dev/sda2               /dev/sda2 /dev/sda         part  btrfs               49G /
/dev/sdb                  /dev/sdb            iscsi  disk  linux_raid_member  512M 
`-/dev/mapper/iscsi1_lun1 /dev/dm-1 /dev/sdb         mpath linux_raid_member  512M 
/dev/sdc                  /dev/sdc            iscsi  disk  linux_raid_member  256M 
`-/dev/mapper/iscsi1_lun2 /dev/dm-0 /dev/sdc         mpath linux_raid_member  256M 
/dev/sdd                  /dev/sdd            iscsi  disk  linux_raid_member  512M 
`-/dev/mapper/iscsi1_lun1 /dev/dm-1 /dev/sdd         mpath linux_raid_member  512M 
/dev/sde                  /dev/sde            iscsi  disk  linux_raid_member  256M 
`-/dev/mapper/iscsi1_lun2 /dev/dm-0 /dev/sde         mpath linux_raid_member  256M 
/dev/sdf                  /dev/sdf            iscsi  disk  linux_raid_member    5G 
`-/dev/mapper/site_A_1    /dev/dm-2 /dev/sdf         mpath linux_raid_member    5G 
/dev/sdg                  /dev/sdg            iscsi  disk  linux_raid_member  512M 
`-/dev/mapper/iscsi1_lun1 /dev/dm-1 /dev/sdg         mpath linux_raid_member  512M 
/dev/sdh                  /dev/sdh            iscsi  disk  linux_raid_member  512M 
`-/dev/mapper/site_A_2    /dev/dm-3 /dev/sdh         mpath linux_raid_member  512M 
/dev/sdi                  /dev/sdi            iscsi  disk  linux_raid_member  512M 
`-/dev/mapper/site_A_3    /dev/dm-4 /dev/sdi         mpath linux_raid_member  512M 
/dev/sdj                  /dev/sdj            iscsi  disk  linux_raid_member  256M 
`-/dev/mapper/iscsi1_lun2 /dev/dm-0 /dev/sdj         mpath linux_raid_member  256M 
/dev/sdk                  /dev/sdk            iscsi  disk  linux_raid_member    5G 
`-/dev/mapper/site_A_1    /dev/dm-2 /dev/sdk         mpath linux_raid_member    5G 
/dev/sdl                  /dev/sdl            iscsi  disk  linux_raid_member  512M 
`-/dev/mapper/site_A_2    /dev/dm-3 /dev/sdl         mpath linux_raid_member  512M 
/dev/sdm                  /dev/sdm            iscsi  disk  linux_raid_member  512M 
`-/dev/mapper/site_A_3    /dev/dm-4 /dev/sdm         mpath linux_raid_member  512M 
/dev/sdn                  /dev/sdn            iscsi  disk  linux_raid_member    5G 
`-/dev/mapper/site_A_1    /dev/dm-2 /dev/sdn         mpath linux_raid_member    5G 
/dev/sdo                  /dev/sdo            iscsi  disk  linux_raid_member  512M 
`-/dev/mapper/site_A_2    /dev/dm-3 /dev/sdo         mpath linux_raid_member  512M 
/dev/sdp                  /dev/sdp            iscsi  disk  linux_raid_member  512M 
`-/dev/mapper/site_A_3    /dev/dm-4 /dev/sdp         mpath linux_raid_member  512M 
/dev/sr0                  /dev/sr0            ata    rom                     1024M 
/dev/sdq                  /dev/sdq            iscsi  disk  linux_raid_member  512M 
`-/dev/mapper/iscsi2_lun1 /dev/dm-5 /dev/sdq         mpath linux_raid_member  512M 
/dev/sdr                  /dev/sdr            iscsi  disk  linux_raid_member  256M 
`-/dev/mapper/iscsi2_lun2 /dev/dm-6 /dev/sdr         mpath linux_raid_member  256M 
/dev/sds                  /dev/sds            iscsi  disk  linux_raid_member  512M 
`-/dev/mapper/iscsi2_lun1 /dev/dm-5 /dev/sds         mpath linux_raid_member  512M 
/dev/sdt                  /dev/sdt            iscsi  disk  linux_raid_member  256M 
`-/dev/mapper/iscsi2_lun2 /dev/dm-6 /dev/sdt         mpath linux_raid_member  256M 
/dev/sdu                  /dev/sdu            iscsi  disk  linux_raid_member    5G 
`-/dev/mapper/site_B_1    /dev/dm-7 /dev/sdu         mpath linux_raid_member    5G 
/dev/sdv                  /dev/sdv            iscsi  disk  linux_raid_member  512M 
`-/dev/mapper/site_B_2    /dev/dm-8 /dev/sdv         mpath linux_raid_member  512M 
/dev/sdw                  /dev/sdw            iscsi  disk  linux_raid_member  512M 
`-/dev/mapper/site_B_3    /dev/dm-9 /dev/sdw         mpath linux_raid_member  512M 
/dev/sdx                  /dev/sdx            iscsi  disk  linux_raid_member  512M 
`-/dev/mapper/iscsi2_lun1 /dev/dm-5 /dev/sdx         mpath linux_raid_member  512M 
/dev/sdy                  /dev/sdy            iscsi  disk  linux_raid_member  256M 
`-/dev/mapper/iscsi2_lun2 /dev/dm-6 /dev/sdy         mpath linux_raid_member  256M 
/dev/sdz                  /dev/sdz            iscsi  disk  linux_raid_member    5G 
`-/dev/mapper/site_B_1    /dev/dm-7 /dev/sdz         mpath linux_raid_member    5G 
/dev/sdaa                 /dev/sdaa           iscsi  disk  linux_raid_member  512M 
`-/dev/mapper/site_B_2    /dev/dm-8 /dev/sdaa        mpath linux_raid_member  512M 
/dev/sdab                 /dev/sdab           iscsi  disk  linux_raid_member  512M 
`-/dev/mapper/site_B_3    /dev/dm-9 /dev/sdab        mpath linux_raid_member  512M 
/dev/sdac                 /dev/sdac           iscsi  disk  linux_raid_member    5G 
`-/dev/mapper/site_B_1    /dev/dm-7 /dev/sdac        mpath linux_raid_member    5G 
/dev/sdad                 /dev/sdad           iscsi  disk  linux_raid_member  512M 
`-/dev/mapper/site_B_2    /dev/dm-8 /dev/sdad        mpath linux_raid_member  512M 
/dev/sdae                 /dev/sdae           iscsi  disk  linux_raid_member  512M 
`-/dev/mapper/site_B_3    /dev/dm-9 /dev/sdae        mpath linux_raid_member  512M

jsmeix commented at 2020-06-17 12:14:

Because this issue is not about adding iSCSI support (which needs sponsorship)
but only about telling the user when an iSCSI device is in disklayout.conf,
I think I could relatively easily implement a test that checks
whether a device in disklayout.conf appears with TRAN 'iscsi' in the lsblk output
and shows a user information+confirmation dialog in that case,
i.e. something similar to the 'BACKUP_URL_ISO_PROCEED_...'
user information+confirmation dialog in
prep/default/040_check_backup_and_output_scheme.sh
cf. the comment in that script
https://github.com/rear/rear/blob/master/usr/share/rear/prep/default/040_check_backup_and_output_scheme.sh#L12
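Such a check could be sketched roughly like this (a hypothetical sketch, not actual ReaR code: the function name has_iscsi_disk and the dialog hookup are made up; it only assumes the 'disk /dev/sdX <size> <label>' line format of disklayout.conf and the lsblk TRAN column):

```shell
# Hypothetical sketch, not actual ReaR code: succeed when any 'disk'
# entry in the given disklayout.conf file uses the iSCSI transport.
has_iscsi_disk() {
    local disklayout_file="$1"
    local keyword device junk
    # 'disk' entries in disklayout.conf look like: disk /dev/sda 21474836480 msdos
    while read -r keyword device junk ; do
        test "$keyword" = "disk" || continue
        # lsblk reports TRAN 'iscsi' for iSCSI-attached whole disks:
        if lsblk -dno TRAN "$device" 2>/dev/null | grep -q '^iscsi$' ; then
            return 0
        fi
    done < "$disklayout_file"
    return 1
}
```

A prep or layout script could call this on DISKLAYOUT_FILE and, when it succeeds, show an information+confirmation dialog analogous to the BACKUP_URL_ISO_PROCEED one.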

gozora commented at 2020-06-17 21:09:

So I did some tests on the same VM as in https://github.com/rear/rear/issues/2428, but this time I added an iSCSI LUN /dev/sdd, so the final setup looks like this:

[root@centos7 ~]# lsblk -ipo NAME,KNAME,PKNAME,TRAN,TYPE,FSTYPE,SIZE,MOUNTPOINT
NAME                        KNAME     PKNAME    TRAN   TYPE  FSTYPE             SIZE MOUNTPOINT
/dev/sda                    /dev/sda            sata   disk                       8G 
|-/dev/sda1                 /dev/sda1 /dev/sda         part  vfat               200M /boot/efi
|-/dev/sda2                 /dev/sda2 /dev/sda         part  xfs                  1G /boot
`-/dev/sda3                 /dev/sda3 /dev/sda         part  LVM2_member        6.8G 
  |-/dev/mapper/centos-root /dev/dm-0 /dev/sda3        lvm   xfs                  6G /
  `-/dev/mapper/centos-swap /dev/dm-1 /dev/sda3        lvm   swap               820M [SWAP]
/dev/sdb                    /dev/sdb            sata   disk  mpath_member         8G 
`-/dev/mapper/disk_2        /dev/dm-2 /dev/sdb         mpath linux_raid_member    8G 
  `-/dev/md0                /dev/md0  /dev/dm-2        raid1 xfs                  8G /data
/dev/sdc                    /dev/sdc            sata   disk  mpath_member         8G 
`-/dev/mapper/disk_1        /dev/dm-3 /dev/sdc         mpath linux_raid_member    8G 
  `-/dev/md0                /dev/md0  /dev/dm-3        raid1 xfs                  8G /data
/dev/sdd                    /dev/sdd            iscsi  disk  xfs                  1G /bigdata
/dev/sr0                    /dev/sr0            ata    rom                     1024M

With the following local.conf:

BACKUP=NETFS
BACKUP_OPTIONS="nfsvers=3,nolock"
OUTPUT=ISO

BACKUP_URL=nfs://backup/mnt/rear
OUTPUT_URL=nfs://backup/mnt/rear/iso

BACKUP_PROG_EXCLUDE+=( /mnt )

SSH_FILES="yes"
SSH_UNPROTECTED_PRIVATE_KEYS="yes"

PROGS+=( /usr/libexec/openssh/sftp-server )
COPY_AS_IS+=( /usr/libexec/openssh/sftp-server /etc/iscsi /var/lib/iscsi )

USE_RESOLV_CONF="no"
USE_DHCLIENT="no"

NETWORKING_PREPARATION_COMMANDS=( 'ip addr add 192.168.56.200/24 dev enp0s8' 'ip link set dev enp0s8 up' 'return' )

BOOT_OVER_SAN="yes"
AUTOEXCLUDE_MULTIPATH="no"

REQUIRED_PROGS+=( iscsid iscsiadm )

I've successfully restored the structure and data to /bigdata residing on the iSCSI LUN.
The only modification compared to a non-iSCSI setup is to copy the iSCSI daemon configuration (/etc/iscsi) and the LUN connection config files (/var/lib/iscsi) into the ReaR recovery system, plus adding iscsid and iscsiadm (the open-iscsi administration utility).

After booting into the ReaR rescue system, one needs to start iscsid, log in to the iSCSI target, and run rear recover.

RESCUE centos7:~ # lsscsi
[3:0:0:0]    disk    ATA      VBOX HARDDISK    1.0   /dev/sda 
[4:0:0:0]    disk    ATA      VBOX HARDDISK    1.0   /dev/sdb 
[7:0:0:0]    disk    ATA      VBOX HARDDISK    1.0   /dev/sdc 
[8:0:0:0]    cd/dvd  VBOX     CD-ROM           1.0   /dev/sr0

RESCUE centos7:~ # iscsid

RESCUE centos7:~ # /sbin/iscsiadm -m node --loginall=automatic
Logging in to [iface: default, target: iqn.2020-06-sk.virtual:backup, portal: 192.168.56.7,3260] (multiple)
Login to [iface: default, target: iqn.2020-06-sk.virtual:backup, portal: 192.168.56.7,3260] successful.

RESCUE centos7:~ # lsscsi
[3:0:0:0]    disk    ATA      VBOX HARDDISK    1.0   /dev/sda 
[4:0:0:0]    disk    ATA      VBOX HARDDISK    1.0   /dev/sdb 
[7:0:0:0]    disk    ATA      VBOX HARDDISK    1.0   /dev/sdc 
[8:0:0:0]    cd/dvd  VBOX     CD-ROM           1.0   /dev/sr0 
[10:0:0:0]   storage IET      Controller       0001  -        
[10:0:0:1]   disk    IET      VIRTUAL-DISK     0001  /dev/sdd

RESCUE centos7:~ # rear recover
Relax-and-Recover 2.5 / Git
Running rear recover (PID 503)
...

Since this setup is quite straightforward and relatively easy, I'd keep things as they currently are.
The only thing ReaR needs for full-featured iSCSI support is a startup script that starts the iSCSI daemon after the network is set up (/etc/scripts/system-setup), which is a minor-effort task ...

V.

jsmeix commented at 2020-06-18 06:17:

@gozora
many thanks for your explanatory report about using ReaR with iSCSI!

It helps so much when someone who sits in front of such a "beast"
provides a comprehensible report of how ReaR behaves in this case,
what exactly is needed to make it work, and how things look.

gdha commented at 2020-06-18 07:11:

@gozora Great - nice test. It only requires a prep script to make it really supported ;-)

gozora commented at 2020-06-18 07:57:

@gdha I was thinking about something like introducing some variables into default.conf.

maybe:

ISCSI_DAEMON="iscsid -f"
ISCSI_LOGIN_COMMAND="iscsiadm -m node --loginall=automatic"

These could be launched after successful network setup.
It shouldn't be a big problem to get this done.

V.

jsmeix commented at 2020-06-18 08:14:

@gozora
perhaps as a first test something like

NETWORKING_PREPARATION_COMMANDS=(
'ip addr add 192.168.56.200/24 dev enp0s8'
'ip link set dev enp0s8 up'
'sleep 1'
'iscsid -f'
'sleep 1'
'iscsiadm -m node --loginall=automatic'
'return'
)

may do it in your particular case.
Those 'sleep 1' are meant to give things some time to start up completely.

gozora commented at 2020-06-18 08:31:

Damn! I spent an hour yesterday figuring out how to inject code that would execute after the network is set up and couldn't come up with anything.
I'll certainly test it today afternoon.

Thanks!

V.

jsmeix commented at 2020-06-18 08:41:

@gozora
there is also PRE_RECOVERY_SCRIPT, which gets evaluated by
usr/share/rear/setup/default/010_pre_recovery_script.sh
so that PRE_RECOVERY_SCRIPT is run each time "rear recover" is run.
That could make a difference when one re-runs "rear recover" after it failed,
in contrast to NETWORKING_PREPARATION_COMMANDS,
which are run only once during recovery system startup via
/etc/scripts/system-setup.d/60-network-devices.sh
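As a rough illustration (a hypothetical snippet, not a tested recipe: the concrete commands are taken from gozora's setup above and would need adapting), a PRE_RECOVERY_SCRIPT for this purpose could look like:

```shell
# Hypothetical local.conf snippet: start iscsid (if not already running) and
# log in to the iSCSI targets each time "rear recover" is started,
# so that a re-run after a failed recovery also gets the iSCSI LUNs back:
PRE_RECOVERY_SCRIPT='pidof iscsid >/dev/null || iscsid ; sleep 1 ; iscsiadm -m node --loginall=automatic'
```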

gozora commented at 2020-06-18 08:47:

@jsmeix
Will try that out too.

Thanks!

V.

gozora commented at 2020-06-18 17:52:

Adding:

NETWORKING_PREPARATION_COMMANDS=( 'ip addr add 192.168.56.200/24 dev enp0s8'
'ip link set dev enp0s8 up' 
'sleep 1'
'iscsid'
'sleep 1'
'iscsiadm -m node --loginall=automatic'
'return' )

into local.conf works like a charm.
After the ReaR recovery system boots, the iSCSI LUN is present without further configuration.

$ ssh restore
Warning: Permanently added 'restore,192.168.56.200' (ECDSA) to the list of known hosts.

Welcome to Relax-and-Recover. Run "rear recover" to restore your system !

RESCUE centos7:~ # lsscsi
[3:0:0:0]    disk    ATA      VBOX HARDDISK    1.0   /dev/sda 
[4:0:0:0]    disk    ATA      VBOX HARDDISK    1.0   /dev/sdb 
[7:0:0:0]    disk    ATA      VBOX HARDDISK    1.0   /dev/sdc 
[8:0:0:0]    cd/dvd  VBOX     CD-ROM           1.0   /dev/sr0 
[10:0:0:0]   storage IET      Controller       0001  -        
[10:0:0:1]   disk    IET      VIRTUAL-DISK     0001  /dev/sdd

gozora commented at 2020-06-18 18:12:

Since iSCSI can be configured using standard ReaR channels and one doesn't need to do any additional hacking after the ReaR recovery system boots, I'd say ReaR should not even warn the user and just let him Relax-and-Recover ;-)

V.

gozora commented at 2020-06-18 18:14:

Do you think that the iSCSI setup in ReaR is worth documenting?

V.

jsmeix commented at 2020-06-19 07:19:

I replaced the word "warn" by "tell" in the title and in my above entries.
I was uncomfortable with the word "warn" when I used it because I agree with
https://blog.schlomo.schapiro.org/2015/04/warning-is-waste-of-my-time.html
but I had no better idea what other word I could use here.

I think as long as "rear recover" does not "just work" with iSCSI
ReaR should tell the user during "rear mkrescue/mkbackup"
when an iSCSI device appears in disklayout.conf because then
the user must manually set up some (simple) additional things.

jsmeix commented at 2020-06-19 07:22:

@gozora
perhaps we are lucky and a ReaR user with iSCSI appears
who is faster at implementing automagic iSCSI support in ReaR
than we are at documenting how to set things up manually?
;-)

gozora commented at 2020-06-19 07:30:

From error -> warning -> tell, that is a nice de-escalation, but I like the final version.
Over the years here, I think there was no one with iSCSI, so doing automatic iSCSI setup in ReaR looks like kind of a waste of time, especially since you can do the manual setup relatively easily.

V.

jsmeix commented at 2020-06-19 08:04:

What I have in mind with "automagic iSCSI support in ReaR" is that
when an iSCSI device appears in disklayout.conf
then

COPY_AS_IS+=( "${ISCSI_COPY_AS_IS[@]}" )
REQUIRED_PROGS+=( "${ISCSI_REQUIRED_PROGS[@]}" )

happens automatically with new config variables in default.conf

ISCSI_COPY_AS_IS=( /etc/iscsi /var/lib/iscsi )
ISCSI_REQUIRED_PROGS=( iscsid iscsiadm )

and
there is a new recovery system startup script like
usr/share/rear/skel/default/etc/scripts/system-setup.d/63-iscsi.sh
(that runs after /etc/scripts/system-setup.d/60-network-devices.sh
and etc/scripts/system-setup.d/62-routing.sh)
which should start iscsid and login to iSCSI target
by using new config variables in default.conf
(default.conf is sourced by skel/default/etc/scripts/system-setup)
like

ISCSI_DAEMON_COMMANDS=( 'sleep 1' 'iscsid -f' )
ISCSI_LOGIN_COMMANDS=( 'sleep 1' 'iscsiadm -m node --loginall=automatic' )

so that the user has the final power to specify what he needs
for his particular iSCSI system.
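The core of such a 63-iscsi.sh could be sketched like this (a hypothetical sketch under the proposed variable names; neither the script nor run_iscsi_setup_commands exists in ReaR yet):

```shell
# Hypothetical sketch of the proposed 63-iscsi.sh startup script.
# ISCSI_DAEMON_COMMANDS and ISCSI_LOGIN_COMMANDS are the proposed (not yet
# existing) default.conf arrays; run_iscsi_setup_commands is made up here.
run_iscsi_setup_commands() {
    local iscsi_command
    for iscsi_command in "${ISCSI_DAEMON_COMMANDS[@]}" "${ISCSI_LOGIN_COMMANDS[@]}" ; do
        echo "Running iSCSI setup command: $iscsi_command"
        # eval so that entries like 'sleep 1' or commands with options work:
        eval "$iscsi_command" || echo "iSCSI setup command failed: $iscsi_command"
    done
}

# The real startup script would guard on the variables being set, e.g.:
# test "${ISCSI_DAEMON_COMMANDS[*]:-}" && run_iscsi_setup_commands
```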

Because I read 'login' I assume some extra care is needed
regarding how ReaR deals with possibly confidential login data
(are passwords used to log in to the iSCSI target?)
to avoid secrets being included in the recovery system
(by default the recovery system must be free of secrets)
and to check whether an interactive login to the iSCSI target works
(can iscsiadm do an interactive login via a user password prompt?)
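For reference on the secrets concern: with open-iscsi, CHAP credentials are typically stored in the node records, for example (target IQN, portal, and credentials below are placeholders):

```shell
# Example open-iscsi CHAP setup (placeholder target/portal/credentials).
# These 'iscsiadm -o update' calls persist the values in the node records
# under /var/lib/iscsi (or /etc/iscsi on older versions), which is exactly
# the kind of secret that would end up in the recovery system via COPY_AS_IS:
iscsiadm -m node -T iqn.2020-06.example:target -p 192.168.56.7 \
    -o update -n node.session.auth.authmethod -v CHAP
iscsiadm -m node -T iqn.2020-06.example:target -p 192.168.56.7 \
    -o update -n node.session.auth.username -v chapuser
iscsiadm -m node -T iqn.2020-06.example:target -p 192.168.56.7 \
    -o update -n node.session.auth.password -v chapsecret
```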

gozora commented at 2020-06-19 08:13:

@jsmeix I agree with your implementation idea, I had a similar idea too.

About the login you are right, I normally don't set up login passwords on iSCSI LUNs, but the option is there. I'm not sure now, but it may be that even things like Kerberos are possible :-).
Connection information, at least on my CentOS 7, is stored in /var/lib/iscsi (some versions ago it was /etc/iscsi, so yet another random location, fun to solve).

"can iscsiadm do an interactive login via a user password prompt"

Yes it can, but it is also designed to work right from boot ...

V.

jsmeix commented at 2020-06-19 08:16:

As far as I see there is one part of the manual iSCSI setup
that can get tricky for the user when he wants the iSCSI setup
to happen automatically right in his recovery system, so that
calling plain "rear recover" would be sufficient.

When the user has a more complicated networking setup
where he needs to use ReaR's generated
/etc/scripts/system-setup.d/60-network-devices.sh
then he cannot use NETWORKING_PREPARATION_COMMANDS
to start iscsid and log in to the iSCSI target.
Perhaps PRE_RECOVERY_SCRIPT can be used for that,
but that all looks more like a workaround than the intended way
such things should be done in ReaR.

jsmeix commented at 2020-06-19 08:23:

After the ReaR 2.6 release is completely done
(i.e. when we can do ReaR 2.7 development)
I will implement some initial iSCSI support
(hopefully I can start with that next week)
so that you @gozora can test and play around with it.

gozora commented at 2020-06-19 08:33:

@jsmeix you seem to be interested in the iSCSI topic ;-).
Btw. iSCSI is a cheap way to simulate complex disk layouts (which are normally done over Fibre Channel). All you need to do is present a LUN from the iSCSI server over multiple network interfaces; then you can join the paths using multipathd on the client side and treat them like standard disks...

V.

jsmeix commented at 2020-06-19 08:47:

@gozora
I am afraid I am not so much interested in iSCSI
as in not getting issues from a certain kind of people about
"unacceptable poor performance of ReaR on iSCSI systems"
or however else that kind of people may describe it when something
does not meet their "just works perfectly out of the box" expectations.

gozora commented at 2020-06-19 08:54:

@jsmeix :-) OK that explains a lot :-)

V.

jsmeix commented at 2020-06-19 09:26:

With such explicit ISCSI_... config variables, iSCSI-related things
are made clear to the user (the user is expected to read default.conf),
and the user gets a predefined way in ReaR to deal with iSCSI
that he can adapt and enhance as needed for his particular use case.

github-actions commented at 2020-10-14 01:49:

Stale issue message


[Export of Github issue for rear/rear.]