#3017 Issue open: Recovery system fails to start up on Fedora 39 (systemd errors)

Labels: bug

GreasyMonkee opened issue at 2023-06-25 09:12:

  • ReaR version ("/usr/sbin/rear -V"):
    rear-2.6-9.fc38.x86_64

  • If your ReaR version is not the current version, explain why you can't upgrade:

  • OS version ("cat /etc/os-release" or "lsb_release -a" or "cat /etc/rear/os.conf"):

  • ReaR configuration files ("cat /etc/rear/site.conf" and/or "cat /etc/rear/local.conf"):

  • Hardware vendor/product:
    HPE DL380p

  • System architecture:
    x86_64

  • Firmware:
    BIOS, GRUB2

  • Storage:
    Local Disk

  • Storage layout: Not able to get any log for the disk layout

  • Description of the issue:

After re-creating the RAID 5 array (6 x 2TB WD Green HDDs),
with the same settings as used for the previous 12 months
(including when the three ReaR backup copies were generated),
the server fails to boot using either the backup or the backup with Autorecover,
with kernel 6.3.0.0.rc5 or 6.4.0-0.rc2.
In all cases I cannot get it to progress past getty@.service,
which fails with the following message:

[$TIME] systemd [1]: /usr/lib/systemd/system/getty@.service:44: Failed to resolve instance name in DefaultInstance="%I": Invalid slot

I am not clear where the problem is coming from,
or what the DefaultInstance %I is referring to,
so I am seeking the assistance of some more knowledgeable folks here
to find out the next steps in solving this.

It is clear that it is referring to line 44 of the file that is found at
https://github.com/rear/rear/blob/master/usr/share/rear/skel/default/usr/lib/systemd/system/getty%40.service

However, I am not clear on where or what it is looking at for the "DefaultInstance". The "Invalid slot" appears to indicate it is looking at some hardware-related location in the server, but I am not sure why that would have changed, as there have been no hardware changes for the past 12 months.

  • Workaround, if any:
None - I have not found a way to get the server to start. It is not able to start even with recovery.target or emergency.target added to the GRUB config line via the ReaR boot screen (TAB at the selection).

  • Attachments, as applicable: Images captured from the screen

Boot_from_ReaR

Bootfail_ReaR

pcahyna commented at 2023-06-26 09:37:

Does the problem appear when booting the ReaR rescue system, or after a recovery when booting the recovered system?

pcahyna commented at 2023-06-26 11:08:

I suppose that "the server fails to boot using either the backup or backup with Autorecover" means booting the rescue image, but this is not entirely clear. In that case, you should not add emergency.target or rescue.target to the kernel command line, because the rescue image does not support those targets AFAIK. I suspect that the getty messages are related to a terminal that cannot be found, but getty.service is referred to only as getty@tty0.service, and tty0 should always exist ...

GreasyMonkee commented at 2023-06-26 12:05:

Correct - it is whilst trying to recover the server due to a "collection of things going wrong" with the DNF5 upgrade, php8.2, etc. - so my decision was "well, I have three back-ups, so wipe the disk array and re-load", but lo and behold, none are working......

I have just tried all the saved images on another machine that has a functional load of Fedora 37 Server, and the same errors occur on that machine.

So given that the backups also fail on a second machine, in the same way, it does not appear to be a hardware or Array configuration issue on my main machine.

pcahyna commented at 2023-06-26 12:20:

Have you ever booted the rescue image successfully on that server in the past?

And what happens when you don't add the emergency.target part to the command line?

pcahyna commented at 2023-06-26 12:21:

I suppose that you could ssh to the rescue system even if getty on its console does not start.

pcahyna commented at 2023-06-26 12:22:

and indeed, it does not look related to the RAID array change.

GreasyMonkee commented at 2023-06-26 12:29:

Have you ever booted the rescue image successfully on that server in the past?

Yes, I have restored from a ReaR backup on multiple occasions on the same hardware, including after re-building the disk array when additional disks were added.

And what happens when you don't add the emergency.target part to the command line?

The last logs I can see show that sysinit.service fails, and it appears to be related to the fact that getty@.service is failing; the machine locks up and will not respond to any keyboard commands (as expected for a getty problem).

GreasyMonkee commented at 2023-06-26 12:33:

I suppose that you could ssh to the rescue system
even if getty on its console does not start?

That is a good question - I will need to explore that. I am no guru on SSH and such, so any guidance on how that could be done would be welcome. The machine has no OS or configuration on it, so how could I get the rescue image to the dead machine?

pcahyna commented at 2023-06-26 12:36:

The rescue image is the thing that you are booting, so I see you have been able to get that to the machine. After you boot, try to ssh to the machine; it could respond even if the console is locked up (assuming that you have the network configured, of course).
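
For reference, ssh access to the rescue system is something ReaR can be set up for when the rescue image is created; a minimal sketch of /etc/rear/local.conf settings follows (the variable names are ReaR's, the values are only examples, and this of course only helps for rescue images made after the change, not for an existing one):

# sketch only - enable root ssh login and DHCP networking in future rescue images
SSH_ROOT_PASSWORD="SomeRescuePassword"
USE_DHCLIENT="yes"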

sysinit.service fails

do you have the error messages from sysinit.service? It should run even before getty is started.

GreasyMonkee commented at 2023-06-26 12:58:

I will try to SSH into it later today, tied up with real work at the moment.

sysinit.service only gave a fail message, I tried to get more verbosity to see more clues, but I could not get any more.

GreasyMonkee commented at 2023-06-26 15:16:

I attempted to SSH into it from another device (a laptop); there is no network up after 5 minutes (ping: Host unreachable).

Below is the image from the final messages seen in the startup, where it just "stops".

Final fail-point

GreasyMonkee commented at 2023-06-26 15:17:

Any ideas from the team would be greatly appreciated.

pcahyna commented at 2023-06-26 15:59:

I would try passing some systemd debug options on the kernel command line to find out what's wrong, e.g. systemd.log_level=debug systemd.log_target=console. I have never used this though - I have never before seen a case where ReaR's sysinit.service fails and even getty.service does not work.
Note that Failed to resolve instance name in DefaultInstance="%I": Invalid slot does not appear anymore - I suspect it was related to your attempt with emergency.target.
By the way, do you have a way to attach a serial console to the system?
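
As an illustration, the edited kernel line could look like the sketch below (the existing parameters are left untouched; only the two debug options suggested above are appended):

# at the ReaR boot menu, edit the kernel line (TAB in syslinux, 'e' in GRUB) and append:
<existing kernel parameters> systemd.log_level=debug systemd.log_target=console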

GreasyMonkee commented at 2023-06-26 17:05:

I am running Fedora headless, through the serial console - I share office space with the server cabinet.......

I shall see what I can get on the console as a capture, but my sysadmin skills are very poor (I still consider myself a "noob").

pcahyna commented at 2023-06-26 17:15:

Ah, the screen pictures are from the serial console? I thought that it was the PC console (monitor on VGA + keyboard). And what is the serial console device, is it ttyS0? In that case, edit the kernel command line to remove console=tty0 (because tty0 is the PC console, not the serial console).

GreasyMonkee commented at 2023-06-26 17:34:

Yes - sorry, my bad for not being clear...... I checked another Fedora server that is all at default settings; the serial terminal is tty1, so I am assuming that it should be the same on the one I am trying to resurrect, as I had not changed it from the default.

GreasyMonkee commented at 2023-06-26 18:44:

I have managed to get my terminal to scroll back through the logs..... finally.

Below it can be seen that the problem with DefaultInstance="%I" is still present when the target is not set to emergency.target or recovery.target.

This appears to be the problem, as it is causing /etc to not be populated, and it fails to start.

Startup_Page0

Startup_Page1

pcahyna commented at 2023-06-27 08:45:

I checked notes from my experiments with the ReaR rescue image bootup... there were messages

systemd[1]: /usr/lib/systemd/system/getty@.service:46: Failed to resolve instance name in DefaultInstance="%I": Invalid slot 
systemd[1]: /usr/lib/systemd/system/getty@.service:46: Failed to resolve instance name in DefaultInstance="%I": Invalid slot 
systemd[1]: Failed to populate /etc with preset unit settings, ignoring: Invalid slot

and the rescue system was working properly nevertheless. From this I conclude that the messages are harmless and not the source of your problem.
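
For reference, the message comes from the [Install] section of the getty@.service template shipped in the rescue system; a purely illustrative fragment of such a section is shown below (the exact file is linked above, this is not the verbatim content):

[Install]
WantedBy=getty.target
DefaultInstance=%I

systemd cannot expand %I when it parses the template itself (there is no instance name yet), which is also why populating /etc with preset unit settings reports the same "Invalid slot"; since the rescue system only ever starts the unit as getty@tty0.service, the message is cosmetic.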

pcahyna commented at 2023-06-27 08:47:

serial terminal is tty1

tty1 is not a serial terminal, it is the first virtual console. So, are you using a serial terminal or not?

GreasyMonkee commented at 2023-06-27 09:18:

My apologies - I am not a Sys Admin, so maybe I am using the wrong terminology.

I know that I have a VGA terminal plugged into the multi-pin D socket on the server box - it has always worked, and when I run the command "tty" on a different server, running Fedora 37, it replies "/dev/tty1". I assume that the system with the problems I am experiencing is likewise, as it was set up and configured in the same manner.

Thanks for all of your input and your notes/experiments - I will consider that not to be the source of my problems.

I have tried to load the backups (all three, from different dates) onto the secondary server; I get the same result, so I conclude it is not something related to hardware.

I will need to keep digging.

pcahyna commented at 2023-06-27 12:55:

Have you tried the systemd.log_level=debug systemd.log_target=console parameters to get more details on what's wrong?

GreasyMonkee commented at 2023-06-27 13:02:

When I tried it, the console did not show anything - probably because I was not looking at the "console" but at something else (I need to investigate more and see what I can pull up).

I am sure the answers are there, just that I am not looking in the right place......

pcahyna commented at 2023-06-27 13:05:

Maybe only systemd.log_level=debug then? Also, try deleting all the console=ttySwhatever stuff, as obviously you are not using a serial console.
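
As an illustration of what removing the console= entries means (the surrounding parameters are placeholders, not the exact ReaR defaults):

# before:  <other parameters> console=ttyS0,115200 console=tty0
# after:   <other parameters> systemd.log_level=debug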

GreasyMonkee commented at 2023-06-27 22:50:

Small steps forward.....

On another HP server I ran ReaR and got quite a few errors about various files not found, etc.; however, that gave me another ReaR backup file that I could try on my main machine, and YES, it booted up :-)

Also, I found the backup logs from the three backups that fail, so now that I have a "recipe" for a backup that has worked, I will have to go through line by line to see where things are going wrong.

One thing of note: in the backup that worked, there were no "console=" entries in the bootloader kernel line - I am baffled as to where they came from or why they appear in the other three backups.

I also manually set the VGA mode setting "vga=" to 795, which gave a much more readable set of text for the logs.

No_CONSOLE_VGA795

From here it looks like the first thing that fails is systemd-udevd.service.

Thank you very much pcahyna for nudging me along the path to a point where I am starting to see the "forest instead of the trees"; with the working and non-working logs, hopefully I can make progress toward resolving this issue.

UPDATE: I found this Red Hat issue

https://access.redhat.com/solutions/6317011

which looks remarkably similar; however, I do not know how I might be able to edit the .conf file that is referred to when it is within a backup - any suggestions on how this may be done would be greatly appreciated.

I will leave this open at present, so I can post updates or ask further questions on this specific topic.

Cheers.

GreasyMonkee commented at 2023-06-28 12:26:

Is there any way of getting the machine to start (using a Fedora live USB or similar) to be able to access/read the journalctl or systemctl messages whilst trying to boot off the ReaR recovery USB?

With all of the console= entries deleted, I have tried systemd.log_level=debug and also init=/bin/bash to try and see further messages than those I have shown above, but I still get the same set of messages returned, without any further verbosity or way of accessing further information.

GreasyMonkee commented at 2023-07-03 15:36:

@jsmeix - Seeking your support for this problem.

I have three ReaR back-ups, which appear to have all been successful; however, after a complete hard-disk array re-creation (due to later problems with DNF and PHP, not related to this issue), none of the backups will restore to the same disk arrangement from which the back-ups were taken.

I have had no issue at all with restoring backups in the same manner over many years with ReaR, on the same machine, as well as previous servers, so it is confusing to me why it will now not restore.

I am not able to get any debug logs, however I do have the complete logs that were saved on the USB drive with the created backup (rear-servnet.log), the latest instance is attached for your reference.

rear-servnet.log

If there is any further information required, or if I can assist your efforts in any way, please feel free to contact me at gathrees1960@gmail.com - your knowledge and expertise are always appreciated.

pcahyna commented at 2023-07-04 10:42:

Hi @GreasyMonkee sorry for the late reply. If you have an image that boots on your server, you could restore your server by using /var/lib/rear/layout and the backup (backup.tar.gz if you are using NETFS) from your non-working image inside your working image. (/var/lib/rear/layout encodes the storage layout of the system that you need to recreate and backup.tar.gz contains the backup of the files.)
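
One possible way to do that, sketched under the assumption that the working rescue system has been booted, the old (non-working) USB medium is mounted read-only at a hypothetical /mnt/old, and its rescue initrd and backup.tar.gz can be found on it:

# extract only the layout description from the non-working rescue initrd
mkdir /tmp/old-initrd && cd /tmp/old-initrd
zcat /mnt/old/path/to/initrd.cgz | cpio -idv 'var/lib/rear/layout/*'
# replace the booted rescue system's layout with the one from the old image
cp -a var/lib/rear/layout/. /var/lib/rear/layout/
# make sure the restore uses the old backup.tar.gz (e.g. adjust BACKUP_URL in /etc/rear/local.conf
# inside the rescue system if the old backup sits on a different medium), then:
rear recover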

pcahyna commented at 2023-07-04 11:27:

UPDATE: I found this Red Hat issue

https://access.redhat.com/solutions/6317011

which looks remarkably similar, however I do not know how I might be able to edit the .conf file that is refered to when it is within a backup - any suggestions on how this may be done would be greatly appreciated.

It should not be the case - ReaR uses this nsswitch.conf file: https://github.com/rear/rear/blob/master/usr/share/rear/skel/default/etc/nsswitch.conf

jsmeix commented at 2023-07-04 13:34:

I am not a Red Hat or Fedora user so I cannot actually help
when issues are specific for Red Hat or Fedora.

This issue is not really clear to me
but as far as I see it seems it is about
that the ReaR recovery system fails to start up
on replacement hardware that is somewhat different
(in particular with a re-created RAID array)
compared to the original system where "rear mkbackup" was run.

What happens is that on the replacement hardware
the ReaR recovery system boot menu is shown and
GRUB loads the ReaR recovery system kernel and initrd
and the ReaR recovery system kernel starts but then
during systemd startup phase various things go wrong
with various inexplicable systemd error messages.

A very generic thing that I am wondering about is
whether or not that replacement hardware can
successfully start up any other installation system?

For example an original Fedora installation image
or an installation image of another Linux distribution?

@GreasyMonkee
does it work - only as a test - to install Fedora
from scratch with an original Fedora installation image
on that replacement hardware?

If this works, you may (as some kind of "last resort")
install the exact right Fedora version that matches exactly
the one on your original system where "rear mkbackup" was run
from scratch with an original Fedora installation image
on that replacement hardware and afterwards "overwrite" that
by restoring the backup of the files that you made with ReaR.
Probably this may not result in a perfectly clean system
but perhaps (or hopefully) it may result in a usable system.
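
A rough sketch of that "overwrite" step, assuming the ReaR backup medium is mounted at a hypothetical /mnt/rear and the directory holding backup.tar.gz on it is known (the restored paths below are only examples - restoring blindly over a running system's /etc or /boot is risky):

# inspect the archive first
tar -tzf /mnt/rear/<backup-dir>/backup.tar.gz | less
# restore only selected trees onto the freshly installed system
tar -C / -xzf /mnt/rear/<backup-dir>/backup.tar.gz var/www home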

By the way:
In your
https://github.com/rear/rear/files/11938341/rear-servnet.log
it seems you run "rear mkbackup"
but no "backup" stage is run because:

# grep ' Running ' rear-servnet.log

2023-06-04 18:45:09.578362732 Running rear mkbackup (PID 3483910)
2023-06-04 18:45:09.692531050 Running 'init' stage
2023-06-04 18:45:09.731380685 Running workflow mkbackup on the normal/original system
2023-06-04 18:45:09.758998908 Running mkbackup workflow
2023-06-04 18:45:09.766836060 Running 'prep' stage
2023-06-04 18:45:16.104897140 Running 'layout/save' stage
2023-06-04 18:45:22.648027996 Running 'rescue' stage
2023-06-04 18:45:26.676248229 Running 'build' stage
2023-06-04 18:46:20.800012709 Running 'pack' stage
2023-06-04 18:46:46.039429233 Running 'output' stage

Perhaps all is well - but at least it looks strange.
In contrast for example on my system I get:

# usr/sbin/rear mkbackup

# grep ' Running ' var/log/rear/rear-linux-h9wr.log
2023-07-04 15:27:34.072253131 Running rear mkbackup (PID 31280 date 2023-07-04 15:27:33)
2023-07-04 15:27:35.685224693 Running 'init' stage
2023-07-04 15:27:36.041051624 Running workflow mkbackup on the normal/original system
2023-07-04 15:27:36.068516253 Running mkbackup workflow
2023-07-04 15:27:36.077492471 Running 'prep' stage
2023-07-04 15:27:37.209541443 Running 'layout/save' stage
2023-07-04 15:27:40.338678498 Running 'rescue' stage
2023-07-04 15:27:42.114612502 Running 'build' stage
2023-07-04 15:28:09.402770255 Running 'pack' stage
2023-07-04 15:28:22.380478272 Running 'output' stage
2023-07-04 15:28:23.611574736 Running 'backup' stage
2023-07-04 15:28:33.904032065 Running exit tasks

pcahyna commented at 2023-07-04 13:50:

it seems you run "rear mkbackup"
but no "backup" stage is run because:

I suspect the log ends here because it is at this point that it gets copied to the medium.

pcahyna commented at 2023-07-04 13:54:

which is here:
https://github.com/rear/rear/blob/0ab7f19455c599335beedda7300ae5ea752fab71/usr/share/rear/output/USB/Linux-i386/830_copy_kernel_initrd.sh#L16

GreasyMonkee commented at 2023-07-04 14:14:

This issue is not really clear to me
but as far as I see it seems it is about
that the ReaR recovery system fails to start up
on replacement hardware that is somewhat different
(in particular with a re-created RAID array)
compared to the original system where "rear mkbackup" was run.

Sorry, I may not have been clear in my writing - the hardware layout is identical, nothing was changed. It was just that the array was cleared and re-created to remove the image, which was damaged (due to the PHP upgrade and the new DNF5 - which has no roll-back for history in the initial release......).

I have installed a clean Fedora 37 version which is working as a test. I had tried to install a Fedora Rawhide (Fedora 39) image, but there are some apparently known issues with the GRUB bootloader (I tried different versions of RUFUS.exe to create the bootable image; all result in a "452 - out of range" error).

As suggested, as a last resort I am moving everything over from the de-compressed "backup.tar.gz" from the latest backup to this workable version, so that I can hopefully recover most of the functions.

I will also try pcahyna's suggestions as well.

Thanks to you both for your time; I shall update if I find anything.

GreasyMonkee commented at 2023-07-04 14:21:

An "off-topic" rant by someone else who is also experiencing, and troubleshooting the "452 - out of range" issue
https://github.com/pbatard/rufus/issues/2233

jsmeix commented at 2023-07-04 14:44:

@pcahyna
thank you so much for the explanation
why that log file looks bad but is OK!

I will fix that piece of code (I know it but I had no idea
that this piece of code has such bad effects when one does
not have it in mind when looking at a log file from a user)
to make it clear that this log file is an unfinished stub.

pcahyna commented at 2023-07-04 14:48:

@jsmeix does the file need to be an unfinished stub? Perhaps the copy could happen later, immediately before unmounting the medium?

pcahyna commented at 2023-07-04 14:50:

Perhaps the copy could happen later, immediately before unmounting the medium?

... which would not help much, because, IIUC, the "rescue" stage unmounts it and "backup" mounts it again. Sigh.

pcahyna commented at 2023-07-04 14:53:

I believe that the log is being copied again later, here:
https://github.com/rear/rear/blob/0ab7f19455c599335beedda7300ae5ea752fab71/usr/share/rear/output/default/950_copy_result_files.sh#L46

jsmeix commented at 2023-07-04 15:03:

Via
https://github.com/rear/rear/commit/81db19ff4f4f2e9a97b195834ae4a4226a1342c3
I fixed things for now with a minimal code change in
output/USB/Linux-i386/830_copy_kernel_initrd.sh
i.e. simply have the

LogPrint "Saving current (unfinished) $RUNTIME_LOGFILE as $USB_PREFIX/$logfile_basename"

before the current (unfinished) RUNTIME_LOGFILE gets copied
so the last message in the copied log file should
hopefully sufficiently indicate that it is
an unfinished copy of RUNTIME_LOGFILE.
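
In other words, a simplified sketch of the idea (not the verbatim upstream change; the destination path here is an assumption):

LogPrint "Saving current (unfinished) $RUNTIME_LOGFILE as $USB_PREFIX/$logfile_basename"
cp "$RUNTIME_LOGFILE" "$BUILD_DIR/outputfs/$USB_PREFIX/$logfile_basename"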

GreasyMonkee commented at 2023-07-04 18:29:

From the other thread I referenced from Pete Batard, there are some serious issues with Fedora (and Ubuntu) together with GRUB, so that may well be precipitating the issue.

If it is indeed an issue with GRUB, is there a way that the modified GRUB files can be "inserted" or such for an existing ReaR backup?

I know it is a long-shot, but I am prepared to give anything a try.......

GreasyMonkee commented at 2023-07-09 09:08:

UPDATE 08-07-2023.

I created a completely new installation of Fedora Rawhide (FC39), and added a couple of applications, plus the ones required for running ReaR.

I then created a ReaR backup using a new USB drive, formatted using "rear format". When I attempted to boot from the created backup, the exact same behaviour as documented above occurred.

Fedora Rawhide is still using Relax-and-Recover version 2.6; I am not sure if there is an issue from the use of this slightly older release (the latest is 2.7, according to the ReaR documentation I have found).

The config file used was site.conf, with the following options set:

OUTPUT=USB
BACKUP=NETFS
BACKUP_URL=usb:///dev/disk/by-label/REAR-000
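
For completeness, the commands assumed to have been used with that configuration (where /dev/sdX stands for the new USB stick; "rear format" labels it REAR-000 by default, matching the BACKUP_URL above):

rear -v format /dev/sdX
rear -D mkbackup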

Attached below is the debug log (using -D to capture all the messages).

20230708-1723_servnet_rear_debug.log

I can provide the complete backup file, if required for de-bugging purposes.

I am submitting this in the hope that whatever the issue is may be identified and corrected - I am no longer hopeful of recovering the data from the original backups where the problem first occurred, but I want to save others the same pain of rebuilding a system from "scratch" when the backups do not work.

jsmeix commented at 2023-07-10 06:06:

@GreasyMonkee
thank you for reproducing it with a
completely new installation of Fedora Rawhide (FC39)!

I assume when you reproduced it in your "UPDATE 08-07-2023"
https://github.com/rear/rear/issues/3017#issuecomment-1627655311
you did not do some "disk array re-creation" on the
replacement hardware (or replacement virtual machine)
where you booted the ReaR recovery system?

I.e. I assume when you reproduced it in your "UPDATE 08-07-2023"
your replacement system was not somewhat different
compared to the original system where "rear mkbackup" was run?

When my assumption is true it means this issue changes
from a special one
"ReaR image(s) will not load after disk array re-creation"
to a generic one
"Recovery system fails to start up on FC39 (systemd errors)"

GreasyMonkee commented at 2023-07-10 10:58:

you did not do some "disk array re-creation" on the
replacement hardware (or replacement virtual machine)
where you booted the ReaR recovery system?

Correct - no changes or anything else between the taking of the logs and attempting to reboot with the ReaR backup.

When my assumption is true it means this issue changes
from a special one
"ReaR image(s) will not load after disk array re-creation"
to a generic one
"Recovery system fails to start up on FC39 (systemd errors)"

Yes, feel free to change the name of the issue to accurately reflect the situation (or do I need to do that?)

pcahyna commented at 2023-07-19 10:51:

I am able to reproduce the issue myself now - not sure yet what the root cause is.

pcahyna commented at 2023-07-20 17:24:

ok I know what's wrong. After mounting /sys in the recovery system, all systemd units fail with status=219/CGROUP. Quick and dirty fix is here: https://github.com/pcahyna/rear/pull/12/commits/29c6f3c57727c4c6bb8cbeb0d41baca56145c6b7

pcahyna commented at 2023-07-20 17:27:

FTR: I debugged this by setting console=tty1 systemd.debug-shell=1 systemd.unit=getty@tty1.service on the kernel command line, and when I got a shell, I started the units one by one.
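
The "one by one" step boils down to ordinary systemctl calls from the debug shell (with systemd.debug-shell=1 a root shell is normally available on tty9); the unit name below is only an example:

systemctl start systemd-udevd.service
systemctl status systemd-udevd.service
systemctl --failed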

GreasyMonkee commented at 2023-07-20 18:43:

ok I know what's wrong. After mounting /sys in the recovery system, all systemd units fail with status=219/CGROUP. Quick and dirty fix is here: https://github.com/pcahyna/rear/commit/29c6f3c57727c4c6bb8cbeb0d41baca56145c6b7

Great :-)

I will test that change later tonight and let you know how it goes.

GreasyMonkee commented at 2023-07-20 21:37:

To clarify: did you make the change on a functional system, create a new backup, and were then able to restore from that, or

did you modify the content of the backup.tar.gz file of a backup which failed to restore, and were then able to successfully restore it?

I tried the latter, and it was not successful.

pcahyna commented at 2023-07-21 09:52:

I made a completely new backup. To use your existing backup, you should not modify the content of backup.tar.gz, but of the initrd. Let me find out how to do this.

pcahyna commented at 2023-07-21 10:16:

So the recipe is: find initrd.cgz from your bootable medium and unpack it somewhere:

cd /var/tmp
mkdir rear-root
cd rear-root
zcat ~/initrd.cgz | cpio -i

change the file

vi etc/scripts/boot

and repack the initrd

find . ! -name "*~" | cpio -H newc --create --quiet | gzip > ~/initrd-new.cgz

replace initrd.cgz on your bootable medium by initrd-new.cgz and boot (keep a backup copy of the original bootable medium of course).

GreasyMonkee commented at 2023-07-21 15:12:

Many thanks for the procedure to go through, will run it later tonight and let you know :-)

pcahyna commented at 2023-07-26 16:07:

@GreasyMonkee did you succeed?

GreasyMonkee commented at 2023-07-26 16:24:

@GreasyMonkee did you succeed?

Yes, with some twists and turns......

By changing the initrd contents, the checksum check fails.

For some reason I could not nail down, I was getting a situation where no bootloader was being created on the recovered system.

To get around that issue, I then loaded another one of the three backups from the menu selection ReaR shows following the initial boot.

I have confirmed that with the modification of "commenting out" the line in the skeleton file, and generating a new backup, the backup restores correctly without any modification of the saved initrd file.

pcahyna commented at 2023-07-27 15:24:

By changing the initrd contents, the checksum check fails.

Oops, I had not thought of that. Otherwise, are you satisfied with the outcome now?

GreasyMonkee commented at 2023-07-27 19:48:

Yes.

Thank you very much for your assistance, with your help I have managed to:

  • Recover files and databases that were critical
  • Find a resolution to address the issue which caused the problem

If I understood correctly, there has been a change in the Fedora Rawhide package, to ReaR 2.6-11.fc39?

I am OK for this topic to be closed.

pcahyna commented at 2023-07-28 09:26:

If I understood correctly, there has been a change in the Fedora Rawhide package, to ReaR 2.6-11.fc39

There has been no meaningful change, what you are seeing is just a dummy Release number bump. The Fedora maintainer is unresponsive.

I will reopen the issue, just not as a support question, because it is still a bug.

pcahyna commented at 2023-07-28 09:27:

Glad to hear that you managed to recover the system.

GreasyMonkee commented at 2023-07-29 07:48:

There has been no meaningful change, what you are seeing is just a dummy Release number bump. The Fedora maintainer is unresponsive.

I will reopen the issue, just not as a support question, because it is still a bug.

If you wish, I can open a Bugzilla report for it and reference this thread?


[Export of Github issue for rear/rear.]