Red Hat Changing How They Handle Their Minor Release Betas


  • #11
    Originally posted by aviallon View Post

    sssd caching probably?
    Probably, but I haven't found the solution yet. One of the strange symptoms is that login via ssh is on par with RHEL, but the subsequent sudo is definitely slower. I've cross-checked versions, configs, release notes, and issues without finding the cause, and I return every couple of months to invest a bit of time searching for it. I guess next time it will be a Wireshark session.

    Comment


    • #12
      Makes no sense.

      A paying customer will most likely be running GA releases of RHEL. Which means using RHEL in production or testing environments.

      And running a RHEL beta in a testing environment is pointless if the beta is going to differ from the GA. What works in testing on the beta won't necessarily work on the GA, especially if things change in between.

      Just test on GA and be done with it.

      Comment


      • #13
        Originally posted by pWe00Iri3e7Z9lHOX2Qx View Post
        I get what you are saying and generally agree, but I think there's a bit more nuance here. The only image currently available is from today, so the chances are approaching 0% that anyone who comments in this forum has tested it. If the builds are daily, what would an answer like "I tested one two weeks ago and it didn't work" really mean to the one asking the question? Maybe that particular build had a problem, but > 90% of the time they are fine. My point was mainly that with bleeding edge dev builds, it's probably more useful to just try it yourself. And I acknowledge a huge caveat there for people with slow / metered / expensive internet connections where downloading a large ISO just to screw around with isn't feasible.
        Fair enough, sorry to have brought it up.

        I did download both the small and the larger DVD ISO today and tried to fire them up in vbox, but neither would boot for me. I haven't investigated further yet. I'll burn the larger ISO to a USB later and try to boot a laptop. Maybe CentOS Stream doesn't like vbox? I'm not familiar with CentOS Stream and I don't see where the documentation is for these CentOS Stream 10 ISOs.
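
        For what it's worth, writing the DVD ISO to a USB stick is usually just a straight dd. A minimal sketch follows; the ISO filename and /dev/sdX are placeholders, so double-check the target device with lsblk first:
        Code:
        sudo dd if=CentOS-Stream-10-latest-x86_64-dvd1.iso of=/dev/sdX bs=4M status=progress conv=fsync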

        Comment


        • #14
          Originally posted by slalomsk8er View Post

          Probably, but I haven't found the solution yet. One of the strange symptoms is that login via ssh is on par with RHEL, but the subsequent sudo is definitely slower. I've cross-checked versions, configs, release notes, and issues without finding the cause, and I return every couple of months to invest a bit of time searching for it. I guess next time it will be a Wireshark session.
          I needed to add
          Code:
          Defaults !fqdn
          to /etc/sudoers (via visudo) on Arch to keep a timed-out NFS share and other network weirdness from stalling sudo. You should also check /etc/hosts and make sure your actual hostname is in there, though.
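
          For illustration, a typical /etc/hosts layout that avoids that kind of lookup stall; the hostname myhost.example.com below is a placeholder, not an actual value from this thread:
          Code:
          127.0.0.1   localhost
          ::1         localhost
          127.0.1.1   myhost.example.com myhost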

          Comment


          • #15
            I hope for the future they consider using A/B partitioning schemes. That seems like it's coming out of nowhere, but I've always wanted an OS that supported A/B schemes so I could run setups like A = Stable, B = Testing, C = Unstable, and so on. If there are going to be fewer ISOs, and a live ISO can already be seen as a fixed B or C partition for testing purposes, it'd be nice if the OS itself supported A/B for backup and testing.

            Bonus points if they can do that with systemd-homed support so that the B/C/D volumes use a copy of the A $HOME and we don't footgun ourselves and hose our $HOME while doing bare-metal testing. Even better: if testing was successful, we can trigger an update on the A volume from B, as well as "backport" any updated configs from B to A, before KVM-ing into the updated A volume. It's 2024; we shouldn't be rebooting for these things. Hand things from A to B and then B to A. We'll call it the ABBA method.

            Comment


            • #16
              Originally posted by skeevy420 View Post
              I hope for the future they consider using A/B partitioning schemes. That seems like it's coming out of nowhere, but I've always wanted an OS that supported A/B schemes so I could run setups like A = Stable, B = Testing, C = Unstable, and so on. If there are going to be fewer ISOs, and a live ISO can already be seen as a fixed B or C partition for testing purposes, it'd be nice if the OS itself supported A/B for backup and testing.

              Bonus points if they can do that with systemd-homed support so that the B/C/D volumes use a copy of the A $HOME and we don't footgun ourselves and hose our $HOME while doing bare-metal testing. Even better: if testing was successful, we can trigger an update on the A volume from B, as well as "backport" any updated configs from B to A, before KVM-ing into the updated A volume. It's 2024; we shouldn't be rebooting for these things. Hand things from A to B and then B to A. We'll call it the ABBA method.
              That already exists; it's called Boot Environments. You need a filesystem like ZFS or Btrfs, and it currently works best on FreeBSD, but you can do something vaguely similar on Linux with Btrfs and snapper. The papercuts are many, however, such as not being able to roll back in order to move forward without mucking about with the permissions on the snapshots.
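
              As a rough sketch of the two workflows being compared (the boot-environment and snapshot names are placeholders; the commands assume a root-on-ZFS FreeBSD install and an openSUSE-style Btrfs/snapper layout respectively):
              Code:
              # FreeBSD ZFS boot environments
              bectl create testing        # clone the running boot environment
              bectl activate testing      # make it the default for the next boot
              bectl activate default      # point back at the old BE if the experiment fails

              # Linux, Btrfs + snapper (openSUSE-style rollback)
              snapper list                # find the snapshot number to return to
              snapper rollback 42         # branch a writable subvolume off snapshot 42 and boot it by default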

              Comment


              • #17
                Originally posted by Luke_Wolf View Post

                That already exists; it's called Boot Environments. You need a filesystem like ZFS or Btrfs, and it currently works best on FreeBSD, but you can do something vaguely similar on Linux with Btrfs and snapper. The papercuts are many, however, such as not being able to roll back in order to move forward without mucking about with the permissions on the snapshots.
                I'm not trying to sound like an asshole here, but I know they exist, what they're called, and the file systems they can be used with. That's part of the reason I use a ZFS root. Practically no Linux distribution considers them outside of the ones that offer Btrfs with Snapper snapshots, and that just isn't the same thing. Until an EL or well-funded distribution starts offering that form of update management, boot environments and live patching combined, it's doubtful that other distributions will.

                Comment


                • #18
                  Originally posted by skeevy420 View Post

                  I'm not trying to sound like an asshole here, but I know they exist, what they're called, and the file systems they can be used with. That's part of the reason I use a ZFS root. Practically no Linux distribution considers them outside of the ones that offer Btrfs with Snapper snapshots, and that just isn't the same thing. Until an EL or well-funded distribution starts offering that form of update management, boot environments and live patching combined, it's doubtful that other distributions will.
                  I'd argue openSUSE-style snapper works much better than "real" Android-like A/B partitioning. For one thing, it saves you from having to update another partition that's older than the current one, which pretty much guarantees the update takes longer (unless you're doing images like Silverblue/Kinoite anyway).

                  What I'm trying to say is: what about A/B partitioning is so good that it outweighs the benefits of snapshots, which are plenty (less disk space, faster, multiple snapshots)?

                  Btw I'm really not against A/B in general; I've even set up my laptop like that once: shared home and bootloader, update the other system through chroot and set the next boot, then make it permanent if the system reaches graphical.target. Worked well enough, I guess.
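
                  A minimal sketch of that flow, assuming GRUB with GRUB_DEFAULT=saved; the device label, menu entry names, and package manager below are placeholders rather than what was actually used:
                  Code:
                  # mount and update the inactive root
                  mount /dev/disk/by-label/ROOT_B /mnt
                  for d in proc sys dev; do mount --rbind /$d /mnt/$d; done
                  chroot /mnt dnf upgrade -y

                  # boot slot B exactly once; a failed boot falls back to slot A
                  grub2-reboot 'Slot B'    # grub-reboot on non-EL distros
                  reboot

                  # from inside B, promote it only if graphical.target was actually reached
                  systemctl is-active graphical.target && grub2-set-default 'Slot B'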
                  Last edited by fallingcats; 21 February 2024, 01:51 PM.

                  Comment


                  • #19
                    The theoretical advantage of A/B partitioning is that since the partitions are completely separate filesystems, and the known-good partition is mounted read-only and never intentionally modified, it's more robust against unknown unknowns (flaky hardware, for example). But that seems more valuable for client devices that usually lack ECC and active maintenance, and where still being able to boot and access email and emergency services even when updates fail is more of a benefit than a liability.

                    Comment


                    • #20
                      Originally posted by fallingcats View Post

                      I'd argue openSUSE-style snapper works much better than "real" Android-like A/B partitioning. For one thing, it saves you from having to update another partition that's older than the current one, which pretty much guarantees the update takes longer (unless you're doing images like Silverblue/Kinoite anyway).

                      What I'm trying to say is: what about A/B partitioning is so good that it outweighs the benefits of snapshots, which are plenty (less disk space, faster, multiple snapshots)?

                      Btw I'm really not against A/B in general; I've even set up my laptop like that once: shared home and bootloader, update the other system through chroot and set the next boot, then make it permanent if the system reaches graphical.target. Worked well enough, I guess.
                      In a perfect world with OpenZFS, A/B partitions would use reflinks, so they shouldn't take up much more space than snapshots. Bcachefs and Btrfs ought to be able to utilize reflinks like that, too. With modern file systems, the old way of using multiple partitions for A/B isn't really necessary. According to the RHEL manuals, using Btrfs and Snapper is unsupported on production systems.

                      Anyway, the only reason I care is that if they're going to make fewer ISOs per year, it makes sense for there to be an alternate way to test upcoming releases and updates on your physical hardware without risking your current setup.

                      Am I the only one here who runs a live ISO on their hardware before running dist upgrades?

                      Comment
