A Look At The Most Promising Next-Gen Linux Software Update Mechanisms


  • #31
    This is useless for mobile and embedded devices. The REAL future is NixOS and the Nix package manager.

    Comment


    • #32
      Originally posted by F.Ultra View Post
      You cannot do this on Windows; there you use the MoveFileEx() function together with the MOVEFILE_DELAY_UNTIL_REBOOT flag, which replaces the file on the next boot if it was busy when you tried to overwrite it.
      This happens because NTFS isn't POSIX-compliant, btw.
      POSIX filesystems let you unlink and replace (or delete) a running/open file, and on Linux this happens all the time.
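The unlink-and-replace semantics described above can be sketched in a few lines of shell (a minimal demonstration for Linux; the file names are invented):

```shell
#!/bin/sh
# Demonstrate POSIX rename semantics: replacing a file that is held open.
set -e
cd "$(mktemp -d)"

echo "old" > data.txt
exec 3< data.txt                # keep a descriptor on the original inode
echo "new" > replacement.txt
mv replacement.txt data.txt     # rename(2) is atomic even though data.txt is open

old_view=$(cat <&3)             # the held descriptor still sees the old content
new_view=$(cat data.txt)        # the path now resolves to the new inode
echo "fd sees: $old_view / path sees: $new_view"
```

The old inode lives on until the last open descriptor is closed, which is exactly why a package manager can swap binaries and libraries under running processes.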

      Comment


      • #33
        Originally posted by sdack View Post
        *lol* You are absolutely right in assuming that we've always had reboots, but it's about reducing reboots. Yet you believe what RedHat is doing has to be right, because it's RedHat who is doing it. This is one of the oldest fallacies there are.

        Just tell me at which point you threw out system uptime in favour of getting the latest software and accepted reboots as part of your regular maintenance, because this is what you've done.

        And of course I must be living in an alternative universe, because what other reason could I have to disagree? Next thing you'll be telling me is that I'm unworthy of RedHat's glorious creations.

        The more RedHat becomes like Microsoft, the more its users become like Windows users. Think on that.
        The system uptime is not the holy number some people make it out to be. I run several servers for the same task/service and simply remove the one to be worked on from the pool, so uptime for the service as a whole is unchanged regardless of the uptime of each individual server.

        Applying updates in an atomic fashion, via say a pre-boot environment that guarantees the server will boot into a known and good state after an upgrade, has its benefits. The important part is the "known and good state"; such a thing is not guaranteed with an apt- or yum-style upgrade. Not to say that an apt- or yum-style upgrade necessarily creates an invalid state; I have yet to encounter one on any of my servers (and I use apt, not a pre-boot environment). But the mere possibility could be devastating for some types of services (think of life-or-death situations, or where a single error costs millions of dollars).
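The "known and good state" switchover can be illustrated with the classic atomic-symlink-flip pattern (a hedged sketch with made-up paths, relying on GNU mv's -T option; OSTree's actual mechanism is more involved):

```shell
#!/bin/sh
# Stage a complete new deployment, then activate it with one atomic rename,
# so any observer sees either the old tree or the new one, never a mix.
set -e
cd "$(mktemp -d)"

mkdir -p deploy-v2
echo "app 2.0" > deploy-v2/app     # stage the update off to the side

ln -sfn deploy-v2 current.tmp      # build the new pointer...
mv -T current.tmp current          # ...and flip it atomically with rename(2)

cat current/app
```

If anything goes wrong before the final rename, the live tree was never touched; that is the essence of an "atomic" upgrade.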

        Heck, there are environments where an upgraded server has to go through days of certification after each upgrade; rebooting the server will already have happened more than once there. And these are the types of customers Red Hat sells to (among others), the ones who would run screaming in the other direction if ever faced with something like Windows.

        Comment


        • #34
          Originally posted by starshipeleven View Post
          This happens because NTFS isn't POSIX-compliant, btw.
          POSIX filesystems allow to unlink and replace (or delete) a running/open file, and on linux this happens all the time.
          Yeah, it was a real eye-opener when I first tried Linux all those years ago: you could have one application writing to a file and another reading from it, delete the file, put another file there with completely new content, and those two applications would just go on as if the delete and replace never happened. It also took me a while to figure out why I could crash applications when "installing" new versions of shared libraries using cp...
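The cp pitfall mentioned at the end is easy to show: cp rewrites the existing inode in place, while mv renames a new inode over the old path (a small sketch with invented file names; stat -c %i is GNU coreutils):

```shell
#!/bin/sh
# Show why "installing" a shared library with cp breaks running programs:
# cp truncates and rewrites the SAME inode, mv swaps in a NEW inode.
set -e
cd "$(mktemp -d)"

echo "libfoo v1" > libfoo.so
ino_old=$(stat -c %i libfoo.so)
echo "libfoo v2" > staging.so
cp staging.so libfoo.so            # same inode: mapped copies see it mutate
cp_same=$([ "$ino_old" = "$(stat -c %i libfoo.so)" ] && echo yes || echo no)

echo "libbar v1" > libbar.so
ino_bar=$(stat -c %i libbar.so)
echo "libbar v2" > staging2.so
mv staging2.so libbar.so           # new inode: old mappings keep the intact original
mv_same=$([ "$ino_bar" = "$(stat -c %i libbar.so)" ] && echo yes || echo no)

echo "cp kept inode: $cp_same / mv kept inode: $mv_same"
```

A process that has the old library mmap'ed sees the cp-style overwrite happen underneath it, which is what crashes it; this is why install(1) and package managers unlink or rename instead.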

          Comment


          • #35
            Originally posted by sdack View Post
            "... similar to OSTree but does not require reboots to activate bundles."

            I am beginning to wonder about RedHat's management when I read that they are in support of this. Call APT old, but not needing to reboot for 99% of all updates has always been a big win for me. I hate how Windows always wants to reboot and forces people to close running applications, to log back in and to start over. And here we have RedHat supporting just this kind of crap.

            My picture of RedHat is changing and I'm hoping M$ is going to buy them up. I bet RedHat's management would be excited about the takeover and make them feel like a reboot.
            I'm sorry, but installing updates on a currently running system has always been something of a kludge on all distros, and while you can kinda get away with it if you're just running Linux as a minimal server, it's absolutely going to break the running system if you're doing something more complex. KDE SC, for example, always required a full reboot because the running system just flat out breaks, and KDE Frameworks or Plasma still do. If libinput gets updated, that's also a reboot; basically anything with complex interactions between software means a reboot. So "you never have to reboot with Linux"... no, that was always a stupid myth, and it will always result in a more and more broken system until you finally restart. I've used all the big-name distros at one point or another, and this is always what happens.

            That all said, I think everyone but PC-BSD and possibly Solaris is stupid in how they handle updates. Everyone else attempts to update the live running system, which as previously stated leads to breakage of the running system if it's doing anything complex. PC-BSD (and I presume, but don't know about, Solaris; I can't be bothered to install it into a virtual machine to check) instead takes a snapshot of the running system, installs the updates into this snapshot, and then you reboot into the new snapshot whenever it is convenient for you, as opposed to having a broken system dictate when you need to reboot (because updates never touch the live system). It also means that if the updates broke something, you reboot into the old snapshot and everything is back the way it was, no futzing about trying to roll anything back.
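The snapshot-then-update flow described above (ZFS boot environments on PC-BSD) can be mimicked with plain directories to show the idea; the directory names are invented, and cp -a stands in for a real snapshot/clone:

```shell
#!/bin/sh
# Sketch of a boot-environment update: clone the running system, update the
# clone, and point the "next boot" at it; the live system is never touched.
set -e
cd "$(mktemp -d)"

mkdir -p be-current
echo "pkg 1.0" > be-current/pkg    # the running system

cp -a be-current be-next           # stand-in for a ZFS snapshot/clone
echo "pkg 2.0" > be-next/pkg       # updates land only in the clone

ln -sfn be-next boot-target        # activate for whenever you choose to reboot

cat be-current/pkg                 # still 1.0: live system untouched
cat boot-target/pkg                # 2.0: what the next boot will run
```

Rolling back is just pointing boot-target back at be-current; nothing in the old environment was ever modified.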

            Comment


            • #36
              Originally posted by starshipeleven View Post
              Hint: It's targeted at servers, so "users" are certified system administrators doing scheduled maintenance, and when you do scheduled maintenance on a server you know it will be either rebooted or taken offline anyway.

              This won't. It's called "atomic" and does stuff from pre-boot environment for a reason.
              *lol* That's so backwards... Do you know why you are saying "does stuff ... for a reason"? Because you don't know what it does or why it does it. You assume that it's doing it, and this becomes your justification for it being good. The idea that what they are doing could be wrong simply doesn't occur to you, because you haven't thought that far yet, and so you find the thought of criticism a strange one. I'll tell you why they are going backwards: it's because they don't want to put the effort into uninstalling packages.

              Their approach is to only support installation routines, and when one fails, to roll back the file system. This of course only works when you also reboot the machine. It saves the effort of uninstalling packages and shutting down processes individually, and of having to test those routines, too. The price for their dumbed-down solution is having to do regular reboots. To anyone who finds this acceptable: please, reboot your Linux and insert a Windows DVD.

              We are, however, technology-wise much further along than what they are proposing as the future. We deliberately do not want the reboot method and reject it, because there are smarter, more strategic, less invasive and less disruptive ways of doing it. These are more complex methods, but this is why we choose Linux: because we don't shy away from complexity.

              Do take a look at the example scenario they've given. Ask yourself, who installs updates like this in a production environment?! Of course, there are countless Windows users who have no other choice and who can't help themselves, but the scenario itself is just utterly wrong. In a production environment you simply do not change what is working for you unless you have a very good reason to do so. The mere availability of an update cannot be that reason. Yet this is what they are proposing we should be doing in the future: install an update because it's there and, almost like tossing a coin, if it fails, "rollback and reboot".

              You don't need a degree in IT or any other science to understand how bad this concept is. Just ask yourself what you will tell your boss when an update broke the system and your company is now making a loss, because some message said "I'm an update. Please install me!" Hell, it doesn't even need to break the system; a drop-out of only 5 minutes can cost some companies millions. Give this a thought and you'll understand that what they are trying to sell is meant for the consumer and small-server market at best.

              As a system administrator you do not touch the production environment unless it's required. You want software which can smartly uninstall itself without interrupting the rest of the system, and you want this to be the dominant solution. Of course, you also keep independent backups of the systems, because you know you cannot base your job purely on trust and belief, and because you know that safety and security are things we have until we lose them, and we can lose them at any time with no warning at all.

              Trying to sell "rollback and reboot" to experienced UNIX/Linux system administrators is nothing more than a slap in the face.

              Comment


              • #37
                Originally posted by Alliancemd View Post
                It's nice that you can rollback. OSTree is nice, it is better, but they don't solve the problem that well.
                I think Habitat solves the problem properly and it doesn't care about the distro you have. It's a static binary that has 1 single requirement: Linux Kernel 2.6+.
                Their website: https://www.habitat.sh/
                Here is for example how the GDB package looks like in Habitat: https://app.habitat.sh/#/pkgs/core/g...20160926185837
                Pay attention to how specific it is about its dependencies. Full reuse between applications, while still being completely isolated when needed.
                It seems to me that Habitat is just another Linux distro, closer to Gentoo than the others. The main difference is making each package independent. It makes more sense to do this on top of existing distros (Docker, or Flatpak with rpm-ostree). Distros do all the work of making sure the versions of the software they ship are solid and that security updates get published. Does Chef have the chops to be a Linux distro company?

                Comment


                • #38
                  Originally posted by Luke_Wolf View Post
                  I'm sorry but installing updates on a currently running system has always been something of a kludge on all distros, ...
                  The only way I can see you making such a statement about all distros is if you count yourself in as the source of the failure.

                  Comment


                  • #39
                    Originally posted by sdack View Post
                    Just tell me at which point you threw out system uptime in favour of getting the latest software and accepted reboots as part of your regular maintenance, because this is what you've done.
                    System uptime is critical if you have a single server running everything. It's less critical with clustered infrastructure, where dropping one or two nodes at a time to do maintenance is normal practice, because you've got plenty of redundancy. If it's all running on VMs, you might even just be updating one offline copy, and propagating updates by nuking and recreating VM images...

                    Comment


                    • #40
                      Originally posted by Delgarde View Post
                      System uptime is critical if you have a single server running everything. It's less critical with clustered infrastructure, where dropping one or two nodes at a time to do maintenance is normal practice, because you've got plenty of redundancy. If it's all running on VMs, you might even just be updating one offline copy, and propagating updates by nuking and recreating VM images...
                      None of that is a reason to abandon the better solutions and replace them with dumbed-down methods. All that is really being said here is that some people cannot deal with the complexity of existing solutions and long for a simpler one. That I can agree with, as long as you understand that it isn't what everybody wants.

                      My particular problem with it is that once a company such as RedHat gets behind something, it usually gains significant popularity, while most Linux users don't actually mind the challenge of the more complex methods. Rather, it will lead to lower quality, because more maintainers will come to depend on sysadmins doing a "rollback and reboot". The number of reboots will go up and become a problem of its own. You'll find that even the smallest updates require a reboot, and instead of being a simple task, updating becomes a major event, as if something had broken down. That's just not acceptable to me.

                      Comment
