Lennart Poettering Talks Up His New Linux Vision That Involves Btrfs

  • #21
    I don't fully understand his goals for btrfs, and his goal for package management is a good one, but a lost cause. The stupid thing is that everyone always makes a new piece of software in the hopes that it will become the industry standard or global default. There will ALWAYS be someone who says "wait a minute, let's do it this way," and that's why closed source has always gotten the most popularity early on - it might not ever be the best choice, but it's a choice you're forced to use and therefore everyone must support it, which in turn reduces fragmentation and increases the interest of third-party and commercial software. I believe package management is one of the core reasons commercial companies have a hard time supporting end-user software for Linux. It's not the only reason, just a big one.

    Poettering can go ahead and try to unify package systems, but so far, in my experience, Arch has already done this with the most ease. First of all, I haven't had a need to install anything .deb or .rpm because the AUR effectively replaces it. But even if I did need a foreign package, I can install tools like dpkg, alien, and yum. I respect Poettering's ambitions, but he needs to stay focused on his current projects and preferably simplify them a little.

    Comment


    • #22
      Originally posted by schmidtbag View Post
      I don't fully understand his goals for btrfs, and his goal for package management is a good one, but a lost cause. The stupid thing is that everyone always makes a new piece of software in the hopes that it will become the industry standard or global default. There will ALWAYS be someone who says "wait a minute, let's do it this way," and that's why closed source has always gotten the most popularity early on - it might not ever be the best choice, but it's a choice you're forced to use and therefore everyone must support it, which in turn reduces fragmentation and increases the interest of third-party and commercial software. I believe package management is one of the core reasons commercial companies have a hard time supporting end-user software for Linux. It's not the only reason, just a big one.

      Poettering can go ahead and try to unify package systems, but so far, in my experience, Arch has already done this with the most ease. First of all, I haven't had a need to install anything .deb or .rpm because the AUR effectively replaces it. But even if I did need a foreign package, I can install tools like dpkg, alien, and yum. I respect Poettering's ambitions, but he needs to stay focused on his current projects and preferably simplify them a little.
      I think he was a little bit off when he said this:

      "Upstream software vendors are fully dependent on downstream distributions to package their stuff. It's the downstream distribution that decides on schedules, packaging details, and how to handle support. Often upstream vendors want much faster release cycles than the downstream distributions follow."

      I'm not sure exactly what he's saying here since it could be misinterpreted, but obviously you can snag things like CUDA, Steam, Chrome, etc. as .debs and the software vendor doesn't depend on, e.g., Canonical. I do recognize there is a dependency on the OS release schedule and included default packages, etc., but considering the infrequency of OS releases I'm not sure why that's a big deal.

      And then there is the somewhat ubiquitous .tar.gz /.run method of just extracting everything to a target directory. So this is the Eclipse, NetBeans, JDK, Android SDK, etc. way of distribution. This is about the equivalent of the Windows self-extracting installer, except without having to hit "Next" "Next" "Next" over and over again.
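      To make that .tar.gz method concrete, here is a minimal runnable sketch (the app name and paths are made up; this just simulates what an Eclipse- or JDK-style download does):

      ```shell
      # Simulate the vendor side: pack a hypothetical app into a tarball.
      mkdir -p /tmp/vendor/myapp-1.0/bin
      printf '#!/bin/sh\necho "myapp 1.0 running"\n' > /tmp/vendor/myapp-1.0/bin/myapp
      chmod +x /tmp/vendor/myapp-1.0/bin/myapp
      tar -czf /tmp/myapp-1.0.tar.gz -C /tmp/vendor myapp-1.0

      # User side: the whole "install" is one extraction into a target
      # directory - no package manager, no Next-Next-Next wizard.
      mkdir -p "$HOME/opt"
      tar -xzf /tmp/myapp-1.0.tar.gz -C "$HOME/opt"
      "$HOME/opt/myapp-1.0/bin/myapp"
      ```

      Uninstalling is equally crude: delete the directory.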

      I could be wrong but I just find installing and updating software to be generally easier on Ubuntu than it is on Windows. And on OS X it's weirdly unintuitive. You open a .dmg or whatever and you can't tell if it's installing something or just mounting the package and opening it or... wtf is it doing anyway?

      Comment


      • #23
        Originally posted by schmidtbag View Post
        I don't fully understand his goals for btrfs, but his goals on package management is a good one, but a lost cause. The stupid thing is everyone always makes a new piece of software in the hopes that it will become the industry standard or global default. There will ALWAYS be someone who says "wait a minute lets do it this way" and that's why closed source has always got the most popularity early on - it might not ever be the best choice but it's a choice you're forced to use and therefore everyone must support it, which in turn reduces fragmentation and increases the interest of 3rd party and commercial software. I believe package management is one of the core reasons commercial companies have a hard time supporting end-user software for linux. It's not the only reason, just a big one.

        Poettering can go ahead and try to unify package systems but so far, in my experience, Arch has already done this with the most ease. First of all, I haven't had a use to install anything .deb or .rpm because the AUR effectively replaces it. But even if I did need a foreign package, I can install things like dpkg, alien, and yum. I respect Poettering's ambitions but he needs to stay focused on his current projects and preferably simplify them a little.
        I don't think you understand how his system works. You can use whatever packaging system you want on the dev side but it gets sent to the user side via a btrfs send/receive to /usr (or /, depending on what is being installed).
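        For anyone who hasn't seen the mechanism, here is a rough dry-run sketch of that delivery step (the subvolume and stream names are hypothetical, and the commands are only printed rather than executed, since real btrfs send/receive needs root and an actual btrfs filesystem):

        ```shell
        # Collect the commands in a variable and print them instead of
        # running them, so this sketch is safe to run anywhere.
        STEPS='
        # Vendor/build side: take a read-only snapshot of the built tree,
        # then serialize it into a stream file.
        btrfs subvolume snapshot -r /build/usr /build/usr-myapp-1.0
        btrfs send -f myapp-1.0.stream /build/usr-myapp-1.0

        # User side: materialize the stream as a new subvolume under /usr.
        btrfs receive -f myapp-1.0.stream /usr
        '
        printf '%s\n' "$STEPS"
        ```

        The point is that the wire format is a filesystem snapshot, so whatever packaging happened on the dev side is invisible to the receiving machine.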

        Comment


        • #24
          I sent a letter to Bryan Lunduke and I'm making it an open letter.

          I've always wanted these three issues to be widely talked about in the Linux world because I think they are the most important problems that desktop Linux has.


          1) One of them is the fact that unless you go with rolling releases, you are stuck with the versions of the applications that shipped with the release. There is some backporting, like new Mesa for Fedora 20, Firefox updates to new versions, and driver updates through point releases (Ubuntu 14.04.1, for example). But in the vast majority of cases you are stuck with old software.

          Do you know the technical reasons for this issue? PBIs in PC-BSD solve this problem; can't we do the same in Linux? It seems like an easy enough fix for such a huge hassle.

          With Windows not only do I get the latest apps the second they come out, but they backport system software like .NET 4.5.1 all the way back to Vista, which came out seven years ago!
          2) An X.org crash closes all the applications. All window state is kept in the X server; if the server crashes, so do all the open windows. On Windows or OS X, if you crash the compositor, the programs remain running.

          Is this a problem with Wayland too? Some people said this is caused by the toolkits, I dunno.

          A link to what I'm talking about.


          The third one is less important but pretty bad too. You can't install driver updates in a straightforward fashion. For home users you get a new distro release every six months, so your hardware support is pretty current. But for "enterprise distros" like RHEL/CentOS or Debian it's awful. Say you are an enterprise and want to install Linux on your new servers. You are fscked, because the driver support in those distros is four years old. Unless you go nuts and start compiling kernel modules, you are basically stuck with buying legacy hardware as the only way out.

          Right now the latest RHEL has the 2.6.32-431 kernel. No Haswell support, no support for those fancy new 10GbE NICs or SAS controllers. That kernel came out in Dec 2009. Yeah, distros always freeze the vanilla kernel and customize it, so there is SOME new hardware support, but it's a drop in the bucket.

          Comment


          • #25
            There are some good ideas here.

            Not entirely new, of course, though there are specifics here that might make it feasible for adoption. Because of its "meta" approach, it's really all about booting the OS. Once you actually mount the various images (usr, root, libraries and apps), it will appear exactly the same as existing Linux distros. This is great because it would allow a smooth transition to a better world.

            People commenting here are focusing too much on btrfs: he's not saying it will be a dependency, just that btrfs has this inspiring feature. Sub-volumes could be implemented by various means for other filesystems. Of course he's not suggesting that Linux switch to using only btrfs.

            I'm reminded of two other original approaches:

            1. GoboLinux is an amazing distribution, which finally re-imagines the *nix system file structure, allowing various packages to exist together in various versions.

            2. And of course, there's Plan 9, which was planned as a successor to Unix, and had a radical everything-is-a-file approach. I keep seeing people thinking up "new" things for *nix which were already attempted in Plan 9. It's a good place to examine whether they worked or not and why.
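            The GoboLinux idea in point 1 is easy to picture as a directory layout. A throwaway sketch in /tmp (with a made-up program name, not anything GoboLinux actually ships):

            ```shell
            # Each program lives under its own versioned directory, so 1.0
            # and 2.0 can coexist; "Current" is just a symlink picking the
            # active version.
            ROOT=/tmp/gobo-demo
            mkdir -p "$ROOT/Programs/Foo/1.0/bin" "$ROOT/Programs/Foo/2.0/bin"
            ln -sfn 2.0 "$ROOT/Programs/Foo/Current"

            # Switching versions is one symlink flip, no package database
            # surgery required.
            ln -sfn 1.0 "$ROOT/Programs/Foo/Current"
            readlink "$ROOT/Programs/Foo/Current"
            ```

            Compare that with /usr/bin, where two versions of the same program fight over one path.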

            Comment


            • #26
              Originally posted by garegin View Post
              The third one is less important but pretty bad too. You can't install driver updates in a straightforward fashion. For home users you get a new distro release every six months, so your hardware support is pretty current. But for "enterprise distros" like RHEL/CentOS or Debian it's awful. Say you are an enterprise and want to install Linux on your new servers. You are fscked, because the driver support in those distros is four years old. Unless you go nuts and start compiling kernel modules, you are basically stuck with buying legacy hardware as the only way out.

              Right now the latest RHEL has the 2.6.32-431 kernel. No Haswell support, no support for those fancy new 10GbE NICs or SAS controllers. That kernel came out in Dec 2009. Yeah, distros always freeze the vanilla kernel and customize it, so there is SOME new hardware support, but it's a drop in the bucket.
              If you're running RHEL and have support, you should be running on certified hardware (https://access.redhat.com/search/bro...rprise+Linux+7). RHEL has had Haswell support for a while (https://access.redhat.com/solutions/639783), 10GbE (http://www.redhat.com/promo/summit/2...ark_Wagner.pdf), etc. RHEL is for enterprise, so you can bet this kind of stuff has been supported for a while. If it's not, open a ticket. You pay for support, and drivers are part of that.

              Comment


              • #27
                Originally posted by emblemparade View Post
                1. GoboLinux is an amazing distribution, which finally re-imagines the *nix system file structure, allowing various packages to exist together in various versions.
                If you like that, then you'll love NixOS (automatic dependency discovery, multiple version coexistence, atomic updates/rollbacks).
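                For the curious, the NixOS workflow being praised looks roughly like this (only printed here, since it needs Nix installed; the package name is illustrative):

                ```shell
                # Print the typical nix-env commands rather than running
                # them, since Nix is not assumed to be installed.
                NIX_DEMO='
                # Install a package into a new generation of the profile.
                nix-env -iA nixpkgs.hello

                # Every change creates a generation you can inspect...
                nix-env --list-generations

                # ...and an update you regret is one atomic rollback away.
                nix-env --rollback
                '
                printf '%s\n' "$NIX_DEMO"
                ```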

                Comment


                • #28
                  Originally posted by emblemparade View Post

                  I'm reminded of two other original approaches:

                  1. GoboLinux is an amazing distribution, which finally re-imagines the *nix system file structure, allowing various packages to exist together in various versions.
                  OS X already does that with its Applications / Utilities / (whatever other crap) folders and application DMGs.

                  Comment


                  • #29
                    Lennart Poettering is a smart and great person

                    Please watch the following before mindlessly bad-mouthing systemd.

                    https://www.youtube.com/watch?v=-97qqUHwzGM

                    Comment


                    • #30
                      The app installation problem was solved in 2006: it's called KLIK

                      Lennart wants to reinvent the wheel in his quest to make non-distro apps install and work automagically. This feat was already accomplished eight years ago, with KLIK:

                      http://en.wikipedia.org/wiki/Klik_%2...ging_method%29

                      KLIK is a neat and revolutionary way to sandbox app installations, using the FUSE filesystem interface.

                      Comment
