Lennart Poettering Talks Up His New Linux Vision That Involves Btrfs


  • #61
    Originally posted by stan View Post
    Lennart wants to reinvent the wheel in his quest to make non-distro apps install and work automagically. This feat has already been accomplished 8 years ago, with KLIK:



    KLIK is a neat and revolutionary way to sandbox app installations, using the FUSE filesystem interface.
    klik is neat, but it is nearly non-functional compared to the current sandboxing initiative, since all klik does is sandbox contents into a workable state; there are no security features in klik. It also works only at the application level, while this operates at the distro level. A completely different beast.



    • #62
      Originally posted by johnc View Post
      It's not that I'd have to, it's that it's something that wouldn't interest me.
      It would interest me if it allowed me to easily revert to an old version when I don't like the new one (feature- or stability-wise). Yes, you can do package rollbacks on current distros. Explain that to your grandma, though. Also, good luck downgrading one thing without setting off a massive downgrade avalanche due to all the interdependencies.
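
      For reference, the rollback story on a current distro looks roughly like this. This is a sketch for a dnf-based (Fedora-style) system; `libfoo` is a made-up package name, and dependents may well get dragged along, which is exactly the avalanche problem described above:

```shell
# Sketch: package rollback on a dnf-based distro. "libfoo" is hypothetical.
sudo dnf downgrade libfoo      # dnf resolves deps; dependents may be downgraded too
sudo dnf history               # list past transactions
sudo dnf history undo last     # revert the most recent transaction wholesale
```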



      • #63
        Originally posted by garegin View Post
        With Windows not only do I get the latest apps the second they come out, but they backport system software like .NET 4.5.1 all the way back to Vista, which came out seven years ago!
        So, how does DX11 work on your Vista? Point being, it is not up to you whether you can run something; it is up to MS.



        • #64
          Upstream software vendors are fully dependent on downstream distributions to package their stuff. It's the downstream distribution that decides on schedules, packaging details, and how to handle support. Often upstream vendors want much faster release cycles than the downstream distributions follow.
          Upstream vendors will provide binary installers for the software they are developing if they want users to adopt it the easy (and right) way; source code is almost always available (at least for software you will generally find in distro repos). Rolling-release distros don't have any issues with this, and the "problem" doesn't apply to them.

          Realistic testing is extremely unreliable and next to impossible. Since the end-user can run a variety of different package versions together, and expects the software he runs to just work on any combination, the test matrix explodes. If upstream tests its version on distribution X release Y, then there's no guarantee that that's the precise combination of packages that the end user will eventually run. In fact, it is very unlikely that the end user will, since most distributions probably updated a number of libraries the package relies on by the time the package ends up being made available to the user. The fact that each package can be individually updated by the user, and each user can combine library versions, plug-ins and executables relatively freely, results in a high risk of something going wrong.
          Bshit again: devs release, along with the source code, system requirements with an exhaustive list of supported libraries for building their code. If you want to avoid even that, then ship a statically linked precompiled binary :P (joking). Also, what Red Hat does with backporting code from the mainline kernel is the same thing distro developers do for code in their core repos when something proves non-buildable or unstable. I find Red Hat's kernel a symbol of something freakin' unbreakable and utterly stable. Yeah sure, sometimes I don't get out-of-the-box support, but HP, Dell, and all the other major players on the server market provide support via driver binary installers for Red Hat and the likes.

          Since there are so many different distributions in so many different versions around, if upstream tries to build and test software for them it needs to do so for a large number of distributions, which is a massive effort.
          IF is a big if; upstream usually tests its code against exact library versions, while distro developers test their packages against what is found in their repos, not the other way around. We are not talking about projects from distro devs themselves (Canonical first comes to mind).

          The distributions are actually quite different in many ways. In fact, they are different in a lot of the most basic functionality. For example, the path where to put x86-64 libraries is different on Fedora and Debian derived systems.
          Then stop using hardcoded paths and learn to utilize environment variables.
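
          A minimal sketch of what that looks like in an install script. `LIBDIR` is an assumed variable name here, not any standard; a real build would more likely query pkg-config or follow the GNUInstallDirs conventions:

```shell
# Resolve the library directory from the environment instead of hardcoding
# /usr/lib64 (Fedora) or /usr/lib/x86_64-linux-gnu (Debian). LIBDIR is a
# hypothetical variable; fall back to /usr/local/lib when it is unset.
libdir="${LIBDIR:-/usr/local/lib}"
echo "installing libraries into: $libdir"
```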

          Developing software for a number of distributions and versions is hard: if you want to do it, you need to actually install them, each one of them, manually, and then build your software for each.
          That usually ends up being two or three distros tops: Fedora, Ubuntu/Debian and... well, make that two distros tops.

          Since most downstream distributions have strict licensing and trademark requirements (and rightly so), any kind of closed source software (or otherwise non-free) does not fit into this scheme at all.
          So? To quote myself:

          Upstream vendors will provide binary installation of the software they are developing
          Download it, chmod +x, and then ./blob.bin. If you want to.

          Ultimately, I really don't give a f* as long as source is provided.



          • #65
            Originally posted by johnc View Post
            To me it seems to add a lot of needless complication. I couldn't imagine trying to keep track of all of that myself. And I don't want to have ten versions of Firefox installed (again, unless I'm doing development).
            Actually, it's a huge simplification compared to what we have now. Did you read his blog post?

            For example, when installing your LiveCD to disk, you would simply copy the btrfs deltas. That's it. The files on the LiveCD and your disk would be identical, no need to wait half an hour for your package manager to install packages.

            Installing a new package, runtime or, hell, trying a new distro would be as simple as copying a btrfs delta. Btrfs makes rollback/uninstallation trivial. It's a *huge* simplification to both end-users and developers, compared to the current mess of distro-specific packages.
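
            The mechanism behind those deltas is btrfs send/receive. A minimal sketch, assuming two read-only snapshots at hypothetical paths; this is illustrative plumbing, not Poettering's actual layout:

```shell
# Vendor side: serialize only the difference between two read-only snapshots.
btrfs send -p /snapshots/runtime-v1 /snapshots/runtime-v2 > runtime-v1-to-v2.delta

# User side: apply the delta to materialize runtime-v2 locally.
btrfs receive /snapshots < runtime-v1-to-v2.delta

# Rollback: runtime-v1 was never touched, so reverting is just
# switching back to the old snapshot.
```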



            • #66
              Originally posted by justmy2cents View Post
              So, how does DX11 work on your Vista?

              Great. DirectX 11 has been available for Vista since 2009.



              • #67
                Originally posted by BlackStar View Post
                Actually, it's a huge simplification compared to what we have now. Did you read his blog post?
                I read every word of it, and as I see it, it adds a large amount of universal complication to make very rare use cases easier, such as these:

                For example, when installing your LiveCD to disk, you would simply copy the btrfs deltas. That's it. The files on the LiveCD and your disk would be identical, no need to wait half an hour for your package manager to install packages.

                Installing a new package, runtime or, hell, trying a new distro would be as simple as copying a btrfs delta. Btrfs makes rollback/uninstallation trivial. It's a *huge* simplification to both end-users and developers, compared to the current mess of distro-specific packages.
                I think I've installed from a LiveCD exactly three times in my life (all as separate machine builds). I can't say that the meager hour or two it takes to install from a CD is particularly onerous.

                If there was really a concern to make Linux better for end users, how about addressing the 3 million hours I wasted trying to figure out why Unity, GNOME, compiz, etc. are fundamentally broken messes that make even basic tasks annoyingly impossible?

                When you actually take a sober look at the top 1,000 things that piss off USERS of Linux systems (not engineers, developers, system admins), you'll find the stuff that Lennart is talking about is completely unhelpful. Meanwhile he wants to heap on everybody this new filesystem paradigm that, yes, overly complicates things. It's like systemd for filesystems and package management.



                • #68
                  Originally posted by Awesomeness View Post
                  You apparently don't know that he works for Red Hat and therefore already builds his own distro.
                  Nobody from Debian, Arch, Mageia, etc. has ever been forced to accept code from Red Hat. It's just that those other distributors would then have to work on such technology on their own instead of freeloading off Red Hat's code, which is far more work and probably less fun than flaming on mailing lists.
                  Arch devs also contribute back to systemd; you really must mean Canonical.



                  • #69
                    Originally posted by johnc View Post
                    I read every word of it, and as I see it, it adds a large amount of universal complication to make very rare use cases easier, such as these:



                    I think I've installed from a LiveCD exactly three times in my life (all as separate machine builds). I can't say that the meager hour or two it takes to install from a CD is particularly onerous.

                    If there was really a concern to make Linux better for end users, how about addressing the 3 million hours I wasted trying to figure out why Unity, GNOME, compiz, etc. are fundamentally broken messes that make even basic tasks annoyingly impossible?

                    When you actually take a sober look at the top 1,000 things that piss off USERS of Linux systems (not engineers, developers, system admins), you'll find the stuff that Lennart is talking about is completely unhelpful. Meanwhile he wants to heap on everybody this new filesystem paradigm that, yes, overly complicates things. It's like systemd for filesystems and package management.
                    Having one and only one standardized package manager and a standardized filesystem hierarchy (getting rid of all the /lib, /lib32, /lib64, /lib/&lt;arch&gt; differences between distributions) sounds exactly like the kind of thing that is designed not to piss off a Linux user.



                    • #70
                      Now, now

                      Originally posted by Awesomeness View Post
                      Debian is just a huge crowd of crybabies.
                      Give yourself some credit; you're doing as fine a job of whining as any Debian dev ever could.
