Lennart Poettering Talks Up His New Linux Vision That Involves Btrfs


  • #71
    Originally posted by Sonadow View Post
    Having 1 and only 1 standardized package manager and a standardized file system hierarchy (get rid of all the /lib /lib32 /lib64 /lib/<arch> differences between distributions) sounds exactly like the kind of thing that is designed to not piss off a linux user.
    I can't say that .deb or .run files have ever bothered me. And when I need a piece of software it's almost always available in at least one of those two formats.

    The /lib stuff never bothered me either (though the move to multiarch was a bit painful) but I think the right long-term solution there is for developers to make 64-bit versions of their software if they haven't already. And that seems a far better solution to me because it actually addresses the problem.

    When I look on forums and buglists and such I don't see many complaints about using a software center or installing from a deb or whatever.



    • #72
      Originally posted by BlackStar View Post
      Actually, it's a huge simplification compared to what we have now. Did you read his blog post?

      For example, when installing your LiveCD to disk, you would simply copy the btrfs deltas. That's it. The files on the LiveCD and your disk would be identical, no need to wait half an hour for your package manager to install packages.

      Installing a new package, runtime or, hell, trying a new distro would be as simple as copying a btrfs delta. Btrfs makes rollback/uninstallation trivial. It's a *huge* simplification to both end-users and developers, compared to the current mess of distro-specific packages.
      It wasn't QUITE that easy (you still have to prepare the disk to be bootable for UEFI), but, yeah, this is a massive simplification.
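
      Roughly, the mechanism being leaned on already exists in btrfs today: a snapshot can be serialized with btrfs send and replayed with btrfs receive. Here's a sketch of what "copying the deltas" could look like - the paths and subvolume names are made up, it assumes root and btrfs on both ends, and it is not Lennart's actual tooling:

      Code:
      # Sketch only: replicate a read-only OS subvolume from installer
      # media onto the target disk via btrfs send/receive.
      import subprocess

      SRC = "/run/livecd/usr:fedora:21"  # hypothetical read-only subvolume on the LiveCD
      DST = "/mnt/target"                # mounted btrfs filesystem on the disk

      # "btrfs send" serializes the snapshot to stdout; "btrfs receive"
      # recreates it bit-identically on the target, so there is no
      # per-package install step at all.
      send = subprocess.Popen(["btrfs", "send", SRC], stdout=subprocess.PIPE)
      subprocess.run(["btrfs", "receive", DST], stdin=send.stdout, check=True)
      send.wait()

      # A later update ships only the delta against the previous snapshot:
      #   btrfs send -p <old-subvol> <new-subvol> | btrfs receive <dst>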



      • #73
        Originally posted by Rambo Tribble View Post
        Give yourself some credit; you're doing as fine a job of whining as any Debian dev ever could.
        Nah, you are just too stupid to understand the difference.



        • #74
          So the solution to distribution fragmentation is: kill all distributions but one?

          If the difference in packaging is the main problem, and Lennart proposes to just give this job to upstream, what is the point in actually having different distributions? IIRC the core business of distributions is packaging ...
          Also, how does Lennart want to deal with situations like different programs requiring different running versions of a service like D-Bus? Those two will never be able to communicate with each other!
          Finally, if you give upstream total control over the dependency versions, you close the door on post-release bugfixes of those dependencies; OTOH, if you let the dependency be updated, the "testability for devs" point is moot. You can't have both (as far as I know - if you know better, please enlighten me).



          • #75
            Originally posted by johnc View Post
            I read every word of it and, as I see it, it adds a large amount of universal complication to make very rare use cases easier, such as these:

            I think I've installed from a LiveCD exactly three times in my life (all as separate machine builds). I can't say that the meager hour or two it takes to install from a CD is particularly onerous.

            If there was really a concern to make Linux better for end users, how about addressing the 3 million hours I wasted trying to figure out why Unity, GNOME, compiz, etc. are fundamentally broken messes that make even basic tasks annoyingly impossible?

            When you actually take a sober look at the top 1,000 things that piss off USERS of Linux systems (not engineers, developers, system admins), you'll find the stuff that Lennart is talking about is completely unhelpful. Meanwhile he wants to heap on everybody this new filesystem paradigm that, yes, overly complicates things. It's like systemd for filesystems and package management.
            You are being argumentative. If you read what he wrote, all of it, you'd understand the problems he's trying to solve, and the proposed solutions.
            So Let's Summarize Again What We Propose

            • We want a unified scheme, how we can install and update OS images, user apps, runtimes and frameworks.
            • We want a unified scheme how you can relatively freely mix OS images, apps, runtimes and frameworks on the same system.
            • We want a fully trusted system, where cryptographic verification of all executed code can be done, all the way to the firmware, as standard feature of the system.
            • We want to allow app vendors to write their programs against very specific frameworks, under the knowledge that they will end up being executed with the exact same set of libraries chosen.
            • We want to allow parallel installation of multiple OSes and versions of them, multiple runtimes in multiple versions, as well as multiple frameworks in multiple versions. And of course, multiple apps in multiple versions.
            • We want everything double buffered (or actually n-ary buffered), to ensure we can reliably update/rollback versions, in particular to safely do automatic updates.
            • We want a system where updating a runtime, OS, framework, or OS container is as simple as adding in a new snapshot and restarting the runtime/OS/framework/OS container.
            • We want a system where we can easily instantiate a number of OS instances from a single vendor tree, with zero difference for doing this in order to be able to boot it on bare metal/VM or as a container.
            • We want to enable Linux to have an open scheme that people can use to build app markets and similar schemes, not restricted to a specific vendor.
            After reading so much miscomprehension from folks, I'm now beginning to realise that Lennart-hate has way too many tag-alongs who simply don't understand ALL of the specifics (I'm not calling out anyone in particular).



            • #76
              Originally posted by Sonadow View Post
              Having 1 and only 1 standardized package manager
              Won't happen - just because they create one package manager doesn't mean the others disappear. Apt and Yum are considered to be the main two standards, but you've still got all sorts of others, like portage, entropy, nix, pacman, etc. Choice is good, provided they can play well together (which is what the FHS ensures).

              a standardized file system hierarchy (get rid of all the /lib /lib32 /lib64 /lib/<arch> differences between distributions) sounds exactly like the kind of thing that is designed to not piss off a linux user.
              And that's trivially doable without creating a whole new package manager.



              • #77
                Originally posted by Isedonde View Post
                Unless I got Lennart wrong, Qt 4 / 5 etc. "runtimes" can be installed at the same time. Not by installing into different file system locations, but simply by only ever "mounting" one of the runtimes at any given time in one filesystem namespace. Since the concept involves filesystem namespaces, there's no conflict: When you start a Qt4 app, the Qt4 runtime is mounted to /usr/lib/qt (or whatever) in the app's very own filesystem. When you start a Qt5 app afterwards, there's a completely new (initially empty) filesystem, and Qt5 is mounted to /usr/lib/qt (and also your /home/foo is mounted, and the base /usr system, and the root filesystem, etc). Unless Qt4/5 prefer to install into directories called qt4 and qt5, but that is not required if no single app uses both of them.
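
                That per-app view is exactly what mount namespaces already give you; the kernel supports all of this today. A rough sketch of the mechanism (the runtime paths and the app name are made up, it needs root, and it is not Lennart's actual tooling):

                Code:
                # Sketch only: give this process a private mount namespace, then bind
                # one specific runtime over the shared path before exec'ing the app.
                # "/runtimes/qt4" and "some-qt4-app" are hypothetical names.
                import ctypes, os, subprocess

                CLONE_NEWNS = 0x00020000  # unshare flag for a new mount namespace (sched.h)

                libc = ctypes.CDLL("libc.so.6", use_errno=True)
                if libc.unshare(CLONE_NEWNS) != 0:
                    raise OSError(ctypes.get_errno(), "unshare(CLONE_NEWNS) failed")

                # Keep mount events from propagating back into the parent namespace.
                subprocess.run(["mount", "--make-rprivate", "/"], check=True)

                # Every app sees "its" runtime at the same path; a Qt5 app would get
                # /runtimes/qt5 bound here instead, with no conflict between the two.
                subprocess.run(["mount", "--bind", "/runtimes/qt4", "/usr/lib/qt"], check=True)
                os.execvp("some-qt4-app", ["some-qt4-app"])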


                I completely agree with curaga's post. On top of these points, I wonder how security / grave functionality bug fixes in runtimes would be handled. Say, $MIGHTY_RUNTIME_VENDOR (let's call that vendor gnome foundation) discovers that there's a critical bug in the libgio library that is part of GNOME_RUNTIME_3_12, and it might erase your home directory. So the gnome foundation releases GNOME_RUNTIME_3_12_1 or something like that. Now all the apps still use the old libgio from 3_12?

                Or maybe this is one point that wasn't very clear to me (or I just wasn't reading carefully enough): can there be updates to existing runtimes that do NOT require app runtime dependency bumps? For example, the bug is in libgio from "GNOME_RUNTIME_3_12, version 3.12.0", and then gnome publishes "GNOME_RUNTIME_3_12, version 3.12.1", and all the apps pick up the change automatically? I guess that's possible. Of course, the "major version" / "name" of a runtime must be increased when the ABI of at least one of the contained libraries changes.
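
                If it works the way I'm guessing, the rule could be summarized like this (entirely my speculation, and the names are made up):

                Code:
                # Speculative illustration: apps pin the runtime *name* (which encodes
                # the ABI), while point releases under that name are picked up
                # automatically on the next start.
                RUNTIMES = {
                    "GNOME_RUNTIME_3_12": ["3.12.0", "3.12.1"],  # 3.12.1 fixes the libgio bug
                }

                def resolve(name: str) -> str:
                    """Return the newest point release published under a pinned name."""
                    return max(RUNTIMES[name])

                print(resolve("GNOME_RUNTIME_3_12"))  # -> "3.12.1", no app rebuild needed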

                Also, I can't imagine major distros saying "Oh fine, we'll stop maintaining any package that is not required to boot the system, since those will be supplied from a 3rd party through a runtime. We will just provide systemd in a basic /usr system to boot the system, and we will make sure /etc/issue contains our distro name, since that is the only thing specific to our distro now!". Clearly, there will be UBUNTU_GNOME_RUNTIME, FEDORA_GNOME_RUNTIME, SUSE_GNOME_RUNTIME, DEBIAN_GNOME_RUNTIME. And of course the vanilla GNOME_RUNTIME. I guess there will be no ARCH_GNOME_RUNTIME though? Basically, any distro that likes to add their own patches to libraries will create their own runtimes, containing different library versions/APIs/ABIs, and then we're back to "oh noes I must build 50 different versions of my app for every existing runtime on the planet".

                With this entire scheme, the kernel API and the bus APIs become the most crucial interfaces between apps and the rest of the system. They are the common denominator that all the runtimes rely on. As such, it is the DRM kernel/userspace API that must take the most care to stay as compatible as possible.

                API/ABI stability on Linux is a difficult thing. There are some projects which are better than others, but in general shared library stability is a disaster as soon as you start linking against multiple of them. With the "runtimes" scheme we kinda try to avoid the problems around this though, as we allow multiple to be installed at the same time easily, so that apps can stay with the versions they are used to. However, this doesn't really mean that the libraries couldn't be updated anymore. They have to be, already for CVEs and stuff. Now, if we have userspace driver components, and they need to be loaded into all the apps, then yes, this will mean that the runtimes have to include them. Or more specifically, I expect that when GNOME puts out a runtime GNOME_3_34, and then one day does a new runtime, maybe called GNOME_3_36, they would then continue to maintain the older GNOME_3_34 runtime. They wouldn't add new libraries to it or change anything major like that. But they would still fix CVEs, and if GL requires client-side drivers, then yeah, they would have to update those too.
                That's a comment from his G+ stream.



                • #78
                  Originally posted by garegin View Post
                  I sent a letter to Bryan Lunduke and I'm making it an open letter.

                  I've always wanted these three issues to be widely talked about in the Linux world because I think they are the most important problems that desktop Linux has.


                  1) One of them is the fact that unless you go with rolling releases, you are stuck with the versions of the applications that shipped with the release. There is some backporting, like new Mesa for Fedora 20, Firefox updates to new versions, and driver updates through point releases (Ubuntu 14.04.1, for example). But in the vast majority of cases you are stuck with old software.

                  Do you know the technical reasons for this issue? PBIs in PC-BSD solve this problem; can't we do the same in Linux? It seems easy enough to fix for such a huge hassle.

                  With Windows not only do I get the latest apps the second they come out, but they backport system software like .NET 4.5.1 all the way back to Vista, which came out seven years ago!
                  The reason for the issue is commonly referred to as 'dependency hell': http://en.wikipedia.org/wiki/Dependency_hell ...and it mostly comes down to apps requiring particular libs/versions/deps, which can conflict with the ones shipped with the OS. PBI works around this by bundling/building the deps/libs [statically] for a given app, AFAICT. But that comes at a cost too - you end up wasting a lot of disk space on bundled libs/deps for each app.

                  In Win/Mac this is less of an issue, since MS/Apple manage the core system, and those companies have the workforce/money to backport key components into older OS versions. [Well, and Apple/Mach-O binaries will most likely contain their own deps, minus whatever frameworks from Apple they rely on.]

                  Some distros work around this instead by shipping multiple versions of the same libs/deps in the filesystem [as shared libs]; that lets packages use whichever versioned shared libs they need, without a bunch of duplicated libs [as you would get with PBI, AFAICT]. Personally, I think having multiple versions of a dep is the better approach. I could only justify the PBI 'bottled' approach for a few select apps; any more than that just seems like a waste.
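
                  To make the multiple-versions point concrete: the dynamic loader keys off the soname, so two ABI-incompatible major versions of a shared lib can be installed and loaded side by side. A tiny illustration (libfoo is a made-up name):

                  Code:
                  # Illustration only: both sonames can coexist on disk; each app simply
                  # loads the ABI it was built against, with no bundling required.
                  import ctypes

                  old_abi = ctypes.CDLL("libfoo.so.1")  # an app built against the old ABI
                  new_abi = ctypes.CDLL("libfoo.so.2")  # an app built against the new ABI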

                  Originally posted by garegin View Post
                  2) X.org crashes close all the applications. All the state of the windows is kept in the X server. If the server crashes so do all the open windows. In Windows or OS X, if you crash the compositor, the programs remain running.

                  Is this a problem with Wayland too? Some people said this is caused by the toolkits; I dunno.

                  A link to what I'm talking about.
                  You're comparing apples to oranges; Xorg is not a compositor. I can crash my compositor [compiz, running on X] over and over, all day long, and I will not lose any of my applications, ever. A better comparison would be WindowServer on Mac OS X vs. Xorg - if I crash WindowServer, guess what? I lose all apps and am logged out. [Note: I can't speak for the latest Mac OS X, but from 10.1 to Snow Leopard, IIRC, this was the case... I ditched Apple a while back.]

                  Originally posted by garegin View Post
                  The third one is less important but pretty bad too. You can't install driver updates in a straightforward fashion. For home users you get a new distro release every six months, so your hardware support is pretty current. But for "enterprise distros" like RHEL/CentOS or Debian it's awful. Say you are an enterprise and want to install Linux on your new servers. You are fscked, because the driver support in those distros is four years old. Unless you go nuts and start compiling kernel modules, you are basically stuck with buying legacy hardware as the only way out.

                  Right now the latest RHEL has the 2.6.32-431 kernel. No Haswell support, no support for those fancy new 10GbE NICs or SAS controllers. That kernel came out in Dec 2009. Yeah, distros always freeze the vanilla kernel and customize it, so there is SOME new hardware support, but it's a drop in the bucket.
                  1. "home users" don't get 6-month release cycles - people who _choose_ to use distros that follow that model, jump through those hoops. Frankly, the 6month cycle is basically crap, imo - and i had years of dealing with tat crap. Personally, i think it is a waste of time; dist-upgrades can break too easily, re-installing is annoying and obviously, being stuck on outdated software is a poor option too. Unfortunately, i don't think LTS is all that much better... I personally think distributions would be better served to follow a rolling release model of some kind [maybe follow a more conservative methodology, over the more bleeding-edge like Arch - but i think a balance could be struck there, better than 6month cycles].

                  2. Compiling a kernel is not difficult, especially not for a company, which no doubt has an IT department/staff. Claiming these people are 'fscked' because they need a kernel not shipped with their distro is extremely short-sighted, even laughable... especially considering that even a noob can learn how to do it very quickly. I can't imagine you have any real experience working inside an IT department that deploys Linux on its servers; that comment is pure speculation. In reality it is as simple as building generic kernel packages on a single machine, then using those packages to upgrade all the other machines - which amounts to very little work in the end. If you upgrade your servers via a custom/local repository, we are talking about very little work indeed: build the packages, put them in your local repo, then upgrade said servers. No one on RHEL, CentOS, etc. is 'stuck' on the shipped kernel version... You are describing a serious problem that doesn't exist.
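
                  For anyone doubting the "very little work" claim, the whole workflow is roughly this - assuming an RPM-based distro, a configured kernel tree, and a locally served repo directory (both paths below are made up):

                  Code:
                  # Sketch only: build generic kernel RPMs once, publish them to a local
                  # repo, and let every server upgrade from it.
                  import subprocess

                  KSRC = "/usr/src/linux-3.17"  # hypothetical: configured kernel source tree
                  REPO = "/srv/repo/kernel"     # hypothetical: directory exported over HTTP

                  # "binrpm-pkg" is a stock kernel Makefile target on RPM-based systems.
                  subprocess.run(["make", "-j8", "binrpm-pkg"], cwd=KSRC, check=True)

                  # Publish the resulting packages and regenerate the repo metadata.
                  subprocess.run("cp ~/rpmbuild/RPMS/x86_64/kernel-*.rpm " + REPO,
                                 shell=True, check=True)
                  subprocess.run(["createrepo", REPO], check=True)

                  # Each server then runs its normal update against the local repo, e.g.:
                  #   yum --enablerepo=local-kernel update kernel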



                  • #79
                    Originally posted by liam View Post
                    You are being argumentative. If you read what he wrote, all of it, you'd understand the problems he's trying to solve, and the proposed solutions.
                    And if you took a poll of the Linux user base, you'd find quite easily that these "problems" of his are of no concern to probably 99% of users.

                    Nobody cares about this, but I'll bet we're all going to be forced to care because it'll be coming one way or another.



                    • #80
                      Originally posted by liam View Post
                      After reading so much miscomprehension from folks, I'm now beginning to realise that Lennart-hate has way too many tag-alongs who simply don't understand ALL of the specifics (I'm not calling out anyone in particular).
                      Lennart strikes me as a visionary with poor vision. It's nothing personal at all; I've never met the guy. He just wants to take Linux down a certain path that I don't want to go down, and he has the influence to make it happen. I do not, of course, lay all the blame on him. There is a lot of bad thinking in the Linux world these days.

                      People are always so excited about revolutionary change. But history proves that revolutions don't always end well.

