Lennart Poettering Talks Up His New Linux Vision That Involves Btrfs


  • Originally posted by jonnor View Post
    A .deb must be installed as root (and most .run files too), and can do whatever it wants with your entire system. It is not, and will never be, safe to install .debs from untrusted sources. It is also a huge endeavour for a developer to create one package per distro+distroversion+architecture. For these reasons we cannot sanely have 3rd party applications on Linux. Some consider that a problem.
    You only have to watch Linus Torvalds' talk at DebConf from just a few days ago to realize why we desperately need what the systemd guys envision.
    The issues the proposal wants to address were brought up 6 times by Linus Torvalds during the talk.


    The discussion about how Subsurface is distributed on Linux, OS X, and Windows is at 50:00, and it highlights how broken our current packaging system is.



    • Originally posted by justmy2cents View Post
      API/ABI for syscalls for sockets and a protocol for the clipboard are on the same level? we seem to live on different planets
      I wrote earlier that clipboard operations are almost certainly handled via communication, a common solution for which is sockets.
      So the interface between the application and the socket is the one that needs to remain API/ABI stable.
      Any participant can use whatever technology stack they want, in any version they want, without interfering with the other.

      For example, the X clipboard mechanism, due to being handled as communication over the X11 socket, works even across machines and between different CPU architectures, using a very wide range of implementations of the protocol.
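      As a quick illustration, assuming xclip is installed and the X server accepts connections from the second machine (the hostname is invented):

        # on machine A: place text into the CLIPBOARD selection
        echo "hello from machine A" | xclip -selection clipboard

        # on machine B: read the same selection by talking to A's X server over
        # the network (requires TCP access to X, e.g. via xhost or SSH forwarding)
        DISPLAY=machinea:0 xclip -selection clipboard -o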

      Originally posted by justmy2cents View Post
      while true, they were mitigated by distro.
      What does "they" refer to in this context?
      The part you quoted mentioned nothing that would have to be mitigated.

      Originally posted by justmy2cents View Post
      i'm talking about the protocol itself. we probably both agree that dbus is wonderfully stable, but neither of us can claim it will be stable for the next 10 years. and GDBus won't help if different software suddenly requires different versions. at that point one would be required to run 2 different dbus services, where software using the old and the new version won't be able to interop.
      Sure, if one explicitly, on purpose, engineers the second protocol to be incompatible and makes the service use the same socket (thus not allowing any kind of protocol proxying), then of course that is bad.

      But what would be the gain in doing that?

      We have plenty of examples of either the address changing, thus enabling translation between protocols, or protocols being versioned.
      What would be the reason to handle that for the most important system IPC mechanism differently?
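      To illustrate the versioning pattern with D-Bus itself: interfaces are commonly versioned by name, so old and new clients can coexist on one bus. The service name below is invented for illustration:

        # inspect a hypothetical service; a revised interface can simply be
        # published under a new name next to the old one
        gdbus introspect --session --dest org.example.Backup \
              --object-path /org/example/Backup

        # old clients keep calling org.example.Backup1 methods,
        # new clients call org.example.Backup2 on the very same bus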

      Cheers,
      _



      • Originally posted by jonnor View Post
        For these reasons we cannot sanely have 3rd party applications on Linux. Some consider that a problem.
        Only if we forget that third-party applications have been available and installed on numerous platforms that do not have, or, like Linux, do not require, a central software store.

        This kind of software distribution method is most often referred to as an "installer": a package that contains all the dependencies of an application and deploys it to an application-specific directory tree, with startup hooks taking care of establishing base paths.
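        A minimal sketch of such a startup hook, with an invented application layout (the binary name "myapp" is a placeholder):

          #!/bin/sh
          # launcher shipped by the installer; it resolves its own location,
          # so the application tree can live anywhere on disk
          APPDIR="$(dirname "$(readlink -f "$0")")"

          # prefer the bundled libraries over the system's copies
          export LD_LIBRARY_PATH="$APPDIR/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"

          exec "$APPDIR/bin/myapp" "$@"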

        I've installed plenty of software that way, both on Windows and on Linux, so any claim that this is not possible is either not really valid or sarcasm.

        Cheers,
        _



        • Originally posted by ryao View Post
          The solution is for Redhat to license the ZFS patents from Oracle, find a way to get software patents invalidated or switch to Open ZFS. Leaving things to chance when Oracle is known to be litigation happy is not a good idea.
          BTRFS, like any other piece of Linux, is GPLv2-licensed. The GPLv2 (that is the main difference from GPLv1) also asks you to give permission for any patents covering the code that you wrote.
          So if BTRFS is covered by patents that Oracle owns, you're free from lawsuits if you use the GPLv2 code provided by Oracle (which is the case for BTRFS).

          The whole point of the GPLv2 was to avoid the freedom to study and tinker with code being blocked by patents.

          (Just like the whole point of the GPLv3 is to avoid the usual free-software freedoms being blocked by DRM.)

          Originally posted by BlackStar View Post
          Actually, it's a huge simplification compared to what we have now. Did you read his blog post?
          For example, when installing your LiveCD to disk, you would simply copy the btrfs deltas. That's it. The files on the LiveCD and your disk would be identical, no need to wait half an hour for your package manager to install packages.
          Some distributions (like openSUSE) have already experimented with such install images (KIWI, in the case of openSUSE). Instead of installing a system completely, package by package, the installer has the option to download and install an image which already has a lot of stuff pre-installed, and only install/remove packages from that point onward.
          This is much better both for bandwidth (1 compressed image takes less space than all the various RPMs used to create it) and for processing (all the RPMs are already pre-installed, their pre & post scripts run, and the rpm database updated on the pre-installed image).
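          In btrfs terms that could look roughly like this, assuming the install medium carries read-only btrfs snapshots (paths and names invented):

            # initial install: replicate the pre-built image onto the target disk
            btrfs send /run/install-medium/usr-v1 | btrfs receive /mnt/target/

            # later upgrade: ship only the delta between two image versions
            btrfs send -p /run/install-medium/usr-v1 /run/install-medium/usr-v2 \
                | btrfs receive /mnt/target/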



          Also, people commenting in this thread need to realise that the proposal of Lennart & co is indeed on another level: an *image* level.


          There's no discussion about doing away with RPMs/DEBs/etc. They are going to stay around and still be the tools of choice for building the base system.
          To take the Rhythmbox example discussed earlier:
          - Actually, Rhythmbox will still typically get distributed (as RPM/DEB) with your base Fedora/Ubuntu/whatever. Small- and mid-scale projects won't change much. Things will change for very large-scale projects (full office suites like LibreOffice could be an example). In addition to being a piece of software distributed by your distribution (and being nicely integrated with everything down to the minute details, like SUSE's spin including a special theme to match the rest of the desktop), they could also be distributed the way commercial vendors distribute their software under shrink wrap: in a generic way that doesn't require the customisation of a distribution maker.

          - Commercial software, or big large-scale free software projects, will be able to function exactly like commercial games on Steam: they don't give a fuck about what distro they are running on. It's Steam's job to provide a uniform environment they can run on, and Steam's job to make the missing libraries available (on an OS like SteamOS, the Steam client probably is just the game manager; on an ultra-custom OS like Gentoo, Steam will probably provide a whole set of libraries, probably nearly all the client-side libraries of SteamOS). If some game needs some specific middleware, it just packages it along.

          Originally posted by gilboa View Post
          Had you bothered to read my comment in full, you'd understand the implications of no system-wide package management.
          - No centralized secure means to install software.
          - No centralized update management, leaving the system littered with multiple unmanaged pieces of software and libraries. In Linux, if a vulnerability is fixed, it is fixed system-wide (as opposed to fixing a single copy of one application).
          - No centralized secure means to uninstall software. A failure in a 3rd-party software uninstaller can leave the system littered with orphan libraries and files - let alone a trashed registry.
          So there is no problem with cleanly installing/uninstalling packages:
          - that is still done at the level of your base OS, using your favorite package manager (zypper, yum, aptitude, etc.)

          Such software packs behave as completely separate sub-volumes (more or less like completely different partitions).
          To install one, you just add a separate sub-volume. To uninstall it, you just erase the sub-volume (see the sketch below).
          We're very far from the Windows world of thousands of incompatible installers all putting their files $DEITY knows where.
          (The closest thing to this in the Windows world is "portable" software: software that runs from a USB stick. You just plug/unplug the stick as needed.)
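          A rough sketch of that install/uninstall cycle with the btrfs tools (the subvolume and file names are invented for illustration):

            # "install": receive the vendor-provided subvolume next to the base OS
            btrfs receive /var/lib/apps < bigapp-2.1.btrfs

            # see what is currently deployed
            btrfs subvolume list /var/lib/apps

            # "uninstall": drop the subvolume; no files are left behind
            btrfs subvolume delete /var/lib/apps/bigapp-2.1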

          Your base Fedora is still managed by you, with its own package database in its own /usr/share/rpm, etc.

          But if you choose so, you can decide to use software distributed over Steam. Steam maintains its own base environment (it's up to you to decide whether you trust them, security-wise), and gets software as "whole-partition packages" running on top of it.

          Don't trust a provider anymore? Instead of going through a grueling uninstall procedure like on Windows, with Lennart's proposal you just wipe the sub-volume.

          The whole point of Lennart's proposal is leveraging the capabilities of BTRFS (among others, sub-volumes that behave like separate partitions) and containers (including systemd's capacity to very quickly assemble one). You want to run a big piece of software that isn't packaged in your distro? systemd will simply create a chroot on the fly that mounts the relevant software at the correct mount points and launches it.
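          With today's tools that is essentially a systemd-nspawn invocation; a minimal sketch, with invented paths:

            # throwaway container: the app's subvolume becomes the root,
            # the host's /home is bound in, and the app's tree stays read-only
            systemd-nspawn --directory=/var/lib/apps/bigapp-2.1 \
                           --bind=/home \
                           --read-only \
                           /usr/bin/bigapp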


          You also need to realise that most of the users around here aren't the main target audience for this.

          Regulars at forums like Phoronix and /. are still going to use highly customised systems, where they carefully select packages from the base distro and 3rd-party repositories (like SUSE's OBS, like Packman, like Ubuntu's PPAs), building a one-off system which best suits their individual needs.

          Grandmas also probably won't be the target audience. They'll just run whatever came with the computer and not bother much (a ChromeOS-style OS is probably the best fit for them).

          It's the rest: the admins who have to deal with large-scale deployments and with huge commercial packages whose integration with the base OS isn't stellar.
          - Imagine: you deploy the same CentOS 7 image everywhere in the university. The lab that needs some weird custom package to interface with the lab's analysis equipment will just use a separate layer, or stack of layers, that provides just that. (That's happening way too often at universities. Lennart's proposal would be a real blessing.)
          - Imagine: you're a gamer... well, you do it already today with Steam. Lennart's proposal is just about making this a bit more standard and well organised, by leveraging a combination of btrfs/containers/systemd.
          - Embedded Linux deployment: this could also impact firmware deployment, or upgrades of minimalist systems (ChromeOS). If it is done in a standard manner, it could help simplify and increase re-use in these systems. Currently each embedded Linux vendor has its own specific home-made solution. Things will get better if there is an "official Linux way" to do things.

          The best part: this proposal seeks to do it
          - in a non-breaking way. The base system continues to work the same way (RPMs, etc.); this is only for large 3rd-party software suites (the things that currently force a vendor to either support a single target (only RPMs for CentOS 5 are available) or ship a .run that bleeds files everywhere).
          - in a second non-breaking way: each piece of software still uses the same /usr, /home, etc. directories. It's the job of the container to have the correct environment mounted at each point.
          - in a standard way (it attempts to bring the kind of organisation that the LSB brought to distributions themselves to the integration between distros and 3rd-party software).
          - leveraging as much modern technology as possible to simplify the work. Whereas older solutions required quite some work (some basically re-invented a second package management system alongside the basic RPM/DEB), this proposal uses a combination of btrfs, lxc, and systemd's ability to quickly set up containers to simplify much of the work.



          • Originally posted by michal View Post
            Ok, I might be wrong here, but IIRC logind didn't exist before systemd - so systemd absorbed nothing here.
            Ah, you're right - I got the facts slightly wrong. logind was always a systemd project, but only in systemd v205 did systemd (the init daemon) become a hard dependency. In that context, systemd becoming a hard dependency is much less surprising, though I stand by my point that if it were a non-systemd project, they would have looked at supporting other init daemons as well.

            The two projects that I'm aware of that were absorbed by systemd are udev and ConsoleKit. Both projects were developed by systemd developers before. IIRC you should be able to build and use udev without systemd, and you can also build an old ConsoleKit http://cgit.freedesktop.org/ConsoleKit/tree/ - it's there, no one deleted it. It's just not maintained anymore.
            Being unmaintained is the same as being dead - even if it works now, sooner or later something it depends on will change, or a vulnerability will be discovered, and it will become impractical to use.
            The systemd folks claim that udev can be used without systemd, but it looks like that is becoming increasingly difficult.[1][2]

            Originally posted by Lennart Poettering
            Anyway, as soon as kdbus is merged this is how we will maintain udev; you have ample time to figure out some solution that works for you, but we will not support the udev-on-netlink case anymore. I see three options: a) fork things, b) live with systemd, c) if you hate systemd that much, but love udev so much, then implement an alternative userspace for kdbus to do initialization/policy/activation.
            That makes it pretty clear that there's absolutely no interest in supporting the use of systemd projects outside of systemd; LP has an 'all or nothing' philosophy and is outright hostile to non-systemd systems. Most open source projects are more flexible about this - Gnome and KDE applications work independently of the DE, and eudev (despite being a Gentoo project) explicitly states that they're interested in working with other distributions.

            [1] http://www.phoronix.com/scan.php?pag...tem&px=MTczNjI
            [2] http://www.phoronix.com/scan.php?pag...tem&px=MTI1NTE



            • Originally posted by anda_skoa View Post
              But what would be the gain in doing that?

              We have plenty of examples of either the address changing, thus enabling translation between protocols, or protocols being versioned.
              What would be the reason to handle that for the most important system IPC mechanism differently?

              Cheers,
              _
              if you didn't delete (or ignore) the last part of my comment, about the direction where i see a loophole, you'd have the answer to your question. but, let me repeat it again.

              if the world was comprised only of OSS, as it is now, the idea would be awesome. OSS projects follow one of three paths in most cases:
              1. they succeed and get maintained with care
              2. they succeed just to be superseded by something better and go extinct
              3. they go extinct at the start
              in all 3 cases, i couldn't find one serious problem with this setup. if i did, i'd be lying. the idea up to here not only makes sense, it's awesome

              the problem will show in how loosely the runtime dependency can be specified by commercial vendors, and this is where i see frameworks in sandboxes as much more suitable, since those state the limits better. now, the 3 paths commercial software follows:
              1. they succeed in putting out something that works, and as long as it is generating money... who cares. development costs money, and no one will pay more if they fix some random security bug
              2. they succeed temporarily, but in most cases they can't be superseded because people will still need access to old data, which is in most cases proprietary. this application will not be maintained
              3. they go extinct at the start
              here, 1 and 2 impose a whole shitload of unmaintained custom snapshots

              if i look at some of the work my commercial-software friends did, i want to snap their heads off more often than not, while they just brush it off with "nobody will pay me more if i do it right, but i'll sure as hell need more work and study... and there are deadlines, i barely keep up this way". that kind of approach will simply generate zillions of unmaintained snapshots put together by mostly clueless people, and each will more or less be the first working version they put together while blindly tapping in the dark. now, in the worst-case scenario, one vendor produces a bad snapshot that is incompatible and not updated; the rest of the vendors simply say "uggh, vendor A will definitely keep better stability than the community" and start depending on that bad snapshot. now, each admin with the bad luck of an environment that requires vendor A's solution is stuck with problems for life.

              don't misunderstand my poking holes as saying "this tech is bad, let's fscking throw it away". it isn't; it just fails to specify the worst-case scenario, and the range it does specify is too broad. tech is only good if enough people question every aspect of it. if people simply blindly agree, then something is terribly wrong. in this case, a runtime can be anything up to a whole distro, since no limits are set on what is what. and that means it is only half a plan and half personal interpretation. and the 2nd half will be really badly misinterpreted in the commercial sector, where cost > quality and support = cost
              Last edited by justmy2cents; 02 September 2014, 11:56 AM.



              • Originally posted by justmy2cents View Post
                if you didn't delete (or ignore) the last part of my comment, about the direction where i see a loophole, you'd have the answer to your question. but, let me repeat it again.
                I did not ignore it, but it was about something other than the point I was addressing, and still is.

                I am well aware of differences in development style, goals, etc. under different circumstances, but I don't see how that matters in this case, where the communication channel is primarily provided by FOSS solutions and usable by anyone.

                From history we have TCP/IP and HTTP as examples in that category, and they seem to have worked fine for all kinds of developers on either side of the server/client spectrum.

                Or, as I mentioned earlier, the several-decades-stable X11 protocol, implemented in FOSS and proprietary software alike, again on both server and client side.

                I therefore have a hard time believing anyone would gain from purposefully making a core communication mechanism of a platform less stable or less long-lived.

                What would, e.g., developers of switches gain by making it impossible to route TCP/IP from one version to another?
                What would developers of a core message-passing system gain from doing the equivalent?

                Cheers,
                _



                • Originally posted by anda_skoa View Post
                  I did not ignore it, but it was about something other than the point I was addressing, and still is.

                  I am well aware of differences in development style, goals, etc. under different circumstances, but I don't see how that matters in this case, where the communication channel is primarily provided by FOSS solutions and usable by anyone.

                  From history we have TCP/IP and HTTP as examples in that category, and they seem to have worked fine for all kinds of developers on either side of the server/client spectrum.

                  Or, as I mentioned earlier, the several-decades-stable X11 protocol, implemented in FOSS and proprietary software alike, again on both server and client side.

                  I therefore have a hard time believing anyone would gain from purposefully making a core communication mechanism of a platform less stable or less long-lived.

                  What would, e.g., developers of switches gain by making it impossible to route TCP/IP from one version to another?
                  What would developers of a core message-passing system gain from doing the equivalent?

                  Cheers,
                  _
                  it is not about communication. i think i was specific that i named it as one of the worst-case scenario examples. if i didn't, then... my bad.

                  while the idea is sound and awesome, the implementation has no clear guidelines on what is de facto and what is personal interpretation

                  i'll make a concrete example here. lennart specifies gnome could produce an extra snapshot for each version, and that is outright sacrilege. the spec should instead say you can only build something-x.y where you guarantee backward compatibility with the previous something-x.z, thus making the previous snapshot replaceable since they are API/ABI compatible. if they are not, then provide something-(x+1).1. but it should also specify how long-term a snapshot must be.
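                  to make the rule concrete, a hypothetical check (runtime names invented, this is only a sketch of the policy i mean):

                    # policy sketch: x.y may replace x.z only inside the same series x,
                    # because backward API/ABI compatibility is guaranteed within a series
                    can_replace() {
                        old="$1"; new="$2"              # e.g. gnome-3.10 and gnome-3.12
                        [ "${old%.*}" = "${new%.*}" ]   # same series => replaceable
                    }

                    can_replace gnome-3.10 gnome-3.12 && echo "gnome-3.10 can be retired"
                    can_replace gnome-3.12 gnome-4.0  || echo "gnome-4.0 is a new runtime"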

                  i will use gnome here as an example, and please don't start a flamewar on this: i'm not bashing gnome, just stating facts. gnome-3 is an evolving version, where they decided it will be so. this is good and bad. on one side, gnome can test various things, making gnome better; on the other, API/ABI stability suffers.

                  so, in the case of a project with an unstable API/ABI, either separate the non-stable parts out of the runtime snapshot or don't provide a snapshot, aka REJECTED by some runtime-maintaining org. this way we would always have 1 gnome runtime for 3.x, and not 20 in 10 years or so just because one of the apps always decided to depend on the previous version since the api/abi changed in one of the libs. there should be a guideline for when/how to prepare a runtime snapshot and what that snapshot should contain.

                  another concern is who cleans up unused snapshots. let's say appX used gnome-3 and has now updated to gnome-4. if appX was the only app that depended on gnome-3, there should be a way to know that fact so gnome-3 can simply be removed. this is not a bug, just missing from the proposal and addable at any time.
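                  a hypothetical cleanup pass (directory layout and manifest format invented):

                    # delete every runtime subvolume that no installed app still declares
                    for rt in /var/lib/runtimes/*; do
                        name="$(basename "$rt")"
                        if ! grep -qx "runtime=$name" /var/lib/apps/*/manifest 2>/dev/null; then
                            echo "no app references $name, removing it"
                            btrfs subvolume delete "$rt"
                        fi
                    done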

                  now, gnome would probably follow a statement like this to the letter. commercial vendors?

                  another case is the randomness of what a runtime is. there should definitely be some stable structure of default runtimes, so vendors can't bypass it and simply introduce a whole copy of their running distro as one. there is, for example, a kernel runtime, a development runtime, ... and those should never be superseded by some custom runtime, since the API/ABI in those is predictable and known
                  Last edited by justmy2cents; 02 September 2014, 12:50 PM.



                  • Originally posted by justmy2cents View Post
                    if i look at some of the work my commercial-software friends did, i want to snap their heads off more often than not, while they just brush it off with "nobody will pay me more if i do it right, but i'll sure as hell need more work and study... and there are deadlines, i barely keep up this way". that kind of approach will simply generate zillions of unmaintained snapshots put together by mostly clueless people, and each will more or less be the first working version they put together while blindly tapping in the dark.
                    This is a real problem. I used to have a closed-source printer driver that could only work with a specific version of libc and gtk1. You get the same problems now if you install closed-source software that uses e.g. gtk 3.4 (given that gtk 3.x versions are not parallel-installable).

                    Lennart suggests a very elegant solution to this problem: each runtime is self-contained and always parallel-installable. In the aforementioned scenario you could install the correct version of gtk 3.x and have the closed-source application work as long as that version works. This makes things simpler for both the user and the admin.

                    The user can keep using their closed-source application.

                    The admin can upgrade the system without holding back for fear of breaking the application. A security-conscious admin will use cgroups and namespaces to create a jail for the closed-source app and its outdated runtime, so they cannot connect to the network or access the file system in an insecure fashion.
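                    With a recent systemd, a transient unit with sandboxing options could approximate such a jail; a sketch with an invented app path (these options rely on namespaces that systemd sets up alongside the unit's cgroup):

                      # no network and a read-only view of the OS for the legacy app
                      systemd-run --property=PrivateNetwork=yes \
                                  --property=ProtectSystem=strict \
                                  --property=ProtectHome=read-only \
                                  /opt/legacyapp/bin/legacyapp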

                    Looking forward to seeing the results of this work!



                    • Originally posted by BlackStar View Post
                      This is a real problem. I used to have a closed-source printer driver that could only work with a specific version of libc and gtk1. You get the same problems now if you install closed-source software that uses e.g. gtk 3.4 (given that gtk 3.x versions are not parallel-installable).

                      Lennart suggests a very elegant solution to this problem: each runtime is self-contained and always parallel-installable. In the aforementioned scenario you could install the correct version of gtk 3.x and have the closed-source application work as long as that version works. This makes things simpler for both the user and the admin.

                      The user can keep using their closed-source application.

                      The admin can upgrade the system without holding back for fear of breaking the application. A security-conscious admin will use cgroups and namespaces to create a jail for the closed-source app and its outdated runtime, so they cannot connect to the network or access the file system in an insecure fashion.

                      Looking forward to seeing the results of this work!
                      ok, either my english is at fault here or i really don't know. i know what it proposes and i understand the benefits exactly. i also think something like this is really awesome and needed. i just don't agree with the broadness of interpretation and the potential for abuse.

                      btw, if you go and read the comments on G+, a whole lot of people came up with the same concern as me. it is nice and dandy, but incomplete. a proposal like this simply shouldn't fly until they at least put it on more stable ground than just how you name runtimes.

                      hmmm, it works for gtk-1; i agree, to the point where i would almost agree it also solves world hunger. but he also says gnome-3.x can create a runtime for each version, without restriction on when/how. welcome to the world where a 100kb appX can carry 20GB of dependencies not otherwise required by anything else, since that app will never update to a new runtime. i'm scared to think something like a runtime could be made with an unstable api/abi. that is the whole point of my concern: anything up to a whole distro, no matter how personalized, can be declared a runtime

                      i went into this more in my previous comment.

