
Lennart Poettering Talks Up His New Linux Vision That Involves Btrfs


  • Originally posted by rdnetto View Post
    But Systemd the project keeps absorbing other projects and integrating them tightly with the daemon, which makes it much harder to use them independently. The obvious example of this is that Gnome 3 depends on logind.
    Ok, I might be wrong here, but IIRC logind didn't exist before systemd - so systemd absorbed nothing here.

    The two projects that I'm aware of that were absorbed by systemd are udev and ConsoleKit. Both were developed by systemd developers beforehand. IIRC you should still be able to build and use udev without systemd, and you can also build the old ConsoleKit http://cgit.freedesktop.org/ConsoleKit/tree/ - it's still there, no one deleted it. It's just not maintained anymore.

    Comment


    • Originally posted by ryao View Post
      Lastly, I think his btrfs plans will require approval from Oracle. ZFS and btrfs are similar enough that I find it unlikely that btrfs is not subject to the ZFS patent portfolio. If it is subject, then the only company that can use the btrfs source code in a product without risk of a lawsuit is Oracle itself. This is much like how only Microsoft could use the Linux FAT code that implemented both short and long names. If Lennart's btrfs ideas get into RHEL without something being done about Oracle's software patents, Red Hat would be an enormous target for Oracle's lawyers.

      Some theorize that Oracle's contribution of the btrfs code without an explicit patent grant makes its reuse in commercial products okay, but I am not certain of that. The patent system exists to stifle innovation for the benefit of those who filed ideas first, courtesy of America Invents, and was not much better before then. While I am not a lawyer, I cannot imagine a legal defense along the lines of "the code was published for reuse under a license that did not give us a patent grant, so we ought to have one" would go very well. That being said, OpenZFS does have an explicit patent grant through the CDDL, so whatever Lennart and his friends create could likely be adapted to use that without risk of producing something that steps on Oracle's ZFS patent portfolio.
      You know... I always find these kinds of attacks disingenuous.

      If Company A has Product X that they're developing out in the open (and thus implicitly want other people to use) but hold a patent portfolio covering it that they haven't set up a patent grant for, what you're saying is that Company A will therefore obviously attack Company B if Company B tries to use Product X.

      This is quite simply not the case. The only reason Company A would litigate in this instance is if they wanted to become the One True Vendor, in which case they have now destroyed any community surrounding Product X and essentially ensured that Company B, and the community that used to surround the product, won't do anything to support it anymore and will move on to other things. As a result, litigation is not in Company A's best interests. That said, Company C can swoop in, buy out Company A, and then litigate against Company B, assuming they don't actually care about the product (see: Java). But at that point you're falling into the "what if" category of thought, and certainly Company C wouldn't buy out Company A to sue over Product Y, which is produced in the open, as opposed to suing over Product X, which they're trying to either kill or see a litigation opportunity in (see: Oracle vs Google).



      • Originally posted by curaga View Post
        I see some unanswered issues in it, mainly those of control. If there is only one GNOME_FOO runtime, who creates it? How can the entire linux world trust that entity to do their job correctly, without backdoors, bugs, or accidental breakage? It would become a single point of failure, and if I were to target malware, that's exactly where I would do so.
        There could be several GNOME_FOO_X runtimes, or a runtime named MY_HOBBIT_Y which would contain roughly the same software, repackaged to be more awesome. However, each application (version) must choose one runtime to use. It is of course likely that the runtime released by the official upstream developers would be the most used.

        Currently the distros are the single entity an end-user must trust. And because we have many of them, the effort and knowledge for maintaining, say, the GNOME packages is split up. No distribution has the same level of competence in the GNOME stack as the upstream GNOME developers. No distro knows the GIMP application as well as the people developing GIMP.

        Originally posted by curaga View Post
        Secondly, the issue of patches. Say I'm an app vendor. I target the GNOME_56 runtime. During development I discover I need to patch a bug in one of the libs in that runtime. Do I
        a) wait until $ALMIGHTY_RUNTIME_VENDOR gets off its ass and releases a GNOME_57 runtime? That may not be an option: it may take far too long on their part, and even if they do it in time, it requires all users to get it.
        b) package a version of it in my app's area, running afoul of "no bundled libs" policies and depriving other apps of the fix in a shared place?
        You do both, of course. First you file a bug against the runtime. Then you bundle the fixed lib so that your users get the fix right away (near impossible in the current distro-controlled application distribution model). Then after some days you nag the runtime people again, maybe providing the patch. If they fix the issue, you remove your bundled lib.
        If after a while they still don't get their shit together, you fork their runtime and ship your own, for your application and maybe others to use.

        Originally posted by curaga View Post
        Third, the difference in build options. Since there is only one $ALMIGHTY_RUNTIME_VENDOR, the one runtime clearly cannot be all at once fast, small, supporting old cpus, requiring new cpus, without extra checks, with extra checks, compiled for various stack and security protections, and without those.
        As explained above, there is not one runtime vendor. Targeting old/new CPUs can be done with different architectures (just like i486 versus i686 now). Also, we don't need less-secure runtimes. There are very few apps for which the extra couple of percent is critical; if you have such an app, bundle the libraries that must be built with special flags.



        • Not bothering with deb/rpm sounds very tempting, but please, no more bugs. udev, for example, is a pain in the ass which should be removed altogether.



          • Originally posted by mike4 View Post
            Not bothering with deb/rpm sounds very tempting, but please, no more bugs. udev, for example, is a pain in the ass which should be removed altogether.
            Another one for the ignore list. You have the time to drag this topic into every single thread remotely related to Poettering, yet somehow don't have the time to do anything remotely productive to fix it.



            • 2. Compiling a kernel is not difficult, especially not for a company, which no doubt has an IT department/staff. Claiming these people are 'fscked' because they need a kernel not shipped with their distro is extremely short-sighted, even laughable... especially considering even a noob can learn very quickly how to do that. I can't imagine that you have any real experience working inside an IT department that deploys Linux on its servers; your comment is pure speculation. In reality it is as simple as building generic kernel packages on a single machine, then using those packages to upgrade all other machines, which amounts to very little work in the end. Depending on how you upgrade your servers, maybe via a custom/local repository, very little work is involved: build packages, put them in your local repo, then upgrade said servers. No one on RHEL, CentOS, etc. is 'stuck' on the shipped kernel version... You are proposing a serious problem that doesn't even exist.
              Swapping out a kernel is dangerous from a stability standpoint. Vendors used to give you kernel updates back in the day, until they realized its perils. You don't have to build a whole new kernel, just the modules; there's also ELRepo. Vendors are not idiots: if providing new drivers were no big deal, they would do so through the system update tool. I remember Chris Fisher (Jupiter Broadcasting) saying he had to deploy Gentoo on servers back in the day, because he needed new drivers for printers at the bank he worked at.



              • Originally posted by anda_skoa View Post
                So, when did the syscall interface for sockets change ABI? I don't think it ever did.
                I think, but could be wrong, that the ABI of libc changed once, but then again an older libc would still work due to the kernel not having changed.

                "Almost" doesn't tell us anything here, the system interface is one of the most stable, also when compared to other platforms.
                Are the API/ABI of the socket syscalls and a clipboard protocol on the same level? We seem to live on different planets.

                Originally posted by anda_skoa View Post
                That doesn't change the fact that libraries such as Qt have always used the existing naming mechanism to make ABI-incompatible versions installable in parallel, and ABI breaks are mapped into bumps of the major version, like you suggested.

                On the note of D-Bus, are you referring to the convenience library for implementing the protocol or the protocol itself?
                The latter has been implemented in other ways than using the convenience library; a change in the ABI of one implementation, e.g. GDBus, does not affect any other implementation.

                The convenience library has not changed ABI even once, btw.

                Cheers,
                _
                While true, these were mitigated by the distro. If I install Fedora, for example, there is a 0.01% chance I would bother deviating from the defaults, since I can just install a new release in 6 months. Installing a new release gives me newer versions of both, and again mitigates the problem.

                I'm talking about the protocol itself. We probably both agree that D-Bus is wonderfully stable, but neither of us can claim it will be stable for the next 10 years. And GDBus won't help if different pieces of software suddenly require different versions; at that point one would be required to run 2 different D-Bus services, where software using the old and the new version won't be able to interoperate.

                And I think you're understanding my point a bit wrong. I don't have doubts about OSS (in a pure OSS world I would love this); my doubts concern commercial software, where most apps go unmaintained. If you allow too much leeway to lazy entities like that, you're bound to be stuck with some prehistoric distro runtime for the next century. That runtime also probably won't be updated. But, here is the catch, this is exactly what runtimes like this or application sandboxes will do: get commercial devs on board.

                If you want an example, just look at Mono in Unity. It's still the same version they started with. The same thing will happen if you allow vendors to just push their runtimes with their software. And before you say "but it works", try asking developers using Unity how they like this fact.

                Personally, I think Lennart's idea http://www.superlectures.com/guadec2...ions-for-gnome is a much better starting point than this, not to mention it overlaps in the runtime department.



                • Originally posted by Luke_Wolf View Post
                  You know... I always find these kinds of attacks disingenuous.

                  If Company A has Product X that they're developing out in the open (and thus implicitly want other people to use) but hold a patent portfolio covering it that they haven't set up a patent grant for, what you're saying is that Company A will therefore obviously attack Company B if Company B tries to use Product X.

                  This is quite simply not the case. The only reason Company A would litigate in this instance is if they wanted to become the One True Vendor, in which case they have now destroyed any community surrounding Product X and essentially ensured that Company B, and the community that used to surround the product, won't do anything to support it anymore and will move on to other things. As a result, litigation is not in Company A's best interests. That said, Company C can swoop in, buy out Company A, and then litigate against Company B, assuming they don't actually care about the product (see: Java). But at that point you're falling into the "what if" category of thought, and certainly Company C wouldn't buy out Company A to sue over Product Y, which is produced in the open, as opposed to suing over Product X, which they're trying to either kill or see a litigation opportunity in (see: Oracle vs Google).
                  The solution is for Red Hat to license the ZFS patents from Oracle, find a way to get the software patents invalidated, or switch to OpenZFS. Leaving things to chance when Oracle is known to be litigation-happy is not a good idea.
                  Last edited by ryao; 02 September 2014, 08:13 AM.



                  • Originally posted by johnc View Post
                    I can't say that .deb or .run files have ever bothered me. And when I need a piece of software it's almost always available in at least one of those two formats.
                    A .deb must be installed as root (and most .run files too), and it can do whatever it wants with your entire system. It is not, and will never be, safe to install .debs from untrusted sources. It is also a huge endeavour for a developer to create one package per distro + distro version + architecture. For these reasons we cannot sanely have 3rd party applications on Linux. Some consider that a problem.



                    • (emphasis mine)
                      Originally posted by Isedonde View Post
                      Using Lennart's new idea:
                      * Rhythmbox depends on the GNOME_3_12 runtime, and links (among others) against libgio from that runtime
                      * Gnome fixes a CVE in libgio. Gnome releases updated GNOME_3_{8,10,12,whatever} runtimes containing the new libgio, without breaking ABI, so Rhythmbox uses the new libgio without recompiling and the CVE is fixed.
                      * Gnome does ABI breaking changes to some library in the gnome runtime, releases GNOME_3_14 (but somehow fails to update Rhythmbox for it). Now Rhythmbox "3.12" still works, because the old GNOME_3_12 is still available, containing the ABI compatible libgio.
                      * Same as before, but this time GNOME_3_12 runtime is no longer available because 10 years have passed: I can no longer use rhythmbox (required GNOME_3_12 runtime not found).

                      Using sonames:
                      * Rhythmbox links against libgio-2.0.so, provided by some distro package libgio2.
                      * Gnome fixes a CVE in libgio. Gnome releases updated libgio-2.0.so source code without breaking ABI. The distro provides their users with an updated libgio2 package and Rhythmbox uses the contained libgio-2.0.so library without recompiling, so the CVE is fixed.
                      * Gnome does ABI breaking changes in libgio. Thus, they must choose a new soname and call it libgio-3.0.so, which is shipped in my distro in the libgio3 package. The old Rhythmbox still works, because the libgio2 package is still available, containing the libgio-2.0.so library.
                      * Same as before, but this time libgio2 was dropped from the distro because 10 years have passed: I can no longer use rhythmbox (shared library libgio-2.0.so not found)
                      The difference is that the runtimes will be shipped (at least nearly) directly to the user by the runtime vendor. Thus the repackaging work does not have to be done once per downstream/distro, allowing fixes to reach users independently of distro scheduling.
                      Currently distros are extremely hesitant to update libraries during the lifetime of a release, since doing so could break any and all software in the entire system; an update to glib could render the system unbootable. With the split into the root filesystem and runtimes, this is no longer a concern, again facilitating fixes and improvements reaching users faster.
                      It will also become feasible to ship new applications (and runtimes) to "old" distro versions, so that one does not have to always update to the very latest distro version in order to have applications that are <9 months old.

