Lennart Poettering Talks Up His New Linux Vision That Involves Btrfs


  • Originally posted by RahulSundaram View Post
    The emphasis on shared libraries was brought on by the many man-months it took to fix a security issue in zlib when static linking or bundling were more common.
    The great thing about Windows is that zlib is a DLL... but every piece of crap program has its own copy.

    I lost count of the number of zlib.dlls I had to replace on my old XP machine when there was a major security hole a while back.
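
    As a small illustration of why every bundled copy has to be replaced separately (my own sketch, assuming the zlib development headers are installed): a program that bundles or statically links zlib keeps running its frozen copy until the application itself ships an update, while a shared libz picks up a security fix system-wide.

    #include <stdio.h>
    #include <zlib.h>   /* zlib's real API: ZLIB_VERSION and zlibVersion() */

    /* ZLIB_VERSION is the zlib the program was built against; zlibVersion()
     * is the zlib code actually running.  With a shared libz the two can
     * diverge after a library update; with a bundled or statically linked
     * copy the runtime version stays frozen until the app is rebuilt. */
    int main(void)
    {
        printf("built against zlib %s\n", ZLIB_VERSION);
        printf("running with  zlib %s\n", zlibVersion());
        return 0;
    }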



    • Originally posted by RahulSundaram View Post
      Look it up. It is well documented. The packaging guidelines from distributions that advise against static linking and bundling were born from such experiences. Note that this policy usually has several exceptions, so the idea of striking a reasonable balance is already implemented. However, shared libraries aren't the problem. The lack of standard runtimes across distributions is. Several distributions also patch and introduce their own library versioning downstream.
      I had a quick look at a couple of guidelines from different distributions. It does not seem like anyone is demanding the exclusive use of dynamic linking. More like it was formulated in an unfortunate way by some of them. They all do support static libraries, even if it is just for the cases where no shared library can be built.

      And, yes, it is a problem, because by only pointing at what other distros do, one is not looking at what one can do oneself. There will always be some distro that is not going to care about what the rest of the world does, and one will always keep running into this problem. The last thing a distro really should do is to depend on other distros.

      The best chance still is to reduce the number of dependencies, to start working on a standard for those that remain, and to become open to even fully statically linked binaries. The latter is not ideal, but when it means a package is also available in a large, static form, then the package has a good chance of becoming popular beyond its distro, thereby adding to the popularity of the distro itself. This too helps in bringing down the barriers between distros.
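
      A minimal sketch of what such a fully static fallback build looks like, assuming the GNU toolchain and that the static library archives (glibc's included) are installed; the commands in the comment are illustrative, not taken from any distro's guidelines:

      /* hello_static.c - sketch of a fully statically linked binary.
       * Assumed build (GNU toolchain, static libc archives installed):
       *     cc -O2 -static hello_static.c -o hello_static
       * "ldd ./hello_static" should then report "not a dynamic executable":
       * the binary carries no runtime library dependencies and can be copied
       * between distros, at the cost of size and of having to rebuild it
       * whenever one of the embedded libraries needs a fix. */
      #include <stdio.h>

      int main(void)
      {
          puts("no shared libraries required at runtime");
          return 0;
      }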
      Last edited by sdack; 04 September 2014, 02:11 PM.



      • No, we don't need to support static linking. Static linking is for idiots, because it means the only way to fix bugs in a common library is to wait for a new version of your app.

        Hey, guess what? I run Linux because I don't want to run an insecure heap of crap like Windows. Please stop trying to turn it into one.



        • Originally posted by sdack View Post
          The last thing a distro really should do is to depend on other distros.

          The best chance still is to reduce the number of dependencies, to start working on a standard for those that remain, and to become open to even fully statically linked binaries.
          Well, Linux distros do depend on each other, because we are a community and patches etc. flow from one distro to another all the time. Distributions already do static linking when required, but no one is going to do it willy-nilly for all the binaries. That would just import another form of DLL hell into Linux.



          • Originally posted by sdack View Post
            And in the other thread (on systemd) I already explained to someone where /sbin and /usr/sbin originally came from.
            After all the modifications imposed by systemd, the Filesystem Hierarchy Standard finally began to make sense.
            Before that, we had "configuration" in /etc that was changed dynamically at runtime (mtab, and hosts after updating the hostname).
            Additionally, the base system was spread over so many directories that mounting it from NFS was very problematic.

            Systemd enforces a really strict design discipline, but this is exactly what makes a system well engineered.
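
            As a tiny illustration of the mtab example (my own sketch, assuming a systemd-based distro): /etc/mtab is no longer a file rewritten at runtime but a symlink into /proc, so /etc really only holds configuration.

            /* mtab_check.c - show where /etc/mtab points.  On systemd-based
             * systems it is typically a symlink to ../proc/self/mounts, i.e.
             * the kernel's mount table, instead of state rewritten in /etc. */
            #include <stdio.h>
            #include <unistd.h>

            int main(void)
            {
                char target[4096];
                ssize_t n = readlink("/etc/mtab", target, sizeof(target) - 1);

                if (n < 0) {
                    perror("readlink /etc/mtab");   /* regular file or missing */
                    return 1;
                }
                target[n] = '\0';
                printf("/etc/mtab -> %s\n", target);
                return 0;
            }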



            • Originally posted by movieman View Post
              No, we don't need to support static linking. Static linking is for idiots, because it means the only way to fix bugs in a common library is to wait for a new version of your app.

              Hey, guess what? I run Linux because I don't want to run an insecure heap of crap like Windows. Please stop trying to turn it into one.
              No. You assume that your library and your binary come from different sources when really they come from the same source: the maker of your distro. You are also not supposed to have everything statically linked exclusively. You only statically link the smaller libraries, which are unlikely to be found everywhere and also do not have a dependency count as high as, for example, libc or libX. When you then have a bug in such a library, you will need to download new binaries, just like you would need to download a new version of a shared library. This means somewhat more downloads, but it is not going to be a problem, and these will be available on the same day as the fixed library. So then you have a reasonable number of dependencies and distros can begin with a standardization. But when the tree of dependencies is left wild and untrimmed, you cannot create a standard.
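
              A minimal sketch of such selective static linking, assuming the GNU toolchain and using zlib as the stand-in for a "small" library (the build line in the comment is illustrative):

              /* mixed_link.c - statically link only a small library (zlib here)
               * while the big, ubiquitous libraries (libc etc.) stay shared.
               * Assumed GNU toolchain build, with libz.a installed:
               *     cc -O2 mixed_link.c -Wl,-Bstatic -lz -Wl,-Bdynamic -o mixed_link
               * "ldd ./mixed_link" should then list libc but no libz: a zlib fix
               * means redownloading this binary, a libc fix does not. */
              #include <stdio.h>
              #include <zlib.h>

              int main(void)
              {
                  const char msg[] = "statically linked helper, shared libc";
                  unsigned char out[256];
                  uLongf out_len = sizeof(out);

                  /* compress() is served by the statically linked zlib copy */
                  if (compress(out, &out_len, (const unsigned char *)msg,
                               sizeof(msg)) != Z_OK) {
                      fprintf(stderr, "compress failed\n");
                      return 1;
                  }
                  printf("compressed %zu bytes to %lu with zlib %s\n",
                         sizeof(msg), (unsigned long)out_len, zlibVersion());
                  return 0;
              }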

              And in addition to this, you offer fully statically linked packages to work around problems until a standard has been found. Yes, I agree with you that such packages are difficult to fix when a bug is found. You then have to drop the package and download a newer one. But the point is that it can be used with other distros, too, as opposed to not at all.

              Btrfs will only replace old problems with new problems. In the end one will still need to download a package containing binaries and libraries. Btrfs will not avoid the problem of missing libraries when there is no standard. In the worst case a btrfs volume would need to ship an application with all libraries, including libc, to avoid missing ones and version mismatches, and would then not be much different from an ELF file of the same size.

              And just to mention it on the side, statically linked binaries can run about 5%-7% faster than dynamically linked ones, because they require less or no position-independent code (PIC). Gamers are willing to give a kidney for such a bonus, and any distro which does not offer it will get ignored by gamers.
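
              A rough way to see where such a difference would come from (my own two-file sketch, GNU toolchain assumed; file names, commands and the exact percentage are illustrative, and results vary a lot by CPU and workload):

              /* pic_call.c - the "library" side: one trivial function, built twice.
               *     cc -O2 -c pic_call.c -o pic_call_static.o           # no PIC
               *     cc -O2 -fPIC -shared pic_call.c -o libpic_call.so   # PIC
               */
              unsigned long pic_add(unsigned long x)
              {
                  return x + 1;
              }

              /* pic_bench.c - the caller: a tight loop over the external call.
               *     cc -O2 pic_bench.c pic_call_static.o -o bench_static
               *     cc -O2 pic_bench.c -L. -lpic_call -Wl,-rpath,. -o bench_shared
               * Timing both: the shared build pays for PIC code in the library
               * plus a PLT indirection on every call, the static build does not. */
              #include <stdio.h>

              unsigned long pic_add(unsigned long x);   /* from pic_call.c */

              int main(void)
              {
                  unsigned long v = 0;
                  for (unsigned long i = 0; i < 500000000UL; i++)
                      v = pic_add(v);
                  printf("v = %lu\n", v);
                  return 0;
              }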



              • Originally posted by Mat2 View Post
                Systemd enforces a really strict design discipline, but this is exactly what makes a system well engineered.
                Well, this is now getting idealistic. It really just needs to work and not cause trouble. Enforcing strict discipline when most programmers are on average 25 years old or so is of as much value to them as alcohol-free beer, which means that if systemd works they will not care for it, but when it fails it is only a cause for more complaints. You could have implemented it in Ada, too. It is not a selling point for a lot of people.
                Last edited by sdack; 04 September 2014, 06:52 PM.



                • Originally posted by anda_skoa View Post
                  ...Basically all ISVs ship their products in binary form; only very few ship sources only.

                  I doubt that all those freeware and shareware authors whose products one can download on sites such as download.com have submitted their sources and had the hoster build them.
                  More likely they had the resources to create their installers themselves, apparently even without having Google's or BlackBerry's resources.

                  Cheers,
                  _
                  Is there a way that openSUSE's OBS (Open Build Service) can help in this regard? Wasn't its purpose to take some source and build packages for a number of important Linux distros? That way they would only have to work on binary packages once and just distribute whatever OBS outputs.



                    Really awesome proposal for how to create a runtime. It has everything Lennart and his cabal forgot in their otherwise nice project. It touches on three major missing points: What is a runtime? What is the goal of the runtime? What should something like a runtime mean for users and developers?



                    I don't know about other people, so I say this as my viewpoint. If every project approached runtimes in this direction, that would be awesome.



                    • Originally posted by garegin View Post
                      ...Do you know the technical reasons for this issue? PBIs in PC-BSD solve this problem; can't we do the same in Linux? It seems easy enough for such a huge hassle.

                      With Windows not only do I get the latest apps the second they come out, but they backport system software like .NET 4.5.1 all the way back to Vista, which came out seven years ago!
                      In fact PC-BSD used their old PBI format for one reason: the old pkg_* tools in FreeBSD had no real dependency solving, which easily led to ugly situations. So they just wrapped every needed dependency into the PBI container, making PBI software both quite bloated and quite slow to run. Somewhat later they added a dependency finder of their own, making PBIs use compatible libraries from other PBI software, which mitigated the size needs. Nowadays, in version 10.03, PBIs are just wrappers around pkgng packages, mainly containing the package file itself, some graphical presentation for the AppCafe installer and a textual description of the software. The whole AppCafe system reminds me of Ubuntu's Software Center.

