Experiment Underway To Improve Gentoo's Binary Package Handling With Portage


  • Experiment Underway To Improve Gentoo's Binary Package Handling With Portage

    Phoronix: Experiment Underway To Improve Gentoo's Binary Package Handling With Portage

    Gentoo developer Andreas Hüttel "Dilfridge" is experimenting with binary Gentoo package hosting and finding out what improvements to Portage are needed for making it more of a reality at a larger scale...

    https://www.phoronix.com/scan.php?pa...n-Packages-Exp

  • #2
    Interesting. Of course it kind of jeopardizes the Gentoo approach, but still, in some cases, with elderly or low-power CPUs, I am happy to have e.g. LibreOffice or browser binaries, as they take a long time to compile and chroot compiling on a Zen machine isn't always handy. Moreover, when reporting a bug it can be good to check whether an official build exposes the same error, before complaining and then noticing that it was due to a compiler mess-up.
    Stop TCPA, stupid software patents and corrupt politicians!



    • #3
      The ability to mix binary and source packages could prove useful, and it is a feature that FreeBSD, which inspired the ports collection in Gentoo, lacks. Have code that is executed frequently, like the C library and the kernel, compiled with -march=native -O3, and code that is updated frequently, like browsers, or run seldom, distributed as a binary.
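
      As a rough sketch of how such a mix can look with stock Portage (the flags and package atoms below are illustrative examples, not a recommendation, and --getbinpkg assumes a binary package host is configured):

      # /etc/portage/make.conf -- hot code is compiled locally with aggressive flags
      COMMON_FLAGS="-march=native -O3"
      FEATURES="buildpkg"    # also keep binary packages of everything built locally

      # Pull prebuilt binaries only for large, frequently updated packages:
      emerge --getbinpkg --usepkg www-client/firefox app-office/libreoffice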



      • #4
        Originally Gentoo was meant as a distribution build system (a "meta" distribution), somewhat like how Nix is often used today as a container build system. I don't know of anyone but Google using it that way at scale (Chrome OS). I kind of use it that way, but not for entire server farms or thousands of desktop machines at once or anything like that. Mostly a custom config for a box or two here and there, built on my big desktop (the "build/binpkg server"). Works nicely, especially for machines with little storage and RAM, like a 256MB RAM SBC. Slurps way less RAM than most alternatives.
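
        A minimal sketch of that "build/binpkg server" arrangement with plain Portage (the hostname and path are hypothetical placeholders):

        # On the build host (/etc/portage/make.conf): produce binary packages
        FEATURES="buildpkg"

        # On the small client (/etc/portage/make.conf): fetch them instead of compiling
        FEATURES="getbinpkg"
        PORTAGE_BINHOST="https://buildhost.example/packages"

        # A world update on the client then prefers the prebuilt packages:
        emerge --update --deep --newuse --usepkg @world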



        • #5
          There was a distribution called RR4/RR64 and then Sabayon that was Gentoo with the option to install binary packages via a package manager called Entropy. I ran it for a while and really liked it, until the Entropy dependency solver seemingly broke: you'd try to install some simple utility and it would insist on installing/updating 300+ unrelated packages. I don't know what happened to the distro after I stopped using it. The founder got a job as an SRE at Google, so maybe he just didn't have time anymore.

          This seems very overdue as FreeBSD has worked this way for IDK decades?



          • #6
            While I use makepkg and the AUR on Arch a lot for optimizing my own packages, providing something more optimized in binary form is a great start that saves a lot of time for non-performance-sensitive packages (but unfortunately this initiative is not about that, at least not currently).

            I'd argue that there is no source vs. binary dichotomy; it is about offering an optimized baseline and making it as easy in terms of dependency handling, and as safe as possible from a breakage point of view, for people to customize their installation to their own liking. Try to maintain a custom-compiled install of a KDE-git desktop on Arch to understand my point (and handle all the qt5 updates, new dependencies, etc.).



            • #7
              Originally posted by ms178
              While I use makepkg and the AUR on Arch a lot for optimizing my own packages, providing something more optimized in binary form is a great start that saves a lot of time for non-performance-sensitive packages (but unfortunately this initiative is not about that, at least not currently).

              I'd argue that there is no source vs. binary dichotomy; it is about offering an optimized baseline and making it as easy in terms of dependency handling, and as safe as possible from a breakage point of view, for people to customize their installation to their own liking. Try to maintain a custom-compiled install of a KDE-git desktop on Arch to understand my point (and handle all the qt5 updates, new dependencies, etc.).
              Using KDE's kdesrc-build helps a lot with that. While it isn't the Arch Way, it'll build KDE separately from your OS using your installed libraries, so you don't get sucked into using bleeding-edge dependencies like when using git-based PKGBUILDs.
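
              For anyone curious, the basic workflow is roughly this (the module name is just an example; the details live in the generated kdesrc-buildrc):

              # one-time setup: generates a starter configuration
              kdesrc-build --initial-setup

              # build one application (and its KDE dependencies) from git,
              # installed under your home directory rather than into the OS
              kdesrc-build dolphin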



              • #8
                Sabayon again?



                • #9
                  I have mixed feelings about the binary packages available in gentoo.

                  I think in instances where the package takes an unreasonably long time to compile, or has a built-in update mechanism that won't function when the app is compiled (LibreOffice, Firefox, etc.), it makes sense to have binaries available as an option, but Gentoo has already had those for years (*-bin packages).

                  The introduction of the kernel binaries made sense too. Kernel config can be challenging for new users and (generally speaking) a finely tuned kernel config has only marginal benefits (not that it'll ever stop me). A binary kernel download is also a great option for a recovery system.

                  The biggest benefit and the unique function of Gentoo is customisability. USE flags are incredibly powerful, and binary packages are inherently incapable of supporting them. I'm all for the inclusion of select binaries, but Gentoo should always be a compile-first distro.
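
                  For readers unfamiliar with Gentoo: USE flags are per-package feature toggles set in /etc/portage/package.use, chosen at build time, whereas a binary package has to ship one fixed combination. The packages and flags below are only illustrative examples:

                  # /etc/portage/package.use -- pick features per package at build time
                  media-video/ffmpeg vaapi -vdpau
                  net-misc/curl ssl -ldap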



                  • #10
                    The primary reason people want binary packages in Gentoo is the performance cost of compiling C++ code. Instead of extending support for binary packages in Gentoo (with hardcoded USE flags), a proper solution to that performance problem would be a central/worldwide repository mapping preprocessed C++ code to compiled objects (when compiling open-source projects not containing any proprietary code). To query the repository for the existence of a compiled object, only a hash code (256 bits = 32 bytes) needs to be sent over the Internet between machines.
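
                    The lookup key described here is essentially what distributed compiler caches compute; a minimal sketch of the idea with ordinary tools (foo.cpp and the flags are placeholders) might be:

                    # Hash the preprocessed translation unit together with the compiler
                    # version and flags; the digest is the key sent to the shared cache.
                    { g++ --version; echo "-O2 -fPIC"; g++ -E -O2 -fPIC foo.cpp; } | sha256sum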
