Fedora Stakeholders Back To Discussing Raising x86_64 Requirements Or Using Glibc HWCAPS


  • #11
To avoid an explosion of packages, it is enough to select packages right after the hardware-detection step of the ISO install, so that only what a specific platform needs gets installed. The problem arises when a new hardware profile appears later (say, the addition of a more capable GPU or a newer CPU) and the installed, optimized profile no longer covers those features. In that case the extra software can be pulled in after the hardware is re-checked at boot, which keeps things flexible. However, very old and obsolete hardware should be dropped, given how fast technology moves. What I mean is that some kinds of hardware age faster than others of the same vintage: sound cards generally live longer than video cards, even though the PCI bus is more obsolete than PCI-E. So the different levels of obsolescence have to be weighed; in some cases a radical, across-the-board end of support would be the wrong answer.
Another kind of solution is to split support between legacy and newer hardware, again for flexibility's sake.
    Last edited by Azrael5; 19 June 2021, 06:04 AM.

    • #12
      Originally posted by Eumaios View Post
      But I don't understand when Michael writes that the HWCAPS solution "leaves Linux distributions then to building multiple packages or inflating their packages by carrying multiple builds within there in order to have the libraries comply with the different x86_64 microarchitecture feature levels."

      The distributions now don't even have those libraries? How large are the libraries, that it would make the distributions larger? Or, could installing the libraries be done after installing the base system, so that the cost, so to speak, is on the user rather than the provider?
Every package would have to be compiled at least two times: once for the old hardware and once for x86_64-v2. That means double the computation load on the build servers and double the needed disk space on the FTP servers. On your computer, you only need one version of each package.

      • #13
        Originally posted by andre30correia View Post
        they should kill old cpu
This is really not that simple. You have civil infrastructure usage. There are open-source FPGA soft processors these days that go up to i586. There seems to be roughly a three-decade gap between what you can buy off the shelf and what you can get as an FPGA implementation of x86.

So "what is old is new again" kind of applies to computer hardware, as designs disappear as ASICs and reappear as FPGAs. At some point we can bet on FPGA x86_64 turning up, and it won't be as feature-complete as the current versions. Please note the first x86_64 CPU was released in the year 2000; it is only in the next year or so that the first x86_64 instruction set could be implemented on an FPGA without major patent risk.

If we do end up with FPGA x86_64, support for the first generation of x86_64 may be able to carry on, with the new hardware being FPGAs.

Heavily used CPU designs have a habit of becoming kind of immortal on FPGAs; it's a repeating trend once the patents protecting a design run out.

        • #14
          Originally posted by chuckula View Post
          I get the urge to retain compatibility with older hardware, but adding HWCAPS is vital to getting performance out of new hardware.
          Unfortunately in the context of Fedora I don't get the urge. There are plenty of distros that support everything back to when the comet that killed the dinosaurs hit. That isn't Fedora though. We need a few distros focused on the bleeding edge to the point old hardware gets phased out. For me Fedora is one such distro.

          • #15
            Originally posted by Eumaios View Post
            Hello, Linux newbie and absolutely not a developer, but really fascinated by the Linux phenomenon.

            I've looked up what HWCAPS are and believe I understand it at a basic level. And it does seem to offer a more flexible solution than raising the base level. But I don't understand when Michael writes that the HWCAPS solution "leaves Linux distributions then to building multiple packages or inflating their packages by carrying multiple builds within there in order to have the libraries comply with the different x86_64 microarchitecture feature levels."

            The distributions now don't even have those libraries? How large are the libraries, that it would make the distributions larger? Or, could installing the libraries be done after installing the base system, so that the cost, so to speak, is on the user rather than the provider?

            I apologize in advance if these are stupid questions.
Most Linux developers are really anal about code and packaging efficiency. That is how we came to a situation where a typical Linux distro is half the size of a stock Windows install and still supports more hardware (in my experience at least).

So any extra packages containing more libraries have to be justified before going into the stock install of said distro. Anything judged superfluous is discarded in the name of efficiency.

Also, keep in mind that, while there are a lot of paid developers working on Linux distros, a good chunk of the work is still done by volunteers, and they will of course avoid extra work, like the situation we have here, without good reason.
            Last edited by M@GOid; 18 June 2021, 07:47 AM.

            • #16
              Originally posted by Eumaios View Post
              Hello, Linux newbie and absolutely not a developer, but really fascinated by the Linux phenomenon.

              I've looked up what HWCAPS are and believe I understand it at a basic level. And it does seem to offer a more flexible solution than raising the base level. But I don't understand when Michael writes that the HWCAPS solution "leaves Linux distributions then to building multiple packages or inflating their packages by carrying multiple builds within there in order to have the libraries comply with the different x86_64 microarchitecture feature levels."

              The distributions now don't even have those libraries? How large are the libraries, that it would make the distributions larger? Or, could installing the libraries be done after installing the base system, so that the cost, so to speak, is on the user rather than the provider?

              I apologize in advance if these are stupid questions.
It means that everything has to be built 4 times (once for each HWCAPS level). They have the libraries now, but they're just the v1 unoptimized ones. Adding in the new ones would make the distribution about 4 times larger on the server side (4 of everything) and 2 to 4 times larger on the user side (depending on whether v1-v4 all ship together or whether you can install just v1 plus whatever level you need). Because of the 4-times part the cost will always be on the provider/distribution and there really is no way to lessen it; if package managers let us pull only what we need, that will at least lessen the cost on the user side.
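If you want to see which of those four levels your own CPU meets, here is a minimal sketch (my own illustration, not from this thread) using GCC/Clang's __builtin_cpu_supports; the per-level feature lists are a simplified approximation of the x86-64-v2/v3/v4 definitions, not the full psABI lists.

/* check_level.c - hypothetical sketch: approximate the highest x86-64
 * microarchitecture feature level of the running CPU.
 * Build with: gcc -O2 check_level.c */
#include <stdio.h>

int main(void)
{
    __builtin_cpu_init();

    /* x86-64-v2: roughly Nehalem-era additions (simplified list) */
    int v2 = __builtin_cpu_supports("ssse3") &&
             __builtin_cpu_supports("sse4.1") &&
             __builtin_cpu_supports("sse4.2") &&
             __builtin_cpu_supports("popcnt");

    /* x86-64-v3: roughly Haswell-era additions (simplified list) */
    int v3 = v2 &&
             __builtin_cpu_supports("avx2") &&
             __builtin_cpu_supports("bmi2") &&
             __builtin_cpu_supports("fma");

    /* x86-64-v4: a subset of AVX-512 */
    int v4 = v3 &&
             __builtin_cpu_supports("avx512f") &&
             __builtin_cpu_supports("avx512bw") &&
             __builtin_cpu_supports("avx512vl");

    printf("highest level this CPU satisfies: x86-64-v%d\n",
           v4 ? 4 : v3 ? 3 : v2 ? 2 : 1);
    return 0;
}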

              • #17
                Originally posted by skeevy420 View Post
It means that everything has to be built 4 times (once for each HWCAPS level). They have the libraries now, but they're just the v1 unoptimized ones. Adding in the new ones would make the distribution about 4 times larger on the server side (4 of everything) and 2 to 4 times larger on the user side (depending on whether v1-v4 all ship together or whether you can install just v1 plus whatever level you need). Because of the 4-times part the cost will always be on the provider/distribution and there really is no way to lessen it; if package managers let us pull only what we need, that will at least lessen the cost on the user side.
It's not 4 times everything. A large percentage of libraries don't use per-CPU stuff. V1 stuff is still optimised, just not for particular CPUs, and CPU-specific optimisations can in fact result in lower performance in particular libraries as well. The HWCAPS functionality for .so loading includes an order of priority: if you are on level 3, it can be set to use a level 2 library when no level 3 build is provided, then step down to a level 1 build. This is because the design does not require rebuilding every library.

So the per-library cost ranges from zero extra, when a single build serves everything, up to as many builds as are actually required. An RPM package can ship the platform-optimised libraries as extra sub-packages alongside the core libraries, which are built to run everywhere, hopefully, just with lower performance.
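To make that priority order concrete, here is a small sketch (my own, with a made-up library name libfoo.so.1) of the fallback idea: prefer a copy in the glibc-hwcaps subdirectory for the highest level, then step down to the baseline. The real ld.so does this per search path and only for levels the running CPU actually meets.

/* hwcaps_which.c - hypothetical sketch of the glibc-hwcaps fallback order.
 * Given a library name, report which installed copy a loader with this
 * priority would pick. Build with: gcc -O2 hwcaps_which.c */
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    const char *lib = argc > 1 ? argv[1] : "libfoo.so.1";   /* example name */
    const char *dirs[] = {
        "/usr/lib64/glibc-hwcaps/x86-64-v4",   /* highest priority */
        "/usr/lib64/glibc-hwcaps/x86-64-v3",
        "/usr/lib64/glibc-hwcaps/x86-64-v2",
        "/usr/lib64",                          /* baseline build, always expected */
    };
    char path[512];

    for (unsigned i = 0; i < sizeof dirs / sizeof dirs[0]; i++) {
        snprintf(path, sizeof path, "%s/%s", dirs[i], lib);
        if (access(path, R_OK) == 0) {         /* first existing copy wins */
            printf("would load: %s\n", path);
            return 0;
        }
    }
    printf("no copy of %s found\n", lib);
    return 1;
}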

                • #18
                  There was a proposal shot down last year for raising the x86_64 microarchitecture feature level while now that discussion has been restarted or alternatively making use of Glibc's HWCAPS facility for allowing run-time detection and loading of optimized libraries...
                  I prefer the HWCAPS run-time detection and loading of optimized libraries approach. For the libraries where newer feature levels can provide a significant speed up or other important advantages, an extra version can be compiled and selected at run-time.
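As a rough illustration of that "extra version" idea (the library name, paths and commands here are illustrative only, not Fedora's actual packaging), the same source can be built once as the baseline and once with -march=x86-64-v3, with the second copy installed into the glibc-hwcaps directory so the loader only picks it on capable CPUs.

/* mylib.c - toy library used to illustrate shipping an extra optimized build.
 * Hypothetical build/install commands (illustrative only):
 *   cc -O2 -fPIC -shared mylib.c -o /usr/lib64/libmylib.so.1
 *   cc -O2 -fPIC -shared -march=x86-64-v3 mylib.c \
 *      -o /usr/lib64/glibc-hwcaps/x86-64-v3/libmylib.so.1
 * With glibc 2.33+ the dynamic loader prefers the glibc-hwcaps copy on CPUs
 * meeting x86-64-v3 and silently falls back to the baseline copy elsewhere. */

/* A simple hot loop the compiler can auto-vectorize with AVX2/FMA at v3. */
void mylib_axpy(float a, const float *x, float *y, long n)
{
    for (long i = 0; i < n; i++)
        y[i] += a * x[i];
}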

                  • #19
                    Originally posted by oiaohm View Post
                    Its not 4 times everything. Large percentage of libraries don't use per cpu stuff.
                    I think that's a good point: the number of packages that would genuinely benefit from the added caps is VERY small - and in a sizeable number of cases, such software is already written with runtime selection built in.

                    One thing that's fairly clear in this particular case though, is that this is absolutely just "monkey see, monkey do". And that is NEVER a good foundation for making a decision, regardless of whether or not you end up at the "right" answer.
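On the "runtime selection built in" point above, a common way software does this itself, independent of how the distro packages it, is GCC's function multiversioning. A minimal sketch (my own example, not from the thread):

/* dot.c - hypothetical sketch of per-function runtime dispatch.
 * target_clones makes GCC emit one copy of the function per listed target,
 * and an ifunc resolver picks the best one at load time on the running CPU. */
#include <stddef.h>

__attribute__((target_clones("default", "avx2", "avx512f")))
double dot(const double *a, const double *b, size_t n)
{
    double sum = 0.0;
    for (size_t i = 0; i < n; i++)
        sum += a[i] * b[i];
    return sum;
}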

                    • #20
                      Originally posted by arQon View Post
                      I think that's a good point: the number of packages that would genuinely benefit from the added caps is VERY small - and in a sizeable number of cases, such software is already written with runtime selection built in.
But some of the ones that can gain are massively used. Like when there was a randomisation issue with AMD CPUs that could have been worked around with a glibc build just for that bit of hardware, and glibc is something that is going to be used by all applications. The number of libraries that gain from this hardware stuff is not many, but they can be some of your most-used libraries, so a 1-5% gain in those areas can be quite a bit of overall system gain.


Please note this goes back to 2020, when glibc hwcaps started; there are performance gains and stability gains here. Yes, glibc is written with runtime selection, but the Zen issue, where the cache size differs from Intel and requires a different alignment in allocations, is one of the cases. Interesting point: you only needed to update the libraries that did their own allocations to fix the Zen caching issue, not all of them; that was glibc and about 5 others and you were done, giving everything on the system a performance gain. There are other issues where the workaround, like the Spectre and Meltdown mitigations, results in lower performance but fixes security/stability, and of course those fixes make no sense on CPUs without the issue.

This is why it's tricky to work out what the cost will be: in a lot of cases only a small number of packages need to be changed to fix a CPU/platform-linked performance/security issue, but these are normally libraries that are massively used.

Something to consider here: if you don't have a CPU with a problem, why should your system have to change a core library that is not defective? Remember, that is something built-in runtime selection forces. The side-by-side library solution allows faster deployment of a fix for a hardware issue, because the fixed library for hardware problem X only has to be tested on hardware X, instead of the runtime-selection path where the one library has to work right for everything.

There are a stack of benefits to the glibc HWCAPS route.
