Solus Linux Experimenting With Automated Profiling/Optimizations

  • Solus Linux Experimenting With Automated Profiling/Optimizations

    Phoronix: Solus Linux Experimenting With Automated Profiling/Optimizations

    Not only are Solus Linux developers busy porting the Budgie desktop away from GNOME and switching to Qt, but they are also continuing to work on more performance optimizations...

  • #2
    How many of the Clear Linux optimizations does Solus use? Clear Linux and Solus are two of the distros I'm most interested in now. More distros should be using function multiversioning; there would be less reason for Gentoo if they did.
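
    For readers who haven't seen function multiversioning (FMV) in practice, here is a minimal sketch using GCC's target_clones attribute (available since GCC 6 on x86); the function and its loop are invented for illustration, not taken from Clear Linux's or Solus's actual patches:

    #include <stddef.h>

    /* GCC emits one clone of this function per listed target plus a resolver
       that picks the best supported clone at load time, so one binary can use
       AVX2 where it is present and fall back to the generic version elsewhere. */
    __attribute__((target_clones("avx2", "sse4.2", "default")))
    void scale(float *dst, const float *src, size_t n, float factor)
    {
        for (size_t i = 0; i < n; i++)
            dst[i] = src[i] * factor;
    }

    The selection happens once per process, via the same ifunc mechanism glibc uses to pick its own optimized memcpy variants.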



    • #3
      Originally posted by zboson:
      How many of the Clear Linux optimizations does Solus use? Clear Linux and Solus are two of the distros I'm most interested in now. More distros should be using function multiversioning; there would be less reason for Gentoo if they did.
      Probably the AVX libs. And multiversioning would be dependency hell; secondly, it won't get the full benefit of your CPU's instructions unless you build a lot of versions, which is not doable. Linux distros need to find some mechanism to do native compilations in the background with profiles.



      • #4
        I applaud them, but having multiple systems on the same distro just means performance will be latched to the designated set of testing hardware. Clear Linux can get away with it since Intel is trying to make it work faster on their chips, but insert an alternative architecture or manufacturer and it all falls apart.

        Gentoo could come out on top if they created automated application profiling to find the optimal compiler settings per application, since their solution would be architecture-independent.

        This is most feasible as a platform solution rather than for the standard user. A company will gladly re-compile an application numerous times just to improve benchmarks or maximize performance: vertical improvements.

        Standard users (non-geeks) wouldn't waste their time trying to maximize the system; they just want to use it, and if it is slow they will most likely go for horizontal improvements: buying new hardware.



        • #5
          Yeah, the code paths where it matters (kernel, drivers, libraries) should be split off and packaged as a generic binary plus sources with optimized build flags, so the system can automatically rebuild an optimized version in the background.



          • #6
            Originally posted by cj.wijtmans:

            Probably the AVX libs. And multiversioning would be dependency hell; secondly, it won't get the full benefit of your CPU's instructions unless you build a lot of versions, which is not doable. Linux distros need to find some mechanism to do native compilations in the background with profiles.
            Nobody is asking for "full benefit". Use multiversioning if and where it makes sense to you in terms of performance, not all over the place.

            "native compilations in the background with profiles" sounds nice, but I wouldn't hold my breath for that one, or use it to discourage realistic forms of optimization. Function multiversioning is already part of Clear Linux's performance work, so obviously it is "doable".



            • #7
              Originally posted by ferry:
              Yeah, the code paths where it matters (kernel, drivers, libraries) should be split off and packaged as a generic binary plus sources with optimized build flags, so the system can automatically rebuild an optimized version in the background.
              It is not a matter of using optimized build flags; it is a matter of having multiple versions of the source code. FMV is a simpler and better-defined way to deal with that.

              Background compilation with -march=native, or something like that, would be a different thing.



              • #8
                Originally posted by cj.wijtmans:

                Probably the AVX libs. And multiversioning would be dependency hell; secondly, it won't get the full benefit of your CPU's instructions unless you build a lot of versions, which is not doable. Linux distros need to find some mechanism to do native compilations in the background with profiles.
                I'm not sure how function multi-versioning is dependency hell; it duplicates functions optimized with different instructions, all within the same file, then selects the best choice at run-time. The AVX2 libraries used within Clear Linux and Solus create a second set of libraries that can be used with newer CPUs, and they are a piece of cake to implement in a build; FMV is a much more arduous process.
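
                A hand-rolled sketch of that run-time selection (roughly what the FMV resolver does under the hood), using the GCC/Clang builtin __builtin_cpu_supports; the function names are made up for illustration:

                #include <stddef.h>

                /* tuned version: the target attribute lets the compiler use AVX2
                   instructions in this function (it may auto-vectorize at -O3) */
                __attribute__((target("avx2")))
                static void sum_avx2(double *out, const double *a, const double *b, size_t n)
                {
                    for (size_t i = 0; i < n; i++)
                        out[i] = a[i] + b[i];
                }

                /* generic version: baseline instruction set only */
                static void sum_generic(double *out, const double *a, const double *b, size_t n)
                {
                    for (size_t i = 0; i < n; i++)
                        out[i] = a[i] + b[i];
                }

                /* both versions live in the same file; the better one is chosen at run time */
                void sum(double *out, const double *a, const double *b, size_t n)
                {
                    if (__builtin_cpu_supports("avx2"))
                        sum_avx2(out, a, b, n);
                    else
                        sum_generic(out, a, b, n);
                }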



                • #9
                  Originally posted by Yndoendo:
                  Clear Linux can get away with it since Intel is trying to make it work faster on their chips, but insert an alternative architecture or manufacturer and it all falls apart.
                   Nothing falls apart. Added architectures will simply use the generic version of a function, the same as if there weren't any multiversioning, until, optionally, a version taking advantage of that architecture's special instruction sets is supplied.

                  So you (potentially) get optimal performance on each platform, for that function, instead of some compromise.
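
                   A rough sketch of that fallback behaviour in plain C (names invented for illustration): the hand-tuned path only exists where the instruction set does, and every other architecture simply compiles and calls the generic version:

                   #include <stddef.h>

                   /* generic version: builds and runs on every architecture */
                   static void scale_generic(float *p, size_t n, float f)
                   {
                       for (size_t i = 0; i < n; i++)
                           p[i] *= f;
                   }

                   #if defined(__x86_64__) || defined(__i386__)
                   /* optional tuned version: only compiled where the ISA exists */
                   __attribute__((target("avx2")))
                   static void scale_avx2(float *p, size_t n, float f)
                   {
                       for (size_t i = 0; i < n; i++)
                           p[i] *= f;
                   }
                   #endif

                   void scale_buf(float *p, size_t n, float f)
                   {
                   #if defined(__x86_64__) || defined(__i386__)
                       if (__builtin_cpu_supports("avx2")) {
                           scale_avx2(p, n, f);
                           return;
                       }
                   #endif
                       scale_generic(p, n, f);   /* every other architecture lands here */
                   }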



                  • #10
                    Originally posted by indepe:

                    Nobody is asking for "full benefit". Use multiversioning if and where it makes sense to you in terms of performance, not all over the place.

                    "native compilations in the background with profiles" sounds nice, but I wouldn't hold my breath for that one, or use it to discourage realistic forms of optimization. Function multiversioning is already part of Clear Linux's performance work, so obviously it is "doable".
                    This is pretty much what I'm thinking and where I'm going: build packages optimized with proven build flags, use PGO where possible, and plug in advanced instructions where (and only where) it makes sense. That is pretty much the purpose behind automating my processes as much as possible, so that I can apply them to as many packages as possible. I can set up a test, run it overnight, and know what the flags do and whether CPU instructions are valuable (and to what degree). Then, by adding a couple of lines, I can test the impact of a PGO implementation (a sketch of the two-pass PGO build is at the end of this post). In future I will utilise clang and linker variations to test their performance as well.

                    FMV is very doable, but it can take a long time (and will likely require maintaining the patches across future releases). I have ideas on how to implement it better in my testing, but they are not coded yet.

                    The biggest hurdle is having a benchmark to test each package.
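
                    A sketch of the two-pass PGO build mentioned above, with the real GCC flags spelled out in comments; the file name and toy workload are invented for illustration:

                    /*
                     * Hypothetical file hot.c, built in two passes:
                     *   gcc -O3 -fprofile-generate hot.c -o hot   # instrumented build
                     *   ./hot                                     # run a representative workload, writes *.gcda
                     *   gcc -O3 -fprofile-use hot.c -o hot        # rebuild guided by the recorded profile
                     */
                    #include <stdio.h>

                    /* toy hot loop that the profiling run exercises */
                    static long work(long n)
                    {
                        long acc = 0;
                        for (long i = 1; i <= n; i++)
                            acc += (i % 3 == 0) ? i : -i;
                        return acc;
                    }

                    int main(void)
                    {
                        printf("%ld\n", work(50000000L));
                        return 0;
                    }

                    The second build uses the recorded branch and call counts to guide inlining, code layout and vectorization; the per-package hurdle mentioned above is the middle step, finding a workload representative enough to profile against.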
