Trying To Make Ubuntu 18.10 Run As Fast As Intel's Clear Linux

  • Trying To Make Ubuntu 18.10 Run As Fast As Intel's Clear Linux

    Phoronix: Trying To Make Ubuntu 18.10 Run As Fast As Intel's Clear Linux

    With the recent six-way Linux OS tests on the Core i9 9900K, there were once again a number of users questioning the optimizations by Clear Linux out of Intel's Open-Source Technology Center and remarking on whether changing the compiler flags, CPU frequency scaling governor, or other settings would allow other distributions to trivially replicate its performance. Here's a look at some tweaked Ubuntu 18.10 Cosmic Cuttlefish benchmarks against the latest Intel Clear Linux rolling release from this i9-9900K 8-core / 16-thread desktop system.


  • #2
    Unless a distro wants to have 15 or 20 or so x86_64 repositories with packages built with -march=somearch, Clear Linux will be in the lead for most things because they have a more targeted and optimized setup. That targeted and optimized setup would have to be replicated across more architectures and tweaks if a distribution wanted to be both optimized and compatible with all x86_64 processors. That's my opinion on the matter.



    • #3
      Originally posted by skeevy420 View Post
      Unless a distro wants to have 15 or 20 or so x86_64 repositories with packages built with -march=somearch, Clear Linux will be in the lead for most things because they have a more targeted and optimized setup. That targeted and optimized setup would have to be replicated across more architectures and tweaks if a distribution wanted to be both optimized and compatible with all x86_64 processors. That's my opinion on the matter.
      They can have one repository... If they build their packages with GCC FMV (Function Multi-Versioning) like what Clear does. Nothing would prevent FMV code paths for even older CPU microarch.
      Michael Larabel
      https://www.michaellarabel.com/



      • #4
        Michael Thank you, these are some really interesting results!



        • #5
          Wow, thank you for the explanations, Michael.



          • #6
            Now, also remove the governor altogether (I mean recompile the Ubuntu kernel without it; since it is not built as a module, one can't otherwise really get rid of it), set processor.max_cstate to zero... and see what happens. I guess there will be an additional 5-6%.
            Last edited by dungeon; 30 October 2018, 02:08 PM.



            • #7
              nice, thanks

              Clear Linux uses the newer glibc 2.27 https://www.phoronix.com/scan.php?pa...ures-For-Clear - I wonder how much of a difference that makes?



              • #8
                Originally posted by Michael View Post

                They can have one repository... If they build their packages with GCC FMV (Function Multi-Versioning) like what Clear does. Nothing would prevent FMV code paths for even older CPU microarch.
                Would it not get to a point where that adds unnecessary code size and bloat, and possible slowdowns from the added version checks? Would separate repositories based on instruction sets (like a Nehalem repo covering Nehalem/Westmere, a Sandy Bridge repo covering Sandy/Ivy, etc.) be a compromise? My apologies if these are bad questions, but benchmarks like this, and ones that show the effects of the Spectre fixes, always make me curious on the subject. I didn't consider FMV due to the bloat I assumed supporting everything would have to bring.
                Last edited by skeevy420; 30 October 2018, 01:57 PM. Reason: The spelling ability of a kindergartner.



                • #9
                  Originally posted by eva2000 View Post
                  nice, thanks

                  Clear Linux uses the newer glibc 2.27 https://www.phoronix.com/scan.php?pa...ures-For-Clear - I wonder how much of a difference that makes?
                  I think they both use 2.28 now, so that is no longer a reason for any possible difference.

                  I don't see as big a difference as before; as you can see in these benchmark results, it mostly comes from the Clear Docker image now.

                  So we can conclude that the glibc version was indeed the primary reason for the big difference before.
                  Last edited by dungeon; 30 October 2018, 02:18 PM.



                  • #10
                    Originally posted by skeevy420 View Post
                    Would it not get to a point where that would add unnecessary code size and bloat, possible slowdowns from added version checks?
                    Choosing code paths is less of a slowdown than the gains from using the optimized paths, as the benchmarks show. But the size of the code that is unused and unnecessary for your CPU is a concern. From a distribution's perspective, FMV binaries beat having 10 different versions of each package in the repo; the one repo for everyone would be bigger, but not that much bigger. For end users it's a bigger deal for those running off hard drives, since the binaries are larger; that's likely irrelevant for SSD users.

                    I used Gentoo 10+ years ago and back then -Os (optimize for size) would give a lot faster startup times than -O3. The code produced by -O3 would be faster once the programs were up and running, but they would load slower. So there is a clear downside to larger FMV binaries. With SSD prices as low as they are right now, it's debatable how much weight should be put on this... not sure how many people actually boot off a HDD these days. Some probably do.

                    This article was really interesting, btw. It did make me wonder just how much of a power consumption difference there is between "performance" and "powersave" on Intel, and also how much effect the different scaling governors have on the AMD side. I've been using conservative with an up threshold of 92 and a down threshold of 64 across generations of AMD CPUs, and I'm honestly not entirely sure what made me pick those values back in the day. Interestingly, the .sh file that configures it has https://www.ibm.com/developerworks/library/l-cpufreq-3/ as a comment; that's IBM research from 2009. I wonder what such an article would look like today...
