"CC_OPTIMIZE_FOR_PERFORMANCE_O3" Performance Tunable Dropped In Linux 6.0

  • #11
    Originally posted by skeevy420 View Post

    They're playing the Incel Card?

    When I was younger, I picked up some bad memories of girls, especially older women, being scary to talk to and rejecting me, so I still don't talk to them even though 28 years have passed.
    How to spot nerds: they speak of girls and compiler flags in one sentence.

    Comment


    • #12
      Originally posted by dekernel View Post
      So they are giving the easy option to enable -O3 if you want ... or there are apparently distros doing just that so just use that distro and be done with it. Me, personally, I am paid to develop solutions, and no part of that is debugging edge-case kernel crashes. I want stability...period, and I will always take guidance from the people who own the code because they know far more than myself how to best use their product.
      Reminds me of the conservative race-car engineer telling the driver that going above 150 km/h in corner xyz is not safe because... tech-yada-yada. But after 10 laps, the petrolhead "no brainer" test driver jumps out of the car shouting: did you see that? I beat the 160 km/h mark in the corner! Woohoo, that was f***ing awesome.


      I guess we have the same here too.

      Comment


      • #13
        I did some testing recently: https://www.phoronix.com/forums/foru...91#post1331591

        None of these are straight -O3, but you get the idea. Plus the speedups here and there that Michael found in the article.

        Comment


        • #14
          Originally posted by dekernel View Post
          So they are giving the easy option to enable -O3 if you want ... or there are apparently distros doing just that so just use that distro and be done with it. Me, personally, I am paid to develop solutions, and no part of that is debugging edge-case kernel crashes. I want stability...period, and I will always take guidance from the people who own the code because they know far more than myself how to best use their product.
          Most sensible thing I have read in a while on this topic.
          That and what kozman wrote about fuzzing it yourself if it's important to you.

          Comment


          • #15
            BTW, it could be some kind of https://en.wikipedia.org/wiki/Survivorship_bias
            It's not that there's a lack of -O3 users; there are actually a lot of them, and they noticed ... no difference or breakage, so no reports.

            Also to note: the main goal of GentooLTO was to find and report bugs, and I hope we helped a lot (LTO, -O3, GCC GRAPHITE, etc.).

            Comment


            • #16
              Originally posted by kozman View Post
              Like anything, someone has to take the 1st steps to use -O3 a LOT. Beat the hell out of O3 from all angles. Find those pesky corner cases. Fuzz, if possible, the crap out of O2 and then O3 and compare side by side. I'm no compiler expert but at the very least, it requires years of people beating on it but, most importantly, reporting results and errors.
              I shipped a product with a large, complex codebase using -O3. We had a nightly regression test suite that took about 8 hours on an 8-core 3.4 GHz server. Tested it every night for more than a decade, from probably GCC 4.6 to 9.3. In that time, we never hit any compiler bugs, though we were not on the bleeding edge.
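
              The kind of side-by-side -O2 vs. -O3 comparison kozman describes can be sketched as a tiny differential build; the file names here are made up for illustration, and any C compiler invocable as cc should do:

```shell
# Hedged sketch: compile the same translation unit at -O2 and -O3,
# then check that both binaries produce identical output.
cat > tu.c <<'EOF'
#include <stdio.h>

/* A loop the optimizer is likely to vectorize at -O3. */
static int sum(const int *a, int n) {
    int s = 0;
    for (int i = 0; i < n; i++)
        s += a[i];
    return s;
}

int main(void) {
    int a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    printf("%d\n", sum(a, 8));
    return 0;
}
EOF

cc -O2 -o tu_o2 tu.c
cc -O3 -o tu_o3 tu.c

./tu_o2 > out_o2
./tu_o3 > out_o3

# Any divergence here would be worth reducing to a minimal
# test case and reporting upstream.
diff out_o2 out_o3 && echo "outputs identical"
```

              Scaled up across a real test suite (and ideally a fuzzer feeding inputs to both binaries), that is essentially what a nightly regression run at -O3 buys you.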

              Originally posted by kozman View Post
              Get -O2 running as close to perfect before you go heralding or cursing what O3 can bring to the table. It seems it still needs some more time to mature and develop. It's not worth losing data to score a couple more % of perf.
              Huh? Are you just saying random words?

              Originally posted by kozman View Post
              If Clear Linux is indeed leveraging O3 a lot, they've probably hit some of the odd effects and behaviors people have talked about.
              Don't just assume that. Maybe they haven't. Why don't you look, before jumping to conclusions?

              Originally posted by kozman View Post
              I just hope they're reporting those effect and, if they can be fixed, contributing fixes to stabilize that O3 feature.
              First, your premise that -O3 isn't already stable is flawed.

              Second, the fact that Clear Linux continues to use -O3 implies they must have fixed any bugs they hit, which they're unlikely to do without upstreaming those fixes, which they can't do without reporting the bugs.

              Originally posted by kozman View Post
              Maybe GCC 13/14 will be in better shape with O3.
              Nobody has ever claimed or presented evidence that GCC 12 is not in good shape with -O3.

              It seems your entire post is a tower of flawed assumptions.

              Comment


              • #17
                Originally posted by dekernel View Post
                So they are giving the easy option to enable -O3 if you want ...
                I don't care about the kernel config option, really. Just this notion that -O3 is risky or not worthwhile.
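
                For what it's worth, even with the Kconfig knob gone, nothing stops you from passing the flag yourself: kbuild honors the KCFLAGS make variable for extra compiler options. A sketch, assuming a configured kernel source tree:

```shell
# Append -O3 to the kernel build without any Kconfig option
# (run from the top of a configured kernel source tree).
make -j"$(nproc)" KCFLAGS="-O3"
```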

                Originally posted by dekernel View Post
                Me, personally, I am paid to develop solutions, and no part of that is debugging edge-case kernel crashes. I want stability...period,
                Cool, so maybe use -O0. If you prize stability above all else, then you really have no ground to stand on, here.

                Originally posted by dekernel View Post
                I will always take guidance from the people who own the code because they know far more than myself how to best use their product.
                Eh, it seems to me the kernel developers don't necessarily know a lot more about -O3 than you do, because the topic seems taboo. If no one goes there, it will forever stay shrouded in mystery.

                Comment


                • #18
                  Originally posted by CochainComplex View Post
                  Reminds me of the conservative race-car engineer telling the driver that going above 150 km/h in corner xyz is not safe because... tech-yada-yada. But after 10 laps, the petrolhead "no brainer" test driver jumps out of the car shouting: did you see that? I beat the 160 km/h mark in the corner! Woohoo, that was f***ing awesome.
                  That's a flawed analogy, since race engineers usually warn drivers about things which put the car at risk for mechanical failure or perhaps excessive tire wear. Otherwise, they know the driver is going to push the car as hard as it can go.

                  In that scenario, they have some basis for their thinking. In this case, we have nothing but old anecdotes and superstition.

                  Comment


                  • #19
                    Originally posted by binarybanana View Post
                    Regarding your quoted results: is the first set reporting the time to complete a fixed workload and the second set reporting the speed achieved? In other words, "lower is better" in the first test, but "higher is better" in the second?

                    Comment


                    • #20
                      This wasn't surprising in the slightest.

                      Comment
