GCC Benchmarks At Varying Optimization Levels With Core i9 10900K Show An Unexpected Surprise


  • #51
    FWIW, I did a quick test on my Ryzen system using C-Ray.

    I found that with -O2, the gcc 10.1 build ran around 3% slower than the gcc 9.3 build.
    With -Ofast -march=native, the gcc 10.1 build ran about 1% quicker than the gcc 9.3 build.


    Peter B



    • #52
      Michael

      It's been a long time with no updates, and no one has been able to confirm the issue.



      • #53
        Hmmm, this looks to me like a loose end.
        I think a lot of energy has been invested in understanding and subsequently addressing a possible compiler issue.
        It looks like there is no compiler issue.
        However, something has happened on the system under test, and it's very unsatisfactory that this has not been tracked down. It could happen again in a perhaps less obvious fashion and render benchmarks invalid.



        • #54
          This will be completely anecdotal (no formal testing, just seat-of-the-pants looking for instabilities), but I've removed the dependency on the "ARC" architecture to allow building with "-O3" and am running Linus' master kernel built with GCC 10.2 (and with "-march/-mtune" set to "icelake-client", which I've had set for a while in my personal builds).

          I'm running it now and will report if anything seems to break.



          • #55
            Originally posted by arQon View Post
            I don't think I've ever had code NOT break with O3. As in literally "ever". There's always been one or two files that had to be dropped to O2 even if the rest of the project worked.
            (Hell, I've had code fail because of bugs in O*2* - the test suite caught them, but diagnosing root cause was a bitch!).
            I have 15+ years of experience with a complex and varied but heavily numerical (int and float) C++ codebase compiled with gcc's -O3, and the only time I ever got burned was by type-punning. Even that you can get away with if you just do it using unions.
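
            For anyone unfamiliar with the distinction, here's a minimal illustrative sketch (not from any codebase mentioned here): the pointer-cast form violates C's strict-aliasing rules, which the optimizer is entitled to exploit at -O2 and above, whereas GCC explicitly documents the union form as supported.

                #include <stdint.h>
                #include <stdio.h>

                /* Undefined behaviour: under strict aliasing, the optimizer may
                 * assume a float* and a uint32_t* never refer to the same object,
                 * so -O2/-O3 can reorder or drop accesses here. */
                static uint32_t bits_via_cast(float f)
                {
                    return *(uint32_t *)&f;
                }

                /* Union-based type punning, which GCC documents as supported. */
                static uint32_t bits_via_union(float f)
                {
                    union { float f; uint32_t u; } pun = { .f = f };
                    return pun.u;
                }

                int main(void)
                {
                    printf("%08x\n", bits_via_cast(1.0f));  /* may break when optimized */
                    printf("%08x\n", bits_via_union(1.0f)); /* 3f800000, reliably */
                    return 0;
                }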

            Originally posted by arQon View Post
            The error may well happen with you simply being oblivious to it.
            We have regression tests that look for any change in the handling of TBs' worth of data. However, when we do compiler upgrades, I typically tolerate a bit of numerical noise, on the assumption that a compiler at -O3 is not bound to respect things like FP associativity.
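
            To make that "noise" concrete, here's a minimal sketch (values chosen purely for illustration) of why results can legitimately wobble once a compiler or a flag such as -ffast-math (implied by -Ofast) is allowed to reorder sums: float addition simply isn't associative.

                #include <stdio.h>

                int main(void)
                {
                    /* At 1e8f the spacing between adjacent floats (ulp) is 8, so
                     * whether the small terms get absorbed depends on summation order. */
                    float big = 1e8f, small = 5.0f;
                    float left  = (big + small) + small; /* rounds to 100000016 */
                    float right = big + (small + small); /* rounds to 100000008 */
                    printf("left=%.0f right=%.0f equal=%d\n",
                           (double)left, (double)right, left == right);
                    return 0;
                }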

            However, I'm not one to use bleeding-edge compiler releases, so it's entirely possible that my patience is what saved us from a lot of grief.
            Last edited by coder; 21 December 2020, 02:27 PM.



            • #56
              Originally posted by kcrudup View Post
              [I'm] running Linus' master kernel built with GCC 10.2 [and -O3] (and with "-march/-mtune" set to "icelake-client", which I've had set for a while in my personal builds).
              It's been months and I've had no issues.



              • #57
                Originally posted by coder View Post
                However, I'm not one to use bleeding-edge compiler releases, so it's entirely possible that my patience is what saved us from a lot of grief.
                It certainly hasn't HURT your experience, I expect.

                Not to compete on anecdotes, but I've got a LOT more than 15+ years' experience, and I've hardly ever met a compiler on ANY platform that hasn't managed to trip itself up at -O(max) on at least ONE file - and that's across almost every project, on multiple platforms, from Visual C++ on Windows to GCC on SPARC. Writing any sort of optimising compiler is HARD, and the chances of one NOT having bugs at its max level are, IME, consistently zero. That's just how it goes.

                The most recent incident (which was an O2 bug) was on ARM: when you're building Internet Of Sh*t devices or other appliances, the toolchains are decided on (and supposedly tested) VERY early in the project. When it's months later, you're getting close to a ship date, and you find that the version of GCC you're using has a bug generating code for that specific core, the ONLY smart move is to say "yep, it's broken" and downgrade the -O for that file. If you have cross-platform code that's already in 6 other products, butchering that code to try to work around a compiler bug for one device is a terrible idea.
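
                For anyone wondering what "downgrade the -O for that file" looks like in practice, here's a minimal sketch (the function and file names are hypothetical): GCC lets you pin optimization per function with an attribute, and most build systems let you override flags per file.

                    /* Per-function: keep just the miscompiled routine at -O2 while
                     * the rest of the project builds at -O3. */
                    __attribute__((optimize("O2")))
                    void copy_scaled(int *dst, const int *src, int n)
                    {
                        for (int i = 0; i < n; i++)
                            dst[i] = src[i] * 2;
                    }

                    /* Per-file alternatives:
                     *   - in a GNU Makefile:
                     *       miscompiled.o: CFLAGS := $(filter-out -O3,$(CFLAGS)) -O2
                     *   - or at the top of the affected source file:
                     *       #pragma GCC optimize ("O2")
                     */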

                I'm not saying the GCC team don't FIX the bugs: they're very good about that, and by the time that project got as far as MY desk, there was already a release that had the bug fixed. But we had an entire distro that the bring-up team had already committed to the broken rev, and a toolchain change would have set everything back several months. Not an option when you have factory time booked to build physical devices. You find a way to ship, or the company is out millions. :P

                One apparent misunderstanding I would like to correct in your comment though is this: those bugs often have nothing at all to do with how COMPLEX a piece of code is. I don't remember the specifics of these over the years, but the ARM one was in code that wasn't even REMOTELY "clever" - it just happened to result in a sequence that the optimizer barfed on.



                • #58
                  Originally posted by arQon View Post
                  Not to compete on anecdotes, but I've got a LOT more than 15+ years experience,
                  No, I meant with one specific codebase being compiled with gcc -O3.

                  Originally posted by arQon View Post
                  I've hardly ever met a compiler on ANY platform
                  I thought we were talking about gcc.

                  Originally posted by arQon View Post
                  One apparent misunderstanding I would like to correct in your comment though is this: those bugs often have nothing at all to do with how COMPLEX a piece of code is. I don't remember the specifics of these over the years, but the ARM one was in code that wasn't even REMOTELY "clever" - it just happened to result in a sequence that the optimizer barfed on.
                  It's not a misunderstanding -- I said that to suggest the codebase probably exercised a lot of different paths in gcc's optimizer. It has lots of loops, various SIMD code, intrinsics, inline assembler, lots of STL and modern C++, and template metaprogramming; I could go on.

                  Let's not lose sight of the original question: "can one trust gcc's -O3 to generate correct code?" In my experience, yes (at least if, like me, you are conservative about compiler upgrades). I understand that you disagree.



                  • #59
                    > I thought we were talking about gcc.

                    We are - I just wanted to emphasise that it's a universal issue, practically inherent to any optimising compiler, rather than a sign that the gcc team isn't very good.

                    Originally posted by coder View Post
                    Let's not lose sight of the original question: "can one trust gcc's -O3 to generate correct code?" In my experience, yes (at least if, like me, you are conservative about compiler upgrades). I understand that you disagree.
                    I do, and the errata of every compiler release supports that. But I also understand that YOUR experience has been different, and I'm happy to leave it there. (And happy that it HAS worked reliably for you - may it continue to do so!).



                    • #60
                      Originally posted by arQon View Post
                      I do, and the errata of every compiler release supports that.
                      Yeah, I know that, and I guess my conservative approach to upgrading is an implicit acknowledgement that issues do exist. I'd further allow that code optimization is one of the areas more likely to have bugs, given its complexity and continual development.

                      With all that said, my experience supports the view that issues with -O3 tend to get sorted out quickly enough that, in mature versions of the compiler, it is dependable for production code.
