AMD Sends Out Zen 3 Compiler Support For GCC + AOCC 2.3 Compiler Released


  • AMD Sends Out Zen 3 Compiler Support For GCC + AOCC 2.3 Compiler Released

    Phoronix: AMD Sends Out Zen 3 Compiler Support For GCC + AOCC 2.3 Compiler Released

    Following last month's release of the Ryzen 5000 "Zen 3" processors, AMD has now begun publishing their official compiler support for this extremely compelling processor family...

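    For anyone wanting to try the new target once these patches are in their compiler: a minimal sketch, assuming the GCC patches expose the target as -march=znver3 (following the existing znver1/znver2 naming); the kernel itself is only an illustration.

        /* saxpy.c - trivial kernel to show how the new target switch would be used.
         *
         * Generic x86-64 build:              gcc -O3 -o saxpy saxpy.c
         * Zen 3 build (assumed flag name):   gcc -O3 -march=znver3 -o saxpy saxpy.c
         * AOCC, being clang-based, should take the same -march value once its
         * Zen 3 support is in place.
         */
        #include <stdio.h>

        void saxpy(float a, const float *x, float *y, int n)
        {
            /* a loop the compiler is free to vectorize and schedule
             * for whatever -march target was selected */
            for (int i = 0; i < n; i++)
                y[i] = a * x[i] + y[i];
        }

        int main(void)
        {
            float x[8] = {1, 2, 3, 4, 5, 6, 7, 8}, y[8] = {0};
            saxpy(2.0f, x, y, 8);
            printf("y[7] = %f\n", y[7]);
            return 0;
        }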

  • #2
    If AMD's goal with Zen 3 was to improve the performance of existing code, i.e. games, then it shouldn't first require a recompilation before one can benefit from the newer architecture. And that has evidently been the case: old software does run notably better without code changes. So I'm still not buying the "significant need" argument for compiler patches.

    Intel, on the other hand, does have a history of such a need: as the dominant leader of the x86 architecture, it has frequently required the software to follow the hardware. I suspect AMD is doing the opposite for now, making the hardware follow the software, until perhaps their market share allows them to undertake bolder changes to the x86 architecture.



    • #3
      Originally posted by sdack View Post
      If AMD's goal with Zen 3 was to improve the performance of existing code, i.e. games, then it shouldn't first require a recompilation before one can benefit from the newer architecture. And that has evidently been the case: old software does run notably better without code changes. So I'm still not buying the "significant need" argument for compiler patches.

      Intel, on the other hand, does have a history of such a need: as the dominant leader of the x86 architecture, it has frequently required the software to follow the hardware. I suspect AMD is doing the opposite for now, making the hardware follow the software, until perhaps their market share allows them to undertake bolder changes to the x86 architecture.
      Umm, you do realize that if AMD hadn't implemented amd64 then the entire PC industry would be on EPIC architectures now? Then we'd all be stuck totally dependent on recompiling absolutely everything...




      • #4
        Originally posted by duby229 View Post
        Umm, you do realize that if AMD hadn't implemented amd64 then ...
        No, that's not how I saw it. I saw it as Intel merely allowing AMD to take the step, knowing they'd benefit from it just the same, and they did.



        • #5
          Originally posted by sdack View Post
          No, that's not how I saw it. I saw it as Intel merely allowing AMD to take the step, knowing they'd benefit from it just the same, and they did.
          I'll agree that's effectively what did happen, but it's definitely not what Intel wanted.



          • #6
            Look at Windows games: they are compiled with MSVC, and look at the state of the features this compiler offers.
            Furthermore, most software today is compiled with SSE2 in mind, as that is what Windows 10 needs in order to run.
            AVX-512 (in all its different feature levels), let alone processor-specific optimisations, is used extremely rarely:
            in some video encoders and some games, but 99% of the code is SSE2 with no AVX, IMHO.

            Some JIT compilers are slowly starting to use it, though, like the newer .NET Core runtime or JavaScript's V8.
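
            For what it's worth, the runtime dispatch those JITs do can also be done in ahead-of-time code; a minimal sketch using GCC's __builtin_cpu_supports and target attribute (the kernel and function names are made up for illustration):

                /* Build: gcc -O3 -o scale scale.c */
                #include <stdio.h>

                /* Baseline path: compiled for the plain x86-64/SSE2 baseline. */
                static void scale_scalar(float *y, const float *x, float a, int n)
                {
                    for (int i = 0; i < n; i++)
                        y[i] = a * x[i];
                }

                /* Same loop, but this copy alone is allowed to use AVX2
                 * instructions; it is only called when the CPU reports AVX2. */
                __attribute__((target("avx2")))
                static void scale_avx2(float *y, const float *x, float a, int n)
                {
                    for (int i = 0; i < n; i++)
                        y[i] = a * x[i];
                }

                int main(void)
                {
                    float x[1024], y[1024];
                    for (int i = 0; i < 1024; i++)
                        x[i] = (float)i;

                    __builtin_cpu_init();               /* set up GCC's CPU feature data */
                    if (__builtin_cpu_supports("avx2")) /* choose the code path at run time */
                        scale_avx2(y, x, 2.0f, 1024);
                    else
                        scale_scalar(y, x, 2.0f, 1024);

                    printf("y[1023] = %f\n", y[1023]);
                    return 0;
                }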



            • #7
              I can see the argument for early/pre-release addition of new instruction sets (3DNow!, FMA, etc.) if they're not already supplied by Intel (and compatible with the Intel definitions), but as Spacefish pointed out, most things tend to be compiled for the lowest common denominator, using either the Microsoft compiler (games) or gcc/icc. I get the same binaries whether I install RedHat/Ubuntu/whatever on AMD or Intel. The only time it makes a difference is for users who compile their own software or run an OS like Gentoo, where everything is compiled and optimised for the system it is installed on.
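
              There is a middle ground between one-binary-for-everyone packages and Gentoo-style rebuilds: GCC's function multi-versioning, where the same distro binary carries several variants of a hot function and the dynamic loader picks one at start-up. A minimal sketch (the function is illustrative, and it assumes GCC on x86-64 Linux with glibc's ifunc support):

                  /* Build: gcc -O3 -o blend blend.c */
                  #include <stdio.h>

                  /* GCC emits a generic version and an AVX2 version of this one
                   * function; the ifunc resolver picks the best variant for the
                   * CPU the binary happens to be running on. */
                  __attribute__((target_clones("default", "avx2")))
                  void blend(float *dst, const float *a, const float *b, float t, int n)
                  {
                      for (int i = 0; i < n; i++)
                          dst[i] = a[i] + t * (b[i] - a[i]);
                  }

                  int main(void)
                  {
                      float a[16], b[16], dst[16];
                      for (int i = 0; i < 16; i++) {
                          a[i] = (float)i;
                          b[i] = (float)(16 - i);
                      }
                      blend(dst, a, b, 0.25f, 16);
                      printf("dst[0] = %f\n", dst[0]);
                      return 0;
                  }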



              • #8
                Originally posted by Spacefish View Post
                Look at Windows games: they are compiled with MSVC, and look at the state of the features this compiler offers.
                Furthermore, most software today is compiled with SSE2 in mind, as that is what Windows 10 needs in order to run.
                AVX-512 (in all its different feature levels), let alone processor-specific optimisations, is used extremely rarely:
                in some video encoders and some games, but 99% of the code is SSE2 with no AVX, IMHO.

                Some JIT compilers are slowly starting to use it, though, like the newer .NET Core runtime or JavaScript's V8.
                Well, AVX can make a difference for certain programs, but for most of the Linux userspace it just doesn't matter. I'm not too sure about games; it might be worthwhile to compile Xonotic or Dhewm or something and test it. My guess is it'll probably be within a percent or two.
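
                If anyone does want to test that guess, a crude micro-benchmark is cheaper than rebuilding a whole game: compile the same file twice with different flags and compare the wall-clock times (the kernel and sizes below are arbitrary; a memory-bound loop like this one may well land in the percent-or-two range, since wider vectors don't make RAM any faster):

                    /* Build twice and compare:
                     *   gcc -O3 -o bench_generic bench.c              (x86-64/SSE2 baseline)
                     *   gcc -O3 -march=native -o bench_native bench.c
                     */
                    #include <stdio.h>
                    #include <time.h>

                    #define N (1 << 20)
                    static float a[N], b[N], c[N];

                    int main(void)
                    {
                        for (int i = 0; i < N; i++) {
                            a[i] = (float)i;
                            b[i] = (float)(N - i);
                        }

                        struct timespec t0, t1;
                        clock_gettime(CLOCK_MONOTONIC, &t0);
                        for (int rep = 0; rep < 200; rep++)
                            for (int i = 0; i < N; i++)
                                c[i] = a[i] * b[i] + c[i];   /* simple multiply-add kernel */
                        clock_gettime(CLOCK_MONOTONIC, &t1);

                        double secs = (double)(t1.tv_sec - t0.tv_sec)
                                    + (double)(t1.tv_nsec - t0.tv_nsec) / 1e9;
                        printf("%.3f s (check: %f)\n", secs, c[N / 2]);
                        return 0;
                    }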



                • #9
                  Originally posted by duby229 View Post
                  I'll agree that's effectively what did happen, but it's definitely not what Intel wanted.
                  Intel could have implemented their own version of it, but it wouldn't have changed much. They could have forced users onto two different 64-bit implementations of x86, with Intel's version eventually dominating due to their market share, and with AMD having to drop theirs in favour of Intel's at some point. So it didn't matter that much, unless Intel really had it in for AMD. In short, a win for the x86 architecture always meant a bigger win for Intel than for anybody else.

                  The Itanium wasn't exactly what Intel wanted either. Some Intel developers did think of it as the future, but top management didn't. Top management doesn't care so much about how the money is made, only that lots of it is being made. Itanium was then competition for their own x86 architecture, and the decision was left to the users, who didn't care half as much for it as the developers who had designed it. The x86 architecture won again, and so did Intel.



                  • #10
                    Originally posted by duby229 View Post

                    Well, AVX can make a difference for certain programs, but for most of the Linux userspace it just doesn't matter. I'm not too sure about games; it might be worthwhile to compile Xonotic or Dhewm or something and test it. My guess is it'll probably be within a percent or two.
                    Well, just to clarify here: AVX/AVX2 at least will make a big difference (more or less, depending on the hardware implementation), but not from just recompiling, outside of the low-hanging fruit.

                    Low-hanging fruit: your average simple loops that the compiler can auto-vectorize without user intervention in the code (which is not very common).

                    That's because moving your vectors from 128- to 256-bit width will require you to rethink (or hand-fix) your cache fetch strategy and direction (vertical or horizontal), since AVX/AVX2 doesn't always guarantee the same performance per cycle as SSE on the matching family of instructions, among other issues that the compiler simply cannot fix for you.

                    Which explains why a simple recompile won't boost your performance much, but when you hand-tune the code, yes, AVX/AVX2 will be way faster (caveat: certain restrictions apply; it's not magic or witchcraft).
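
                    A small illustration of that distinction (the loops are made up; -fopt-info-vec is a GCC flag that reports which loops it did and did not vectorize):

                        /* Build: gcc -O3 -mavx2 -fopt-info-vec -o fruit fruit.c */
                        #include <stdio.h>

                        /* Low-hanging fruit: contiguous, independent iterations.
                         * The compiler can vectorize this on its own and widen it to
                         * 256-bit AVX2 vectors, so a plain recompile already helps. */
                        void saxpy(float *restrict y, const float *restrict x, float a, int n)
                        {
                            for (int i = 0; i < n; i++)
                                y[i] = a * x[i] + y[i];
                        }

                        /* Not low-hanging fruit: each iteration depends on the one
                         * before it, so wider vectors buy nothing by themselves.
                         * Making this fast with AVX2 means restructuring the algorithm
                         * and its memory access pattern (the hand-tuning described
                         * above), not just recompiling. */
                        float running_decay(float *x, float k, int n)
                        {
                            float s = 0.0f;
                            for (int i = 0; i < n; i++) {
                                s = s * k + x[i];
                                x[i] = s;
                            }
                            return s;
                        }

                        int main(void)
                        {
                            float x[8] = {1, 2, 3, 4, 5, 6, 7, 8}, y[8] = {0};
                            saxpy(y, x, 2.0f, 8);
                            printf("%f %f\n", y[7], running_decay(x, 0.5f, 8));
                            return 0;
                        }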

