AMD Zen 4 AVX-512 Performance Analysis On The Ryzen 9 7950X


  • #31
    Originally posted by ms178 View Post
    Right, but wasn't AVX-512 particularly better suited for more widespread use than other vector ISAs before it? I think the ispc creator blogged about that quite extensively, praising AVX-512 for its usefulness for general-purpose compute tasks and its performance-per-area advantages.
    I know it's supposed to have some features, like predication and scatter/gather, which make it more efficient. It also doubles the number of vector registers, in addition to increasing their width.
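For anyone unfamiliar with predication: the idea is that a mask register selects which SIMD lanes commit their result, so loops with per-element conditions can be vectorized without branches. A toy scalar emulation in Python, for illustration only — real code would use intrinsics such as `_mm512_mask_add_ps`:

```python
# Scalar sketch of AVX-512-style predication: a bitmask decides which
# lanes of a vector operation actually write their result, and which
# lanes keep the destination's old value.

def masked_add(dst, a, b, mask):
    """Per-lane add: lane i is updated only if bit i of mask is set."""
    return [ai + bi if (mask >> i) & 1 else di
            for i, (di, ai, bi) in enumerate(zip(dst, a, b))]

dst  = [0.0] * 8
a    = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
b    = [10.0] * 8
mask = 0b00001111              # only the low four lanes are active

print(masked_add(dst, a, b, mask))
# lanes 0-3 get a+b; lanes 4-7 keep dst's old value
```

The hardware version does the same thing lane-parallel, which is why branchy inner loops that were previously scalar-only become vectorizable.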

    However, the degree of clock-throttling it caused in 14 nm Intel CPUs with the feature was very much at odds with using it for simple things like string-processing. This writeup captures the dilemma particularly well:



    So, it really doesn't matter how easy to use it might be: the clock-throttling effects greatly limited its use to workloads that heavily utilize it.

    Originally posted by ms178 View Post
    Well, it took AMD more than half a decade to implement it
    Because it was a trap, and they knew it.

    Originally posted by ms178 View Post
    and even though AMD was touting GPUs as better suited for vector code, that effort hasn't materialized yet, and I am still waiting for them to fulfill their promises from 2012.
    Huh? They've stumbled, but it's not like they haven't delivered OpenCL, OpenMP, and now HiP. As well as C++ AMP and DirectCompute, on Windows.

    Sure, ROCm was in the wilderness for a long time, but they had legacy, proprietary drivers available for much of that time. They deserve some criticism for that, but it's not as if they weren't working on it the whole time.
    Last edited by coder; 26 September 2022, 10:45 PM.



    • #32
      Originally posted by schmidtbag View Post
      Haha fair enough - half-baked was more negative than I intended it to be, but I guess my point was it wasn't a "complete" AVX-512 implementation.
      Complete as in what? It implements all of the same extensions as Ice Lake and Rocket Lake, as well as BF16.

      It's not natively 512 bits wide, but so what? All that matters is how it performs. As bridgman pointed out, Zen 4 has 6 FP dispatch ports, including 2x adds and 2x mul/mac. So, that's still 512 bits of mul/macs and 512 bits of adds you can do per cycle. As long as you've got enough work to keep the pipelines fed, I think the throughput is still pretty competitive.
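As a rough sanity check on those numbers — a sketch assuming 2x 256-bit add pipes and 2x 256-bit mul/mac pipes as described above, not an official figure:

```python
# Back-of-the-envelope throughput math for Zen 4's 256-bit FP pipes.
# Pipe counts are taken from the discussion above; treat the exact
# port layout as an approximation.

PIPE_WIDTH_BITS = 256
ADD_PIPES = 2
MUL_MAC_PIPES = 2

add_bits_per_cycle = ADD_PIPES * PIPE_WIDTH_BITS      # 512 bits of adds
mac_bits_per_cycle = MUL_MAC_PIPES * PIPE_WIDTH_BITS  # 512 bits of mul/macs

# An FMA counts as two FLOPs per lane; 512 bits = 16 fp32 lanes.
fma_lanes = mac_bits_per_cycle // 32
add_lanes = add_bits_per_cycle // 32
peak_fp32_flops_per_cycle = fma_lanes * 2 + add_lanes

print(add_bits_per_cycle, mac_bits_per_cycle, peak_fp32_flops_per_cycle)
```

So even without native 512-bit datapaths, the aggregate per-cycle width matches a single-FMA 512-bit design, provided the scheduler can keep all four pipes busy.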



      • #33
        Originally posted by schmidtbag View Post
        Haha fair enough - half-baked was more negative than I intended it to be, but I guess my point was it wasn't a "complete" AVX-512 implementation.
        It's perfectly complete, the only question was whether the implementation would perform well or not and the benchmarks seem to show that it does.



        • #34
          Originally posted by coder View Post

          Huh? They've stumbled, but it's not like they haven't delivered OpenCL, OpenMP, and now HiP. As well as C++ AMP and DirectCompute, on Windows.

          Sure, ROCm was in the wilderness for a long time, but they had legacy, proprietary drivers available for much of that time. They deserve some criticism for that, but it's not as if they weren't working on it the whole time.
          Delivering OpenCL, OpenMP, and HIP is not the same performance-wise: HSA tremendously cut the overhead for launching kernels, offered cache coherency between CPU and GPU, and promised to ease the implementation burden for programmers: https://ieeexplore.ieee.org/document/7482093
          Last edited by ms178; 27 September 2022, 02:02 PM.



          • #35
            Originally posted by Sin2x View Post

            You've been a fan of an instruction set? What's wrong with you?

            Obligatory Linus quote: https://www.realworldtech.com/forum/...rpostid=193190
            Most of that criticism is invalid for AMD's implementation, though. It no longer has the same performance downside, or as big a transistor cost.

            Personally, I feel AVX-512 would be better if we just forced it to only work in 128-bit and 256-bit modes, and only used the new, improved instructions. The last downside AMD can't get rid of is the 4x as many register bits to save for the kernel (twice as many registers, each twice as wide).
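To spell out that "4x" arithmetic (vector state only; the eight 64-bit AVX-512 opmask registers come on top of this):

```python
# Context-switch state for the vector registers alone:
# AVX2 exposes 16 registers x 256 bits; AVX-512 exposes 32 x 512 bits.

avx2_bits   = 16 * 256
avx512_bits = 32 * 512

avx2_bytes   = avx2_bits // 8    # 512 bytes
avx512_bytes = avx512_bits // 8  # 2048 bytes

print(avx2_bytes, avx512_bytes, avx512_bits / avx2_bits)
# 2048 bytes vs 512 bytes of vector state: the 4x increase the kernel
# has to save and restore on every context switch
```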



            • #36
              Originally posted by coder View Post
              It really depends on your workload. If you're running an AVX-512 heavy workload, then it was always a performance and efficiency win! Even on 14 nm, and even in spite of the down-clocking!
              If your entire hot path consists of vectorized code that can take 512-bit-wide data, then yes. But outside of that one Anandtech benchmark and some scientific calculations (which are better run on GPUs anyway), there aren't many programs that can do that. Of the SIMD code that I've written, I can think of perhaps one or two loops where AVX-512 might help.

              Originally posted by coder View Post
              Where AVX-512 got into trouble was in workloads that used it for around 10% - 20% of the instructions, which was enough to trigger significant downclocking but not enough that it could compensate with its greater throughput. I experienced this, first hand. When we recompiled with AVX-512 completely disabled, we got higher overall throughput in my software.
              Which is very likely what a lot of commercial SW will do - games being a likely candidate that everyone will benchmark - and that is why AMD's take on AVX-512 might perform better in a lot of real-world scenarios.

              Originally posted by coder View Post
              You realize you're comparing an Intel 14 nm CPU with a TSMC N5 one, right? Rocket Lake's efficiency was always a joke. A very bad joke. To make matters worse, they solved the AVX-512 clock penalty by giving it an extremely high power budget. However, I think it's also a single-FMA design (somebody correct me if I'm wrong about that). So, power consumption was atrocious and performance wasn't even all that great.
              Yes but there is nothing better to compare against. Are there any ADL benchmarks with its AVX512 haxxored enabled?



              • #37
                Those are very good results for "firmware" AVX-512. Just like Zen 1 did pretty well at 256-bit with 128-bit units.

                There's no sign of throttling, unlike the significant throttling Intel had way back with AVX2.



                • #38
                  Originally posted by ms178 View Post
                  Delivering OpenCL, OpenMP, and HIP is not the same performance-wise: HSA tremendously cut the overhead for launching kernels, offered cache coherency between CPU and GPU, and promised to ease implementation for programmers: https://ieeexplore.ieee.org/document/7482093
                  HSA was a nice dream, but it never gained the necessary industry momentum. I think some of its advantages still live on in the form of ROCm, which I believe was architected to support it. Perhaps bridgman can say more about that.

                  BTW, OpenCL 2.0 has a feature called SVM (Shared Virtual Memory), which I believe is cache-coherent. Also, CXL supports cache-coherency at the interconnect protocol level.



                  • #39
                    Originally posted by carewolf View Post
                    Personally, I feel AVX-512 would be better if we just forced it to only work in 128-bit and 256-bit modes, and only used the new, improved instructions. The last downside AMD can't get rid of is the 4x as many register bits to save for the kernel (twice as many registers, each twice as wide).
                    There's no going back. At least, not part-way back, like what you describe. But register bits & context size will be more easily accommodated by ever-shrinking process nodes. So, it's much less of an issue than Intel's decision to implement it on their 14 nm node.

                    AMD's implementation is interesting when you put it next to ARM's recent announcement of the Neoverse V2. The V1 had 2x 256-bit SVE, but the V2 will feature 4x 128-bit SVE2. I wonder if this represents a growing industry consensus that ultra-wide SIMD isn't a good fit for general-purpose CPUs. Or maybe it'll just turn out to be a speedbump.
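For what it's worth, the aggregate SVE datapath width is the same in both of those designs; the V2 just splits it into more, narrower pipes (pipe counts as stated above):

```python
# Aggregate SVE datapath width for the two Neoverse designs mentioned:
v1_bits = 2 * 256   # Neoverse V1: 2 pipes x 256-bit SVE
v2_bits = 4 * 128   # Neoverse V2: 4 pipes x 128-bit SVE2

print(v1_bits, v2_bits)  # same total width, split into narrower pipes
```

So the V2 change isn't about less SIMD throughput; it trades width per instruction for more independent issue slots, which tends to suit mixed general-purpose code better.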



                    • #40
                      Originally posted by MadCatX View Post
                      Yes but there is nothing better to compare against.
                      There were no other socketed, mainstream desktop CPUs with it. However, Intel had some HEDT (socket 2066, I think) CPUs with it, which launched back in 2017 or so.

                      Better yet would be to compare it with a 65 W Tiger Lake H-based NUC Extreme. The H's are 8-core and basically look like they were originally intended to be a mainstream desktop CPU that got cancelled either because of power/clock limitations in Intel's 10 nm SF process (now known as "Intel 10") or yield problems. Or, maybe they simply ran too close to the launch window of Alder Lake.

                      Originally posted by MadCatX View Post
                      Are there any ADL benchmarks with its AVX512 haxxored enabled?
                      I agree that it would be ideal to compare it against an initial-stepping Alder Lake CPU, on a motherboard which allows its AVX-512 to be enabled. I'm not sure if Michael has such a setup, however. Perhaps someone has uploaded these results to the OpenBenchmarking database, although we don't necessarily know what their OC configuration and cooling setup is like.
