
AMD Ryzen 9 3900XT vs. Intel Core i9 10900K Linux Gaming Performance


  • #11
    It would be interesting to see a few results with the latest Proton version; I don't know which games would be best for benchmarking. Proton has had some extra threading work done on it, so who knows, it might benefit AMD.

    You could even run the Proton versions of the Linux-ported games to see if there is a difference in the way threading is done on Linux for some of these games.



    • #12
      Originally posted by vladpetric View Post
      Well, when the binary code is the same, it is the microarchitecture of the processor that mostly affects IPC. And in this respect Intel seems to be doing better.
      OK, I have to ask... given that the Intel CPU has a ~12% higher boost clock and is ~4% faster on average, where does the argument about Intel having better IPC come from?


      • #13
        Originally posted by bridgman View Post

        OK, I have to ask... given that the Intel CPU has a ~12% higher boost clock and is ~4% faster on average, where does the argument about Intel having better IPC come from?
        You're right, it doesn't, so scratch that.

        Looking at the top of the chart shows some pretty significant differences between Intel and AMD though.



        • #14
          I don't see it, or maybe I missed it somewhere in the article: did Michael say what memory speed the test systems were running? Gaming performance on Ryzen 3000 seems to benefit a lot from DDR4-3600 (or 3733/3800 if overclocking the Infinity Fabric).
          Last edited by nranger; 10 July 2020, 10:37 PM.



          • #15
            Originally posted by vladpetric View Post
            Looking at the top of the chart shows some pretty significant differences between Intel and AMD though.
            Agreed, although differences that large in specific apps often mean that they are taking different code paths.


            • #16
              Originally posted by vladpetric View Post

              You're right, it doesn't, so scratch that.

              Looking at the top of the chart shows some pretty significant differences between Intel and AMD though.
              Gaming is Intel's last remaining strong point, mostly for two reasons:
              • It tends to take advantage of the higher single-core boost clocks
              • It can be more memory latency sensitive than many other tasks
              You can see the latter by comparing Renoir vs Matisse, where the non-chiplet version of the architecture has improved latencies and performs better even with massively reduced L3 caches.
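
              As a rough illustration of what "memory latency sensitive" means here, below is a minimal pointer-chasing sketch in C (my own example, not from the article; buffer and iteration sizes are arbitrary). Every load depends on the previous one, so the loop time is dominated by cache/memory latency rather than bandwidth - the kind of access pattern where Matisse's extra hop to the I/O die hurts and Renoir's monolithic die helps, even with less L3.

              /* build: gcc -O2 chase.c -o chase */
              #include <stdio.h>
              #include <stdlib.h>
              #include <time.h>

              #define N     (1u << 24)   /* 16M entries * 8 bytes = 128 MiB, well past any L3 */
              #define ITERS (1u << 26)

              int main(void)
              {
                  size_t *chain = malloc(N * sizeof *chain);
                  size_t *order = malloc(N * sizeof *order);
                  if (!chain || !order)
                      return 1;

                  /* Build a randomly shuffled cycle so the prefetcher can't guess the next address. */
                  for (size_t i = 0; i < N; i++)
                      order[i] = i;
                  srand(1);
                  for (size_t i = N - 1; i > 0; i--) {
                      size_t j = (size_t)rand() % (i + 1);
                      size_t t = order[i]; order[i] = order[j]; order[j] = t;
                  }
                  for (size_t i = 0; i < N; i++)
                      chain[order[i]] = order[(i + 1) % N];
                  free(order);

                  struct timespec t0, t1;
                  clock_gettime(CLOCK_MONOTONIC, &t0);
                  size_t p = 0;
                  for (size_t i = 0; i < ITERS; i++)
                      p = chain[p];                      /* serialized dependent loads */
                  clock_gettime(CLOCK_MONOTONIC, &t1);

                  double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (double)(t1.tv_nsec - t0.tv_nsec);
                  printf("%.1f ns per load (sink: %zu)\n", ns / ITERS, p);  /* print p so the loop isn't optimized away */
                  free(chain);
                  return 0;
              }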



              • #17
                Originally posted by nranger View Post
                I don't see it, or maybe I missed it somewhere in the article: did Michael say what memory speed the test systems were running? Gaming performance on Ryzen 3000 seems to benefit a lot from DDR4-3600 (or 3733/3800 if overclocking the Infinity Fabric).
                DDR4-3600
                Michael Larabel
                https://www.michaellarabel.com/



                • #18
                  Originally posted by theriddick View Post
                  It would be interesting to see a few results with the latest Proton version; I don't know which games would be best for benchmarking. Proton has had some extra threading work done on it, so who knows, it might benefit AMD.

                  You could even run the Proton versions of the Linux-ported games to see if there is a difference in the way threading is done on Linux for some of these games.
                  This was the latest Proton 5.0 on Steam.
                  Michael Larabel
                  https://www.michaellarabel.com/



                  • #19
                    Originally posted by bridgman View Post

                    Agreed, although differences that large in specific apps often mean that they are taking different code paths.
                    OK, there's a lot to be said here, but some of it is speculation on my part (in the absence of evidence).

                    * Typically, most firms don't like doing code specialization, because it's one more dimension they have to worry about when testing. Libraries do it (and those libraries tend to be shared by multiple games). In any case, the specialization tends to be done via the CPUID instruction, which primarily identifies available instruction sets (e.g., AVX2 present or not). Here the Ryzen line started with a pretty good feature set, but unfortunately parts of the implementation were pretty slow: some AVX2 instructions take a lot of micro-ops and many cycles to execute (all the vector gather instructions are super slow, as are BMI2 pdep/pext), so if you activate a path that says "hey, AVX2 is there!" you might actually get slower overall code (see the dispatch sketch at the end of this post). See https://www.agner.org/optimize/instruction_tables.pdf - it only covers first-generation Ryzen though.

                    Why am I saying all this?

                    a) It could technically be the same code path, except that it doesn't run as fast on AMD.

                    b) Most importantly, if you have a more recent Ryzen processor (2000, 3000, or a similar Threadripper), I beg you to contact Dr. Fog (on the forum at agner.org) and let him run his instruction benchmark suite on your processor. This would not only provide real evidence for this, but would also help the larger community. He is not only good, but the kind of person who brings actual science to computer science.

                    * Intel oftentimes sends people out to software firms to help them optimize their software on Intel CPUs; AMD doesn't do that. Typically those optimizations don't result in overall slower code when run on AMD, though they do help Intel's relative performance.

                    * For a lot of the benchmarks that are close to 0% (really, few people care about a 2% change in games), it could be that the CPU is not being taxed. Again, hypothesis/speculation.
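
                    To make the CPUID point above concrete, here is a minimal dispatch sketch (my own illustration, not from any particular game; the process_* kernels are hypothetical and the GCC/Clang x86 builtins are assumed). The check only tells you that AVX2 exists, not that it is fast, which is exactly how the "optimized" path can end up losing on early Zen.

                    #include <stdio.h>

                    static void process_avx2(void)   { puts("AVX2 path"); }   /* hypothetical optimized kernel */
                    static void process_scalar(void) { puts("scalar path"); } /* hypothetical fallback kernel  */

                    int main(void)
                    {
                        __builtin_cpu_init();                    /* GCC/Clang builtin: populate CPU feature info */
                        if (__builtin_cpu_supports("avx2"))
                            process_avx2();    /* also taken on Zen 1, where gathers and pdep/pext are microcoded and slow */
                        else
                            process_scalar();
                        return 0;
                    }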
                    Last edited by vladpetric; 11 July 2020, 01:17 PM.



                    • #20
                      Originally posted by smitty3268 View Post

                      Gaming is Intel's last remaining strong point, mostly for two reasons:
                      • It tends to take advantage of the higher single-core boost clocks
                      • It can be more memory latency sensitive than many other tasks
                      You can see the latter by comparing Renoir vs Matisse, where the non-chiplet version of the architecture has improved latencies and performs better even with massively reduced L3 caches.
                      Can you include a link for Renoir vs Matisse? Thanks.

