AMD Ryzen 9 7900X / 7950X Linux Gaming Performance


  • #11
    Would be interesting to see how well the iGPU handles render offload. With all that bandwidth it should be quite fast. Probably fast enough that the power savings from turning off a beefy dGPU is worth the slight overhead of copying frames around.



    • #12
      Aww, I thought this was going to be an iGPU comparison!



      • #13
        Thank you Michael for all these benchmarks. You have done so much work, and there's such a variety of tests, including this gaming-dedicated review. To be honest, I'm glad I went with AMD, because it seems to have matured quite well on Linux (I'm on an R5 5600X). I remember that for gaming, the first and second generations of Zen CPUs struggled so much on Linux against Intel, while on Windows they were matching Intel's best offerings, if not beating them. This was due to unoptimized drivers. Over time it got better and better, and now AMD has the lead in most games and other benchmarks, despite Intel also improving. Raptor Lake might be a strong architecture. Although I think AMD was not fully ready with Zen 4, they made it available quickly in order to have a timing advantage over Intel's new generation. AMD will probably release a v2 of their Zen 4 CPUs a few months after Raptor Lake to keep up with the never-ending race. This will probably feature lower TDP/temperatures, 3D V-Cache models, and AM4-adapted or refreshed versions, as they mentioned earlier.



        • #14
          Going big.little still yields problems for Intel, besides the obvious loss of AVX-512 support:

          https://www.phoronix.com/review/ryze...eries-gaming/4

          If you look at the "Total War - Three Kingdoms" benchmark, you'll see the top-end Intel i9-12900K Alder Lake consistently coming in last place, even being beaten by AMD's Ryzen 5500!

          This has to mean that Linux is placing the game threads onto the E-cores and leaving them there, instead of moving them to the P-cores.

          Interestingly enough AMD still lost quite a few gaming benchmarks on Windows:



          I hate to admit it, but this unfortunately does hint at Linux still having problems with hybrid architectures, which is rather surprising, considering that Android has been running on such CPUs for many years now...

          Anyone care to provide a good explanation for that?
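One way to test the E-core-placement hypothesis would be to pin the game to the P-cores only and see whether the gap closes. A minimal sketch: the mapping of logical CPUs 0-15 to the i9-12900K's P-core threads is an assumption here, so verify it with `lscpu --extended` first.

```python
import os

# Assumed i9-12900K layout: logical CPUs 0-15 are the eight hyper-threaded
# P-cores and 16-23 are the E-cores. Confirm with `lscpu --extended`.
P_CORES = set(range(16))

def pin_to_p_cores(pid: int = 0) -> set:
    """Restrict a process (0 = the caller) to the assumed P-core set."""
    # Only request CPUs that actually exist on this machine.
    target = P_CORES & os.sched_getaffinity(pid)
    if target:
        os.sched_setaffinity(pid, target)
    return os.sched_getaffinity(pid)

print(sorted(pin_to_p_cores()))
```

Launching the game under this restriction (equivalently, `taskset -c 0-15 ./game`) and comparing frame rates against an unrestricted run would separate scheduler placement effects from other hybrid-related penalties.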



          • #15
            Originally posted by Linuxxx View Post
            This has to mean that Linux is placing the game threads onto the E-cores and leaving them there, instead of moving them to the P-cores.
            Probably, but there turns out to be another hybrid-related performance penalty in Alder Lake. The ring bus stops for the E-core clusters are clocked lower, if the E-cores are enabled. This creates a little bit of a bottleneck, though probably not enough to explain what you pointed out. Hopefully, it'll be resolved in Raptor Lake.



            • #16
              Originally posted by Linuxxx View Post
              Going big.little still yields problems for Intel, besides the obvious loss of AVX-512 support:

              https://www.phoronix.com/review/ryze...eries-gaming/4

              If you look at the "Total War - Three Kingdoms" benchmark, you'll see the top-end Intel i9-12900K Alder Lake consistently coming in last place, even being beaten by AMD's Ryzen 5500!

              This has to mean that Linux is placing the game threads onto the E-cores and leaving them there, instead of moving them to the P-cores.

              Interestingly enough AMD still lost quite a few gaming benchmarks on Windows:



              I hate to admit it, but this unfortunately does hint at Linux still having problems with hybrid architectures, which is rather surprising, considering that Android has been running on such CPUs for many years now...

              Anyone care to provide a good explanation for that?
              It could be that the kernel is just more mature for those CPUs. Even when two CPUs look similar, the details of how a feature is implemented can drastically change performance. AMD's AVX-512 solution is an example: it can sometimes beat Intel's implementation because the design itself doesn't require cutting the clock speed in half, so the kernel simply doesn't have to do anything to take advantage of it. But it could just as well have been designed in a way that required kernel intervention.



              • #17
                Originally posted by Linuxxx View Post
                Going big.little still yields problems for Intel, besides the obvious loss of AVX-512 support:
                https://www.phoronix.com/review/ryze...eries-gaming/4
                If you look at the "Total War - Three Kingdoms" benchmark, you'll see the top-end Intel i9-12900K Alder Lake consistently coming in last place, even being beaten by AMD's Ryzen 5500!
                This has to mean that Linux is placing the game threads onto the E-cores and leaving them there, instead of moving them to the P-cores.
                Interestingly enough AMD still lost quite a few gaming benchmarks on Windows:
                I hate to admit it, but this unfortunately does hint at Linux still having problems with hybrid architectures, which is rather surprising, considering that Android has been running on such CPUs for many years now...
                Anyone care to provide a good explanation for that?
                It all started years before the battlegrounds we see today. Just a reminder:

                Intel had a security feature, SGX, which was required to play UHD Blu-rays, so AMD CPUs could not play UHD Blu-rays at all. But security researchers found so many bugs and holes in it that Intel dropped SGX from its 11th- and 12th-gen Core chips.

                That Intel failed to produce a high-performance AVX-512 implementation with high clock speeds, while AMD on its first try delivered an AVX-512 implementation that is fast, efficient, and runs at high clock speeds, shows that Intel has been unable to bring useful features to market for many years.

                No matter what they do, SGX or Arc GPUs or AVX-512, everything Intel does is a disaster.

                So, just like SGX, Intel was also forced to remove AVX-512 from the 12th-gen Core...

                "big.little still yields problems for Intel"

                People have been buying AMD CPUs for years precisely to avoid this Intel big.LITTLE disaster, and now, with the Ryzen 7000 series, AMD is even faster without big.LITTLE.

                Intel has been a failure for many years, and these "big.little" problems are just symptoms of that disaster.

                "Interestingly enough AMD still lost quite a few gaming benchmarks on Windows:"

                Rest assured, a Ryzen 7700X3D with stacked 3D cache will beat any Intel CPU in gaming, even on Windows.
                Phantom circuit Sequence Reducer Dyslexia



                • #18
                  I'm not sure "power efficiency" measured this way is meaningful. Sure, the AMD 5500 IGP-less APU uses fewer joules per frame at 300 FPS than the 7950X does at 500 FPS. We know more GHz, more volts = more power. But what about 60 FPS and 120 FPS?

                  I think a better measure of power efficiency would be to cap the frame rate at a few common targets, such as 60, 120, and 140 (what you'd use on a 144 Hz VRR monitor), and let the default cpufreq governor work. Then report the average power. If intel_pstate hwp powersave is unsuitable for gaming, or schedutil causes stuttering as is widely reported, that should show up in the frametime box plots.
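To illustrate why the two metrics can disagree, with entirely made-up numbers (nothing here is measured):

```python
def joules_per_frame(avg_watts: float, fps: float) -> float:
    """Energy per rendered frame: average power divided by frame rate."""
    return avg_watts / fps

# Illustrative figures only: an uncapped high-FPS run can look better
# *per frame* while drawing far more total power than a 60 FPS cap.
uncapped = joules_per_frame(avg_watts=170.0, fps=500.0)  # 0.34 J/frame
capped = joules_per_frame(avg_watts=45.0, fps=60.0)      # 0.75 J/frame
print(uncapped, capped)
```

So a chip that "wins" joules-per-frame at its uncapped maximum can still be the worse choice for someone gaming at a fixed refresh rate, which is why average power at common FPS caps is the more relevant number.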

                  Also,

                  Originally posted by binarybanana View Post
                  Would be interesting to see how well the iGPU handles render offload. With all that bandwidth it should be quite fast. Probably fast enough that the power savings from turning off a beefy dGPU is worth the slight overhead of copying frames around.
                  I haven't tested this scientifically, and I haven't had success finding anything that can monitor VRAM usage on my old Haswell IGP, but I am using hybrid graphics with a low-VRAM dGPU, and I have a hunch that keeping all the tens of GUI windows' framebuffers in the system memory is freeing up a scarce resource for the exclusive use of whatever heavy 3D application gets run on the dGPU.
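For what it's worth, AMD's amdgpu driver does expose VRAM usage through sysfs, though i915 on a Haswell IGP has no equivalent file, which matches your experience. A minimal sketch, assuming the amdgpu sysfs layout (`mem_info_vram_used` / `mem_info_vram_total`):

```python
from pathlib import Path

def vram_usage(card: str = "card0"):
    """Read VRAM usage from amdgpu's sysfs files, if present.

    Returns (used_bytes, total_bytes), or None when the files are absent
    (e.g. non-amdgpu hardware such as an Intel IGP).
    """
    dev = Path(f"/sys/class/drm/{card}/device")
    used = dev / "mem_info_vram_used"
    total = dev / "mem_info_vram_total"
    if used.exists() and total.exists():
        return int(used.read_text()), int(total.read_text())
    return None

print(vram_usage())
```

On an amdgpu system, polling this while opening and closing desktop windows would let you check the hunch directly: if the desktop renders on the IGP, dGPU VRAM usage shouldn't move.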



                  • #19
                    Originally posted by yump View Post
                    I haven't tested this scientifically, and I haven't had success finding anything that can monitor VRAM usage on my old Haswell IGP, but I am using hybrid graphics with a low-VRAM dGPU, and I have a hunch that keeping all the tens of GUI windows' framebuffers in the system memory is freeing up a scarce resource for the exclusive use of whatever heavy 3D application gets run on the dGPU.
                    I don't know your setup well enough to say for sure, but that's certainly theoretically valid. (I covered some related pieces in one of the Pi threads a while back).

                    For me, the most interesting aspect of IGPs is the ability to use passthrough of a beefy GPU for games without having to burn more power on a second discrete GPU just to have an accelerated desktop (especially for video playback). Zen 4's iGPU may be super-weak (and it's fairly surprising that it doesn't have VP9 ENcode), but it'll do that job very nicely.



                    • #20
                      Originally posted by coder View Post
                      Don't forget that Ryzen 7000 also jumped to DDR5! So, it's not only a CPU generation newer, but also much more memory bandwidth and twice as many channels.
                      Oh. Forgot about that one. So true.

