AMD 3D V-Cache Performance Optimizer Driver Posted For Linux

    Phoronix: AMD 3D V-Cache Performance Optimizer Driver Posted For Linux

    AMD today quietly posted a new open-source Linux kernel driver for review... the AMD 3D V-Cache Performance Optimizer Driver. The driver is intended to help optimize performance on systems sporting 3D V-Cache, such as the AMD Ryzen "X3D" parts and the EPYC "X" processors...
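
    For context, drivers like this are normally consumed from user space through a small sysfs knob. A minimal sketch of what that interaction could look like is below; the driver directory, attribute name, and accepted values are assumptions for illustration only, not details confirmed by the posted patches:

    # Hypothetical sketch: reading/toggling a 3D V-Cache preference knob via sysfs.
    # The driver directory, attribute name and accepted values below are all
    # assumptions for illustration; check the actual patches/documentation.
    from pathlib import Path

    DRIVER_DIR = Path("/sys/bus/platform/drivers/amd_x3d_vcache")  # assumed path

    def find_mode_attr():
        """Return the first amd_x3d_mode attribute (assumed name) under the driver, if any."""
        if not DRIVER_DIR.is_dir():
            return None
        for entry in DRIVER_DIR.iterdir():          # bound devices show up as symlinks here
            candidate = entry / "amd_x3d_mode"
            if candidate.is_file():
                return candidate
        return None

    def set_mode(mode: str) -> None:
        """Write the preference ('cache' or 'frequency' are assumed values); needs root."""
        attr = find_mode_attr()
        if attr is None:
            raise FileNotFoundError("3D V-Cache optimizer attribute not found")
        attr.write_text(mode)

    if __name__ == "__main__":
        attr = find_mode_attr()
        print("current mode:", attr.read_text().strip() if attr else "driver not present")

    If the final driver lands with a different path or value set, only the constants above would change.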


  • #2
    ... CCDs with all the same large L3 caches, the work on this driver now seems to indicate that this won't ...
    Uhh, too bad.



  • #3
    Sounds like a fine enough idea in theory, as long as this driver doesn’t use core parking and turn the 12/16 core X3D chips into 6/8 core ones like what AMD does on Windows with the Game Bar or whatever.



  • #4
    Seems to be for a future AGESA version; I can't find a _DSM with this GUID in the zen5-1.2.0.2 version I have...
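
    For anyone who wants to check their own firmware, one rough way is to dump and decompile the ACPI tables (acpidump + iasl from acpica-tools) and search the output for the GUID from the patch; a small sketch, with a placeholder GUID rather than the real one:

    # Rough sketch: search decompiled ACPI tables for a _DSM GUID.
    # Decompile first, e.g.:  sudo acpidump -b && iasl -d *.dat   (acpica-tools)
    # TARGET_GUID below is a PLACEHOLDER; substitute the GUID from the driver patch.
    import glob
    import re
    import sys

    TARGET_GUID = "00000000-0000-0000-0000-000000000000"  # placeholder, not the real GUID

    def find_guid(pattern="*.dsl"):
        # iasl usually renders recognized _DSM GUIDs as ToUUID("...") in the .dsl output;
        # if yours are left as raw Buffer bytes, search for those instead.
        needle = re.compile(re.escape(TARGET_GUID), re.IGNORECASE)
        hits = []
        for path in glob.glob(pattern):
            with open(path, errors="replace") as f:
                for lineno, line in enumerate(f, 1):
                    if needle.search(line):
                        hits.append((path, lineno, line.strip()))
        return hits

    if __name__ == "__main__":
        matches = find_guid(sys.argv[1] if len(sys.argv) > 1 else "*.dsl")
        if not matches:
            print("GUID not found; the firmware may simply not expose this _DSM yet")
        for path, lineno, line in matches:
            print(f"{path}:{lineno}: {line}")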



  • #5
    Originally posted by X_m7
    Sounds like a fine enough idea in theory, as long as this driver doesn’t use core parking and turn the 12/16 core X3D chips into 6/8 core ones like what AMD does on Windows with the Game Bar or whatever.
    What was the deal with Zen 5 on Windows? I know the Zen 4 X3D chips are using core parking there. But I remember reading some Zen 5 review that mentioned it was using some driver that Zen 4 parts weren't. Was it the mobile AI SKUs, due to the heterogeneous Zen 5 / Zen 5c cores?



  • #6
    Originally posted by Anux
    Uhh, too bad.
    Remember that there probably isn't that huge a performance benefit to 2 CCDs with 3D V-Cache in cache-sensitive applications, because of cross-CCD latency. I mean, we don't know that for certain, but it certainly seems like a plausible explanation.

    Originally posted by pWe00Iri3e7Z9lHOX2Qx
    What was the deal with Zen 5 on Windows? I know the Zen 4 X3D chips are using core parking there. But I remember reading some Zen 5 review that mentioned it was using some driver that Zen 4 parts weren't. Was it the mobile AI SKUs, due to the heterogeneous Zen 5 / Zen 5c cores?
    I'd guess it's something along the lines of picking between these two performance options:
    1. Game on one CCD, Windows background tasks on the other CCD.
    2. Game + Windows background tasks on one CCD, other CCD off, higher boosting on the active CCD due to power management, and no need to worry about cross-CCD latency affecting communication between the game and Windows.
    My guess is that in testing, Game + Windows did not saturate one CCD (which I can believe, as most games use at most 4 threads), and Zen 5 initially had some cross-CCD latency issues (now more or less fixed), so option 2 made the most sense.
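
    For what it's worth, something close to option 2 can already be approximated by hand on Linux by pinning the game to one CCD; a minimal sketch, assuming logical CPUs 0-7 (plus SMT siblings 16-23) sit on the CCD you want, which should be verified with lscpu -e since the V-Cache CCD's numbering varies by SKU and BIOS:

    # Sketch: launch a program pinned to one CCD (roughly "option 2" done by hand).
    # ASSUMPTION: logical CPUs 0-7 plus SMT siblings 16-23 belong to the CCD you
    # want (often the V-Cache CCD on X3D parts) -- verify with lscpu -e first.
    import os
    import sys

    CCD0_CPUS = set(range(0, 8)) | set(range(16, 24))  # assumed topology

    if __name__ == "__main__":
        if len(sys.argv) < 2:
            sys.exit("usage: pin_to_ccd.py <command> [args...]")
        # Pin ourselves first; the exec'd program (and every thread it creates)
        # inherits the affinity mask, similar to taskset -c 0-7,16-23 <command>.
        os.sched_setaffinity(0, CCD0_CPUS)
        os.execvp(sys.argv[1], sys.argv[1:])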



  • #7
    Originally posted by habilain
    My guess is that in testing, Game + Windows did not saturate one CCD (which I can believe, as most games use at most 4 threads), and Zen 5 initially had some cross-CCD latency issues (now more or less fixed), so option 2 made the most sense.
    Lol, Lmao even. Old FUD is Old.

    It was vaguely true in the late 00s, when 4-core configurations were common, that games didn't use more than 4 threads, but ever since the PS4 generation some games will thread out to 8 cores, and most will take as many cores as you can give them. That doesn't mean you're going to get more FPS out of it, but you will get better latency, as was widely examined back when AMD launched Ryzen in the first place.



  • #8
    Originally posted by X_m7
    Sounds like a fine enough idea in theory, as long as this driver doesn’t use core parking and turn the 12/16 core X3D chips into 6/8 core ones like what AMD does on Windows with the Game Bar or whatever.
    Tbh, it sounds like a terrible idea if the CPU can't tell whether it's running a cache- or IPC-sensitive load and needs to be told. I'll take it if it works; hopefully we won't need to modify all apps to send those hints.



  • #9
    Originally posted by habilain
    Remember that there probably isn't that huge a performance benefit to 2 CCDs with 3D V-Cache in cache-sensitive applications, because of cross-CCD latency.

    I fail to see why 2 would be a special case when it clearly works with 12 CCDs (see the EPYC 9684X).



  • #10
    Originally posted by Luke_Wolf
    Lol, Lmao even. Old FUD is Old.

    It was vaguely true in the late 00s, when 4-core configurations were common, that games didn't use more than 4 threads, but ever since the PS4 generation some games will thread out to 8 cores, and most will take as many cores as you can give them. That doesn't mean you're going to get more FPS out of it, but you will get better latency, as was widely examined back when AMD launched Ryzen in the first place.
    No, that's still the case, depending on the graphics API. Most games using APIs that predate D3D12 or Vulkan will use 2-4 cores at most.

    The same games, using the same engine (e.g. Unreal Engine) but with the D3D12 renderer, will use 16+ threads if available.
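
    An easy way to sanity-check this on whatever game is running is just to count its kernel-visible threads under /proc; a quick sketch (Linux only, and it counts every thread including audio/loader helpers, so treat it as an upper bound):

    # Sketch: count the threads of a running process by PID (Linux /proc only).
    import os
    import sys

    def thread_count(pid: int) -> int:
        # each entry under /proc/<pid>/task is one thread of the process
        return len(os.listdir(f"/proc/{pid}/task"))

    if __name__ == "__main__":
        pid = int(sys.argv[1]) if len(sys.argv) > 1 else os.getpid()
        print(f"PID {pid} has {thread_count(pid)} threads right now")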
