The Current CPU Driver Usage Difference Between RADV/RadeonSI & NVIDIA



    Phoronix: The Current CPU Driver Usage Difference Between RADV/RadeonSI & NVIDIA

    Yesterday I posted some fresh GPU/driver benchmark results for discrete AMD Radeon and NVIDIA GeForce graphics cards. These were some of the most competitive numbers yet we've seen out of the open-source RadeonSI OpenGL and RADV drivers while using the latest Linux 4.15 kernel, especially for the GTX 1060 vs. RX 580 battle. In the comments were requests to see some CPU utilization numbers, including from one of the Radeon Linux developers, so here is a look at how the CPU usage compares...

    http://www.phoronix.com/scan.php?pag...-Driver-Impact

  • #2
    Everything seems pretty much the same to me.

    Comment


    • #3
      I (temporarily?) downgraded from a GTX 1070 to RX 560.
      RadeonSI is definitely smoother than Nvidia (fewer frametime spikes) in Serious Sam 3 Fusion and Metro Redux, and CPU performance is nice on my 6700K. The higher fps offered by Nvidia high-end cards is imho quite worthless, considering the bad perceived performance. If one desperately wants to play on Linux, I'd suggest Polaris and reduced detail settings.

      Comment


      • #4
        Small thing, but any chance the graphs could use the same colors? It was kind of confusing at first glance.

        Comment


        • #5
          Originally posted by aaahaaap View Post
          Small thing, but any chance the graphs could use the same colors? Was kind of confusing on a first look
          In the future, yes, for two-way comparisons... I made a change to pts_Graph to be smarter about dual comparisons, but that code has yet to land on OpenBenchmarking.org. Beyond that it's out of my hands, as the code is all automated; the differing colors weren't intentional.
          Michael Larabel
          http://www.michaellarabel.com/

          Comment


          • #6
            Originally posted by aufkrawall View Post
            I (temporarily?) downgraded from a GTX 1070 to RX 560.
            RadeonSI is definitely smoother than Nvidia (fewer frametime spikes) in Serious Sam 3 Fusion and Metro Redux, and CPU performance is nice on my 6700K. The higher fps offered by Nvidia high-end cards is imho quite worthless, considering the bad perceived performance. If one desperately wants to play on Linux, I'd suggest Polaris and reduced detail settings.
            Not defending nvidia here, but generally the more frames you throw at the game, the more CPU usage you will get.
            Most people want hundreds and hundreds of frames, although that usually leads to frametime spikes and inconsistencies due to the higher CPU usage.
            Capping the game with vsync on your GTX 1070 would probably have delivered better consistency.

            Keep in mind that what I'm saying here might not apply to your case, and your RX 560 might really be behaving much better than the GTX 1070.
            Also, you have an i7 6700K, which is a beast on its own XD
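
            The capping suggestion can be sketched as a simple sleep-based frame limiter. This is only an illustration of the idea, not code from any actual engine; `run_capped` and the dummy workload are hypothetical names:

```python
import time

def run_capped(render_frame, target_fps=60.0, frames=120):
    """Render `frames` frames, padding each one to at least 1/target_fps seconds."""
    frame_budget = 1.0 / target_fps
    frametimes = []
    for _ in range(frames):
        start = time.perf_counter()
        render_frame()                          # the actual per-frame work
        elapsed = time.perf_counter() - start
        if elapsed < frame_budget:
            time.sleep(frame_budget - elapsed)  # give the slack back instead of burning CPU
        frametimes.append(time.perf_counter() - start)
    return frametimes
```

            Capping this way trades peak fps for a flatter frametime curve, which is the "better consistency" being suggested here.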

            Comment


            • #7
              Good! Looks like CPU time per frame is mostly about the same. Good info for those of us with lazy CPUs.

              Comment


              • #8
                I suppose RadeonSI's OpenGL shader cache is better than Nvidia's, or Valve gave some advice on how to decrease stuttering; it just doesn't really cease with Nvidia.
                I also tried SS3 Fusion in 720p with reduced GPU details, and min fps during a lot of action was ~120 with OpenGL, which shouldn't be much worse compared to Windows DX11.
                CPU usage is just an interesting side fact; in the end only fps and frametime variance matter.
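
                The fps-plus-variance point can be made concrete with a tiny helper. `frame_stats` and the sample numbers below are hypothetical, just to show that two runs with the same average fps can feel very different:

```python
def frame_stats(frametimes_ms):
    """Reduce a list of per-frame times (milliseconds) to (avg_fps, variance)."""
    n = len(frametimes_ms)
    mean_ms = sum(frametimes_ms) / n
    variance = sum((t - mean_ms) ** 2 for t in frametimes_ms) / n
    return 1000.0 / mean_ms, variance

# Same average frametime (16.7 ms, ~60 fps), very different experience:
smooth = [16.7] * 6                               # flat frametimes
spiky = [10.0, 10.0, 10.0, 10.0, 10.0, 50.2]      # one big spike
```

                An fps counter alone rates both runs identically; only the variance exposes the spike, which is what you actually perceive as stutter.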

                Comment


                • #9
                  Thanks for the different resolutions, because with those it is easy to spot the CPU usage tendency... Dota 2 OpenGL was asked about, so look at this - in order: FullHD, WQHD, UHD-1... or OK, 4K

                  Code:
                            CPU Min  CPU Avg  CPU Max     Fps
                  GTX 1060     0.00     18.1    31.97  109.17
                  RX 580       0.17     19.7    46.11  114.73

                  GTX 1060     0.00     18.3    33.77  108.33
                  RX 580       0.00     18.9    38.67  110.17

                  GTX 1060     0.00     19.2    30.32   68.90
                  RX 580       0.17     16.8    29.08   68.63
                  The RX 580 uses more CPU when the resolution is lower, but exactly the opposite happens with the GTX 1060, which uses more CPU as the resolution gets higher.

                  Or to put it another way: nvidia scales as "less GPU used, less CPU used", while amdgpu does "less GPU used, more CPU used".

                  Or: nvidia burns both if needed, but amdgpu only burns one anyway.

                  Or: nvidia scales with it, but amdgpu tries to fight it.

                  A scale vs. fight approach.

                  Neither is really a 4K card - OK, they might be called entry-level 4K gaming cards... but again the crawl there is obvious.
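
                  For reference, the Min/Avg/Max columns in the table above are just reductions over whatever raw readings the CPU sampler collected. A minimal sketch of that reduction; `cpu_summary` and the sample values are made up for illustration and are not the Phoronix Test Suite's actual code:

```python
def cpu_summary(samples):
    """Reduce a list of CPU-usage samples (percent) to (min, avg, max)."""
    return min(samples), sum(samples) / len(samples), max(samples)

# e.g. usage polled once per second while the benchmark ran:
samples = [0.0, 12.5, 18.1, 31.97, 25.0, 20.9]
lo, avg, hi = cpu_summary(samples)
```

                  The 0.00 minimums in the table likely just mean the sampler caught an idle moment, so the Avg column is the one worth comparing.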
                  Last edited by dungeon; 01-12-2018, 05:46 AM.

                  Comment


                  • #10
                    Admittedly off-topic, but does anybody know why https://cgit.freedesktop.org/~agd5f/linux/ says "No repositories found" right now? Has the repo moved, is the site down or hacked, or is it something else?

                    Comment
