
The Power Consumption & Efficiency Of Open-Source GPU Drivers


  • The Power Consumption & Efficiency Of Open-Source GPU Drivers

    Phoronix: The Power Consumption & Efficiency Of Open-Source GPU Drivers

    Complementing yesterday's Radeon, Intel, and Nouveau benchmarks using the very latest open-source driver code, here are some power consumption, performance-per-Watt, and thermal numbers when using an assortment of graphics processors on the latest open-source drivers.


  • #2
    At first sight... what happened with the Radeon R7 260X? Yesterday you had much better results (if I am reading it right, with the same stack) and now it looks broken.
    Last edited by dungeon; 26 July 2014, 02:16 PM.



    • #3
      Power efficiency vs ... what exactly? Where are the results compared to the binary blobs? These benchmarks are pointless if they don't have a basis for comparison. What use is the benchmark if we don't have an ideal number to compare it to? And where's the idle consumption?



      • #4
        I think these test results, while reliable, are not entirely fair and can lead to some misunderstanding: the higher you clock your GPU, the higher its consumption will be, right? But that relationship is closer to exponential, while the relation between framerate and frequency is more linear. Moreover, in real-world scenarios I can't see any advantage in running your graphics card faster than the refresh rate of your monitor. It would be interesting either to plot this efficiency (FPS/W) against frequency to see where the peak is (this would be very time consuming, and maybe hard to do with the current reclocking state in Nouveau), or to impose a framerate limit, like 60 FPS (or say 40 FPS if some graphics cards can't reach 60), and then compare power draw. But I think the way you did your testing may not be right, IMHO.
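
A minimal sketch of the FPS-per-Watt sweep suggested above, purely to show the shape of the calculation: the (MHz, FPS, Watt) samples below are invented placeholders, not measurements, and in practice each row would come from rerunning a benchmark at one of the card's reclocking levels.

Code:
#!/usr/bin/env python3
# Sketch: find the clock level with the best FPS-per-Watt.
# All numbers are made up for illustration; real data would come from
# benchmarking the card at each available power/clock state.

samples = [
    # (core MHz, average FPS, average board power in W)
    (300,  18.0, 16.0),
    (600,  41.0, 33.0),
    (900,  55.0, 58.0),
    (1100, 61.0, 82.0),
]

def efficiency(fps, watts):
    """Frames rendered per Joule, i.e. FPS per Watt."""
    return fps / watts

for mhz, fps, watts in samples:
    print(f"{mhz:5d} MHz: {fps:5.1f} FPS / {watts:5.1f} W = {efficiency(fps, watts):.2f} FPS/W")

best = max(samples, key=lambda s: efficiency(s[1], s[2]))
print(f"Peak efficiency in this made-up data set is at {best[0]} MHz.")

With real samples, the peak of that curve would be the "sweet spot" the post is asking about.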



        • #5
          Originally posted by M@yeulC View Post
          I think these test results, while reliable, are not entirely fair and can lead to some misunderstanding: the higher you clock your GPU, the higher its consumption will be, right? But that relationship is closer to exponential, while the relation between framerate and frequency is more linear. Moreover, in real-world scenarios I can't see any advantage in running your graphics card faster than the refresh rate of your monitor. It would be interesting either to plot this efficiency (FPS/W) against frequency to see where the peak is (this would be very time consuming, and maybe hard to do with the current reclocking state in Nouveau), or to impose a framerate limit, like 60 FPS (or say 40 FPS if some graphics cards can't reach 60), and then compare power draw. But I think the way you did your testing may not be right, IMHO.
          Running your GPU faster than your refresh rate will reduce the latency between frames and thus improve input response. It's good for twitch games, but that's about it. In general you don't really have much control over how fast your games run unless you use vsync, but that kills input response times in FPS games.
          Last edited by laykun; 26 July 2014, 07:11 PM.
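
As a back-of-the-envelope illustration of that point, here is a deliberately crude model (the displayed frame is on average half a render interval old, plus half a refresh interval of scanout wait; the game engine's own pipeline is ignored). The formula is an assumption for illustration, not a measurement:

Code:
# Sketch: rough average latency vs. render rate on a 60 Hz panel, no vsync.
# The model (half a render interval + half a refresh interval) is a crude
# assumption used only to show the trend.
REFRESH_HZ = 60.0

def approx_latency_ms(render_fps):
    render_interval = 1000.0 / render_fps
    refresh_interval = 1000.0 / REFRESH_HZ
    return render_interval / 2 + refresh_interval / 2

for fps in (60, 120, 240, 480):
    print(f"{fps:3d} FPS -> roughly {approx_latency_ms(fps):.1f} ms average latency")

Even in this toy model the gains shrink quickly past a couple of hundred FPS, which lines up with it mostly mattering for twitch games.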



          • #6
            Originally posted by laykun View Post
            Power efficiency vs ... what exactly? Where are the results compared to the binary blobs? These benchmarks are pointless if they don't have a basis for comparison. What use is the benchmark if we don't have an ideal number to compare it to? And where's the idle consumption?
            As already said in the articles, the open vs. closed driver data is coming next week... today's data compares the various GPUs on the open-source drivers.
            Michael Larabel
            https://www.michaellarabel.com/



            • #7
              It'll be nice to see performance per watt for Broadwell's integrated GPUs when they are out.



              • #8
                Originally posted by laykun View Post
                Running your GPU faster than your refresh rate will reduce the latency between frames and thus improve input response. It's good for twitch games, but that's about it. In general you don't really have much control over how fast your games run unless you use vsync, but that kills input response times in FPS games.
                Not always. It really depends on where the screen is in its refresh cycle when you update the buffer. It may cause tearing, though; that comes down to the gamer's preference.
                Generally you don't want to overfeed your monitor with frames, and it is a good idea to limit the framerate (you could limit it to 120 FPS on your 60 Hz monitor if you'd like, too).
                The ideal is of course to "race the beam" (search Michael Abrash's blog, I remember he explained this one quite well), rendering each pixel just before it is scanned out, but that is far from easy.

                And vsync is out of the question if you are below your monitor's refresh rate and want to reduce latency, as it would mean duplicating frames while waiting for the next one.

                I generally prefer a "framerate limit" option to a "vsync" option, but that's personal. (It's 2:30 a.m. here, so I can't really judge whether my opinion is well-founded; I'll let you, and me tomorrow, judge that.)
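
A rough sketch of the "framerate limit without vsync" idea: sleep just long enough each frame to hold a target rate instead of blocking on vblank. render_frame() here is only a hypothetical stand-in for whatever the game actually does:

Code:
import time

TARGET_FPS = 120                    # e.g. cap at 120 FPS on a 60 Hz monitor
FRAME_BUDGET = 1.0 / TARGET_FPS

def render_frame():
    """Hypothetical stand-in for the game's real render and present calls."""
    pass

def run_capped(frames=600):
    next_deadline = time.perf_counter()
    for _ in range(frames):
        render_frame()                            # present immediately, no vblank wait
        next_deadline += FRAME_BUDGET
        sleep_for = next_deadline - time.perf_counter()
        if sleep_for > 0:
            time.sleep(sleep_for)                 # idle instead of racing ahead
        else:
            next_deadline = time.perf_counter()   # fell behind; resync the deadline

run_capped()

Unlike vsync, a missed deadline here just means the next frame shows up slightly late rather than waiting a full refresh.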



                • #9
                  Originally posted by M@yeulC View Post
                  Not always. It really depends on where the screen is in its refresh cycle when you update the buffer. It may cause tearing, though; that comes down to the gamer's preference.
                  Generally you don't want to overfeed your monitor with frames, and it is a good idea to limit the framerate (you could limit it to 120 FPS on your 60 Hz monitor if you'd like, too).
                  The ideal is of course to "race the beam" (search Michael Abrash's blog, I remember he explained this one quite well), rendering each pixel just before it is scanned out, but that is far from easy.

                  And vsync is out of the question if you are below your monitor's refresh rate and want to reduce latency, as it would mean duplicating frames while waiting for the next one.

                  I generally prefer a "framerate limit" option to a "vsync" option, but that's personal. (It's 2:30 a.m. here, so I can't really judge whether my opinion is well-founded; I'll let you, and me tomorrow, judge that.)
                  The only fix is going to be adaptive sync. My preference is tearing over delay, as an FPS player and developer.



                  • #10
                    Originally posted by M@yeulC View Post
                    I think these test results, while reliable, are not entirely fair and can lead to some misunderstanding: the higher you clock your GPU, the higher its consumption will be, right? But that relationship is closer to exponential, while the relation between framerate and frequency is more linear. Moreover, in real-world scenarios I can't see any advantage in running your graphics card faster than the refresh rate of your monitor. It would be interesting either to plot this efficiency (FPS/W) against frequency to see where the peak is (this would be very time consuming, and maybe hard to do with the current reclocking state in Nouveau), or to impose a framerate limit, like 60 FPS (or say 40 FPS if some graphics cards can't reach 60), and then compare power draw. But I think the way you did your testing may not be right, IMHO.
                    Eh... it's not quite that simple. In general, yes, the higher the frequency, the more power the chip consumes. But with GPUs this is rather difficult to reason about, because they're highly parallel and depend on their sheer number of cores rather than frequency: a 400 MHz GPU with 512 stream processors is going to perform better than an 800 MHz GPU with 256 stream processors (in most situations). When you look at the low-end GPUs, they're relatively very inefficient. That's because they're already running at 100% load and still can't complete the task on time.

                    To put this in perspective: an i7 may consume up to 100 W under full load, which is somewhat power hungry, but it is far more power efficient than the average Pentium. The reason for this (as far as I can see) is that every Intel chip, northbridge, and motherboard has a bare minimum of silicon that consumes some fixed number of watts. So let's say every Intel system MUST burn 25 W just to function: your Pentium is then proportionally a power hog, whereas the i7 can keep stacking on more cores and instruction sets for relatively small increments in power usage. If you have a task that doesn't even trigger Intel's Turbo Boost, I'm sure an i7 may even use fewer watts than an i3 running the same task.

                    All that being said, limiting the frame rate and measuring the power consumption wouldn't prove very much. The results would change, but it would basically just make the high-end GPUs look even more efficient than they really are.


                    @laykun
                    vsync's effect on input latency is heavily dependent on the game. Even on Windows there are games that appear to have little to no impact from vsync, and then there are others where the input lags so much that the game is seriously unplayable.
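
To put toy numbers on the fixed-baseline argument above (the 25 W figure is the poster's own guess, and both chips below are invented purely for illustration):

Code:
# Toy illustration of why a fixed platform baseline makes small chips look
# proportionally worse. All numbers are invented, not measurements.
BASELINE_W = 25.0        # hypothetical power every system draws regardless of CPU

chips = {
    "small CPU": {"extra_w": 10.0, "score": 100.0},   # made-up benchmark score
    "big CPU":   {"extra_w": 75.0, "score": 900.0},
}

for name, c in chips.items():
    total_w = BASELINE_W + c["extra_w"]
    print(f"{name}: {c['score'] / total_w:.1f} points per Watt "
          f"({c['score']:.0f} points at {total_w:.0f} W)")

# The small chip's own silicon draws far less, yet the shared baseline
# dominates its budget, so it scores worse per Watt (about 2.9 vs 9.0 here).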

