Here's Why Radeon Graphics Are Faster On Linux 3.12


  • chrisb
    replied
    Originally posted by log0
    So what do you expect AMD to do? Provide custom governors for all the CPUs out there that might be used with AMD graphics cards?
    That is a strawman suggestion. AMD make CPUs, so it's in their interests to make sure that kernel frequency scaling works well on their CPUs. And since they make APUs, it's in their interests to make sure that the CPU and GPU drivers don't have negative interactions that reduce performance. If achieving good performance requires a custom governor for AMD products, like Intel have, then they should consider it.



  • JS987
    replied
    Originally posted by chrisb
    That's not true for vsync and triple buffering.
    Also note that most console games actually lock to 30fps for a more consistent smooth experience:
    Triple buffering adds significant input lag.
    30 FPS isn't enough for me, even for videos.



  • schmidtbag
    replied
    Originally posted by Ericg
    Schmidt, I need to throw your entire post out the window... Not just "prove it wrong": I need to literally pick up the bits, put them in a bucket, and throw the bucket out of a closed window.

    It's not an "Intel governor", it's the ondemand governor in the subsystem that handles ALL CPU scaling. This change affects every CPU that uses the ondemand governor. Interestingly enough (in the context of your post), no modern Intel CPU actually uses the ondemand governor UNLESS you're on *buntu. Everyone ELSE moved over to the customized Intel P-State driver two or three kernel releases ago (I've asked Michael to compare the P-State driver to ondemand with this change). But Ubuntu, for its own reasons, has not moved over yet.

    AMD CPUs are likely affected by this as well, perhaps even just as much as the benchmarked Intel CPU. This whole thing concerns a kernel subsystem, not specific branded hardware.
    My bad, I should have been more precise; I forgot this is the internet and everything stated must be 100% accurate. While governors such as "ondemand" or "performance" apply to both AMD and Intel, there are still drivers (if that's even the right word) that affect how these governors behave on different CPUs. In other words, the governors ARE specific to, at the very least, the CPU family. It could even be specific to each generation or each model, but I wouldn't know for sure. For example, if you have an AMD system that can clock from 1.2GHz to 3.5GHz, that doesn't mean an Intel CPU can operate the same way and remain stable. If the governors were indifferent to the CPU, problems like this would have been found a long time ago.

    The point of saying this is that there's a possibility the ondemand governor on AMD hardware might have done a better job of determining what frequency to operate at.
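
    If anyone wants to poke at what their own machine reports, the per-core governor and frequency limits are exposed through the standard cpufreq sysfs files. A rough Python sketch (assumes the usual /sys/devices/system/cpu layout; some fields are missing on some drivers, so it reads defensively):

        import glob
        import os

        def read(path):
            # Return a sysfs value, or "n/a" if this driver doesn't expose it.
            try:
                with open(path) as f:
                    return f.read().strip()
            except IOError:
                return "n/a"

        for cpu in sorted(glob.glob("/sys/devices/system/cpu/cpu[0-9]*")):
            cf = os.path.join(cpu, "cpufreq")
            print("{}: governor={} min={} kHz max={} kHz".format(
                os.path.basename(cpu),
                read(os.path.join(cf, "scaling_governor")),
                read(os.path.join(cf, "scaling_min_freq")),
                read(os.path.join(cf, "scaling_max_freq"))))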



  • chrisb
    replied
    Originally posted by GreatEmerald
    And will make your FPS dip to 30 if 60 can't be sustained, instead of just 59... No, you should have VSync on only for games you know will never dip below 60 (or whatever your refresh rate may be).
    That's not true for vsync and triple buffering.

    Also note that most console games actually lock to 30fps for a more consistent smooth experience:

    Frame rates in video games refer to the speed at which the image is refreshed (typically in frames per second, or FPS). Many underlying processes, such as collision detection and network processing, run at different or inconsistent frequencies or in different physical components of a computer. FPS affect the experience in two ways: low FPS does not give the illusion of motion effectively and affects the user's capacity to interact with the game, while FPS that vary substantially from one second to the next depending on computational load produce uneven, "choppy" movement or animation. Many games lock their frame rate at lower but more sustainable levels to give consistently smooth motion.
    http://en.wikipedia.org/wiki/Frame_rate#Video_games

    Whether lower latency with tearing and choppy motion is better or worse than higher latency with no tearing and smooth motion is a personal preference, but console game developers tend to choose the latter.
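
    To make the numbers concrete, here's a toy model of the arithmetic (an idealized sketch, not any real driver's behaviour): double-buffered vsync has to wait for the next vblank, so the frame interval rounds up to a whole number of refresh periods, while triple buffering keeps rendering into a third buffer and tracks the render rate up to the refresh cap.

        import math

        REFRESH_HZ = 60.0
        PERIOD_MS = 1000.0 / REFRESH_HZ  # ~16.7 ms per vblank

        def double_buffered_fps(render_ms):
            # Must wait for the next vblank: the display interval rounds up
            # to a whole number of refresh periods (17 ms render -> 30 FPS).
            periods = math.ceil(render_ms / PERIOD_MS)
            return 1000.0 / (periods * PERIOD_MS)

        def triple_buffered_fps(render_ms):
            # A third buffer lets rendering continue, so the displayed rate
            # tracks the render rate, capped at the refresh rate.
            return min(1000.0 / render_ms, REFRESH_HZ)

        for ms in (15.0, 17.0, 25.0):
            print("render %.0f ms: double %.1f FPS, triple %.1f FPS" % (
                ms, double_buffered_fps(ms), triple_buffered_fps(ms)))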



  • Ericg
    replied
    Originally posted by agd5f
    The GPU can only operate as fast as it can be fed data. If you have a slow CPU, you may not be able to feed the GPU data fast enough to fully utilize its potential. This is why the lower-end GPUs don't see as large an increase in performance with increased CPU speed compared to the high-end GPUs. It's always a trade-off. For a lot of people, saving the extra power from keeping the CPU (and GPU) clocked lower more of the time is probably more important than having maximum 3D performance. For gamers, the opposite is true.
    Was wondering when you were gonna chime in, Alex, haha.

    I guess we'll need to wait for Michael's power consumption benchmarks to figure out whether this is a good change in the subsystem or not... I mean, yes, we're getting higher performance, but what about non-gaming workloads? Is 3.12 going to kill battery life (compared to 3.11) because of this change? For gaming I have no problem with higher power consumption, it's expected. But what about Flash? Or other 'constant' workloads that DON'T require maxed-out frequencies?
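
    Until Michael's numbers land, a crude DIY comparison on a laptop is to sample the battery's reported draw under a fixed workload on each kernel. Rough Python sketch, assuming a BAT0 that exposes power_now in microwatts (some batteries only expose current_now/voltage_now instead):

        import time

        # Sample the battery discharge rate for a minute and report the
        # average; run the same workload on 3.11 and 3.12 and compare.
        PATH = "/sys/class/power_supply/BAT0/power_now"

        samples = []
        for _ in range(60):
            with open(PATH) as f:
                samples.append(int(f.read()) / 1e6)  # microwatts -> watts
            time.sleep(1.0)
        print("average draw: %.2f W over %d samples" % (
            sum(samples) / len(samples), len(samples)))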



  • Ericg
    replied
    Originally posted by schmidtbag
    That makes me feel a lot better, then - at least that means this isn't just a problem with the radeon drivers. As Luke has pointed out with his FX-8120, he didn't get any performance hit between the kernel versions, so in my personal opinion it seems the blame lies with the Intel ondemand governor.

    What I'd be more interested in at this point is seeing a test with the HD6870 (due to having the greatest impact all around) on an AMD FX-8XXX system between kernels 3.11 and 3.12, AND comparing that to the Intel results. A CPU like that ought to be plenty sufficient to give similar results, so assuming the CPU isn't a bottleneck, that would be a good way to prove that the Intel governor was faulty. If the overall frame rate is significantly lower regardless of CPU power state, this might be more than just a governor problem.

    Assuming the Intel governor has been faulty all along, at least we now know it is working properly, and all future benchmarks can remain accurate and meaningful without Michael having to change the governor.
    Schmidt, I need to throw your entire post out the window... Not just "prove it wrong": I need to literally pick up the bits, put them in a bucket, and throw the bucket out of a closed window.

    It's not an "Intel governor", it's the ondemand governor in the subsystem that handles ALL CPU scaling. This change affects every CPU that uses the ondemand governor. Interestingly enough (in the context of your post), no modern Intel CPU actually uses the ondemand governor UNLESS you're on *buntu. Everyone ELSE moved over to the customized Intel P-State driver two or three kernel releases ago (I've asked Michael to compare the P-State driver to ondemand with this change). But Ubuntu, for its own reasons, has not moved over yet.

    AMD CPUs are likely affected by this as well, perhaps even just as much as the benchmarked Intel CPU. This whole thing concerns a kernel subsystem, not specific branded hardware.
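
    Side note: if you want to see which camp your own install is in, the active scaling driver is right there in sysfs. Rough Python sketch, assuming the standard cpufreq paths (adjust if your kernel lays things out differently):

        # Print which frequency scaling driver (e.g. intel_pstate vs.
        # acpi-cpufreq) and which governor cpu0 is actually using.
        base = "/sys/devices/system/cpu/cpu0/cpufreq/"
        for name in ("scaling_driver", "scaling_governor"):
            try:
                with open(base + name) as f:
                    print("%s: %s" % (name, f.read().strip()))
            except IOError:
                print("%s: not available" % name)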



  • agd5f
    replied
    Originally posted by monraaf
    So, is this really an improvement?

    It just means the open-source AMD Radeon driver depends too much on the CPU instead of, duh, the processing power of the graphics card.
    The other drivers are not affected because they actually use the graphics card instead of the CPU.

    Also, I don't think always keeping the CPU at its limit is good either. I can imagine much more power is wasted now, since the goal for power saving is to sleep as much as possible and, when in use, run at as low a frequency as possible.

    Am I wrong? Maybe, but I doubt this really is the "next big thing".
    The GPU can only operate as fast as it can be fed data. If you have a slow CPU, you may not be able to feed the GPU data fast enough to fully utilize its potential. This is why the lower-end GPUs don't see as large an increase in performance with increased CPU speed compared to the high-end GPUs. It's always a trade-off. For a lot of people, saving the extra power from keeping the CPU (and GPU) clocked lower more of the time is probably more important than having maximum 3D performance. For gamers, the opposite is true.
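
    As a back-of-the-envelope illustration (made-up numbers, not measurements): the frame rate is bounded by whichever stage takes longer, which is why the big GPU gains more from a faster CPU than the small one does.

        # Toy pipeline model: the frame rate is bounded by whichever
        # stage (CPU submission or GPU rendering) takes longer.
        def fps(cpu_ms, gpu_ms):
            return 1000.0 / max(cpu_ms, gpu_ms)

        # Hypothetical per-frame times, purely for illustration.
        for gpu_ms, label in ((5.0, "high-end GPU"), (20.0, "low-end GPU")):
            slow_cpu = fps(15.0, gpu_ms)  # CPU parked at a low clock
            fast_cpu = fps(6.0, gpu_ms)   # CPU ramped up by the governor
            print("%s: %.0f -> %.0f FPS (%.0f%% gain)" % (
                label, slow_cpu, fast_cpu,
                100.0 * (fast_cpu - slow_cpu) / slow_cpu))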



  • schmidtbag
    replied
    Originally posted by Michael
    That makes me feel a lot better, then - at least that means this isn't just a problem with the radeon drivers. As Luke has pointed out with his FX-8120, he didn't get any performance hit between the kernel versions, so in my personal opinion it seems the blame lies with the Intel ondemand governor.

    What I'd be more interested in at this point is seeing a test with the HD6870 (due to having the greatest impact all around) on an AMD FX-8XXX system between kernels 3.11 and 3.12, AND comparing that to the Intel results. A CPU like that ought to be plenty sufficient to give similar results, so assuming the CPU isn't a bottleneck, that would be a good way to prove that the Intel governor was faulty. If the overall frame rate is significantly lower regardless of CPU power state, this might be more than just a governor problem.

    Assuming the Intel governor has been faulty all along, at least we now know it is working properly, and all future benchmarks can remain accurate and meaningful without Michael having to change the governor.
    Last edited by schmidtbag; 15 October 2013, 02:00 PM.



  • GreatEmerald
    replied
    Originally posted by ua=42
    In a computer game you should always have v-sync on, for three reasons.
    1. Rendering more frames than your monitor can display is pointless, as you don't get to see the extra frames.
    2. Rendering more frames than your monitor can display can cause tearing.
    3. Having v-sync on will let your GPU run cooler.
    And will make your FPS dip to 30 if 60 can't be sustained, instead of just 59... No, you should have VSync on only for games you know will never dip below 60 (or whatever your refresh rate may be).



  • ua=42
    replied
    Originally posted by AJSB
    It's only for testing purposes and to see how drivers are evolving...

    Personally, as a gamer, I couldn't care less whether a game can pass 60 FPS.
    High frame rates can actually be bad...

    I'll give an example... in "Enemy Territory: QUAKE Wars" my hardware could easily reach 120 FPS... but I was having extreme difficulty hitting anyone before they hit and killed me... then I read somewhere that gamers should try to keep the FPS as stable as possible...

    I noticed that my frame rate was actually jumping from 120 FPS down to as low as 80 FPS... so, by changing some settings in the .cfg file, I locked/limited the frame rate (without using VSync) to 60 FPS...

    Then my frame rate ranged from 59 FPS min to 61 FPS max... and suddenly I started to hit and kill enemies with ease.

    In a computer game you should always have v-sync on, for three reasons.
    1. Rendering more frames than your monitor can display is pointless, as you don't get to see the extra frames.
    2. Rendering more frames than your monitor can display can cause tearing.
    3. Having v-sync on will let your GPU run cooler.
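
    For reference, the kind of cap AJSB describes is just an application-side frame limiter; a minimal sleep-based sketch in Python (render_frame() here is a hypothetical stand-in, nothing to do with ETQW's actual code):

        import time

        TARGET_FPS = 60.0
        FRAME_BUDGET = 1.0 / TARGET_FPS

        def render_frame():
            pass  # hypothetical stand-in for the game's real render/input work

        # Render a frame, then sleep away whatever is left of the frame
        # budget so pacing stays near the target instead of bouncing around.
        for _ in range(600):  # ~10 seconds at 60 FPS
            start = time.time()
            render_frame()
            leftover = FRAME_BUDGET - (time.time() - start)
            if leftover > 0:
                time.sleep(leftover)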

