Here's Why Radeon Graphics Are Faster On Linux 3.12
-
Originally posted by Michael:
Nouveau also affected - https://twitter.com/michaellarabel/s...54788672778240

That makes me feel a lot better, then - at least that means this isn't just a problem with the Radeon drivers. As Luke has pointed out with his FX-8120, he didn't get any performance hit between the kernel versions, so in my personal opinion it seems the blame lies with the Intel ondemand governor.
What I'd be more interested in at this point is seeing a test with the HD 6870 (since it showed the greatest impact all around) on an AMD FX-8XXX system between kernels 3.11 and 3.12, AND comparing that to the Intel results. A CPU like that ought to be plenty sufficient to give similar results, so assuming the CPU isn't a bottleneck, that would be a good way to prove that the Intel governor was at fault. If the overall frame rate is significantly lower regardless of CPU power state, this might be more than just a governor problem.
Assuming the Intel governor has been faulty all along, at least we now know it is working properly, and all future benchmarks can remain accurate and meaningful without Michael having to change the governor.
Last edited by schmidtbag; 15 October 2013, 02:00 PM.
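For reference, "changing the governor" for a controlled benchmark run only takes a few lines of scripting against the cpufreq sysfs files. Here is a rough sketch, assuming the generic cpufreq layout and needing root; the phoronix-test-suite invocation is only a placeholder for whatever workload is actually being measured:

```python
#!/usr/bin/env python3
# Rough sketch (run as root): pin every CPU to the "performance" governor
# for a benchmark run, then restore whatever governor was set before.
# Assumes the generic cpufreq sysfs layout; the benchmark command below
# is only a placeholder for the workload under test.
import glob
import subprocess

GOV_FILES = sorted(glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_governor"))

def read_governor(path):
    with open(path) as f:
        return f.read().strip()

def write_governor(path, gov):
    with open(path, "w") as f:
        f.write(gov)

saved = {p: read_governor(p) for p in GOV_FILES}
try:
    for p in GOV_FILES:
        write_governor(p, "performance")
    subprocess.run(["phoronix-test-suite", "benchmark", "pts/xonotic"])  # placeholder workload
finally:
    # Put the original governors back so normal power management resumes.
    for p, gov in saved.items():
        write_governor(p, gov)
```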
-
Originally posted by monraaf:
So, is this really an improvement?
It just means the open source AMD Radeon driver depends too much on the CPU instead of, duh, the processing power of the graphics card.
The other drivers are not affected because they actually use the graphics card instead of the CPU.
Also, I don't think always keeping the CPU at its limit is good either; I can imagine much more power is wasted now, since the goal of power saving is to sleep as much as possible and, when active, to run at as low a frequency as possible.
Am I wrong? Maybe, but I doubt this really is the "next big thing".
-
Originally posted by schmidtbag:
That makes me feel a lot better, then - at least that means this isn't just a problem with the Radeon drivers. As Luke has pointed out with his FX-8120, he didn't get any performance hit between the kernel versions, so in my personal opinion it seems the blame lies with the Intel ondemand governor.
What I'd be more interested in at this point is seeing a test with the HD 6870 (since it showed the greatest impact all around) on an AMD FX-8XXX system between kernels 3.11 and 3.12, AND comparing that to the Intel results. A CPU like that ought to be plenty sufficient to give similar results, so assuming the CPU isn't a bottleneck, that would be a good way to prove that the Intel governor was at fault. If the overall frame rate is significantly lower regardless of CPU power state, this might be more than just a governor problem.
Assuming the Intel governor has been faulty all along, at least we now know it is working properly, and all future benchmarks can remain accurate and meaningful without Michael having to change the governor.

Schmidt, I need to throw your entire post out the window... like, not "prove it wrong" - I need to literally pick up the bits, put them in a bucket, and throw the bucket out of a closed window.
It's not an "Intel governor"; it's the ondemand governor in the subsystem that handles ALL CPU scaling. This change affects every CPU that uses the ondemand governor. Interestingly enough (in the context of your post), no modern Intel CPU actually uses the ondemand governor UNLESS you're on *buntu. Everyone ELSE moved over to the customized Intel P-State driver two or three kernel releases ago (I've asked Michael to compare the P-State driver to ondemand with this change). But Ubuntu, for its own reasons, has not moved over yet.
AMD CPUs are likely affected by this as well, perhaps even just as much as the benchmarked Intel CPU. This whole thing is about a kernel subsystem, not specific branded hardware.
All opinions are my own, not those of my employer, if you know who they are.
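For anyone unsure which side of that split their own machine is on, the active scaling driver and governor are visible in sysfs. A minimal sketch, assuming the standard cpufreq paths, that reports them per CPU:

```python
#!/usr/bin/env python3
# Minimal sketch: report, per CPU, which scaling driver (e.g. acpi-cpufreq
# or intel_pstate) and which governor are in use, so you can tell whether
# the ondemand change is even relevant on a given machine.
# Assumes the standard cpufreq sysfs layout.
import glob
import os

def read(path):
    try:
        with open(path) as f:
            return f.read().strip()
    except OSError:
        return "n/a"

for cpufreq in sorted(glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cpufreq")):
    cpu = cpufreq.split("/")[-2]                            # e.g. "cpu0"
    driver = read(os.path.join(cpufreq, "scaling_driver"))
    governor = read(os.path.join(cpufreq, "scaling_governor"))
    print(f"{cpu}: driver={driver} governor={governor}")
```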
-
Originally posted by agd5f:
The GPU can only operate as fast as it can be fed data. If you have a slow CPU, you may not be able to feed the GPU data fast enough to fully utilize its potential. This is why the lower-end GPUs don't see as large an increase in performance with increased CPU speed compared to the high-end GPUs. It's always a trade-off. For a lot of people, saving the extra power from keeping the CPU (and GPU) clocked lower more of the time is probably more important than having maximum 3D performance. For gamers the opposite is true.

I guess we'll need to wait for Michael's power consumption benchmarks to figure out whether this is a good change in the subsystem or not... I mean, yes, we're getting higher performance, but what about non-gaming workloads? Is 3.12 going to kill battery life (compared to 3.11) because of this change? For gaming I have no problem with higher power consumption; it's expected. But what about Flash? Or other "constant" workloads that DON'T require maxed-out frequencies?
All opinions are my own, not those of my employer, if you know who they are.
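Anyone impatient for those power numbers can get a rough comparison themselves on a laptop by sampling battery drain under the same workload on both kernels. A rough sketch, assuming a battery exposed as BAT0 with a power_now attribute in microwatts (some firmwares only provide current_now/voltage_now, in which case multiply those instead):

```python
#!/usr/bin/env python3
# Rough sketch: average battery power draw over a fixed window (e.g. while
# looping a Flash video on battery), so 3.11 and 3.12 can be compared.
# Assumes /sys/class/power_supply/BAT0/power_now exists and reports microwatts.
import time

BAT = "/sys/class/power_supply/BAT0/power_now"
SAMPLES = 60                     # one reading per second for a minute
readings = []

for _ in range(SAMPLES):
    with open(BAT) as f:
        readings.append(int(f.read().strip()))   # microwatts
    time.sleep(1)

avg_watts = sum(readings) / len(readings) / 1_000_000
print(f"average draw over {SAMPLES}s: {avg_watts:.2f} W")
```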
-
Originally posted by GreatEmerald:
And will make your FPS dip to 30 if 60 can't be sustained, instead of just 59... No, you should have VSync on only for games you know will never dip below 60 (or whatever your refresh rate may be).

That's not true for vsync and triple buffering.
Also note that most console games actually lock to 30 fps for a more consistent, smooth experience:
Frame rates in video games refer to the speed at which the image is refreshed (typically in frames per second, or FPS). Many underlying processes, such as collision detection and network processing, run at different or inconsistent frequencies or in different physical components of a computer. FPS affects the experience in two ways: low FPS does not give the illusion of motion effectively and hurts the user's ability to interact with the game, while FPS that varies substantially from one second to the next, depending on computational load, produces uneven, "choppy" movement or animation. Many games lock their frame rate at lower but more sustainable levels to give consistently smooth motion.
Whether having lower latency with tearing and choppy motion is better or worse than higher latency with no tearing and smooth motion is a personal preference, but games console developers tend to use the latter.
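The "lock to a sustainable rate" idea amounts to pacing frames against a fixed deadline rather than presenting them as fast as possible. A toy sketch of that, purely illustrative and not how any particular engine does it; render_frame() is a stand-in for whatever the game actually does each frame:

```python
#!/usr/bin/env python3
# Toy illustration of a 30 FPS cap: instead of letting the frame rate bounce
# between 40 and 60, the loop sleeps so frames land on a fixed ~33.3 ms cadence.
import time

TARGET_FPS = 30
FRAME_TIME = 1.0 / TARGET_FPS

def render_frame():
    # Placeholder for game logic plus the draw call; pretend it takes ~20 ms.
    time.sleep(0.020)

next_deadline = time.perf_counter()
for frame in range(120):                       # a few seconds' worth of frames
    render_frame()
    next_deadline += FRAME_TIME
    remaining = next_deadline - time.perf_counter()
    if remaining > 0:
        time.sleep(remaining)                  # pace the frame to the 30 Hz deadline
    else:
        next_deadline = time.perf_counter()    # missed a deadline; resync instead of spiraling
```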
-
Originally posted by Ericg:
Schmidt, I need to throw your entire post out the window... like, not "prove it wrong" - I need to literally pick up the bits, put them in a bucket, and throw the bucket out of a closed window.
It's not an "Intel governor"; it's the ondemand governor in the subsystem that handles ALL CPU scaling. This change affects every CPU that uses the ondemand governor. Interestingly enough (in the context of your post), no modern Intel CPU actually uses the ondemand governor UNLESS you're on *buntu. Everyone ELSE moved over to the customized Intel P-State driver two or three kernel releases ago (I've asked Michael to compare the P-State driver to ondemand with this change). But Ubuntu, for its own reasons, has not moved over yet.
AMD CPUs are likely affected by this as well, perhaps even just as much as the benchmarked Intel CPU. This whole thing is about a kernel subsystem, not specific branded hardware.

The point of me saying this is that there's a possibility the ondemand governor for AMD might have done a better job at determining what frequency to operate at.
-
Originally posted by chrisb:
That's not true for vsync and triple buffering.
Also note that most console games actually lock to 30 fps for a more consistent, smooth experience.

30 fps isn't enough for me, even for videos.
-
Originally posted by log0:
So what do you expect AMD to do? Provide custom governors for all the CPUs out there that might be used with AMD graphics cards?
-
Originally posted by Michael:
The testing is about the default for the distribution that most people utilize.

Anyone looking to play games on the OSS drivers is also going to be savvy enough to tweak the settings for more performance.
Stop insulting your reader base and do the tests properly.