Greater Radeon Gallium3D Shader Optimization Tests
-
Originally posted by AnonymousCoward:
One more bit of info: Steam is detecting only 268.44 MB of VRAM, when in theory it should support 512 MB. This could account for some of the performance issues, especially since I run games at 1920x1080. This thread discusses the issue: http://steamcommunity.com/app/221410...8532588748333/
EDIT: Never mind, it's probably just an issue with the way Steam detects available RAM, according to that thread.
Comment
-
Originally posted by brent:
Mobile APUs are definitely clock-limited, no question about it. You have to modify the kernel to work around it: delete this if-block.
EDIT: Just tried and it's a definite improvement:
R600 SB = 41.51 FPS
However, the frequency is higher but doesn't appear to be the maximum:
default engine clock: 200000 kHz
current engine clock: 334880 kHz
default memory clock: 800000 kHz
Or can I not really trust the specific reading?
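One way to sanity-check readings like the ones above is to parse the `radeon_pm_info` debugfs output programmatically and compare the clocks. A minimal sketch, assuming the three-line format quoted above (real output may contain extra fields, e.g. voltage):

```python
# Sketch: parse radeon_pm_info-style text as quoted in this thread.
import re

def parse_pm_info(text):
    """Return a dict mapping clock names to kHz values."""
    clocks = {}
    for name, khz in re.findall(r"(\w[\w ]*clock):\s*(\d+)\s*kHz", text):
        clocks[name] = int(khz)
    return clocks

sample = """\
default engine clock: 200000 kHz
current engine clock: 334880 kHz
default memory clock: 800000 kHz
"""

clocks = parse_pm_info(sample)
# The current engine clock is above the default here, but it may still be
# well below the hardware maximum, as observed in the thread.
print(clocks["current engine clock"] > clocks["default engine clock"])  # True
```

This only checks the driver's own reporting, of course; it can't tell you whether the reported frequency matches what the hardware is actually doing.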
EDIT: Using dynpm again appears to switch to the maximum frequency:
default engine clock: 200000 kHz
current engine clock: 685710 kHz
default memory clock: 800000 kHz
R600 SB = 64.59 FPS
Now we're talking; that comes close to Windows 8!
Thanks a lot everyone. Hopefully we'll get true power management eventually, but this will do me for gaming.
Last edited by AnonymousCoward; 18 May 2013, 10:07 AM.
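For reference, the dynpm and profile-based power management mentioned here are selected through the old radeon driver's sysfs interface; a sketch, assuming the GPU is card0 and debugfs is mounted:

```shell
# Dynamic reclocking (dynpm), as used above:
echo dynpm > /sys/class/drm/card0/device/power_method

# Or fixed profiles (low / mid / high / auto / default):
echo profile > /sys/class/drm/card0/device/power_method
echo high > /sys/class/drm/card0/device/power_profile

# Check the resulting clocks:
cat /sys/kernel/debug/dri/0/radeon_pm_info
```

Both need root, and as discussed above neither does active thermal management, so the driver clamps what these can reach unless the kernel is patched.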
Comment
-
Originally posted by AnonymousCoward:
Cool, I'll try it out. What does it do? And thanks for the troubleshooting help in general, all of you.
In my experience, power consumption and heat only increase slightly with this hack, and as long as the device has adequate cooling there shouldn't be a problem. AFAIK, the APU's design does in theory require active thermal management, i.e. the driver has to monitor the temperature and reduce the clock if it gets too hot. Since that hasn't been implemented yet, the driver stays on the safe side and doesn't allow high clocks.
I'd *love* a driver option in the vanilla kernel that allows users to use the full clock range at their own risk.
Comment
-
Originally posted by AnonymousCoward:
EDIT: Just tried and it's a definite improvement:
So, just to summarize:
r600g default = 21.5 fps.
r600g + optional optimizations in mesa master using environment variable + recompiling kernel to remove PM limitations = 64.5 fps.
windows default = 73.5 fps.
That's not bad at all. We just need to get some of these optional/experimental things into shape and enabled by default now.
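The "optional optimizations in mesa master using environment variable" refers to Mesa's `R600_DEBUG` switch, which enables the experimental SB shader backend for r600g; a sketch of how it's used:

```shell
# Enable the SB shader optimization backend for a single run:
R600_DEBUG=sb glxgears

# For a Steam game, the same variable can go in the game's launch options:
# R600_DEBUG=sb %command%
```

Since it's per-process, it's easy to A/B test a game with and without the backend, which is how the FPS comparisons in this thread were produced.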
Comment
-
Originally posted by smitty3268:
So, just to summarize:
r600g default = 21.5 fps.
r600g + optional optimizations in mesa master using environment variable + recompiling kernel to remove PM limitations = 64.5 fps.
windows default = 73.5 fps.
That's not bad at all. We just need to get some of these optional/experimental things into shape and enabled by default now.
Although this is only true for Source games. Serious Sam 3, for instance, ran awfully and looked graphically off, and Brutal Legend tended to lock up after a bit of play. This could be true on Windows as well, though; I haven't tested yet.
I will say that performance seems superior to FGLRX, going from memory of what I was getting when that driver worked. For example, loading a save of Last Remnant running under Wine gives me 22 FPS with R600 SB + the high profile, whereas I was getting something like 11-14 FPS with FGLRX. Portal runs about the same as Lost Coast, mostly staying slightly above 60 fps, whereas it was more like 30-40 with FGLRX, even when using the discrete card (which ran even worse than the integrated card, BTW).
Overall, at least in a best-case scenario the open source driver looks very good. It just needs better power management and support for a few more extensions, I think.
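For a rough sense of scale, the FGLRX comparisons quoted above work out to about a 1.7-1.8x speedup; a quick sketch (using midpoints of the quoted FGLRX ranges, and 60 fps as the Portal floor, which are assumptions on my part):

```python
# Rough arithmetic on the figures quoted in the post above.
last_remnant_fglrx = (11 + 14) / 2   # midpoint of the 11-14 FPS range
last_remnant_sb = 22                 # R600 SB + high profile
print(round(last_remnant_sb / last_remnant_fglrx, 2))  # 1.76

portal_fglrx = (30 + 40) / 2         # midpoint of the 30-40 FPS range
portal_sb = 60                       # "slightly above 60 fps", floor used here
print(round(portal_sb / portal_fglrx, 2))  # 1.71
```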
Comment