r6xx 3D games
-
Now I get OpenGL 2.0 as well. Great!
But KMS won't work for me. I can turn it on (with radeon.modeset=1), but then I can't use compiz. If I try to turn it on, my screen goes totally white. dmesg says KMS is enabled. I tried it out with my Ubuntu Karmic + xorg-edgers PPA + 2.6.32-4 kernel on an ATI Radeon HD 2600.
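For anyone hitting the same thing, a quick way to double-check that KMS actually took effect is something like the following (a generic diagnostic sketch; the exact dmesg wording varies by kernel version, and glxinfo comes from the mesa-utils package):

```shell
# Confirm the flag made it onto the kernel command line
grep -o "radeon.modeset=1" /proc/cmdline

# Look for the radeon KMS initialization messages (wording varies by kernel)
dmesg | grep -iE "radeon.*(kms|modesetting)"

# Confirm which OpenGL driver the running X session is actually using
glxinfo | grep "OpenGL renderer"
```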
Comment
-
Originally posted by Boerkel View Post
Now I get OpenGL 2.0 as well. Great! But KMS won't work for me. I can turn it on (with radeon.modeset=1), but then I can't use compiz. If I try to turn it on, my screen goes totally white. dmesg says KMS is enabled. I tried it out with my Ubuntu Karmic + xorg-edgers PPA + 2.6.32-4 kernel on an ATI Radeon HD 2600.
Code:
LIBGL_ALWAYS_INDIRECT=1 compiz
Comment
-
After recent updates, Darwinia is very playable (good framerate, only minor artifacts), and Doom 3, although with heavy shadow artifacts, runs at a decent framerate (30 fps average in timedemo demo1).
This is with an RV670, kernel 2.6.33-rc5, and git libdrm, Mesa and drivers. Great work, devs!
Comment
-
I have to say that CS 1.6 is working quite well (some shadow glitches, but only on specific maps), but not with a decent framerate... It sometimes drops to 5-10 FPS and is totally unplayable. I should note that with fglrx it didn't drop that much (maybe sometimes to 30-40). I'm using an RV670 with the whole graphics stack built from git and a 2.6.33-rc4 kernel (KMS enabled). With UMS it's still painfully slow. Is there any way to improve this? Will the Gallium driver solve this issue?
Comment
-
Originally posted by Wielkie G View Post
I have to say that CS 1.6 is working quite well (some shadow glitches, but only on specific maps), but not with a decent framerate... It sometimes drops to 5-10 FPS and is totally unplayable. I should note that with fglrx it didn't drop that much (maybe sometimes to 30-40). I'm using an RV670 with the whole graphics stack built from git and a 2.6.33-rc4 kernel (KMS enabled). With UMS it's still painfully slow. Is there any way to improve this? Will the Gallium driver solve this issue?
Comment
-
FWIW I read your previous post the same way as pvtcupcakes, i.e. that KMS was slow in places but that UMS was worse.
Unless one of the devs is familiar with exactly what the app is doing when framerates are low, it's going to be hard to do more than guess about which development work is most likely to make a difference. Right now the focus is still more on making apps run in the first place and accelerating commonly used functions than on doing any app-specific optimization work.
What is the app doing when it gets slow, i.e. large amounts of detail, specific effects, etc.?
Comment
-
It's a Half-Life-based game (GoldSource engine); it's very simple and derived from the Quake engine. FPS is low when I look at many triangles (for example, it's a bit higher when I look at the floor). It may be connected to unoptimized CPU->GPU transfers. AFAIK this engine batches all triangles on every frame, so more triangles -> lower performance because of this bottleneck. This was definitely the case with fglrx, but Mesa could have introduced other bottlenecks. Also note that I play this game through Wine (but still via OpenGL). I'll try to start the game from a console to look for interesting Wine messages (if any).
Edit: Nothing much of interest in the console, only this (may be produced by Mesa):
Code:
warning: Unknown nb_ctl request: 4
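Since the game runs under Wine anyway, one rough way to check the per-frame submission theory is to log the GL calls the engine actually makes (a sketch only; the `hl.exe` path and the availability of the `opengl` WINEDEBUG channel in this Wine version are assumptions on my part):

```shell
# Log the OpenGL calls the engine makes through Wine's opengl32
# (channel name and game binary/arguments are assumptions)
WINEDEBUG=+opengl wine hl.exe -game cstrike 2> gl.log

# In an immediate-mode engine, glVertex* calls scale with visible triangles;
# a huge count per run is consistent with a CPU->GPU submission bottleneck
grep -c "glVertex" gl.log
```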
Last edited by Wielkie G; 24 January 2010, 05:20 PM.
Comment
-
Originally posted by bridgman View Post
FWIW I read your previous post the same way as pvtcupcakes, i.e. that KMS was slow in places but that UMS was worse.
Unless one of the devs is familiar with exactly what the app is doing when framerates are low, it's going to be hard to do more than guess about which development work is most likely to make a difference.
Not only can you trace and profile every last bit of the graphics pipeline, from your app down to the hardware execution of the shaders for any given pixel; you can also play back frames and step through execution, get hotspots in your API usage, and the tools can even give you errors that pretty much say "this thing right here is what's wrong with your performance, and here's how to fix it."
As a driver developer, the ability to see call graphs through the API down to the hardware execution of shaders would help you identify hotspots and performance issues in the drivers without needing to look at the app's source at all. Even if you don't own the app, it would make it possible for users to submit logs with the profiling results.
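On the open stack, apitrace covers at least the record-and-replay part of that workflow (a sketch; exact subcommand names depend on the apitrace version installed, and `darwinia` here is just a placeholder binary name):

```shell
# Record every GL call the app makes into darwinia.trace
apitrace trace ./darwinia

# Replay the trace in benchmark mode to time the recorded call stream
glretrace -b darwinia.trace

# Dump the recorded calls for inspection (useful to attach to bug reports)
apitrace dump darwinia.trace
```

A trace file like this is exactly the kind of log a user could attach so a driver developer can reproduce a slowdown without owning the app.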
Comment