Wine's Big Command Stream D3D Patch-Set Updated
Originally posted by Thaodan:
I tried your patches with wine 1.7.10 in WoW. I got some micro-lags when playing with more than 5 players (10+).
Originally posted by Thaodan:
How does __GL_THREADED_OPTIMISATIONS affect your patch, and how does it interact with the patch from http://bugs.winehq.org/show_bug.cgi?id=11674#c263?
If you want to use __GL_THREADED_OPTIMISATIONS with my patchset, you don't have to apply that hack. Due to a problem unrelated to __GL_THREADED_OPTIMISATIONS, I'm currently using glBufferSubData for buffer uploads instead of glMapBufferRange. Just turn off CSMT in the registry.
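For readers wondering what "turn off CSMT in the registry" looks like in practice: Wine's Direct3D settings live under HKCU\Software\Wine\Direct3D. The "CSMT" value name and the "disabled" string below are assumptions based on how this patchset was announced, so check the patchset's own instructions for the exact spelling. A minimal .reg fragment might look like:

```ini
REGEDIT4

[HKEY_CURRENT_USER\Software\Wine\Direct3D]
"CSMT"="disabled"
```

Import it with `wine regedit csmt.reg`; deleting the value again restores the default behavior.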
Originally posted by stefandoesinger:
Well, our (the Wine developers') and the Mesa developers' time is limited. With regard to running d3d9 applications on Mesa, we have the following options:
1. Make Wine work better on all OpenGL implementations.
2. Make Mesa run all OpenGL applications better.
3. Write and maintain lots of special code to make Wine run better on Mesa.
Considering finite resources, we believe option 1 is the way to go, and we're helping the Mesa devs with option 2. You may disagree and submit code to either project to implement option 3. But don't think it's a conspiracy when we disagree with you about what to do with our time.
Originally posted by TemplarGR:
How much work is needed for option 3? How much of the d3d9 state tracker is completed already? I am no Mesa developer, but from what I understand, the d3d9 state tracker leverages Gallium3D, so it is not that big a project.
A good start would be to quantify the performance difference between wined3d and gallium-nine with reproducible benchmarks and then isolate where the performance difference is coming from. And that means not just "it reduces overhead, so it's faster", but something like "There's CPU-side overhead in module X, and GPU-side overhead because shader instructions A, B and C are inefficiently handled by module Y".
If it turns out that there's a fundamental advantage to a Gallium state tracker, and that it's not just working around some bugs in Mesa and Wine that could be fixed with, e.g., a better GLSL compiler or one or two focused GL extensions to better support some d3d-isms, then the next task is finding a stable API exposed by gallium-nine and used by Wine.
Matteo has done some testing with gallium-nine on r600g. If I remember correctly, he saw a moderate performance gain (~15% or something), but attributed most of that to insufficient optimization in the driver's GLSL compiler. I'll ask him to make sure my memory serves me right.
An update to my previous post: We managed to make gallium-nine + r600g work with StarCraft 2, but not some other game we were actually interested in (sorry, confidential). At lowest settings the game was CPU limited and saw an increase from 60 to 100 fps (wined3d vs gallium-nine). At higher settings (GPU limited) the performance was exactly the same.
Further profiling suggested that the difference came from GLSL constant updating. That's a well-known problem. If the application needs only Shader Model 2 support, or you have an Nvidia card, then ARB shaders can give you quite a boost. Otherwise we hope that GL_ARB_uniform_buffer_object will help once we use it, but so far we haven't gotten around to implementing that. Besides constant updating, Mesa spent a lot of time in some texture update function (_mesa_update_texture) when using wined3d but not gallium-nine. We did not investigate why, or what we or Mesa could do about it. A fairly new GL extension, GL_ARB_texture_storage, aims at reducing some texture management overhead, but we have to restructure our texture-surface relationship a bit before we can properly use it. That's restructuring we'll have to do for d3d10/11 anyway.
Thank You
Originally posted by stefandoesinger:
An update to my previous post...
Well, I saw some Gallium-nine videos months ago running different games, and it seems to run quite well:
Nvidia GTX 670
- Modern Warfare 3 <- no re-clock (running, I think, at almost 1.5/10 of its max clock speed, and the game is at highest settings!)
- GTA IV <- same as above, no re-clock
- Crysis 2 <- no re-clock
AMD HD 5770
- Crysis 2 <- dynamic power management "ON"
It has surpassed my expectations, well done.
Originally posted by stefandoesinger:
An update to my previous post: We managed to make gallium-nine + r600g work with StarCraft 2, but not some other game we were actually interested in (sorry, confidential). At lowest settings the game was CPU limited and saw an increase from 60 to 100 fps (wined3d vs gallium-nine). At higher settings (GPU limited) the performance was exactly the same.
The problem with modern CPUs is that they cannot improve per-core performance fast enough to keep up with GPU performance. So anything that reduces CPU strain will be vastly important from now on. That is why AMD introduced Mantle, after all.
So Wine could use anything that improves its performance. A D3D state tracker can eliminate this overhead altogether if properly coded. Although it is not multiplatform, it could provide a big boost for Gallium users, especially with modern games. Imagine games like Total War: Rome II. I am willing to bet a huge sum of money that Wine will face a tremendous challenge in trying to match its Windows performance...
Did more testing, and I have yet to see any of my games crash. Although, once I saw the GPU bottleneck comment, I went and disabled double buffering in TR2013:
wine-1.7.10 => min 30.9 fps; with double buffering, min 25.9
command stream version => min 48.9 / max 72.3; with double buffering, 30.1/60
Looking at the reported fps when running on Windows with the same GPU as mine, it's about 80-90% there. That's a freaking awesome change.
Now, a question from a complete n00b in the DirectX department: can this be reused in DX10/11, and how different are those two?
Originally posted by justmy2cents:
Did more testing, and I have yet to see any of my games crash. Although, once I saw the GPU bottleneck comment, I went and disabled double buffering in TR2013:
wine-1.7.10 => min 30.9 fps; with double buffering, min 25.9
command stream version => min 48.9 / max 72.3; with double buffering, 30.1/60
Looking at the reported fps when running on Windows with the same GPU as mine, it's about 80-90% there. That's a freaking awesome change.
Now, a question from a complete n00b in the DirectX department: can this be reused in DX10/11, and how different are those two?