
Thread: Wine's Big Command Stream D3D Patch-Set Updated

  1. #21
    Join Date
    Nov 2010
    Posts
    352

    Default

    Quote Originally Posted by Tobu View Post
    ppa:foresto/winepatched has this updated patch.
    Thanks, you don't know how long I've been waiting for this.

  2. #22

    Default

    Quote Originally Posted by Thaodan View Post
    I tried your patches with Wine 1.7.10 in WoW; I got some micro-lags when playing with more than 5 players (10+).
    There's a thread on wine-devel@winehq.org concerning WoW and this patchset. Please post there, I don't want to establish a Wine support forum at Phoronix.

    Quote Originally Posted by Thaodan View Post
    How does __GL_THREADED_OPTIMISATIONS affect your patch, and how does it interact with the patch from http://bugs.winehq.org/show_bug.cgi?id=11674#c263?
    They are the same thing in concept. Combining them is like trying to speed up a trip from LA to NY by driving a Ferrari inside the airplane. In general it doesn't do any good, increases latency and causes micro-stuttering. In theory exceptions are possible, and one (select locations in Morrowind) has been reported.

    If you want to use __GL_THREADED_OPTIMISATIONS with my patchset you don't have to apply that hack. Due to a problem unrelated to __GL_THREADED_OPTIMISATIONS I'm currently using glBufferSubData for buffer uploads instead of glMapBufferRange. Just turn off CSMT in the registry.
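    For reference, disabling CSMT in this patchset is done via a Wine registry key. A minimal sketch of a .reg file to import with `wine regedit` - note that the exact value name and accepted strings here are an assumption and have varied between patchset versions, so check the patchset's README:

```ini
; Hypothetical example: disable CSMT for the current Wine prefix.
; Import with: wine regedit csmt-off.reg
[HKEY_CURRENT_USER\Software\Wine\Direct3D]
"CSMT"="disabled"
```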

  3. #23
    Join Date
    Dec 2009
    Location
    Greece
    Posts
    351

    Default

    Quote Originally Posted by stefandoesinger View Post
    Well, our (the Wine developers') and the Mesa developers' time is limited. With regards to running d3d9 applications on Mesa we have the following options:

    1. Make Wine work better on all OpenGL implementations.
    2. Make Mesa run all OpenGL applications better.
    3. Write and maintain lots of special code to make Wine run better on Mesa.


    Considering finite resources, we believe 1 is the way to go, and we're helping the Mesa devs with 2. You may disagree and submit code to either project to implement 3. But don't think it's a conspiracy when we disagree with you about what to do with our time.
    How much work is needed for 3? How much of the d3d9 state tracker is completed already? I am no Mesa developer, but from what I understand the d3d9 state tracker leverages Gallium3D, so it is not that big of a project.

  4. #24

    Default

    Quote Originally Posted by TemplarGR View Post
    How much work is needed for 3? How much of the d3d9 state tracker is completed already? I am no Mesa developer, but from what I understand the d3d9 state tracker leverages Gallium3D, so it is not that big of a project.
    The easy work is done. The hard part that separates a proof of concept from code that is stable and maintainable in the long term is remaining. In other words, following the 80/20 rule, 80% of the work still needs to be done.

    A good start would be to quantify the performance difference between wined3d and gallium-nine with reproducible benchmarks and then isolate where the performance difference is coming from. And that means not just "it reduces overhead, so it's faster", but something like "There's CPU-side overhead in module X, and GPU-side overhead because shader instructions A, B and C are inefficiently handled by module Y".
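    A reproducible comparison along those lines could start with something as simple as logging per-frame times for each renderer and summarizing them. A toy sketch (the frame-time samples below are invented for illustration, not measurements from the thread):

```python
# Toy sketch: summarize per-frame render times from two runs.
def summarize(frame_times_ms):
    """Return (avg_fps, min_fps) from a list of per-frame times in ms."""
    avg_ms = sum(frame_times_ms) / len(frame_times_ms)
    worst_ms = max(frame_times_ms)  # the slowest frame bounds the minimum fps
    return 1000.0 / avg_ms, 1000.0 / worst_ms

# Hypothetical frame-time logs for the two code paths.
wined3d = [16.2, 17.0, 16.8, 25.1, 16.5]
nine    = [10.1, 10.4, 10.2, 11.0, 10.3]

for name, run in (("wined3d", wined3d), ("gallium-nine", nine)):
    avg_fps, min_fps = summarize(run)
    print(f"{name}: avg {avg_fps:.1f} fps, min {min_fps:.1f} fps")
```

    Per-frame logs like this are also what lets you separate "lower average overhead" from "fewer stalls": a CPU-side bottleneck tends to move the whole distribution, while isolated spikes point at specific modules to profile.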

    If it turns out that there's a fundamental advantage to a gallium state tracker, and that it's not just working around some bugs in Mesa and Wine that could be fixed with e.g. a better GLSL compiler or adding one or two focused GL extensions to support some d3d-isms better, the next task is finding a stable API exposed by gallium-nine and used by Wine.

    Matteo has done some testing with gallium-nine on r600g. If I remember correctly, he saw a moderate performance gain (~15% or something), but attributed most of that to insufficient optimization in the driver's GLSL compiler. I'll ask him to make sure my memory serves me right.

  5. #25
    Join Date
    Sep 2013
    Posts
    16

    Default

    Damn, it took me ~40 minutes to compile from the AUR.
    https://docs.google.com/file/d/0B4Yx...VoNVpfNHM/edit

  6. #26

    Default

    An update to my previous post: We managed to make gallium-nine + r600g work with StarCraft 2, but not some other game we were actually interested in (sorry, confidential). At lowest settings the game was CPU limited and saw an increase from 60 to 100 fps (wined3d vs gallium-nine). At higher settings (GPU limited) the performance was exactly the same.

    Further profiling suggested that the difference came from GLSL constant updating. That's a well-known problem. If the application only needs Shader Model 2 support, or you have an Nvidia card, then ARB shaders can give you quite a boost. Otherwise we hope that GL_ARB_uniform_buffer_object will help once we use it - but so far we didn't get around to implementing that.

    Besides constant updating, Mesa spent a lot of time in some texture update function (_mesa_update_texture) when using wined3d but not gallium-nine. We did not investigate why, or what we or Mesa could do about it. A fairly new GL extension, GL_ARB_texture_storage, aims at reducing some texture management overhead, but we have to restructure our texture-surface relationship a bit before we can properly use it. That's some restructuring we'll have to do for d3d10/11 anyway.

  7. #27
    Join Date
    Jan 2013
    Location
    Earth
    Posts
    118

    Thumbs up Thank You

    Quote Originally Posted by stefandoesinger View Post
    An update to my previous post...
    I don't think anyone has said it yet, so I will: thank you for your work on improvements to WINE that benefit its users. It is greatly appreciated, and we all look forward to seeing what the future holds for you, WINE and its user base. Thank you, keep up the fantastic work! =)

  8. #28
    Join Date
    Jul 2013
    Posts
    80

    Default

    Well, I saw some Gallium-nine videos months ago running different games, and it seems to run quite well:

    Nvidia GTX 670
    Modern Warfare 3 <- no re-clock (running, I think, at almost 1.5/10 of its max clock speed, and the game is at highest settings!!)
    GTA IV <- Same as above, no re-clock :S
    Crysis 2 <- No re-clock

    AMD HD 5770
    Crysis 2 <- dynamic power management "ON"

    It has surpassed my expectations, well done.

  9. #29
    Join Date
    Dec 2009
    Location
    Greece
    Posts
    351

    Default

    Quote Originally Posted by stefandoesinger View Post
    An update to my previous post: We managed to make gallium-nine + r600g work with StarCraft 2, but not some other game we were actually interested in (sorry, confidential). At lowest settings the game was CPU limited and saw an increase from 60 to 100 fps (wined3d vs gallium-nine). At higher settings (GPU limited) the performance was exactly the same.
    You state the obvious here. Of course the problem with Wine is CPU performance; it is a translation layer that converts D3D to OpenGL, so it will introduce severe CPU overhead... You didn't need tests to figure that out... That is the very reason your command stream patches are so effective...

    The problem with modern CPUs is that they cannot improve per-core performance fast enough to keep up with GPU performance. So anything that reduces CPU strain will be vastly important from now on. That is why AMD introduced Mantle, after all.

    So, Wine could use anything that can improve its performance. A D3D state tracker could eliminate this overhead altogether if properly coded. Although it is not multiplatform, it could provide a big boost for Gallium users, especially with modern games. Imagine games like Rome 2 Total War. I am willing to bet a huge sum of money that Wine will face a tremendous challenge in trying to match its Windows performance...

  10. #30
    Join Date
    Oct 2013
    Posts
    352

    Default

    I did more testing and I have yet to see any of my games crash. Although, once I saw the GPU bottleneck comment, I went and disabled double buffering in TR2013.

    wine-1.7.10 => min 30.9 fps, with double buffering min 25.9
    command stream version => min 48.9/max 72.3, with double buffering 30.1/60

    Looking at reported fps when running on Windows with the same GPU as mine, it's about 80-90% there. That's a freaking awesome change.
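    For what it's worth, the minimum-fps figures quoted above (double buffering off) work out to roughly a 58% gain; simple arithmetic on those two numbers:

```python
# Relative min-fps change from the figures quoted above.
wine_min = 30.9   # wine-1.7.10, double buffering off
cs_min = 48.9     # command stream version, double buffering off

gain = (cs_min - wine_min) / wine_min * 100
print(f"min fps gain: {gain:.0f}%")   # prints: min fps gain: 58%
```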

    Now, a question from a complete n00b in the DirectX department: can this be reused for DX10/11, and how different are those two?
