
Thread: Gaming On Wine: The Good & Bad Graphics Drivers

  1. #41
    Join Date
    Aug 2007
    Posts
    6,675

    Default

I disagree on one point: if you are a game dev, you have to test your app with the Nvidia, AMD and Intel drivers. If you don't test on all cards, you usually won't find every bug.

  2. #42
    Join Date
    Jan 2008
    Posts
    119

    Default

    Quote Originally Posted by stefandoesinger View Post
    What I intended to show (I hope I was successful in that) is that both r600g and fglrx run Linux native OpenGL apps(namely Unigine, UT2004, Xonotic, Lightsmark) a lot slower than the Windows driver
What? Today I ran the Unigine Heaven and Sanctuary benchmarks under both Windows 8 Pro x64 and Ubuntu 12.10 x64 (Unity), and on my platform (Phenom X3 8650 + HD6670 + Catalyst 13.1 on both OSes) I get better FPS on Linux. My results (1280x1024, default settings for both benchmarks, Tessellation: Normal for Heaven and Ambient Occlusion for Sanctuary):
    [W8 - Heaven]
    FPS: 12.9
    Scores: 325
    Min FPS: 5.8
    Max FPS: 30.9

    [W8 - Sanctuary]
    FPS: 30.9
    Scores: 1312
    Min FPS: 16.3
    Max FPS: 38.0

    [Ubuntu - Heaven]
    FPS: 13.9
    Scores: 351
    Min FPS: 6.1
    Max FPS: 27.4

    [Ubuntu - Sanctuary]
    FPS: 35.0
    Scores: 1485
    Min FPS: 26.6
    Max FPS: 43.1

For my personal projects (I'm a game developer) I see the same results - OpenGL apps have similar or slightly better performance on Linux than on Windows.

@tecknurd
What bullshit... I have access to many AMD cards (all models from the HD2k and HD3k series, HD4670-4850, two from the HD5k series, two from HD6k and one from HD7k) and I don't see any problems with the drivers when I test my games. I also have some NV cards (one from GF8, one from GF9 and one from the GF200 series), but my main platform is based on an AMD Radeon, because it forces me to stick closely to the OpenGL specification. From time to time I test my code on the other AMD cards and on NV because, as Kano said, this is a really important step (I don't have access to Intel GFX). I remember when I implemented texture arrays on NV: those drivers accepted even broken code (the same goes for e.g. glGenerateMipmap and more). The tweaks in the NV driver are really bad...

  3. #43

    Default

    Quote Originally Posted by tecknurd View Post
    WINE developers needs to
We don't need (well-intentioned) directions, we need manpower. We are four people working on the d3d code. All of us are paid by CodeWeavers; without that, our work wouldn't be possible. We also have other responsibilities (e.g. school in my case), so we're only working part time on the d3d code.

    What you can do to help:

    • If you know your way around C and OpenGL and have a game that doesn't work, try to fix it.

      This can be a tricky task, but we can help you and give you hints. Contact wine-devel@winehq.org if there are any issues.
    • If you're not a developer, but don't mind compiling Wine from git, run your games with the git code and bisect and report any regressions you find.
    • If you're doing the above on top of the open source drivers, use Mesa git as well.
    • We need QA help on OS X.
    • I have an automated performance monitoring setup, but it needs to run on many more systems. If you're willing to help here, please get in contact with us.


One annoying fact is that I spend about one fifth of my time maintaining ~12 operating system installations (Windows, Linux, OS X) on five different computers, just to have different GPUs and drivers for testing.

  4. #44

    Default

    Quote Originally Posted by nadro View Post
What? Today I ran the Unigine Heaven and Sanctuary benchmarks under both Windows 8 Pro x64 and Ubuntu 12.10 x64 (Unity), and on my platform (Phenom X3 8650 + HD6670 + Catalyst 13.1 on both OSes) I get better FPS on Linux.
Fglrx manages to keep up in Unigine, especially in GPU-limited setups - and Unigine doesn't need a high resolution to be GPU limited. It does lose to Windows by a small margin when Unigine is CPU limited. It's the other games where the big differences show up.

    See http://tinyurl.com/b2fdqx8 for my AMD Radeon HD 5770 results.

  5. #45
    Join Date
    Apr 2011
    Posts
    387

    Default

    Quote Originally Posted by stefandoesinger View Post
We don't need (well-intentioned) directions, we need manpower. We are 4 people working on the d3d code. All of us are paid by CodeWeavers, without that our work wouldn't be possible. [...]

Your situation is understandable. Have you tried asking for help beyond outside volunteers? I mean, there are many companies with an interest in Wine succeeding, like Intel, Google, Red Hat, Canonical and others. Can you tell them that you need manpower? Have you tried asking Intel, AMD and Nvidia to modify their drivers so you don't need the HLSL-bytecode-to-GLSL translation? I think there is potential, and I'd rather have a new Wine version than a new kernel version.

  6. #46

    Default

Guess what my posts here are intended to achieve :-). We do get help from outside individuals and companies. The problem with games is that there are simply so many games and graphics cards that it is impossible to test, fix and QA all of them. Interest from other companies is mostly focused on productivity applications. This is why we need lots of help from volunteers.

At CodeWeavers we have some statistics on which games our users run. World of Warcraft leads the pack - at less than 1% of the total share. The whole distribution is fairly flat; every customer wants a different game.

    Wrt GLSL vs assembler: GLSL is not the problem. I prefer an excellent GLSL compiler over vendor-specific assembler extensions. But we really need an excellent compiler that goes 100% of the way, not a mediocre one that gets 80% of the use cases right. This 80% vs 100% consideration applies to all other areas of OpenGL, and is the main difference between the Nvidia driver and all others.

We have very limited flexibility in avoiding corner cases. If a game hits a d3d corner case, it will hit the same corner case in OpenGL. E.g. Diablo III uses a depth texture as a texture and as the depth buffer simultaneously. The depth test is on, but depth write is off. This is legal in both d3d and GL. Nvidia and r600g get this right; fglrx does not. We cannot work around this bug. Yes, we could in theory create a new texture, but that makes the code messy, fixes one game and breaks five others. Believe me, we tried.

Likewise, if a GPU has hardware support for 256 vertex shader constants, a game requires 254 of them for one of its shaders, and the driver consumes 4 for its private use, then this won't work: 254 + 4 > 256. This affects many drivers for dx9 cards, and it is a real pain on OS X. r300g is slightly better here, but only Nvidia gives us all 256 constants it advertises.

  7. #47
    Join Date
    Feb 2010
    Posts
    519

    Default

    Quote Originally Posted by stefandoesinger View Post
We have very limited flexibility in avoiding corner cases. If a game hits a d3d corner case, it will hit the same corner case in OpenGL. E.g. Diablo III uses a depth texture as a texture and as the depth buffer simultaneously. The depth test is on, but depth write is off. This is legal in both d3d and GL. Nvidia and r600g get this right; fglrx does not. We cannot work around this bug. Yes, we could in theory create a new texture, but that makes the code messy, fixes one game and breaks five others. Believe me, we tried.
    Do you guys at CodeWeavers have direct contact with AMD's fglrx team for bug reports?

  8. #48

    Default

    Quote Originally Posted by PsynoKhi0 View Post
    Do you guys at CodeWeavers have direct contact with AMD's fglrx team for bug reports?
Yes. Usually we file bugs in their unofficial bugzilla and nudge them about it. Personally, I'd rather put the effort into fixing r600g though.

    The bug I was talking about here was reported as http://ati.cchtml.com/show_bug.cgi?id=426. I don't know if Matteo has made any further effort to get the bug fixed, but I assume he has. We also have access to their beta drivers.

With both Nvidia and AMD it takes a bit of luck to get bugs fixed in time. I guess it depends on their internal workload. Apple is really bad here, as I explained at FOSDEM.

  9. #49
    Join Date
    Feb 2008
    Location
    Linuxland
    Posts
    5,332

    Default

    Quote Originally Posted by stefandoesinger View Post
We have very limited flexibility in avoiding corner cases. If a game hits a d3d corner case, it will hit the same corner case in OpenGL. E.g. Diablo III uses a depth texture as a texture and as the depth buffer simultaneously. The depth test is on, but depth write is off. This is legal in both d3d and GL. Nvidia and r600g get this right; fglrx does not. We cannot work around this bug. Yes, we could in theory create a new texture, but that makes the code messy, fixes one game and breaks five others. Believe me, we tried.
Having hit exactly that myself two weeks ago: it's undefined according to the GL standard, and it causes the loss of all early-Z/hi-Z optimizations on my card (hd4k). Changing my code not to do that gave around a 1000x speedup, all on r600g.

I agree it's not Wine's place to do hacks like that, but this one really isn't valid GL. Blame Blizzard (or not, since it is legal in DX).

  10. #50
    Join Date
    Aug 2009
    Posts
    103

    Default

    Quote Originally Posted by stefandoesinger View Post
We don't need (well-intentioned) directions, we need manpower. We are 4 people working on the d3d code. All of us are paid by CodeWeavers, without that our work wouldn't be possible. We do have some other responsibilities as well (e.g. school in my case), so we're only working part time on the d3d code.

What you can do to help:

• If you know your way around C and OpenGL and have a game that doesn't work, try to fix it.

  This can be a tricky task, but we can help you and give you hints. Contact wine-devel@winehq.org if there are any issues.
• If you're not a developer, but don't mind compiling Wine from git, run your games with the git code and bisect and report any regressions you find.
• If you're doing the above on top of the open source drivers, use Mesa git as well.
• We need QA help on OS X.
• I have an automated performance monitoring setup, but it needs to run on many more systems. If you're willing to help here, please get in contact with us.
[...]
I'll take you up on the first three :-) Or at least I'll try to. Lots of learning needed on my side.
