Gaming On Wine: The Good & Bad Graphics Drivers


  • Gaming On Wine: The Good & Bad Graphics Drivers

    Phoronix: Gaming On Wine: The Good & Bad Graphics Drivers

    If you're wondering which graphics hardware / driver is best if you intend to do much gaming within Wine or CrossOver on Linux, a CodeWeavers developer responsible for much of the Direct3D layer has spoken about his experiences with gaming on Wine and what gives the best results...

    http://www.phoronix.com/vr.php?view=MTI5NjU

  • #2
    That is a pretty damning assessment of AMD. Between this and their lack of support for the Southern Islands/HD7000 series cards, I do not plan on making any future AMD GPU (or CPU) purchases. I say that sitting between three PCs, two Linux and one Windows, each with an AMD card in it.

    The video itself was pretty good. Sometimes those talk videos are dry and boring, but this one was well-paced. I also appreciate the honesty from the presenter in acknowledging where Wine and Linux gaming in general fall short.

    I really appreciate your assessment notes, Michael.

    Comment


    • #3
      This coming from an entirely Nvidia-friendly piece of software does not surprise me.

      The latest case is Path of Exile. In the Wine discussion thread on the game's forum, we found out that disabling GLSL use in Wine (via a registry key) makes all AMD cards no longer DirectX 9 capable, while Nvidia cards are still DX9 capable. Since both companies differ only in their custom extensions, I bet Wine uses the NV extensions for shaders when it sees GLSL disabled, while AMD has to fall back to the ARB shader extensions, which means only GL 2.1 shading is available.

      I do admit AMD has their own shit to clean up (Catalyst reporting different capabilities when it detects Wine running, in some cases), but the special treatment Nvidia gets from the Wine devs is in some cases too much.

      Comment


      • #4
        Originally posted by haplo602 View Post
        This coming from an entirely Nvidia-friendly piece of software does not surprise me.

        The latest case is Path of Exile. In the Wine discussion thread on the game's forum, we found out that disabling GLSL use in Wine (via a registry key) makes all AMD cards no longer DirectX 9 capable, while Nvidia cards are still DX9 capable. Since both companies differ only in their custom extensions, I bet Wine uses the NV extensions for shaders when it sees GLSL disabled, while AMD has to fall back to the ARB shader extensions, which means only GL 2.1 shading is available.

        I do admit AMD has their own shit to clean up (Catalyst reporting different capabilities when it detects Wine running, in some cases), but the special treatment Nvidia gets from the Wine devs is in some cases too much.
        Code:
                if ( !get_config_key( hkey, appkey, "UseGLSL", buffer, size) )
                {
                    if (!strcmp(buffer,"disabled"))
                    {
                        ERR_(winediag)("The GLSL shader backend has been disabled. You get to keep all the pieces if it breaks.\n");
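        For context, the UseGLSL value being read in the snippet above is the registry switch post #3 refers to. A minimal sketch of how it is typically set, assuming the conventional HKEY_CURRENT_USER\Software\Wine\Direct3D location for wined3d settings (save it as a .reg file and import it with wine regedit):

        Code:
                REGEDIT4

                [HKEY_CURRENT_USER\Software\Wine\Direct3D]
                "UseGLSL"="disabled"
        Deleting the UseGLSL value again restores the default (GLSL enabled), which, as noted later in the thread, is what you normally want.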

        Comment


        • #5
          This should probably be considered too: http://phoronix.com/forums/showthread.php?72506

          If you don't buy AMD cards and get an Nvidia card instead, you stop supporting the company that at least tries to release documentation and even open-source code, and you support a company that is unfriendly to open source instead. Good job.

          Comment


          • #6
            Both AMD and NVidia are failing to support Linux developers and users. The entire reason I bought those HD6000 series cards was, in fact, because of AMD's promises to support Linux.

            But, here we are years later and the AMD end-user experience is actually worse than the NVidia end-user experience. That is my hands-on experience with their proprietary drivers. The implementation matters. As an end-user, I don't give a fk why one experience is better than the other; I just recognize what is.

            Ultimately, fk em both. Intel may end up with the best Linux graphics experience here soon.

            I won't go into my work experiences with HPC (bio drug sim) GPUs, but AMD's failure there colors my opinion quite a bit too.
            Last edited by TurbulentToothpick; 02-08-2013, 04:03 AM.

            Comment


            • #7
              r600g is getting good even for Wine

              Personally I'm impressed with r600g performance in Wine. I was able to play two Tomb Raider series games (Legend and Anniversary Edition) on the HD 5650 in my HP notebook with mesa-git, and they ran really smoothly (I don't know if that is great at all, but I was expecting a lot less).

              Additionally, I tested the Windows version of Heroes of Newerth in Wine and got nearly the same fps (around 30) for Direct3D and OpenGL rendering (and in the native version too).

              Comment


              • #8
                Why did I already know this would come down to AMD bashing when I just read the title?

                Of all the problems stated here with AMD graphics, I'm having surprisingly few.

                I think people should start bashing Nvidia in comparison, because they don't even make an effort to support open source on the desktop. Why would you support them when they don't support us?

                Comment


                • #9
                  Originally posted by .CME. View Post
                  Code:
                          if ( !get_config_key( hkey, appkey, "UseGLSL", buffer, size) )
                          {
                              if (!strcmp(buffer,"disabled"))
                              {
                                  ERR_(winediag)("The GLSL shader backend has been disabled. You get to keep all the pieces if it breaks.\n");
                  You can find lots of NVIDIA preferential treatment in arb_program_shader.c. Examples:

                  Code:
                      /* Always enable the NV extension if available. Unlike fragment shaders, there is no
                       * mesurable performance penalty, and we can always make use of it for clipplanes.
                       */
                      if (gl_info->supported[NV_VERTEX_PROGRAM3])
                      {
                          shader_addline(buffer, "OPTION NV_vertex_program3;\n");
                          priv_ctx.target_version = NV3;
                          shader_addline(buffer, "ADDRESS aL;\n");
                  Code:
                      enum
                      {
                          /* plain GL_ARB_vertex_program or GL_ARB_fragment_program */
                          ARB,
                          /* GL_NV_vertex_progam2_option or GL_NV_fragment_program_option */
                          NV2,
                          /* GL_NV_vertex_program3 or GL_NV_fragment_program2 */
                          NV3
                      } target_version;
                  I did not go deep into the code, just a couple of searches, but you either get plain ARB_vertex/fragment_program (meaning ATI/AMD) or the NV_* extensions on Nvidia. I guess that if you deleted all the NV_* handling, it would break the same way it does on ATI/AMD.
                  Last edited by haplo602; 02-08-2013, 05:20 AM.
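
                  To make the selection logic described above concrete, here is a minimal, hypothetical sketch of how a shader backend might pick its target version, falling back to plain ARB programs when no NV assembly extensions are advertised. The names and structs are illustrative only, not the real wined3d identifiers:

                  Code:
                      /* Hypothetical sketch of the target-version selection discussed above.
                       * Drivers advertising the NV assembly-program extensions (Nvidia) get the
                       * NV2/NV3 paths; drivers exposing only the plain ARB extensions (AMD/ATI,
                       * per the discussion) stay on the baseline. */
                      #include <stdio.h>

                      enum shader_target { TARGET_ARB, TARGET_NV2, TARGET_NV3 };

                      struct fake_gl_info
                      {
                          int has_nv_vertex_program3;        /* GL_NV_vertex_program3 */
                          int has_nv_vertex_program2_option; /* GL_NV_vertex_program2_option */
                      };

                      static enum shader_target select_vertex_target(const struct fake_gl_info *gl)
                      {
                          if (gl->has_nv_vertex_program3)
                              return TARGET_NV3; /* extra instructions, address register, ... */
                          if (gl->has_nv_vertex_program2_option)
                              return TARGET_NV2; /* branching/looping options */
                          return TARGET_ARB;     /* plain GL_ARB_vertex_program only */
                      }

                      int main(void)
                      {
                          struct fake_gl_info amd_like = { 0, 0 };    /* only the ARB extensions */
                          struct fake_gl_info nvidia_like = { 1, 1 }; /* NV extensions available */
                          printf("AMD-like target: %d, Nvidia-like target: %d\n",
                                 (int)select_vertex_target(&amd_like),
                                 (int)select_vertex_target(&nvidia_like));
                          return 0;
                      }
                  Dropping the NV branches would put every driver on the baseline ARB path; as #15 notes below, the GLSL backend, not this ARB backend, is the default nowadays.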

                  Comment


                  • #10
                    Originally posted by Bitiquinho View Post
                    Personally I'm impressed with r600g performance in Wine. I was able to play two Tomb Raider series games (Legend and Anniversary Edition) on the HD 5650 in my HP notebook with mesa-git, and they ran really smoothly (I don't know if that is great at all, but I was expecting a lot less).

                    Additionally, I tested the Windows version of Heroes of Newerth in Wine and got nearly the same fps (around 30) for Direct3D and OpenGL rendering (and in the native version too).
                    The same here. Steam for Linux is working without issue (CS:S, X3), and CS:GO works under Wine too. I am using r600g with an HD4850 (Mesa 9.0.2). The only problems I have are a slow Trine 2 and an incorrectly rendered Arma 2: OA under Wine (fragment shaders using too many registers).

                    Comment


                    • #11
                      It's basically nothing new to me; AMD drivers are very bad for Wine. I only test a few games like Rage and L4D2: Rage usually starts with some artifacts, but after not even a minute of gameplay it crashes my HD 5670, and L4D2 crashes even earlier. Most games like Nvidia cards better, with one exception: Killing Floor. I already reported a bug to the KF devs and never got a response, which is a bit annoying, so I still have to swap my graphics cards depending on the game I want to play. At least the graphics driver is switched automatically using gfxdetect.

                      Comment


                      • #12
                        Originally posted by Ragas View Post
                        Why did I already know this would come down to AMD bashing when I just read the title?

                        Of all the problems stated here with AMD graphics, I'm having surprisingly few.

                        I think people should start bashing Nvidia in comparison, because they don't even make an effort to support open source on the desktop. Why would you support them when they don't support us?
                        Why would we support Nvidia for not supporting us?
                        lol
                        They do support us with good drivers. Yeah, not open source, but who cares, if the end result is what matters.
                        My AMD graphics experience was so bad that I never want to buy their cards again. Yeah, that was two years ago, and things have probably changed somewhat and their drivers (both proprietary and open source) are better now, but I haven't had many problems with the Nvidia proprietary drivers, so that experience counts for me.

                        Comment


                        • #13
                          Originally posted by haplo602 View Post
                          This coming from an entirely Nvidia-friendly piece of software does not surprise me.

                          The latest case is Path of Exile. In the Wine discussion thread on the game's forum, we found out that disabling GLSL use in Wine (via a registry key) makes all AMD cards no longer DirectX 9 capable, while Nvidia cards are still DX9 capable. Since both companies differ only in their custom extensions, I bet Wine uses the NV extensions for shaders when it sees GLSL disabled, while AMD has to fall back to the ARB shader extensions, which means only GL 2.1 shading is available.

                          I do admit AMD has their own shit to clean up (Catalyst reporting different capabilities when it detects Wine running, in some cases), but the special treatment Nvidia gets from the Wine devs is in some cases too much.

                          What the hell are you talking about, you and some others here? D3D and OGL are compilers; their job is just to compile shading-language source or shading-language bytecode into a form that the computer (GPU) can understand. Then comes the important job: the rasterizer/synthesizer inside the GPU driver executes those shaders and produces graphics. The compilers communicate with the rasterizer many times (the compiler sends data and gets an answer back). The thing is that when you don't have a D3D rasterizer inside your GPU driver, you can only install and emulate D3D. One vendor uses two different rasterizers and needs the emulation, while the other uses one unified rasterizer and can disable that emulation. And that's what makes me feel bad: with a 1% tweak to a GPU's rasterizer you could run D3D bytecode through the OpenGL driver, or at least accelerate the emulation to 80%+ efficiency. They don't do it because they are a cartel (mafia) with Microsoft. Think about it: why do those two companies, even now, help Microsoft develop D3D? Why do they help with consoles, when they could sell more cards if all games were for PC? Why did they lock graphics away inside ASICs, and only now talk about compute shaders (if/else type) and fusion? A CPU with 5-10 optional 3D instructions and bit-wise operations gets 50-70% of the FPS per flop of a GPU, and with compute shaders they are equal (Larrabee tested against a GTX 280). Meanwhile a CPU can have a 512-bit FMAC, 2.5 DMIPS/MHz interface with 1-2 million transistors per unit and 1.5 million transistors of L1 (proven by OpenCores). I believe that Intel can help the situation a lot.

                          Comment


                          • #14
                            Originally posted by artivision View Post
                            What the hell are you talking about, you and some others here? D3D and OGL are compilers; their job is just to compile shading-language source or shading-language bytecode into a form that the computer (GPU) can understand. Then comes the important job: the rasterizer/synthesizer inside the GPU driver executes those shaders and produces graphics. The compilers communicate with the rasterizer many times (the compiler sends data and gets an answer back). The thing is that when you don't have a D3D rasterizer inside your GPU driver, you can only install and emulate D3D. One vendor uses two different rasterizers and needs the emulation, while the other uses one unified rasterizer and can disable that emulation. And that's what makes me feel bad: with a 1% tweak to a GPU's rasterizer you could run D3D bytecode through the OpenGL driver, or at least accelerate the emulation to 80%+ efficiency. They don't do it because they are a cartel (mafia) with Microsoft. Think about it: why do those two companies, even now, help Microsoft develop D3D? Why do they help with consoles, when they could sell more cards if all games were for PC? Why did they lock graphics away inside ASICs, and only now talk about compute shaders (if/else type) and fusion? A CPU with 5-10 optional 3D instructions and bit-wise operations gets 50-70% of the FPS per flop of a GPU, and with compute shaders they are equal (Larrabee tested against a GTX 280). Meanwhile a CPU can have a 512-bit FMAC, 2.5 DMIPS/MHz interface with 1-2 million transistors per unit and 1.5 million transistors of L1 (proven by OpenCores). I believe that Intel can help the situation a lot.
                            What the hell are you talking about? OGL and D3D are not compatible in their shading languages and some other things. Sure, you can accomplish mostly the same things in both, but it takes different paths and options. D3D is implemented with a specific driver architecture in mind, while OGL has no such limitation.

                            Larrabee was a failure, as far as I remember. Also, the main CPU bottleneck is memory access. More importantly, GPUs are single-purpose hardware: they are meant to execute a very small instruction set, with specific limits (branching, loops, etc.). You cannot compare CPUs and GPUs on efficiency. Have a look at UVD versus any modern CPU for video acceleration as an example of a fixed-function unit against a general-purpose one.

                            You are reading too much into conspiracies.

                            Comment


                            • #15
                              Originally posted by haplo602 View Post
                              You can find lots of NVIDIA preferential treatment in arb_program_shader.c. Examples:

                              Code:
                                  /* Always enable the NV extension if available. Unlike fragment shaders, there is no
                                   * mesurable performance penalty, and we can always make use of it for clipplanes.
                                   */
                                  if (gl_info->supported[NV_VERTEX_PROGRAM3])
                                  {
                                      shader_addline(buffer, "OPTION NV_vertex_program3;\n");
                                      priv_ctx.target_version = NV3;
                                      shader_addline(buffer, "ADDRESS aL;\n");
                              Code:
                                  enum
                                  {
                                      /* plain GL_ARB_vertex_program or GL_ARB_fragment_program */
                                      ARB,
                                      /* GL_NV_vertex_progam2_option or GL_NV_fragment_program_option */
                                      NV2,
                                      /* GL_NV_vertex_program3 or GL_NV_fragment_program2 */
                                      NV3
                                  } target_version;
                              I did not go deep into the code, just a couple of searches, but you either get plain ARB_vertex/fragment_program (meaning ATI/AMD) or the NV_* extensions on Nvidia. I guess that if you deleted all the NV_* handling, it would break the same way it does on ATI/AMD.
                              Yes, and that is one of the reasons why the GLSL backend is now the default and you should NOT disable it.

                              Comment
