Gaming On Wine: The Good & Bad Graphics Drivers

  • #11
    It's basically nothing new to me. AMD drivers are very bad for Wine. I only test a few games like Rage and L4D2: Rage usually starts with some artefacts, but it crashes my HD 5670 before I even get a minute of gameplay, and L4D2 crashes even earlier. Most games prefer Nvidia cards, with one exception: Killing Floor. Since I already reported a bug to the KF devs and never got a response, which is a bit annoying, I still have to swap my graphics cards depending on the game I want to play. At least the graphics driver is switched automatically using gfxdetect.

    Comment


    • #12
      Originally posted by Ragas View Post
      Why did I already know this would come down to AMD bashing when I just read the title?

      Of all the problems stated here with AMD graphics, I'm seeing surprisingly few.

      I think in comparison people should start bashing Nvidia, because they don't even make an effort to support open source on the desktop. Why would you support them when they don't support us?
      Why would we support Nvidia for not supporting us? lol
      They do support us with good drivers. Sure, they're not open source, but who cares, when the end result is what matters?
      My AMD graphics experience was so bad that I never want to buy their cards again. Granted, that was two years ago, things have probably changed somewhat and their drivers (both proprietary and open source) are better now, but I haven't had many problems with the Nvidia proprietary drivers, so that experience counts for me.

      Comment


      • #13
        Originally posted by haplo602 View Post
        This coming from an entirely Nvidia-friendly piece of software does not surprise me.

        The latest case is Path of Exile. In the Wine discussion thread on the game forum, we managed to find out that disabling GLSL use for Wine (a registry key) makes all AMD cards no longer DirectX 9 capable, while Nvidia cards are still DX9 capable. Since the two vendors differ only in their custom extensions, I bet Wine uses NV extensions for shaders when it sees GLSL disabled, while AMD has to fall back to the ARB shader extensions, which means only GL 2.1-level shading is available.

        I do admit AMD has their own shit to clean up (Catalyst reporting different capabilities when it detects Wine running, in some cases), but the special treatment Nvidia gets from the Wine devs is in some cases too much.

        What the hell are you talking about, you and some others here? D3D and OGL are compilers; their job is just to compile SL source or SL VM bytecode into a form the computer (GPU) can understand. Then comes the important job: the rasterizer/synthesizer inside the GPU driver executes those shaders and produces graphics. The compilers communicate many times with the rasterizer (the compiler sends data and gets an answer back). The thing is that when you don't have a D3D rasterizer inside your GPU driver, you can only install and emulate D3D. Someone who uses two different rasterizers needs emulation, while someone who uses one unified rasterizer can disable this emulation. And that's what makes me feel bad: with a 1% tweak to a GPU's rasterizer you could run D3D bytecode through the OpenGL driver, or at least accelerate the emulation to 80%+ efficiency. They don't do it because they are a cartel (mafia) with Microsoft. Think about it: why do those two companies help Microsoft even now to develop D3D? Why do they help with consoles when they could sell more cards if all games were for PC? Why did they lock graphics inside ASICs, and only now start talking about compute shaders (if/else style) and fusion? A CPU with 5-10 optional 3D instructions and bit-wise operations gets 50-70% of the FPS per flop of a GPU, and with compute shaders they are equal (Larrabee tested against a GTX 280). Meanwhile a CPU can have a 512-bit FMAC, 2.5 DMIPS/MHz interface with 1-2 million transistors per unit and 1.5 million transistors of L1 (proven by OpenCores). I believe Intel can help the situation a lot.

        Comment


        • #14
          Originally posted by artivision View Post
          What the hell are you talking about, you and some others here? D3D and OGL are compilers; their job is just to compile SL source or SL VM bytecode into a form the computer (GPU) can understand. Then comes the important job: the rasterizer/synthesizer inside the GPU driver executes those shaders and produces graphics. The compilers communicate many times with the rasterizer (the compiler sends data and gets an answer back). The thing is that when you don't have a D3D rasterizer inside your GPU driver, you can only install and emulate D3D. Someone who uses two different rasterizers needs emulation, while someone who uses one unified rasterizer can disable this emulation. And that's what makes me feel bad: with a 1% tweak to a GPU's rasterizer you could run D3D bytecode through the OpenGL driver, or at least accelerate the emulation to 80%+ efficiency. They don't do it because they are a cartel (mafia) with Microsoft. Think about it: why do those two companies help Microsoft even now to develop D3D? Why do they help with consoles when they could sell more cards if all games were for PC? Why did they lock graphics inside ASICs, and only now start talking about compute shaders (if/else style) and fusion? A CPU with 5-10 optional 3D instructions and bit-wise operations gets 50-70% of the FPS per flop of a GPU, and with compute shaders they are equal (Larrabee tested against a GTX 280). Meanwhile a CPU can have a 512-bit FMAC, 2.5 DMIPS/MHz interface with 1-2 million transistors per unit and 1.5 million transistors of L1 (proven by OpenCores). I believe Intel can help the situation a lot.
          What the hell are you talking about? OGL and D3D are not compatible in their shading languages and some other things. Sure, you can accomplish mostly the same in both, but it takes different paths and options. D3D is implemented with a specific driver architecture in mind, while OGL has no such limitation.

          Larrabee was a failure as far as I remember. Also, the main CPU bottleneck is memory access. More importantly, GPUs are single-purpose hardware: they are meant to execute a very small instruction set with specific limits (branching, loops etc.). You cannot compare CPUs and GPUs on efficiency. Have a look at UVD versus any modern CPU for video acceleration as an example of a fixed-function unit against a general-purpose one.

          You are reading too much into conspiracies.

          Comment


          • #15
            Originally posted by haplo602 View Post
            you can find lots of NVIDIA preferential treatment in arb_program_shader.c, examples:

            Code:
                /* Always enable the NV extension if available. Unlike fragment shaders, there is no
                 * mesurable performance penalty, and we can always make use of it for clipplanes.
                 */
                if (gl_info->supported[NV_VERTEX_PROGRAM3])
                {
                    shader_addline(buffer, "OPTION NV_vertex_program3;\n");
                    priv_ctx.target_version = NV3;
                    shader_addline(buffer, "ADDRESS aL;\n");
            Code:
                enum
                {
                    /* plain GL_ARB_vertex_program or GL_ARB_fragment_program */
                    ARB,
                    /* GL_NV_vertex_progam2_option or GL_NV_fragment_program_option */
                    NV2,
                    /* GL_NV_vertex_program3 or GL_NV_fragment_program2 */
                    NV3
                } target_version;
            I did not go deep into the code, just a couple of searches, but you either get plain ARB_vertex/fragment program (meaning ATI/AMD) or NV_* extensions on Nvidia. I guess that if you delete all the NV_* handling, it will break the same as on ATI/AMD.
            yes, and that is one of the reasons why the GLSL backend is now the default and you should NOT disable it.
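
            For anyone wondering which switch we are actually talking about: as far as I remember it is the UseGLSL string value under Wine's Direct3D registry key (the name may have changed in newer Wine releases), i.e. something like this .reg snippet:
            Code:
                REGEDIT4

                ; Setting this to "disabled" is what forces the ARB shader backend.
                ; Delete the value (or set it to "enabled") to get the default GLSL path back.
                [HKEY_CURRENT_USER\Software\Wine\Direct3D]
                "UseGLSL"="disabled"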

            Comment


            • #16
              Originally posted by haplo602 View Post
              What the hell are you talking about? OGL and D3D are not compatible in their shading languages and some other things. Sure, you can accomplish mostly the same in both, but it takes different paths and options. D3D is implemented with a specific driver architecture in mind, while OGL has no such limitation.

              Larrabee was a failure as far as I remember. Also, the main CPU bottleneck is memory access. More importantly, GPUs are single-purpose hardware: they are meant to execute a very small instruction set with specific limits (branching, loops etc.). You cannot compare CPUs and GPUs on efficiency. Have a look at UVD versus any modern CPU for video acceleration as an example of a fixed-function unit against a general-purpose one.

              You are reading too much into conspiracies.

              The point is that a tweaked OpenGL driver could directly run HLSL bytecode, or at least be fast during emulation. You must know that we don't lose much when we translate HLSL bytecode to GLSL (that happens on the CPU). Even if a game does that many times, we could always do it statically (if the GPU company wanted), producing a 1-2 GB file inside a home folder. The problem is that after the translation, the resulting GLSL isn't as good as GLSL written for an OpenGL game in the first place. They must tweak their OpenGL drivers for this job.

              As for the other thing, have you even seen shader source before? Shaders are written in a C-like dialect, but with limitations and distorted systems and algorithms. Distorted = where in C you have "TALE", for example (one digit goes in from the front and another comes out from the back), a "distorted TALE" can be one digit going in from the front and two coming out from the back. That's why a C compiler can't compile shaders and a shader compiler can't compile C programs. Also, a GPU can't run C programs (except some fusion) and a CPU can't run shaders (those software rasterizers are shader emulators at the same time). The bottom line is that graphics are really much more efficient when they are ASIC'd, because today's games have many different things to calculate. ASICs are good for only one algorithm (like face identification); compute shaders with OpenCL (if/else style) are good. The situation we are in now exists for other reasons.
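
              To make the translation part above concrete, here is a toy example (hand-written, not actual wined3d output) of the kind of thing being translated: a trivial D3D9-level pixel shader and a rough GLSL 1.20 equivalent.
              Code:
                  // D3D9-style HLSL (ps_2_0 level), the kind of shader a game ships:
                  //
                  //   sampler2D tex0;
                  //   float4 main(float2 uv : TEXCOORD0) : COLOR
                  //   {
                  //       return tex2D(tex0, uv) * float4(1.0, 0.5, 0.5, 1.0);
                  //   }
                  //
                  // A rough GLSL 1.20 (GL 2.1) equivalent a translator could emit:
                  #version 120
                  uniform sampler2D tex0;
                  varying vec2 uv;
                  void main(void)
                  {
                      gl_FragColor = texture2D(tex0, uv) * vec4(1.0, 0.5, 0.5, 1.0);
                  }
              The real thing starts from the compiled bytecode rather than the HLSL source and has to drag fixed-function and sampler state along with it, which is where the "not as good as hand-written GLSL" part comes from.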

              Comment


              • #17
                Originally posted by .CME. View Post
                yes, and that is one of the reasons why the GLSL backend is now the default and you should NOT disable it.
                well the original point was Nvidia preferential treatment by Wine and not what's the default :-)

                Comment


                • #18
                  Originally posted by artivision View Post
                  The point is that a tweaked OpenGL driver could directly run HLSL bytecode, or at least be fast during emulation. You must know that we don't lose much when we translate HLSL bytecode to GLSL (that happens on the CPU). Even if a game does that many times, we could always do it statically (if the GPU company wanted), producing a 1-2 GB file inside a home folder. The problem is that after the translation, the resulting GLSL isn't as good as GLSL written for an OpenGL game in the first place. They must tweak their OpenGL drivers for this job.

                  As for the other thing, have you even seen shader source before? Shaders are written in a C-like dialect, but with limitations and distorted systems and algorithms. Distorted = where in C you have "TALE", for example (one digit goes in from the front and another comes out from the back), a "distorted TALE" can be one digit going in from the front and two coming out from the back. That's why a C compiler can't compile shaders and a shader compiler can't compile C programs. Also, a GPU can't run C programs (except some fusion) and a CPU can't run shaders (those software rasterizers are shader emulators at the same time). The bottom line is that graphics are really much more efficient when they are ASIC'd, because today's games have many different things to calculate. ASICs are good for only one algorithm (like face identification); compute shaders with OpenCL (if/else style) are good. The situation we are in now exists for other reasons.
                  Stop talking nonsense. Shader languages are C-like just because it is convenient and many programmers adapt to C-style syntax very well. If the prevalent language were BASIC, then shaders would have BASIC-like syntax. There's no magic behind that.

                  As to the bytecode translation, I do agree in part.

                  As for CPU vs GPU programs: CPUs are general-purpose architectures, they are not meant to execute SIMD streams in quantity, and they lack the HW resources for it. GPUs are SIMD-tailored engines; that's why they are not efficient with loops, branching and out-of-order execution. They are tools for different jobs.

                  The situation we are in now exists because of evolution. Let's see where AMD's HSA future leads.
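
                  To make the branching point above concrete, here is a toy fragment shader (a made-up example, nothing from a real game). When the pixels in one warp/wavefront disagree on the condition, the GPU generally ends up running both sides with part of the results masked off, which is a kind of cost a CPU core does not have.
                  Code:
                      #version 120
                      uniform float threshold;
                      varying float brightness;
                      void main(void)
                      {
                          // If neighbouring pixels land on different sides of the test,
                          // the hardware typically executes both branches and masks the results.
                          if (brightness > threshold)
                              gl_FragColor = vec4(1.0, 1.0, 1.0, 1.0); // path A
                          else
                              gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0); // path B
                      }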

                  Comment


                  • #19
                    Wine's D3D layer is really bad, and it's not an issue caused by the AMD drivers; the Wine developers don't stick to the OGL specification very closely. I think these guys tested the D3D layer only under NV drivers. As anyone should know, NV drivers are really lax about the OGL specification: they run even badly written code, and that's not good, because it causes problems on other drivers. Indie developers don't have much cash to buy different GPUs just for testing code, and that's why NV sucks as a developer platform if you aren't an expert in the OGL specification, because there is a higher possibility that the code you write is buggy.
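
                    A classic illustration (a made-up snippet, but the kind of thing people actually hit): under a strict GLSL 1.10 compiler the implicit int-to-float conversion below is, as far as I know, an error, while the NV compiler has traditionally let it slide, so the bug only shows up once the game reaches AMD/Mesa users.
                    Code:
                        #version 110
                        void main(void)
                        {
                            float alpha = 1; // out of spec in GLSL 1.10: needs 1.0, no implicit int->float conversion
                            gl_FragColor = vec4(1.0, 1.0, 1.0, alpha);
                        }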
                    Last edited by nadro; 08 February 2013, 09:23 AM.

                    Comment


                    • #20
                      Originally posted by haplo602 View Post
                      Stop talking nonsense. Shader languages are C-like just because it is convenient and many programmers adapt to C-style syntax very well. If the prevalent language were BASIC, then shaders would have BASIC-like syntax. There's no magic behind that.

                      As to the bytecode translation, I do agree in part.

                      As for CPU vs GPU programs: CPUs are general-purpose architectures, they are not meant to execute SIMD streams in quantity, and they lack the HW resources for it. GPUs are SIMD-tailored engines; that's why they are not efficient with loops, branching and out-of-order execution. They are tools for different jobs.

                      The situation we are in now exists because of evolution. Let's see where AMD's HSA future leads.

                      RISC CPUs are SIMD- and MIMD-tailored engines. The difference is that GPUs, apart from their special units, have CPU-like shader cores with extra instructions. The difference between the MIPS-3D and Nvidia 3D subsets, for example, is that general 3D subsets of 5-10 instructions are made to accelerate general colors, textures and 3D objects of any kind, while special 3D subsets are part of ASICs and are used to accelerate special algorithms and special systems that are not part of any standard language specification. Only the dialect is the same as C; a shader program cannot be compiled with a C compiler. Today's graphics are ASIC'd, and that's all. Modern APIs like OpenGL 4.3 have compute shaders, and they can help someone write an OpenCL-style game that only requires the compute shader compiler frontend, uses the CPU at the same time as the GPU, and is faster and more beautiful on both, with to-the-metal access on all PUs.
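
                      Since compute shaders came up: for what it's worth, a minimal GL 4.3 compute shader looks like this (just a sketch; the buffer setup and dispatch on the C side are omitted).
                      Code:
                          #version 430
                          layout(local_size_x = 64) in;
                          layout(std430, binding = 0) buffer Data { float values[]; };
                          void main(void)
                          {
                              uint i = gl_GlobalInvocationID.x;
                              values[i] = values[i] * 2.0; // trivial data-parallel kernel
                          }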

                      Comment
