Gaming On Wine: The Good & Bad Graphics Drivers


  • #16
    Originally posted by haplo602 View Post
    What the hell are you talking about? OGL and D3D are not compatible in their shading languages and some other things. Sure, you can accomplish mostly the same in both, but it takes different paths and options. D3D is implemented with a specific driver architecture in mind, while OGL has no such limitation.

    Larrabee was a failure as far as I remember. Also, the main CPU bottleneck is memory access. More importantly, GPUs are single-purpose hardware: they are meant to execute a very small instruction set, and with specific limits (branching, loops, etc.). You cannot compare CPUs and GPUs on efficiency. Have a look at UVD vs any modern CPU for video acceleration as an example of a fixed-function unit against a general-purpose one.

    You are reading too much into conspiracies.

    The point is that a tweaked OpenGL driver can directly run HLSL bytecode, or at least be fast during emulation. You should know that we don't lose much when we transform HLSL bytecode to GLSL (that happens on the CPU). Even if a game does that many times, we can always do it statically (if the GPU company wants), producing a 1-2 GB file inside the home folder. The problem is that after the translation, the resulting GLSL isn't as good as GLSL that comes from a native OpenGL game. They must tweak their OpenGL drivers for this job.

    As for the other thing, have you ever seen shader source before? Shaders are written in a C-like dialect, but with limitations and distorted constructs and algorithms. Distorted = where C has, say, a "TALE" operation (one digit leaves the front and another enters from the back), a "distorted TALE" can have one digit leave the front and two enter from the back. That's why a C compiler can't compile shaders and a shader compiler can't compile C programs. Also, a GPU can't run C programs (except some Fusion parts) and a CPU can't run shaders (those software rasterizers are shader emulators at the same time). The bottom line is that graphics are much more efficient when ASIC'd, because today's games have many different things to calculate. ASICs are good for only one algorithm (like face identification); compute shaders with OpenCL (the if/else kind) are good. The situation we are in now exists for other reasons.
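
    For anyone who hasn't seen what that "C-like dialect" actually looks like, here is a minimal sketch (my own illustration, not Wine's code) of a trivial GLSL fragment shader and the standard GL calls an application (or a D3D-to-GL translator like Wine's) has to make before the driver's compiler ever runs. It assumes an existing OpenGL context and a loader like GLEW; the helper name is made up, the GL calls are the real API.

    Code:
    /* Trivial GLSL fragment shader: C-like syntax, but no pointers, no
     * recursion, and built-in vector types such as vec4. */
    #include <GL/glew.h>
    #include <stdio.h>

    static const char *frag_src =
        "#version 120\n"
        "uniform vec4 color;\n"
        "void main(void)\n"
        "{\n"
        "    gl_FragColor = color;\n"
        "}\n";

    GLuint compile_fragment_shader(void)
    {
        GLuint sh = glCreateShader(GL_FRAGMENT_SHADER);
        GLint ok = GL_FALSE;
        char log[1024];

        glShaderSource(sh, 1, &frag_src, NULL); /* hand the text to the driver */
        glCompileShader(sh);                    /* driver front end + back end run here */
        glGetShaderiv(sh, GL_COMPILE_STATUS, &ok);
        if (!ok)
        {
            glGetShaderInfoLog(sh, sizeof(log), NULL, log);
            fprintf(stderr, "GLSL compile failed: %s\n", log);
        }
        return sh;
    }

    Whether the GLSL text was hand-written or produced by translating HLSL bytecode, this is where the CPU-side cost lands, which is why the quality of the driver's GLSL compiler matters so much here.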

    Comment


    • #17
      Originally posted by .CME. View Post
      Yes, and that is one of the reasons why the GLSL backend is now the default and you should NOT disable it.
      Well, the original point was Nvidia getting preferential treatment from Wine, not what the default is :-)

      Comment


      • #18
        Originally posted by artivision View Post
        The point is that a tweaked OpenGL driver can directly run HLSL bytecode, or at least be fast during emulation. You should know that we don't lose much when we transform HLSL bytecode to GLSL (that happens on the CPU). Even if a game does that many times, we can always do it statically (if the GPU company wants), producing a 1-2 GB file inside the home folder. The problem is that after the translation, the resulting GLSL isn't as good as GLSL that comes from a native OpenGL game. They must tweak their OpenGL drivers for this job.

        As for the other thing, have you ever seen shader source before? Shaders are written in a C-like dialect, but with limitations and distorted constructs and algorithms. Distorted = where C has, say, a "TALE" operation (one digit leaves the front and another enters from the back), a "distorted TALE" can have one digit leave the front and two enter from the back. That's why a C compiler can't compile shaders and a shader compiler can't compile C programs. Also, a GPU can't run C programs (except some Fusion parts) and a CPU can't run shaders (those software rasterizers are shader emulators at the same time). The bottom line is that graphics are much more efficient when ASIC'd, because today's games have many different things to calculate. ASICs are good for only one algorithm (like face identification); compute shaders with OpenCL (the if/else kind) are good. The situation we are in now exists for other reasons.
        Stop talking nonsense. Shader languages are C-like just because it is convenient and many programmers adapt to C-style syntax very well. If the prevalent language were BASIC, then shaders would have BASIC-like syntax. There's no magic behind that.

        As to the bytecode translation, I do agree in part.

        CPU vs GPU programs: CPUs are general-purpose architectures; they are not meant to execute SIMD streams in quantity, and they lack the HW resources. GPUs are SIMD-tailored engines; that's why they are not efficient with loops, branching and out-of-order execution. Both are tools for different jobs.

        The situation we are in now exists because of evolution. Let's see where AMD's HSA future leads.
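
        To make the branching point concrete, a small sketch in plain C (nothing driver-specific; the function names are mine):

        Code:
        #include <stddef.h>

        /* Branchy form: fine on an out-of-order CPU with a branch predictor. */
        void clamp_branchy(float *x, size_t n, float limit)
        {
            for (size_t i = 0; i < n; i++)
                if (x[i] > limit)
                    x[i] = limit;
        }

        /* Branchless "select" form: closer to what a SIMD engine actually does,
         * i.e. compute the condition per element and pick a value, no jump. */
        void clamp_select(float *x, size_t n, float limit)
        {
            for (size_t i = 0; i < n; i++)
                x[i] = (x[i] > limit) ? limit : x[i];
        }

        On wide SIMD hardware every lane in a group steps through the same instructions, so data-dependent branches tend to get turned into masked selects like the second version anyway; deeply nested or divergent branching is where the efficiency goes away.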

        Comment


        • #19
          Wine's D3D layer is really bad, and it's not an issue caused by AMD drivers; the Wine developers don't follow the OpenGL specification very closely. I think these guys tested the D3D layer only under NVIDIA drivers (as anyone should know, NVIDIA drivers are really lax about the OpenGL specification: they run even badly written code, and that's not good, because it causes problems on other drivers; indie developers don't have much cash to buy different GPUs just for testing their code, which is why NVIDIA sucks as a development platform unless you're an expert in the OpenGL specification, because there's a higher chance the code you wrote is buggy).
          Last edited by nadro; 02-08-2013, 08:23 AM.
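
          To put a concrete face on "runs even badly written code", a hedged example from memory (not taken from Wine or any particular game): if I remember right, the 'f' suffix on float literals only became legal around GLSL 1.30, yet Nvidia's compiler traditionally accepted it in older versions, so shaders like this compiled fine there and then failed on stricter drivers.

          Code:
          /* A #version 110 vertex shader that a strict compiler should reject
           * because of the 'f' suffix, while Nvidia's lenient compiler
           * historically let it through. Illustration only. */
          static const char *sloppy_vs =
              "#version 110\n"
              "void main(void)\n"
              "{\n"
              "    float scale = 2.0f; /* 'f' suffix: not valid in GLSL 1.10 */\n"
              "    gl_Position = gl_Vertex * scale;\n"
              "}\n";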

          Comment


          • #20
            Originally posted by haplo602 View Post
            Stop talking nonsense. Shader languages are C-like just because it is convenient and many programmers adapt to C-style syntax very well. If the prevalent language were BASIC, then shaders would have BASIC-like syntax. There's no magic behind that.

            As to the bytecode translation, I do agree in part.

            CPU vs GPU programs: CPUs are general-purpose architectures; they are not meant to execute SIMD streams in quantity, and they lack the HW resources. GPUs are SIMD-tailored engines; that's why they are not efficient with loops, branching and out-of-order execution. Both are tools for different jobs.

            The situation we are in now exists because of evolution. Let's see where AMD's HSA future leads.

            RISC CPUs are SIMD- and MIMD-tailored engines. The difference is that GPUs, besides their special units, have CPU-like shader cores with extra instructions. The difference between the MIPS-3D and Nvidia 3D subsets, for example, is that general 3D subsets of 5-10 instructions are made to accelerate general colors, textures and 3D objects of any kind, while special 3D subsets are part of ASICs and are used to accelerate special algorithms and special systems that are not part of any standard language specification. Only the dialect is the same as C; a shader program cannot be compiled with a C compiler. Today's graphics are ASIC'd, and that's all. Modern APIs like OpenGL 4.3 have compute shaders, and they can help someone write an OpenCL-style game that only requires the compute shader compiler front end, uses the CPU at the same time as the GPU, and is faster and more beautiful on both, using to-the-metal access on all PUs.
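
            For reference, the GL 4.3 compute path mentioned above looks roughly like this; a sketch that assumes a 4.3 context, a loader such as GLEW, and a program already built from the shader string (the buffer name, the workgroup size of 64 and the doubling kernel are all made up):

            Code:
            #include <GL/glew.h>

            /* Compute shader: one invocation per array element, GLSL 4.30. */
            static const char *cs_src =
                "#version 430\n"
                "layout(local_size_x = 64) in;\n"
                "layout(std430, binding = 0) buffer Data { float v[]; };\n"
                "void main(void)\n"
                "{\n"
                "    uint i = gl_GlobalInvocationID.x;\n"
                "    v[i] = v[i] * 2.0;\n"
                "}\n";

            void run_compute(GLuint program, GLuint ssbo, GLuint num_elements)
            {
                glUseProgram(program);                               /* program built from cs_src */
                glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, ssbo); /* buffer the shader reads/writes */
                glDispatchCompute(num_elements / 64, 1, 1);          /* assumes a multiple of 64 */
                glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT);      /* make the writes visible */
            }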

            Comment


            • #21
              Am I the only one who doesn't have issues with AMD in the whole Linux world? lol. I seriously don't get the problems so many people have. It's only in some rare cases that I have problems with some games, but I usually get them fixed pretty fast. I think you all set up your Catalyst drivers wrong.

              Comment


              • #22
                I know everyone is taking away this or that from the video, but was I the only one who noticed when he said 10% improvement just by using 64-bit? Come the fuck on, Valve and others (HiB): 32-bit is dying, please oh please, 64-bit please!!!!

                I play Skyrim with AMD and have no problems; within 10 minutes on a similarly specced system with an Nvidia Fermi card, it tanked the system... Not championing AMD (driver issues since the ATI days), just saying shit falls both ways, so unless you have some meaningful comparisons or charts, please shut the fuck up.

                Originally posted by totex71 View Post
                Am I the only one who doesn't have issues with AMD in the whole Linux world? lol. I seriously don't get the problems so many people have. It's only in some rare cases that I have problems with some games, but I usually get them fixed pretty fast. I think you all set up your Catalyst drivers wrong.
                I have been mostly lucky; everything including HDMI sound out works great... Audio input is a bit sketchy (apparently buffer underruns) on two of my systems with the same onboard card, one using ALSA and the other PulseAudio. (Not to hold this against AMD per se, I think it's the Intel audio, hehe)
                Last edited by nightmarex; 02-08-2013, 09:10 AM.

                Comment


                • #23
                  Originally posted by Ragas View Post
                  Why did I already know this would come down to AMD bashing when I just read the title?
                  Hehehe... Indeed.
                  Also, what his tests show is that performance with AMD drivers in WINE is bad (or rather, worse than with Nvidia), so I'd be really interested in knowing how they came to the conclusion that Catalyst was at fault, rather than WINE or anything else. Do they develop and test on GeForce, call it a day and blame AMD for any problem?
                  Dammit, I should have bookmarked the WINE Bugzilla pages where Henri Verbeet acknowledged WINE was doing stuff outside the OpenGL spec, after the culprit was first thought to be Catalyst, before proper investigation.

                  Originally posted by xpander
                  They do support us with good drivers; yeah, not open source, but who cares, if the end result is what matters.
                  What matters is a balance between your short-term personal convenience and everybody else's long-term sustainable benefit, actually. Let that sink in for a bit.
                  Mandatory xkcd link.

                  Comment


                  • #24
                    Originally posted by artivision View Post
                    What the hell are you talking about, you and some others here? D3D and OGL are compilers; their job is just to compile SL source or SL VM bytecode to a form that a computer (GPU) can understand.
                    The compiler is one part of the driver. The compilers translate shader source in GLSL or HLSL to GPU hardware ISA, but the driver handles the rest of the work, i.e. the stream of state-change and drawing commands, getting work to the hardware and getting results back. The compiler is a separate piece of code called by the driver only when it encounters an API command to compile a shader.

                    Originally posted by artivision View Post
                    Then comes the important job: the rasterizer/synthesizer inside the GPU driver executes those shaders and produces graphics. Compilers communicate many times with the rasterizer (the compiler sends data and gets an answer back).
                    Actually no -- the compiler is not involved with actual drawing operations. The compiler generates code which is run on the shader cores, and that code in turn runs when the driver tells the GPU hardware to (for example) run the vertex shader on each element in an array of vertices, reassemble the vertices into triangles, and then generate pixels from the triangles and run the pixel shader on each pixel in a triangle.

                    Originally posted by artivision View Post
                    The thing is that when you don't have the D3D rasterizer inside your GPU driver, you can only install and emulate D3D. One approach uses two different rasterizers and needs emulation, while another uses one unified rasterizer and you can disable this emulation.
                    The front end of shader compilers is pretty much always totally different between D3D and OpenGL. The back end (optimization and code gen) is usually common, with an IR in between. For example, Catalyst uses AMDIL as the common IR, while the open source drivers use TGSI.

                    Originally posted by artivision View Post
                    And that's what makes me feel bad: with a 1% tweak to a GPU's rasterizer you could run D3D bytecode through the OpenGL driver, or at least accelerate it during emulation with 80%+ efficiency. They don't do it because they are a cartel (mafia) with Microsoft. Think about it: why do those two companies help Microsoft even now to develop D3D? Why do they help with consoles while they could sell more cards if all games were for PC?
                    ??? The GPU's hardware doesn't know or care what API is being used. The driver translates API-specific operations into a series of hardware commands, and the compiler translates API-specific shader programs into shader hardware ISA. It's been 100% common from the first programmable GPU as far as I know.
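
                    Seen from the application side, that split (the compiler runs only on the compile/link calls, drawing never touches it) looks roughly like this; a sketch only, with GLEW and an existing context assumed and error checking left out:

                    Code:
                    #include <GL/glew.h>

                    static GLuint program; /* built once from GLSL source */

                    void init_once(const char *vs_src, const char *fs_src)
                    {
                        /* The driver's shader compiler only runs inside these calls... */
                        GLuint vs = glCreateShader(GL_VERTEX_SHADER);
                        GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
                        glShaderSource(vs, 1, &vs_src, NULL);
                        glCompileShader(vs);
                        glShaderSource(fs, 1, &fs_src, NULL);
                        glCompileShader(fs);
                        program = glCreateProgram();
                        glAttachShader(program, vs);
                        glAttachShader(program, fs);
                        glLinkProgram(program); /* ...and here, where final code generation can happen. */
                    }

                    void draw_frame(GLuint vao, GLsizei vertex_count)
                    {
                        /* Per-frame work is pure state changes and draw commands; the
                         * compiler is not involved, the GPU just runs the ISA that was
                         * generated at compile/link time. */
                        glUseProgram(program);
                        glBindVertexArray(vao);
                        glDrawArrays(GL_TRIANGLES, 0, vertex_count);
                    }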

                    Comment


                    • #25
                      Originally posted by haplo602 View Post
                      you can find lots of NVIDIA preferential treatment in arb_program_shader.c, examples
                      It's not Nvidia's fault that AMD Catalyst was and is a terrible mess. I had an AMD card, but it was only usable with the open-source graphics driver.

                      Comment


                      • #26
                        I was interested in the side chat in the talk. The worker threads, when implemented, sound like they will help performance a lot (especially for StrictOrdering)... Not surprising to hear that DX11 support will take another 10 years (+ some Google "Summer of Code" projects) to implement!

                        I was hoping for a bit more in-depth discussion to clarify some points, but obviously the talk was not designed to cater to "noobs" like myself. I presume memory pressure refers to exceeding VRAM limits (rather than memory bandwidth bottlenecks)?

                        Yeah, great talk overall!! Nice to have the video...

                        Comment


                        • #27
                          "Nvidia, the way it's meant to be played." comes to mind. I am also not surprised if they worked for a long time with the nvidia blob only that it works best there. Of course other drivers aren't perfect but I think the Wine people could also do their share.

                          Comment


                          • #28
                            The Catalyst beta seems to be working fine on my A10-5800K. Been playing CS: Source, TF2, Xonotic, HoN and SuperTuxKart without any issues. There are a few non-3D-related quirks besides, like the blasted watermark and the default underscan setting.

                            I just may consider nVidia in the future, though.

                            Comment


                            • #29
                              @PsynoKhi0

                              I would not worry much if fglrx were just slow. But the driver crashes with some games.

                              Comment


                              • #30
                                ATI drivers still suck

                                Originally posted by nightmarex View Post
                                I play Skyrim with AMD and have no problems; within 10 minutes on a similarly specced system with an Nvidia Fermi card, it tanked the system... Not championing AMD (driver issues since the ATI days), just saying shit falls both ways, so unless you have some meaningful comparisons or charts, please shut the fuck up.
                                I play Skyrim with a GTX 460 for hours on end (until I fall asleep) just fine. What you describe might happen if you forgot to modify the Skyrim binary to be "Large Address Aware", which you only have to do once on older versions. This patch has been official since at least Dec 2011, so users on the Steam version would not notice.
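
                                For anyone curious what the "Large Address Aware" patch actually does: it flips one flag (IMAGE_FILE_LARGE_ADDRESS_AWARE, 0x0020) in the exe's PE header so the 32-bit process may use more than 2 GB of address space. A rough sketch of the mechanism, my own code with no error handling; on a real install use the official patch or one of the LAA tools:

                                Code:
                                #include <stdio.h>
                                #include <stdint.h>

                                /* Sets IMAGE_FILE_LARGE_ADDRESS_AWARE on a 32-bit PE executable.
                                 * Assumes a little-endian host. Returns 0 on success. */
                                int set_laa(const char *path)
                                {
                                    FILE *f = fopen(path, "r+b");
                                    uint32_t pe_off;
                                    uint16_t flags;

                                    if (!f) return -1;
                                    fseek(f, 0x3c, SEEK_SET);            /* e_lfanew: offset of "PE\0\0" */
                                    fread(&pe_off, 4, 1, f);
                                    fseek(f, pe_off + 4 + 18, SEEK_SET); /* COFF header Characteristics */
                                    fread(&flags, 2, 1, f);
                                    flags |= 0x0020;                     /* IMAGE_FILE_LARGE_ADDRESS_AWARE */
                                    fseek(f, -2, SEEK_CUR);
                                    fwrite(&flags, 2, 1, f);
                                    fclose(f);
                                    return 0;
                                }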

                                It is a long-known fact: ATI = problems for Wine. With ATI you are very lucky if games work out of the box, while with Nvidia it is rather common. The ATI experience on Linux is very poor, and I see Intel far more committed to the community, so support should go there instead. I would never buy a system with ATI, but an Ivy Bridge system next year should probably work well.

                                It's been many years and AMD has failed to deliver solid GPU support (even Windows users complain); that is a fact. Linux is rather low on their list of priorities, next to "don't care". You are very lucky if you find a select model with a select Catalyst version and select versions of Xorg, etc. to make ATI work with select games, as opposed to Nvidia's "most things just work out of the box" experience.

                                Comment
