R600 Open-Source Driver With GLSL, OpenGL 2.0

  • Originally posted by Qaridarium
    Really, OpenGL in Wine is not as shiny as in theory...
    There are Windows-only OpenGL extensions!
    Example: in WoW's OpenGL mode you can't see the mini-map.

    Only DirectX mode gets the full game working!

    I'm sorry, Kano, but Windows games use Windows-only extensions in OpenGL mode... useless for Wine!
    In OpenGL mode WoW renders the map using pbuffers, and at this point only the closed-source Nvidia and AMD drivers support those. Blizzard should have updated this code a long time ago to use FBOs, but they didn't.
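
    For reference, here is a minimal sketch of the FBO route (plain GL 3.x-style C; error handling and context setup omitted; an illustration, not Blizzard's or Wine's actual code): it renders to a texture without needing a pbuffer at all.

    Code:
    GLuint fbo, tex;

    /* colour texture that will receive the mini-map */
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 512, 512, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

    /* framebuffer object that renders into that texture */
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, tex, 0);

    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE) {
        /* draw the mini-map here, then rebind the default framebuffer
         * and sample 'tex' when compositing the UI */
    }
    glBindFramebuffer(GL_FRAMEBUFFER, 0);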



    • Qaridarium, I really don't understand what your point is. It is obvious that emulating HLSL shaders via GLSL is always going to take more resources than the real thing. In some cases, the difference may be small enough not to matter. In others, you are going to need to upgrade your hardware before you can get decent performance. Finally, there may be some cases that you cannot emulate correctly no matter how hard you try (it's impossible to emulate geometry shaders without EXT_geometry_shader4; it's impossible to emulate hull/tessellation shaders in OpenGL right now).

      Why are you acting so surprised at this? If you want native performance, play a native game. If you want to emulate a game, you need to be aware that you'll get lower performance and compatibility.



      • Originally posted by Qaridarium
        That's wrong! Wine wins tons of benchmarks!
        Wine wins in 3DMark2000 and 3DMark2001!!!
        These do not use HLSL. They are entirely different beasts.

        You have the wrong idea about the HLSL-to-GLSL bridge:
        there is no need to translate it all the time!

        Only the game start is slower!

        After that, the fully translated GLSL code is loaded onto the card and runs nonstop.
        In theory there is no speed loss, and you can even do optimizations...
        You can handle DX8 code in DX10/DX11 style...
        You translate the code once, when the shader is compiled, but the translated code carries runtime overhead: extra instructions to paper over the API differences. In the worst case, that overhead can overflow the capabilities of your card, meaning the resulting shader will not run or will fall back to software emulation.

        Meaning you might need a newer card to run old code through Wine, when an older card would have sufficed with native D3D.
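
        To make the "translate once" part concrete, here is a hypothetical sketch (not wined3d's actual code; GLEW is assumed only as a convenient way to get the GL 2.0 entry points): the expensive bytecode-to-GLSL translation and GL compile happen a single time per shader, and later draw calls just reuse the cached program, so the per-frame cost is only whatever extra instructions the translator had to emit.

        Code:
        #include <stddef.h>
        #include <GL/glew.h>

        /* Hypothetical translate-once cache; illustration only. */
        typedef struct {
            const void *d3d_bytecode;  /* key: the game's D3D shader blob */
            GLuint      program;       /* value: linked GLSL program      */
        } cached_shader;

        /* Stand-in for the real D3D-bytecode/HLSL -> GLSL translator. */
        extern const char *translate_to_glsl(const void *d3d_bytecode);

        GLuint lookup_or_translate(cached_shader *cache, size_t *count,
                                   const void *d3d_bytecode)
        {
            for (size_t i = 0; i < *count; i++)
                if (cache[i].d3d_bytecode == d3d_bytecode)
                    return cache[i].program;          /* hit: no translation */

            /* miss: pay the translation + compile cost exactly once */
            const char *src = translate_to_glsl(d3d_bytecode);
            GLuint vs = glCreateShader(GL_VERTEX_SHADER);
            glShaderSource(vs, 1, &src, NULL);
            glCompileShader(vs);

            GLuint prog = glCreateProgram();
            glAttachShader(prog, vs);
            glLinkProgram(prog);

            cache[*count].d3d_bytecode = d3d_bytecode;
            cache[*count].program      = prog;
            (*count)++;
            return prog;
        }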

        A DX9-based game runs well on an X1950... but the same game loses in Wine on this card...
        But a much slower card like the HD 4350 or 54xx can "win",
        because Wine translates the old code into new OpenGL 3.2-style code...

        Much better texture compression saves memory bandwidth and brings more FPS!
        Win in support yes (see above). Win in speed not really, at least not with these specific cards you quoted.

        What the...?!

        "EXT_geometry_shader4" is an Nvidia-only extension, but OpenGL 3.2 does not need it, because OpenGL 3.2 has geometry shaders built in!
        Oh, please.

        Code:
        $ glxinfo
        [...]
        OpenGL renderer string: ATI Radeon HD 4800 Series
        OpenGL version string: 3.2.9232
        [...]
        , GL_EXT_geometry_shader4,
        This is on my Ati 4850 with 9.12 drivers.

        You can also emulate a 'tessellation shader', thanks to the AMD OpenGL extensions! ...
        DX11-level tessellation works differently from Ati's DX10-level tessellation hardware. It's close but not identical, and all discussions I've read on this indicate that these extensions can't be used to emulate DX11-level tessellation. Feel free to prove me wrong, though.

        You don't get the point of Wine... Wine isn't an emulator...

        There is no emulator!...

        Wine also does not emulate HLSL shader code... Wine is a compiler!
        Wine is a shader compiler that compiles old shaders into new-style shaders,
        compiling HLSL shaders into GLSL shaders...

        There is no emulator! Native hardware speed! NO emulator!
        Yeah, right: Wine is not an emulator because it recompiles HLSL code to GLSL. I guess PCSX2 is not an emulator either, then? Hey, it recompiles MIPS code into x86!



        • Q, BlackStar is saying that when Wine translates shaders it often has to insert additional instructions into the shader code, and it's those additional instructions that could slow down execution relative to running natively on Windows.

          If you reply with "but 3DMarkxxx is faster so that's not true", I'm going to vote for a ban.
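
          For a sense of what those inserted instructions look like, here is an illustrative guess (not wined3d's real output; "pos_fixup" is a hypothetical uniform): D3D and GL disagree on the window-space Y origin and, for D3D9, on pixel centres, so a translated vertex shader typically ends with a small fixup like this.

          Code:
          /* Illustrative epilogue a D3D9->GLSL translator might append to a
           * vertex shader (GLSL shown as a C string).  'pos_fixup' would hold
           * the Y-flip sign and the half-pixel offsets for the current
           * render target. */
          static const char *translated_vs_epilogue =
              "    /* ...translated shader body above... */          \n"
              "    gl_Position.y   = gl_Position.y * pos_fixup.y;    \n"
              "    gl_Position.xy += pos_fixup.zw * gl_Position.ww;  \n"
              "}\n";

          A couple of extra multiply-adds per vertex is cheap on its own; the concern is shaders where many such compatibility fixups accumulate, or where they have to run per fragment.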



          • Originally posted by bridgman View Post
            Q, BlackStar is saying that when Wine translates shaders it often has to insert additional instructions into the shader code, and it's those additional instructions that could slow down execution relative to running natively on Windows.
            But isn't the point of all the OpenGL 3.2 "Wine extensions" to obviate the need to do this?



            • Originally posted by Qaridarium
              Long time ago I tested this... X850 vs HD 4350...

              Theoretically the X850 is much faster: more shader power, more memory bandwidth...

              but in Wine the HD 4350 is over 30% faster in 3DMark03!
              You originally said X1950 vs HD4350, and I really doubt the latter will outperform the former in any meaningful test. The X850 is very different in capabilities from the X1950 (SM2.0b vs SM3.0), so the result of this comparison does not transfer to the X1950.

              Not to mention that this 30% number is meaningless on its own. Did you use the same system? CPU? OS? Driver version? Wine version?

              You can handle DX11 tessellation on a 5870 by using OpenGL!
              No, you cannot. Not yet. AMD_vertex_shader_tessellator is a very different beast than DX11 tessellator shaders, and we'll have to wait for OpenGL 3.3/4.0 before the necessary functionality is exposed. My guess is that this won't happen before Nvidia releases its own DX11 hardware.

              Yes, you can't use old hardware for new extensions, but the same hardware can do the same thing...
              Yes, iff the drivers expose this functionality.

              I won't argue the point on Wine/emulation, other than to say that HLSL-to-GLSL recompilation was not even conceived when the "Wine Is Not an Emulator" motto was penned. The "not an emulator" part refers to x86 instructions, not shader code.



              • Originally posted by Alex W. Jackson View Post
                But isn't the point of all the OpenGL 3.2 "Wine extensions" to obviate the need to do this?
                Nope. The new interop extensions improve compatibility in a few parts of the pipeline (e.g. VBO loading, polygon rendering) but they don't affect shaders directly.
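
                To give one concrete example of what these interop extensions cover (a sketch assuming ARB_vertex_array_bgra / GL 3.2; "color_attrib" and "struct MyVertex" are hypothetical): D3D games store vertex colours as BGRA bytes, and this extension lets that data be handed to GL as-is instead of swizzling every vertex on the CPU during VBO loading.

                Code:
                /* Feed a D3DCOLOR (BGRA byte) colour stream straight to GL. */
                glVertexAttribPointer(color_attrib,
                                      GL_BGRA,           /* "size" = GL_BGRA   */
                                      GL_UNSIGNED_BYTE,
                                      GL_TRUE,           /* must be normalized */
                                      sizeof(struct MyVertex),
                                      (void *)offsetof(struct MyVertex, color));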



                • Originally posted by BlackStar View Post
                  Nope. The new interop extensions improve compatibility in a few parts of the pipeline (e.g. VBO loading, polygon rendering) but they don't affect shaders directly.
                  From the definition of ARB_fragment_coord_conventions on opengl.org (emphasis added):

                  What is the primary goal of this extension?

                  RESOLVED: The goal is to increase the cross-API portability
                  of fragment shaders. Most fragment shader inputs (texture
                  coordinate sets, colors) are treated identically among OpenGL
                  and other 3D APIs such as the various versions of Direct3D.
                  The chief exception is the fragment coordinate XY values which
                  depend on the 3D API's particular window space conventions.

                  We seek to avoid situations where shader source code must
                  *be non-trivially modified* to support differing window-space
                  conventions. We also want to minimize the performance effect on
                  fragment shader execution. Rather than an application modifying
                  the shader source to add extra operations and parameters/uniforms
                  to adjust the native window coordinate origin, we want to control
                  the hardware's underlying convention for how the window origin
                  is provided to the shader.
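
                  In practice the quoted extension amounts to a one-line redeclaration in the translated fragment shader (sketch only, assuming a GLSL 1.50 / GL 3.2 compiler; GLSL shown here as a C string): gl_FragCoord then follows Direct3D's upper-left, integer-centre convention, so the translator no longer has to emit per-fragment flip/offset math.

                  Code:
                  /* Redeclare gl_FragCoord with D3D window-space conventions. */
                  static const char *d3d_style_fragcoord_decl =
                      "#version 150\n"
                      "#extension GL_ARB_fragment_coord_conventions : enable\n"
                      "layout(origin_upper_left, pixel_center_integer)\n"
                      "    in vec4 gl_FragCoord;\n";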



                  • Originally posted by Alex W. Jackson View Post
                    From the definition of ARB_fragment_coord_conventions on opengl.org (emphasis added):
                    Bah, forgot about coordinate conversions. This could have some positive impact, but wasn't this available as an NV-specific extension prior to GL 3.2?



                    • The article states that I don't get to play "Unigine Heaven on Linux", but... I do get to play chromium-bsu AND glchess! Playing 1080p movies also works just fine, so I'm happy enough for now.

