R600 Open-Source Driver With GLSL, OpenGL 2.0
-
Qaridarium, I really don't understand what your point is. It is obvious that emulating HLSL shaders via GLSL is always going to take more resources than the real thing. In some cases, the difference may be small enough not to matter. In others, you will need to upgrade your hardware before you can get decent performance. Finally, there may be some cases that you cannot emulate correctly no matter how hard you try (it's impossible to emulate geometry shaders without EXT_geometry_shader; it's impossible to emulate hull/tessellation shaders in OpenGL right now).
Why are you acting so surprised at this? If you want native performance, play a native game. If you want to emulate a game, you need to be aware that you'll get lower performance and weaker compatibility.
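For illustration, here is a minimal sketch of how an application can check at runtime whether geometry shaders are usable, either via a GL 3.2 core context or via the EXT extension. The GL calls are standard; the detection logic itself is just an assumption for the example, not anything Wine actually does.
Code:
// Sketch: detect geometry shader support at runtime.
// Assumes a current OpenGL context already exists (e.g. via GLX/SDL).
#include <GL/gl.h>
#include <cstring>
#include <cstdio>

bool has_geometry_shaders() {
    // Core geometry shaders arrived in OpenGL 3.2.
    const char* version = (const char*)glGetString(GL_VERSION);
    int major = 0, minor = 0;
    if (version && sscanf(version, "%d.%d", &major, &minor) == 2)
        if (major > 3 || (major == 3 && minor >= 2))
            return true;

    // Otherwise fall back to the extension string (pre-3.0 style query).
    const char* ext = (const char*)glGetString(GL_EXTENSIONS);
    return ext && strstr(ext, "GL_EXT_geometry_shader4") != nullptr;
}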
Comment
-
Originally posted by Qaridarium
That's wrong! Wine wins tons of benchmarks! Wine wins in 3DMark2000 and 3DMark2001!
You have a wrong understanding of the HLSL-to-GLSL bridge: there is no need to translate all the time! Only the game start is slower; after that, the completely translated GLSL code is loaded onto the card and runs nonstop. In theory there is no speed loss, and you can even do optimizations, like handling DX8 code in DX10/DX11 style.
Originally posted by BlackStar
Meaning you might need a newer card to run old code through wine, when an older card would have sufficed in native D3D.
A DX9-based game runs well on an X1950, but the same game loses in Wine on that card, while a much slower card like the 4350 or 54xx can "win". That's because Wine translates the old code into new OpenGL 3.2-style code: much better texture compression saves memory bandwidth and brings more FPS!
What the...? "EXT_geometry_shader" is an NVIDIA-only extension, but OpenGL 3.2 does not need it for this, because OpenGL 3.2 has geometry shaders in core!
Code:
$ glxinfo
[...]
OpenGL renderer string: ATI Radeon HD 4800 Series
OpenGL version string: 3.2.9232
[...]
, GL_EXT_geometry_shader4,
You can also emulate a "tessellation shader", thanks to the AMD OpenGL extensions! ...
You do not get the point of Wine... Wine isn't an emulator. There is no emulator! Wine also does not emulate HLSL shader code; Wine is a compiler! Wine is a shader compiler: it compiles old shaders into new-style shaders, HLSL shaders into GLSL shaders. There is no emulator, only native hardware speed! NO emulator!
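To make the "translate once at load, run natively afterwards" idea concrete, here is a minimal sketch of a translate-and-cache scheme. This is not Wine's actual implementation: translate_hlsl_to_glsl is a hypothetical placeholder, and only the caching pattern is the point.
Code:
// Sketch of a translate-once shader cache (hypothetical, not Wine's code).
// Assumes a current GL context and a GL 2.0+ function loader (e.g. GLEW).
#include <GL/glew.h>
#include <string>
#include <unordered_map>

// Hypothetical placeholder for the HLSL -> GLSL translation pass.
std::string translate_hlsl_to_glsl(const std::string& hlsl);

static GLuint compile_glsl(const std::string& glsl, GLenum stage) {
    GLuint shader = glCreateShader(stage);
    const char* src = glsl.c_str();
    glShaderSource(shader, 1, &src, nullptr);
    glCompileShader(shader);  // the driver compiles this to native GPU code
    return shader;
}

// Each HLSL shader is translated and compiled only once, at load time;
// every later lookup reuses the compiled object untouched.
static std::unordered_map<std::string, GLuint> g_cache;

GLuint get_shader(const std::string& hlsl, GLenum stage) {
    auto it = g_cache.find(hlsl);
    if (it != g_cache.end())
        return it->second;  // hot path: zero translation cost per frame
    GLuint shader = compile_glsl(translate_hlsl_to_glsl(hlsl), stage);
    g_cache.emplace(hlsl, shader);
    return shader;
}
Under this pattern the translation cost is paid once at load, which matches the "game starts slower, then runs nonstop" description above, modulo any extra instructions the translation had to insert into the GLSL itself.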
Comment
-
Q, BlackStar is saying that when Wine translates shaders it often has to insert additional instructions into the shader code, and it's those additional instructions that could slow down execution relative to running natively on Windows.
If you reply with "but 3DMarkxxx is faster so that's not true", I'm going to vote for a ban.
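To make bridgman's point concrete: Direct3D and OpenGL disagree on the window-space origin, so a naive translation of a fragment shader that reads the pixel position has to patch the coordinate on every fragment. The shader below is illustrative only; the uniform name u_viewport_height is made up.
Code:
// Illustrative only: the kind of extra work a HLSL -> GLSL translation
// can insert. HLSL's VPOS / SV_Position uses a top-left origin, while
// GLSL's gl_FragCoord uses a bottom-left origin, so a naive translation
// adds a flip that is paid on every fragment.
const char* translated_fs = R"(
    #version 150
    uniform float u_viewport_height;  // made-up name, set by the layer
    out vec4 color;
    void main() {
        // Extra instruction inserted by the translation layer:
        vec2 pos = vec2(gl_FragCoord.x, u_viewport_height - gl_FragCoord.y);
        color = vec4(pos * 0.001, 0.0, 1.0);
    }
)";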
Comment
-
Originally posted by Qaridarium
Long time ago i test [...]

Originally posted by BlackStar
Win in support yes (see above). Win in speed not really, at least not with these specific cards you quoted.

Originally posted by Qaridarium
Theoretically the X850 is much faster: more shader power, more memory bandwidth... but in Wine the HD 4350 is over 30% faster in 3DMark03!

Not to mention that this 30% number is meaningless on its own. Did you use the same system? CPU? OS? Driver version? Wine version?

Originally posted by Qaridarium
You can handle DX11 tessellation on a 5870 by using OpenGL! Yes, you can't use old hardware for new extensions, but the same hardware can do the same things...

I won't argue the point on Wine/emulation, other than to say that HLSL-to-GLSL recompilation was not even conceived when the "Wine Is Not an Emulator" motto was penned. The "not an emulator" part refers to x86 instructions, not shader code.
Comment
-
Originally posted by Alex W. Jackson View Post
But isn't the point of all the OpenGL 3.2 "Wine extensions" to obviate the need to do this?
Comment
-
Originally posted by BlackStar View Post
Nope. The new interop extensions improve compatibility in a few parts of the pipeline (e.g. VBO loading, polygon rendering) but they don't affect shaders directly.
What is the primary goal of this extension?

RESOLVED: The goal is to increase the cross-API portability of fragment shaders. Most fragment shader inputs (texture coordinate sets, colors) are treated identically among OpenGL and other 3D APIs such as the various versions of Direct3D. The chief exception is the fragment coordinate XY values, which depend on the 3D API's particular window-space conventions. We seek to avoid situations where shader source code must be non-trivially modified to support differing window-space conventions. We also want to minimize the performance effect on fragment shader execution. Rather than an application modifying the shader source to add extra operations and parameters/uniforms to adjust the native window coordinate origin, we want to control the hardware's underlying convention for how the window origin is provided to the shader.
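The quoted issue is from the fragment coordinate conventions extension (ARB_fragment_coord_conventions). In practice it lets a translated shader redeclare gl_FragCoord with Direct3D's window conventions instead of inserting the per-fragment flip shown earlier. A minimal sketch; the output math is purely illustrative.
Code:
// Sketch: redeclaring gl_FragCoord with D3D-style conventions so the
// translation layer needs no per-fragment flip instruction.
const char* fs_d3d_convention = R"(
    #version 150
    // Core in GLSL 1.50 (OpenGL 3.2): upper-left origin and integer
    // pixel centers, matching Direct3D's window-space conventions.
    layout(origin_upper_left, pixel_center_integer) in vec4 gl_FragCoord;
    out vec4 color;
    void main() {
        // gl_FragCoord.y now matches what the original HLSL expected,
        // with no extra subtract inserted by the translation layer.
        color = vec4(gl_FragCoord.xy * 0.001, 0.0, 1.0);
    }
)";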
Comment