Hey, if we're already in the process of "let's make a low level API for every platform!", can we just declare Gallium3D the official new Linux rendering API? It's extremely close to Direct3D10/11, so it shouldn't be a huge jump for existing Windows developers =P
Former AMD Developer: OpenGL Is Broken
Originally posted by ChrisXY: A little bit of shader source code? Modern graphically intensive games are easily 15+ gigabytes...
Originally posted by ChrisXY: I don't imagine it would take more than one isolated if block with a write to and a read from the config file and a loop... Maybe add a progress bar? Is this really significant in relation to everything else a modern game contains?
Originally posted by ChrisXY: More exposed than now? You can already print their code as they are fed to the compiler:
MESA_GLSL=dump
Originally posted by ChrisXY: Yes, with a proper vendor-independent redistributable binary format, but until then...
Comment
Originally posted by Ancurio: Hey, if we're already in the process of "let's make a low level API for every platform!", can we just declare Gallium3D the official new Linux rendering API? It's extremely close to Direct3D10/11, so it shouldn't be a huge jump for existing Windows developers =P
Comment
Originally posted by mdias: Storage space is not the problem; it's the time it takes to read it. Sure, it's just text, but it would be smaller if it were bytecode.
It is a lot more complicated than an if block... You will need to check whether the user switched graphics cards, or is running your software with another GPU. What if a new OpenGL implementation/patch was installed and generates different (faster? bug-fixed?) binaries? Maybe you need to check the specific version of the driver now too... Which means: are you willing to make your user wait for another round of shader compiles every time he upgrades his drivers? What about real-time streaming content?
You don't stream shaders! Well, stupid (IMO) drivers often compile on first use only and then continue to bake a more optimized version in the background to switch to later. But that is something you can actually prevent with ARB_get_program_binary, because it forces a full compile so it can actually give you the binary.
And with guaranteed compilation it's easy to run it in the background now, e.g. start compiling after the main menu of the game has loaded!
ARB_get_program_binary was designed to simply tell you when it doesn't like a binary. You just check a boolean.
It's trivial to implement:
Code:
if (!(haveBinaryFileForShader() && tryToUseBinaryFileInThisGLContext())) {
    compileShaderFilesAndSaveBinary();
}
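For concreteness, here is roughly what that check looks like with the real ARB_get_program_binary / GL 4.1 entry points. This is an untested sketch: load_cached_binary(), save_cached_binary() and compile_and_link_from_source() are made-up helpers, and it assumes a current GL context with the extension available and function pointers already loaded.
Code:
#include <stdlib.h> /* malloc, free */

/* Sketch only: assumes a current GL 4.1 / ARB_get_program_binary context. */
GLuint load_program(const char *cache_path,
                    const char *vs_source, const char *fs_source)
{
    GLuint prog = glCreateProgram();

    /* Ask the driver to keep a retrievable binary around after linking. */
    glProgramParameteri(prog, GL_PROGRAM_BINARY_RETRIEVABLE_HINT, GL_TRUE);

    GLenum format = 0;
    GLsizei size = 0;
    void *blob = load_cached_binary(cache_path, &format, &size); /* hypothetical helper */

    GLint linked = GL_FALSE;
    if (blob) {
        /* Feed the cached binary back in; the driver rejects it
         * (GL_LINK_STATUS stays GL_FALSE) if the GPU or driver changed. */
        glProgramBinary(prog, format, blob, size);
        glGetProgramiv(prog, GL_LINK_STATUS, &linked);
        free(blob);
    }

    if (!linked) {
        /* Cache miss or stale binary: compile from GLSL source as usual... */
        compile_and_link_from_source(prog, vs_source, fs_source); /* hypothetical helper */

        /* ...then store the freshly baked binary for the next run. */
        GLint len = 0;
        glGetProgramiv(prog, GL_PROGRAM_BINARY_LENGTH, &len);
        if (len > 0) {
            void *out = malloc(len);
            GLsizei written = 0;
            glGetProgramBinary(prog, len, &written, &format, out);
            save_cached_binary(cache_path, format, out, written); /* hypothetical helper */
            free(out);
        }
    }
    return prog;
}
Because glProgramBinary simply fails to link when the blob no longer matches the installed driver or GPU, a driver update or a card swap just falls through to the normal compile path, and nothing stops that path from running in the background (e.g. behind the main menu) on a shared context.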
Originally posted by mdias: Sure, you can also decompile .NET code. That doesn't mean people prefer to ship the source instead of the bytecode "executables". Plus, it will make it harder to read the code.
Until then no one can say it's not a problem, because it is. Khronos membership is very expensive; I would suppose they have enough resources to create a hardware-independent bytecode format. It's not that hard...
The GLSL parsing is actually not expensive; it's the optimization that eats a lot of time. And most of the optimization has to do with the underlying GPU architecture.
I suspect that many AAA D3D games have far less compile time simply because GPU vendors deliver highly optimized shader binaries with their drivers.
Comment
Originally posted by Kraut: The file size of a shader or the shader binary doesn't matter. Most likely a single texture will be bigger than all your shaders together.
You don't stream shaders! Well, stupid (IMO) drivers often compile on first use only and then continue to bake a more optimized version in the background to switch to later. But that is something you can actually prevent with ARB_get_program_binary, because it forces a full compile so it can actually give you the binary.
Originally posted by Kraut: And I don't get why you are so fearful of recompiling shaders after a driver update. It's not like AMD/NVIDIA release drivers every second day.
Originally posted by Kraut: Well, I can understand that some developers want it to hide their source code. But we should be honest here and say it is an obfuscation feature and not a technical one.
Originally posted by Kraut: The GLSL parsing is actually not expensive; it's the optimization that eats a lot of time. And most of the optimization has to do with the underlying GPU architecture.
I suspect that many AAA D3D games have far less compile time simply because GPU vendors deliver highly optimized shader binaries with their drivers.
Comment
Originally posted by mdias: The thing is, you can have the GLSL parsing done, and some optimization passes done too, on the bytecode. Only hardware-specific optimizations are left to be made, which I doubt is the most expensive step. Then again, it depends on the optimizer's aggressiveness. In any case, there isn't a single disadvantage to having bytecode shaders, and there are several advantages to it.
Comment
Originally posted by curaga: r600sb is pretty much all hw-specific, and enabling it made shader compiles several times slower (5-10x).
I don't know enough about r600sb to comment further, but maybe its initial aim was to optimize the shader output and not the compilation process itself?
In any case, compiling from GLSL all the way to hw-specific binary instructions will always be slower than starting from optimized bytecode. I know it seems irrelevant when thinking about 1 or 2 shaders, but AAA games may have thousands.
Comment
Originally posted by mdias: Fact: nVidia blob drivers are more relaxed about following the standard, plus game developers are too lazy to follow the GL standard; therefore many applications work fine on nVidia but not on AMD, and then people complain that it's AMD's fault...
Comment
Driver updates aren't (shouldn't be?) that uncommon. Plus, on the Linux side you can have new daily packages like I do. If you're shipping an AAA game, chances are that every time you need to recompile all the shaders, you're going to be a nuisance to the user.
Comment
Originally posted by mdias: Driver updates aren't (shouldn't be?) that uncommon. Plus, on the Linux side you can have new daily packages like I do. If you're shipping an AAA game, chances are that every time you need to recompile all the shaders, you're going to be a nuisance to the user.
And I don't think YOUR driver-updating behavior is a good argument against shader binaries.
Originally posted by mdias: The thing is, you can have the GLSL parsing done, and some optimization passes done too, on the bytecode. Only hardware-specific optimizations are left to be made, which I doubt is the most expensive step. Then again, it depends on the optimizer's aggressiveness. In any case, there isn't a single disadvantage to having bytecode shaders, and there are several advantages to it.
Compiling a shader breaks down into three steps:
- parsing GLSL
- non-hardware-related optimization
- hardware-related optimization
The cost of parsing GLSL is insignificant. I'd like to quote one of the AMD developers on that, but I can't find his comment anymore.
Most of the non-GPU-related optimizations can be done on GLSL itself. On mobile platforms, and especially with HLSL-to-GLSL converters, this seems to be a big problem. The mobile compilers don't optimize as much, presumably because of weaker CPUs, power usage and pure incompetence at software development on the side of the hardware manufacturers. That last point is actually the best argument for a shader bytecode that I can think of.
There are already projects that use the Mesa parser/optimizer to tackle this issue: http://www.ohloh.net/p/glsl-optimizer
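For reference, using that library looks roughly like the following (paraphrased from memory of the project's README, so treat the exact names as assumptions): you hand it GLSL and it hands back flattened, pre-optimized GLSL that still goes to the driver as normal.
Code:
#include <stdio.h>
#include "glsl_optimizer.h" /* header from the glsl-optimizer project */

/* Offline (or load-time) GLSL -> optimized-GLSL pass; the driver still does
 * the hardware-specific part afterwards. */
void preoptimize_vertex_shader(const char *vertex_source)
{
    glslopt_ctx *ctx = glslopt_initialize(kGlslTargetOpenGL);
    glslopt_shader *shader =
        glslopt_optimize(ctx, kGlslOptShaderVertex, vertex_source, 0);

    if (glslopt_get_status(shader)) {
        /* Ship this string in the game's data files instead of the original. */
        puts(glslopt_get_output(shader));
    } else {
        fprintf(stderr, "%s\n", glslopt_get_log(shader));
    }

    glslopt_shader_delete(shader);
    glslopt_cleanup(ctx);
}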
Saying bytecode comes only with advantages is naive.
If you create a new GPU API (which seems to be fashionable right now) you could go with bytecode-only shaders, and I would be fine with that.
Integrating bytecode into OpenGL would bloat it even more. GLSL will never go away, for obvious backwards-compatibility reasons, and the vendors would get one more thing to fuck up when implementing it.
Even though I don't have hard numbers on how much time non-hardware optimization takes versus hardware optimization, I suspect it will always be more for the hardware optimization, just considering how different the GPU architectures are.
So you most likely still want to use binaries to speed up loading.
Comment