Some drivers will recompile shaders under specific conditions, for instance when a uniform value becomes 0 or 1, in order to eliminate no-op expressions (e.g. adding 0), eliminate conditionals, reduce register pressure, and cut total instruction count.
Nvidia used to be very aggressive with such optimizations back in the GeForce 5 and 6 days, much to the annoyance of game developers. Typical workarounds include small epsilons (e.g. 0.001 or 0.999) or multiple side-by-side shaders that are bound depending on the uniform combination.
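A rough sketch of what those two workarounds look like on the application side, assuming an OpenGL 2.x context with the entry points already loaded; the program names, the uniform name u_light_intensity, and the 0.001/0.999 epsilons are illustrative, not taken from any particular engine:

```c
#include <GL/glew.h>   /* any GL 2.x function loader works here */

static GLuint prog_lit;    /* full lighting path, linked elsewhere */
static GLuint prog_unlit;  /* variant with the lighting math compiled out */

static void bind_shader_for(float light_intensity)
{
    if (light_intensity <= 0.0f) {
        /* Workaround 2: bind a separate pre-built shader variant instead
         * of letting the driver notice the uniform is 0 and recompile. */
        glUseProgram(prog_unlit);
        return;
    }

    glUseProgram(prog_lit);

    /* Workaround 1: nudge exact 1.0 (and near-zero) values by a small
     * epsilon so an aggressive driver cannot fold the uniform into a
     * constant and re-optimize the program behind our back. */
    if (light_intensity >= 1.0f)
        light_intensity = 0.999f;
    else if (light_intensity < 0.001f)
        light_intensity = 0.001f;

    glUniform1f(glGetUniformLocation(prog_lit, "u_light_intensity"),
                light_intensity);
}
```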
As GPUs became more complex, these optimizations became less prevalent, but some drivers may well still use them.
allquixotic: OK, I can believe the cases where a shader is being optimized/changed at runtime, which would require a recompile. I find it hard to believe, though, that memory consumption would be a reason. Shaders are small, so I think you would need hundreds or thousands of them for it to be an issue.
Really, vertex and texture data should be the main consumers of VRAM.
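To put rough numbers on that: a quick back-of-the-envelope comparison follows; the 8 KiB figure for a compiled shader program is an assumption for illustration, not a measurement.

```c
#include <stdio.h>

int main(void)
{
    /* One 2048x2048 RGBA8 texture, no mipmaps: 2048*2048*4 bytes = 16 MiB. */
    size_t texture_bytes = 2048u * 2048u * 4u;

    /* Assume a compiled shader program occupies roughly 8 KiB of VRAM. */
    size_t shader_bytes = 8u * 1024u;

    printf("texture: %zu KiB, shader: %zu KiB -> roughly %zu shaders "
           "fit in the footprint of one texture\n",
           texture_bytes / 1024, shader_bytes / 1024,
           texture_bytes / shader_bytes);
    return 0;
}
```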
Qaridarium: for some reason you're confusing vertex data with a shader.
Let's assume this is correct.
Aren't the Mac OS X OpenGL drivers far worse?
The OS X drivers are far worse when compared against the Nvidia binary driver on Linux, and maybe even against fglrx in certain circumstances. But the OS X drivers are a lot better than the open drivers most of the time.
Remember, we're talking about the Source engine. It's pretty much going to be limited to using OpenGL 2.1 anyway. All the content in Source engine games assumes the user has DirectX 9.0c (OpenGL 2.1) class hardware, nothing newer. So any support for GL3+ is irrelevant to the comparison.