I bet shader bytecode is added
Valve and some others want a shader bytecode rather than having to ship all their GLSL shaders directly.
Explain please. And links if possible.
Originally Posted by smitty3268
"shader bytecode" == precompiled shaders?
"shaders directly" == shaders source code?
Bindless graphics and everything else included in the specification is a step in the right direction, but what we really need to take a leap forward is a general low-level GPU shader language, a kind of superset of GLSL and CUDA, that lets the programmer code the graphics pipeline. This would eliminate most of the churn in OpenGL extensions (except shader changes) and leave it up to the developer to utilize the GPU features.
Lowering the CPU overhead is only going to get us so far; what we really need is to move control of the GPU away from the CPU and implement it in the shaders directly.
Bindless textures and buffer streaming are the way of the future.
OpenGL badly needs a standardized compiled GLSL format for a variety of reasons. ARB_get_program_binary is NOT this, but it's still nice.
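The reason ARB_get_program_binary is not that standardized format: glGetProgramBinary hands back a driver-specific blob that is only valid on the exact driver and GPU that produced it, so it works as a local cache, not as something you can ship. A minimal sketch of the keying logic such a cache needs (the function name is my own, for illustration; the strings stand in for what an app would read via glGetString):

```python
import hashlib

def program_binary_cache_key(vendor, renderer, version, glsl_source):
    """Key for a driver-specific program-binary cache.

    Blobs returned by glGetProgramBinary are only valid for the exact
    driver/GPU that produced them, so the key mixes the strings an app
    would read via glGetString(GL_VENDOR/GL_RENDERER/GL_VERSION) with
    the GLSL source. A driver update changes the key, forcing a
    recompile instead of feeding a stale blob to glProgramBinary.
    """
    h = hashlib.sha256()
    for part in (vendor, renderer, version, glsl_source):
        h.update(part.encode("utf-8"))
        h.update(b"\x00")  # separator so fields can't run together
    return h.hexdigest()
```

The same shader source under two different driver versions yields two different keys, which is exactly why the blob can't be shipped to users the way a standardized bytecode could.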
Last edited by peppercats; 05-03-2014 at 07:53 PM.
Not a shot in hell of CUDA getting sprinkled into OpenGL. That's the job of CUDA's competition, OpenCL.
Originally Posted by efikkan
Almost every one of these ideas targets either the current latest-gen GPGPUs or a future generation. It seems a waste, unless they hold off on OpenGL 5 for another 12 months so that next generation of hardware can arrive before the release.
You did not get the point at all. I'm talking about CUDA-style features in shaders that OpenCL does not have, like accessing data from PCIe devices, better memory structures and pointer handling, better GPU threading, and so on. We need a single language with these features, but vendor-neutral, of course.
Originally Posted by Marc Driftmeyer
The Maxwell line of GPUs brings immense capabilities which neither OpenGL 4.4 nor DirectX 12 will utilize. Some of these GPUs will even feature embedded ARM cores. With the Maxwell GPUs we are close to eliminating the CPU overhead altogether by moving all rendering control to the GPU. In essence, you can pass a single call to the GPU requesting a frame, and the GPU will determine which objects to render, fetch data if necessary, cull occluded objects, calculate LoD, and complete the rendering. This is the direction we need to go in order to achieve greater performance (for 4K).
Doesn't look like it. No mention of it was made at GTC, which would have been the intended target audience.
Originally Posted by efikkan
Apparently "Project Denver" now refers simply to the Tegra K1 SoC.
Yes, that's what I mean. It has to be something standard that all GL5 implementations can read, though, so it's not compiled down to full binary instructions. D3D provides something similar, or it could be like TGSI instructions in Gallium.
Originally Posted by mark45
Where I saw Valve requesting it was in the YouTube videos from their Steam Dev Days conference. A couple of NVIDIA guys were up there; one was in favor of doing it and the other was against it, and they both said the idea was under discussion.
I think some devs want it simply so there is less overhead from compiling shaders while their games run. Others are more comfortable shipping shaders that way because it protects their IP better: you have to disassemble the bytecode to make sense of what a shader is doing.
Given how hard current JS minifiers make it to understand JS scripts, the IP could well be protected with a GLSL minifier. Yet I haven't seen a single one so far.
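A minimal sketch of what such a GLSL minifier could look like (names and keyword list are my own illustration, not an existing tool): it strips comments and whitespace and renames user identifiers to meaningless short names. A real one would need the full GLSL keyword/builtin list, preprocessor handling (#version etc.), and it would have to export the renaming map so the application can still look up its uniforms by name.

```python
import re

# GLSL keywords and builtins that must keep their names -- a small
# subset for illustration only; a real minifier needs the full list
# plus preprocessor-directive handling.
RESERVED = {
    "void", "float", "int", "bool", "vec2", "vec3", "vec4", "mat4",
    "uniform", "in", "out", "return", "if", "else", "for", "while",
    "main", "gl_Position", "gl_FragColor", "texture", "sampler2D",
}

def minify_glsl(source):
    """Strip comments/whitespace and rename user-defined identifiers."""
    # Remove /* ... */ and // comments.
    source = re.sub(r"/\*.*?\*/", " ", source, flags=re.S)
    source = re.sub(r"//[^\n]*", " ", source)

    # Rename every non-reserved identifier to a short generated name.
    # Note: this also renames uniforms, so the app would need the same
    # map (names) to set them -- one reason real tools export it.
    names = {}
    def rename(match):
        word = match.group(0)
        if word in RESERVED:
            return word
        if word not in names:
            names[word] = "_" + chr(ord("a") + len(names) % 26) + str(len(names) // 26)
        return names[word]
    source = re.sub(r"\b[A-Za-z_][A-Za-z0-9_]*\b", rename, source)

    # Collapse whitespace and drop spaces around punctuation.
    source = re.sub(r"\s+", " ", source).strip()
    source = re.sub(r" ?([{}();=+*,\-/]) ?", r"\1", source)
    return source
```

As with JS minifiers, this only raises the effort needed to read the shader; it is obfuscation, not real protection.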
> DirectX 12.0 is also going to be optimizing the performance potential of Microsoft's 3D graphics API.
That's because DX12 is almost a carbon copy of Mantle.
If you read the press release for new DX12 changes from GDC just 2 weeks after that article, you'll see Charlie was right.