
Thread: Possible Features To Find In OpenGL 5.0

  1. #11
    Join Date
    Oct 2008
    Posts
    3,219

    Default I bet shader bytecode is added

    Valve and some others want a shader bytecode rather than having to ship all their GLSL shaders directly.

  2. #12
    Join Date
    May 2012
    Posts
    912

    Default

    Quote Originally Posted by smitty3268 View Post
    Valve and some others want a shader bytecode rather than having to ship all their GLSL shaders directly.
    Explain please. And links if possible.
    "shader bytecode" == precompiled shaders?
    "shaders directly" == shaders source code?

  3. #13
    Join Date
    Jun 2010
    Posts
    168

    Default

    Bindless graphics and the like being included in the core specification is a step in the right direction, but what we really need to take a leap forward is a general low-level GPU shader language, a kind of superset of GLSL and CUDA, that lets the programmer code the graphics pipeline. This would eliminate most of the changes in OpenGL extensions (except shader changes) and leave it up to the developer to utilize the GPU's features.

    Lowering the CPU overhead is only going to get us so far; what we really need is to move control of the GPU away from the CPU and implement it in the shaders directly.

  4. #14
    Join Date
    Feb 2013
    Posts
    371

    Default

    Bindless textures and buffer streaming are the way of the future.

    edit:
    OpenGL badly needs a standardized compiled GLSL format for a variety of reasons. program_binary is NOT this, but it's still nice.
    Last edited by peppercats; 05-03-2014 at 08:53 PM.
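
    A minimal sketch of the persistent-mapped buffer streaming mentioned above, assuming an OpenGL 4.4 context (or ARB_buffer_storage) and an already-initialized loader such as GLEW; the triple-buffered layout, region size and fence handling here are illustrative only:

        #include <GL/glew.h>   /* any GL loader works; GLEW is only an assumption here */
        #include <string.h>

        #define STREAM_SIZE (3 * 1024 * 1024)   /* three 1 MB regions, used round-robin */

        static GLuint buf;
        static char  *mapped;      /* stays mapped for the lifetime of the buffer */
        static GLsync fences[3];

        void stream_init(void)
        {
            GLbitfield flags = GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT | GL_MAP_COHERENT_BIT;

            glGenBuffers(1, &buf);
            glBindBuffer(GL_ARRAY_BUFFER, buf);
            /* Immutable storage the CPU may keep mapped while the GPU reads from it. */
            glBufferStorage(GL_ARRAY_BUFFER, STREAM_SIZE, NULL, flags);
            mapped = glMapBufferRange(GL_ARRAY_BUFFER, 0, STREAM_SIZE, flags);
        }

        /* Write this frame's vertex data into one third of the buffer. */
        void stream_frame(int frame, const void *data, size_t len)
        {
            int region = frame % 3;

            /* Wait until the GPU has finished with the region used three frames ago. */
            if (fences[region]) {
                while (glClientWaitSync(fences[region], GL_SYNC_FLUSH_COMMANDS_BIT,
                                        1000000) == GL_TIMEOUT_EXPIRED)
                    ;
                glDeleteSync(fences[region]);
            }

            memcpy(mapped + region * (STREAM_SIZE / 3), data, len);

            /* ... record draw calls sourcing vertices from this region ... */

            fences[region] = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
        }

    The point of GL_MAP_PERSISTENT_BIT is that the map/unmap round trip (and the driver-side copy it often implies) disappears; the fence is what stops the CPU from scribbling over data the GPU is still reading.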

  5. #15
    Join Date
    Oct 2012
    Location
    Washington State
    Posts
    511

    Default

    Quote Originally Posted by efikkan View Post
    Bindless graphics and the like being included in the core specification is a step in the right direction, but what we really need to take a leap forward is a general low-level GPU shader language, a kind of superset of GLSL and CUDA, that lets the programmer code the graphics pipeline. This would eliminate most of the changes in OpenGL extensions (except shader changes) and leave it up to the developer to utilize the GPU's features.

    Lowering the CPU overhead is only going to get us so far; what we really need is to move control of the GPU away from the CPU and implement it in the shaders directly.
    Not a shot in hell of having CUDA sprinkled into OpenGL. That's the job of CUDA's competition, OpenCL.

    Almost every one of these ideas targets either the current latest-gen GPGPUs or a future generation. Either it seems a waste, or they won't produce OpenGL 5 for another 12 months so that the next generation of hardware can arrive before the release.

  6. #16
    Join Date
    Jun 2010
    Posts
    168

    Default

    Quote Originally Posted by Marc Driftmeyer View Post
    Not a shot in hell of having CUDA sprinkled into OpenGL. That's the job of CUDA's competition, OpenCL.

    Almost every one of these ideas targets either the current latest-gen GPGPUs or a future generation. Either it seems a waste, or they won't produce OpenGL 5 for another 12 months so that the next generation of hardware can arrive before the release.
    You did not get the point at all. I'm talking about CUDA-style features in shaders which OpenCL does not have, like accessing data from PCIe devices, better memory structures and pointer handling, better GPU threading, and so on. We need a single language with these kinds of features, but vendor neutral, of course.

    The Maxwell line of GPUs is bringing immense GPU capabilities which neither OpenGL 4.4 nor DirectX 12 will be utilizing. Some of these GPUs will even feature embedded ARM cores. With the Maxwell GPUs we are close to eliminating the CPU overhead altogether by moving all the rendering control to the GPU. In essence, you pass a single call to the GPU requesting a frame; the GPU then determines which objects to render, fetches data if necessary, culls occluded and excess objects, calculates LoD, and completes the rendering. This is the direction we need to go in order to achieve greater performance (for 4K).
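
    To put the "single call requesting a frame" idea in today's terms: with multi-draw indirect (core since OpenGL 4.3), a culling compute shader can write the draw commands on the GPU and the CPU issues one call without knowing how many objects survived. This is only a rough sketch; cull_program, indirect_buf, scene_vao and MAX_DRAWS are assumed to be created elsewhere:

        #include <GL/glew.h>

        /* Same memory layout as the GL spec's DrawElementsIndirectCommand. */
        typedef struct {
            GLuint count;
            GLuint instanceCount;   /* the culling shader writes 0 here to skip an object */
            GLuint firstIndex;
            GLint  baseVertex;
            GLuint baseInstance;
        } DrawCmd;

        #define MAX_DRAWS 4096
        extern GLuint cull_program, indirect_buf, scene_vao;   /* assumed set up elsewhere */

        void draw_scene_gpu_driven(void)
        {
            /* 1. GPU decides what to draw: a compute shader tests bounding volumes,
             *    picks LoD and writes one DrawCmd per object into indirect_buf. */
            glUseProgram(cull_program);
            glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, indirect_buf);
            glDispatchCompute((MAX_DRAWS + 63) / 64, 1, 1);
            glMemoryBarrier(GL_COMMAND_BARRIER_BIT);

            /* 2. CPU issues exactly one call; the GPU consumes the commands it just wrote. */
            glBindVertexArray(scene_vao);
            glBindBuffer(GL_DRAW_INDIRECT_BUFFER, indirect_buf);
            glMultiDrawElementsIndirect(GL_TRIANGLES, GL_UNSIGNED_INT, 0, MAX_DRAWS, 0);
        }

    Culled objects still occupy a command slot, but with instanceCount set to 0 they cost next to nothing, so the CPU never has to read anything back to know what was drawn.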

  7. #17
    Join Date
    May 2011
    Posts
    1,599

    Default

    Quote Originally Posted by efikkan View Post
    Some of these GPUs will even feature embedded ARM cores.
    Doesn't look like it. No mention of it was made at GTC, which would have been the intended target audience.

    Apparently "Project Denver" now refers simply to the Tegra K1 SoC.

  8. #18
    Join Date
    Oct 2008
    Posts
    3,219

    Default

    Quote Originally Posted by mark45 View Post
    Explain please. And links if possible.
    "shader bytecode" == precompiled shaders?
    "shaders directly" == shaders source code?
    Yes, that's what I mean. It has to be something standard that all GL5 implementations can read, though, so it's not compiled down to full binary instructions. D3D provides something similar, or it could be like the TGSI instructions in Gallium.

    d3d: http://msdn.microsoft.com/en-us/libr...=vs.85%29.aspx

    Where I saw Valve requesting it was in the YouTube videos from their Steam Dev Days conference. A couple of NVIDIA guys were up there; one was in favor of doing it and the other was against it, and they both said the idea was under discussion.

    I think some devs want it just so there is less overhead from compiling shaders while their games run. Others are more comfortable shipping shaders that way because it protects their IP better: someone has to disassemble the bytecode to make sense of what a shader is doing.
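
    For comparison, the existing program_binary path looks roughly like the sketch below; it caches a driver-specific blob, which is exactly why it is NOT the portable bytecode being asked for. The cache file handling and error paths are illustrative only, and GL_PROGRAM_BINARY_RETRIEVABLE_HINT is assumed to have been set on the program before linking:

        #include <GL/glew.h>
        #include <stdio.h>
        #include <stdlib.h>

        /* Save a linked program as a driver-specific blob (GL 4.1 / ARB_get_program_binary). */
        void save_program_binary(GLuint program, const char *path)
        {
            GLint  len = 0;
            GLenum format = 0;

            glGetProgramiv(program, GL_PROGRAM_BINARY_LENGTH, &len);
            void *blob = malloc(len);
            glGetProgramBinary(program, len, NULL, &format, blob);

            FILE *f = fopen(path, "wb");
            fwrite(&format, sizeof format, 1, f);   /* format is only meaningful for this driver/GPU */
            fwrite(blob, 1, len, f);
            fclose(f);
            free(blob);
        }

        /* Reload on a later run; returns 0 if the driver rejects the blob
         * (changed GPU or driver version), in which case you recompile from GLSL. */
        int load_program_binary(GLuint program, const char *path)
        {
            FILE *f = fopen(path, "rb");
            if (!f)
                return 0;

            GLenum format;
            fread(&format, sizeof format, 1, f);
            fseek(f, 0, SEEK_END);
            long len = ftell(f) - (long)sizeof format;
            fseek(f, (long)sizeof format, SEEK_SET);

            void *blob = malloc(len);
            fread(blob, 1, len, f);
            fclose(f);

            glProgramBinary(program, format, blob, (GLsizei)len);
            free(blob);

            GLint ok = GL_FALSE;
            glGetProgramiv(program, GL_LINK_STATUS, &ok);
            return ok == GL_TRUE;
        }

    A standardized bytecode would make the saved blob portable across vendors and driver versions; program_binary only ever skips the recompile on the exact same machine.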

  9. #19
    Join Date
    Feb 2008
    Location
    Linuxland
    Posts
    5,284

    Default

    Given how hard current JS minifiers make it to understand JS scripts, protecting the IP could well be done with a GLSL minifier. Yet I haven't seen a single one so far.

  10. #20

    Default

    > DirectX 12.0 is also going to be optimizing the performance potential of Microsoft's 3D graphics API.

    That's because DX12 is almost a carbon copy of Mantle.

    http://semiaccurate.com/2014/03/18/m...le-calls-dx12/

    If you read the press release about the new DX12 changes from GDC, published just two weeks after that article, you'll see Charlie was right.
