Possible Features To Find In OpenGL 5.0

  • #11
    I bet shader bytecode is added

    Valve and some others want a shader bytecode rather than having to ship all their GLSL shaders directly.



    • #12
      Originally posted by smitty3268 View Post
      Valve and some others want a shader bytecode rather than having to ship all their GLSL shaders directly.
      Explain please. And links if possible.
      "shader bytecode" == precompiled shaders?
      "shaders directly" == shaders source code?



      • #13
Bindless graphics and everything else included in the specification is a step in the right direction, but what we really need to take a leap forward is a general low-level GPU shading language, a kind of superset of GLSL and CUDA, that lets the programmer code the graphics pipeline. This would eliminate most of the churn in OpenGL extensions (except shader changes) and leave it up to the developer to utilize the GPU's features.

Lowering the CPU overhead is only going to get us so far; what we really need is to move control of the GPU away from the CPU and implement it in the shaders directly.



        • #14
Bindless textures and buffer streaming are the way of the future.

          edit:
OpenGL badly needs a standardized compiled GLSL format, for a variety of reasons. program_binary is NOT this, but it's still nice.
          Last edited by peppercats; 05-03-2014, 07:53 PM.



          • #15
            Originally posted by efikkan View Post
Bindless graphics and everything else included in the specification is a step in the right direction, but what we really need to take a leap forward is a general low-level GPU shading language, a kind of superset of GLSL and CUDA, that lets the programmer code the graphics pipeline. This would eliminate most of the churn in OpenGL extensions (except shader changes) and leave it up to the developer to utilize the GPU's features.

Lowering the CPU overhead is only going to get us so far; what we really need is to move control of the GPU away from the CPU and implement it in the shaders directly.
Not a shot in hell of having CUDA sprinkled into OpenGL. That's the job of CUDA's competition, OpenCL.

Almost every one of these ideas targets either current latest-gen GPGPUs or a future generation. Either it's a waste, or they won't produce OpenGL 5 for another 12 months, allowing that next generation of hardware to arrive prior to release.



            • #16
              Originally posted by Marc Driftmeyer View Post
Not a shot in hell of having CUDA sprinkled into OpenGL. That's the job of CUDA's competition, OpenCL.

Almost every one of these ideas targets either current latest-gen GPGPUs or a future generation. Either it's a waste, or they won't produce OpenGL 5 for another 12 months, allowing that next generation of hardware to arrive prior to release.
You did not get the point at all. I'm talking about CUDA-style features in shaders which OpenCL does not have, like accessing data from PCIe devices, better memory structures and pointer handling, better GPU threading, and so on. We need a single language with these kinds of features, but vendor-neutral of course.

The Maxwell line of GPUs brings immense capabilities which neither OpenGL 4.4 nor DirectX 12 will utilize. Some of these GPUs will even feature embedded ARM cores. With the Maxwell GPUs we are close to eliminating the CPU overhead altogether by moving all rendering control to the GPU. In essence, you pass a single call to the GPU requesting a frame; the GPU then determines which objects to render, fetches data if necessary, culls occluded and excess objects, calculates LoD, and completes the rendering. This is the direction we need to go in order to achieve greater performance (for 4K).
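The single-call, GPU-driven frame described above can be sketched conceptually. The following is an illustrative Python simulation, not a real graphics API: all names (`SceneObject`, `render_frame`, the distance thresholds) are hypothetical, and in a real engine the culling and LoD selection would run in compute shaders writing indirect draw commands rather than on the CPU.

```python
# Conceptual sketch of GPU-driven rendering: the CPU issues ONE
# "draw the frame" request; culling and LoD selection happen on the
# (here: simulated) GPU side. Purely illustrative.

from dataclasses import dataclass

@dataclass
class SceneObject:
    mesh_id: int
    distance: float      # distance from the camera
    visible: bool        # result of a (pretended) occlusion test

def gpu_build_draw_commands(objects, max_distance=100.0):
    """Simulates GPU-side culling and LoD selection for one frame."""
    commands = []
    for obj in objects:
        if not obj.visible or obj.distance > max_distance:
            continue  # culled: the CPU never sees this object
        # Pick a level of detail from the camera distance.
        lod = 0 if obj.distance < 25.0 else (1 if obj.distance < 60.0 else 2)
        commands.append((obj.mesh_id, lod))
    return commands

def render_frame(objects):
    """The CPU side: a single request, no per-object draw calls."""
    return gpu_build_draw_commands(objects)

scene = [
    SceneObject(mesh_id=1, distance=10.0, visible=True),   # near, LoD 0
    SceneObject(mesh_id=2, distance=50.0, visible=True),   # mid, LoD 1
    SceneObject(mesh_id=3, distance=50.0, visible=False),  # occluded
    SceneObject(mesh_id=4, distance=500.0, visible=True),  # beyond range
]
print(render_frame(scene))  # → [(1, 0), (2, 1)]
```

The point of the sketch is where the loop lives: here it is Python, but in the scenario described it would be GPU-resident, so the CPU cost no longer scales with object count.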



              • #17
                Originally posted by efikkan View Post
                Some of these GPUs will even feature embedded ARM cores.
Doesn't look like it. No mention of it was made at GTC, which would have been the intended target audience.

                Apparently "Project Denver" now refers simply to the Tegra K1 SoC.



                • #18
                  Originally posted by mark45 View Post
                  Explain please. And links if possible.
                  "shader bytecode" == precompiled shaders?
                  "shaders directly" == shaders source code?
Yes, that's what I mean. It has to be something standard that all GL5 implementations can read, though, so it's not compiled down to full binary instructions. D3D provides something similar, or it could be like TGSI instructions in Gallium.

                  d3d: http://msdn.microsoft.com/en-us/libr...=vs.85%29.aspx

Where I saw Valve requesting it was in the YouTube videos from their Steam Developer Days conference. A couple of NVidia guys were up there; one was in favor of doing it and the other was against it, and they both said the idea was under discussion.

I think some devs want it just so there's less overhead from compiling shaders while their games run. Others are more comfortable shipping shaders that way because it protects their IP better: you'd have to disassemble them to make sense of what a shader is doing.



                  • #19
Given how hard current JS minifiers make it to understand JS scripts, protecting the IP could well be done with a GLSL minifier. Yet I haven't seen a single one so far.



                    • #20
                      > DirectX 12.0 is also going to be optimizing the performance potential of Microsoft's 3D graphics API.

                      That's because DX12 is almost a carbon copy of Mantle.

                      http://semiaccurate.com/2014/03/18/m...le-calls-dx12/

If you read the press release on the new DX12 changes from GDC, just two weeks after that article, you'll see Charlie was right.

