Possible Features To Find In OpenGL 5.0


  • Possible Features To Find In OpenGL 5.0

    Phoronix: Possible Features To Find In OpenGL 5.0

    While OpenGL 5.0 hasn't been released yet, there's already much speculation about what the future holds for OpenGL as more companies look at optimizing its performance and 3D capabilities...

    http://www.phoronix.com/vr.php?view=MTY4MDg

  • #2
    [offtopic]
    So far I find the ads acceptable. They don't flash or move or blink or produce sound. Though Flash is always off, because it is a necessary evil for the time being. If every website acted like this I wouldn't need an adblocker.
    [/offtopic]

    Comment


    • #3
      If this will require specific hardware to use, then I'm going to wait for the 2nd generation of hardware that supports it (because the 2nd will help work out the kinks from the 1st).

      Comment


      • #4
        Originally posted by schmidtbag View Post
        If this will require specific hardware to use, then I'm going to wait for the 2nd generation of hardware that supports it (because the 2nd will help work out the kinks from the 1st).
        I heard Nvidia is going to support part of DirectX 12 starting from the GTX 500 series. So I hope at least the performance part of OpenGL 5 / DX12 will be supported by "current" hardware.

        Comment


        • #5
          I heard GL 5 will support the Turbo Mode (TM) extension: once you toggle it on (glSetTurboMode(true)), OpenGL starts running faster and consuming less CPU.

          Comment


          • #6
            My mind while reading this article:
            1st paragraph: "Well, it's a reminder of the title of the article, just in case I forget it already."
            2nd paragraph: "Talking more about other technologies than about the features the title suggests I'm supposed to find in the article."
            3rd paragraph: "Oh look, a link to the so-called features!"

            I didn't like this article. While reading it I felt I wasn't getting what the title promises: too much irrelevant information and poor quality. I had the impression you didn't know what to write.

            You could have listed the most important features found in the document, as well as the ones we are sure to find or those we will not. Maybe some developers' reactions (if any), etc.

            My 2 cents.
            Last edited by alazar; 05-03-2014, 02:49 PM.

            Comment


            • #7
              I always wanted OpenGL to do a clean cut from legacy.

              Have this done with a major version. E.g. going from OpenGL 4.x to OpenGL 5.0.
              I know this will never happen but I can dream can't I:

              Would like to see:
              - More performance through less CPU micromanaging.
                Why can't you just hand coordinate positions to the GPU, let GPU-side OpenGL code figure out
                what it must draw, ask for missing pieces (streaming maps, anyone?) and then have the GPU do all the drawing without
                further CPU management (see the indirect-draw sketch after this list).

              - Less legacy crap; old ways of doing things must be thrown out. GPUs are very different from the GPUs of 15-20 years ago.
                A lot of good work has been achieved with the core profiles, but I really would like to see the API simplified,
                cleaner and leaner.

              - Rough parity with DirectX 12, in both performance and capabilities.
                DirectX 12 introduces some very good features for performance and for the programming model.
              (A good source talking about the new DirectX 12 features is the msdn blog: blogs (dot) msdn (dot) com)
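              (A minimal editorial sketch, not from the thread: GL 4.3 already points in this direction with GL_ARB_multi_draw_indirect, where the per-draw parameters live in a GPU buffer that a compute shader could fill. The function and buffer names below are made up for illustration.)

              #include <GL/glew.h>   /* assumes a GL 4.3+ context and a loader such as GLEW or glad */

              /* Layout of one indirect draw command, as defined by GL_ARB_multi_draw_indirect. */
              typedef struct {
                  GLuint count;          /* vertices per draw */
                  GLuint instanceCount;  /* instances per draw */
                  GLuint first;          /* first vertex */
                  GLuint baseInstance;   /* first instance */
              } DrawArraysIndirectCommand;

              /* Hypothetical helper: submit num_draws draws whose parameters the GPU fills in itself. */
              void submit_gpu_driven_draws(GLsizei num_draws)
              {
                  GLuint draw_cmd_buf;
                  glGenBuffers(1, &draw_cmd_buf);
                  glBindBuffer(GL_DRAW_INDIRECT_BUFFER, draw_cmd_buf);
                  glBufferData(GL_DRAW_INDIRECT_BUFFER,
                               num_draws * sizeof(DrawArraysIndirectCommand),
                               NULL, GL_DYNAMIC_DRAW);

                  /* ...a compute shader or earlier GPU pass writes the commands into draw_cmd_buf... */

                  /* One CPU call; the GPU reads count/first/etc. from its own buffer. */
                  glMultiDrawArraysIndirect(GL_TRIANGLES, 0, num_draws, 0);
              }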

              Comment


              • #8
                @alazar

                That is generally how Phoronix articles work: no meaningful or insightful information in the article itself, mostly links to other people's commentary about it. This one is unique because it doesn't have a ton of links to other Phoronix articles. It is also different in that it doesn't have a stupid name for the title, something like "HURD gets some love", nor is it basically taking a non-news event and blowing it out of proportion only to have another article come out a few hours later stating what everyone in the comments of the previous article already explained. So if you want to use Phoronix effectively, just click the link on the main page for the comments; that is where you will get the most meaningful information.

                Comment


                • #9
                  A small addition to my unlikely wishes:

                  - get rid of the concept of intents
                  The GPU is supposed to execute instructions, not ambiguous wishes.
                  Programmers should write clear instructions, not have all kinds of semi-useless intents that may or may not be wrong.
                  The GPU should be able to go over the code while compiling and see what the intent is, instead of guessing.
                  The intents are useless bloat and should be replaced with clear code describing what to do, instead of half-assed hinting.
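                  (Editorial sketch, assuming "intents" refers to GL's hint-style APIs such as glHint and the buffer usage flags; the helper name is made up.)

                  #include <GL/glew.h>   /* assumes a current GL context and loader */

                  /* The existing "hint" style: the application states a wish and the
                     driver is free to honour it, ignore it, or guess wrong. */
                  void set_some_hints(GLsizeiptr size, const void *data)
                  {
                      glHint(GL_LINE_SMOOTH_HINT, GL_NICEST);       /* "make smoothed lines as nice as you can" */
                      glBufferData(GL_ARRAY_BUFFER, size, data,
                                   GL_DYNAMIC_DRAW);                /* the usage flag is also only a hint to the driver */
                  }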

                  Comment


                  • #10
                    Originally posted by plonoma View Post
                    A small addition to my unlikely wishes:

                    - get rid of the concept of intents
                    The GPU is supposed to execute instructions, not ambiguous wishes.
                    Programmers should write clear instructions, not have all kinds of semi-useless intents that may or may not be wrong.
                    The GPU should be able to go over the code while compiling and see what the intent is, instead of guessing.
                    The intents are useless bloat and should be replaced with clear code describing what to do, instead of half-assed hinting.
                    Just for the sake of argument, can you explain what you mean? Because the GPU very much does execute clearly defined instructions with very clear results.

                    Comment


                    • #11
                      I bet shader bytecode is added

                      Valve and some others want a shader bytecode rather than having to ship all their GLSL shaders directly.

                      Comment


                      • #12
                        Originally posted by smitty3268 View Post
                        Valve and some others want a shader bytecode rather than having to ship all their GLSL shaders directly.
                        Explain please. And links if possible.
                        "shader bytecode" == precompiled shaders?
                        "shaders directly" == shaders source code?

                        Comment


                        • #13
                          Bindless graphics and everything else included in the specification are a step in the right direction, but what we really need to take a leap forward is a general low-level GPU shader language, a kind of superset of GLSL and CUDA, that lets the programmer code the graphics pipeline. This would eliminate most of the churn in OpenGL extensions (except shader changes) and leave it up to the developer to utilize the GPU features.

                          Lowering the CPU overhead is only going to get us so far; what we really need is to move control of the GPU away from the CPU and implement it in the shaders directly.
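                          (Editorial sketch, assuming "bindless graphics" means GL_ARB_bindless_texture-style handles; the helper name is made up.)

                          #include <GL/glew.h>   /* assumes a loader exposing GL_ARB_bindless_texture */

                          /* A texture is referenced by a 64-bit handle instead of a bound
                             texture unit, so the CPU no longer juggles bind points per draw. */
                          void use_texture_bindlessly(GLuint texture, GLint handle_uniform_location)
                          {
                              GLuint64 handle = glGetTextureHandleARB(texture);  /* query once */
                              glMakeTextureHandleResidentARB(handle);            /* make it shader-visible */

                              /* Shader side: layout(bindless_sampler) uniform sampler2D tex; */
                              glUniformHandleui64ARB(handle_uniform_location, handle);
                          }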

                          Comment


                          • #14
                            Bindless textures and buffer streaming are the way of the future.

                            edit:
                            OpenGL badly needs a standardized compiled GLSL format, for a variety of reasons. program_binary is NOT this, but it's still nice.
                            Last edited by peppercats; 05-03-2014, 07:53 PM.
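                            (Editorial sketch of what program_binary, i.e. GL_ARB_get_program_binary (core since GL 4.1), gives you today: a driver-specific blob that can be cached, but not a portable bytecode. The helper name is made up.)

                            #include <GL/glew.h>   /* assumes a GL 4.1+ context and loader */
                            #include <stdlib.h>

                            /* Cache a linked program's binary and reload it; the blob is only
                               valid on the same driver/GPU that produced it, which is exactly
                               why it is not a standardized compiled GLSL format. */
                            void cache_and_reload_program(GLuint program)
                            {
                                GLint len = 0;
                                glGetProgramiv(program, GL_PROGRAM_BINARY_LENGTH, &len);

                                void   *blob    = malloc(len);
                                GLenum  format  = 0;
                                GLsizei written = 0;
                                glGetProgramBinary(program, len, &written, &format, blob);

                                /* ...write blob + format to disk; read them back on a later run... */

                                GLuint reloaded = glCreateProgram();
                                glProgramBinary(reloaded, format, blob, written);
                                free(blob);
                            }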

                            Comment


                            • #15
                              Originally posted by efikkan View Post
                              Bindless graphics and everything else included in the specification are a step in the right direction, but what we really need to take a leap forward is a general low-level GPU shader language, a kind of superset of GLSL and CUDA, that lets the programmer code the graphics pipeline. This would eliminate most of the churn in OpenGL extensions (except shader changes) and leave it up to the developer to utilize the GPU features.

                              Lowering the CPU overhead is only going to get us so far; what we really need is to move control of the GPU away from the CPU and implement it in the shaders directly.
                              Not a shot in hell of CUDA getting sprinkled into OpenGL. That's the job of CUDA's competition, OpenCL.

                              Almost every one of these ideas targets either the current latest-gen GPGPUs or a future generation. It seems a waste, unless they hold off on OpenGL 5 for another 12 months so that the next generation of hardware arrives before OpenGL 5 is released.

                              Comment
