Khronos Publishes Its Slides About OpenGL-Next

  • #41
    Originally posted by TheSoulz View Post
    And why is that?
    Is there any reason why the new OpenGL can't be added to the renderer?
    I don't think so.
    Time and effort to re-code and re-certify the SW, for no advantage whatsoever?

    Comment


    • #42
      sub-committee

      Originally posted by liam View Post
      This is the one that had me scratching my head, proverbially speaking.
      Khronos is a consortium. They are nothing if not a committee surrounded by PR.
      Consortiums aren't necessarily bad (Linaro is one), but for a committee to come out and say THEIR next API won't be designed by committee seems oxymoronic, especially with all the names attached to this effort.
      Perhaps the intent was to communicate that part of the design requirements is to avoid the pitfalls of design-by-committee? That's great, but how does a committee make that happen?
      By splitting off a sub-committee

      Comment


      • #43
        Originally posted by justmy2cents View Post
        Now imagine a new GL API worked out in detail by as many members as Khronos has.
        I agree with you, and one of the head-scratchers is that getting the most out of modern GPUs requires an understanding of their architectures. A low-level yet abstract API seems a bit of a contradiction. How are they going to make an API that works well for all the different GPU architectures out there? What does an Intel IGP have in common with Maxwell? And how will it be optimized for as-yet-unreleased architectures, like Pascal or whatever?

        Even if you look at something like CUDA: to get maximum performance on each architecture, your source has to account for that architecture and be updated accordingly. After all, Kepler has certain features that Fermi doesn't, not to mention different SM sizes, etc.

        This "low-level" talk is always coming from the context of having one piece of target hardware (Xbox or PS, etc.). I'm still not convinced that this is going to work for multiple architectures across multiple vendors on multiple form factors (desktop, mobile, etc.).

        Comment


        • #44
          As far as design-by-committee goes... I imagine all the big market players, the ones who eventually support it and ship it into our homes, might not agree on a certain way of doing things, and that top-level committee keeps things in check: quality control, planning development stages and such. If I remember correctly, last time they simply couldn't decide how to go about certain things when rewriting OpenGL at 3.0, and because of that they just stayed the course. Now there's enough push and momentum for it to actually happen.

          Because those big market players have a say, they will also be more likely to support or even push it.
          Last edited by profoundWHALE; 21 August 2014, 10:09 AM.

          Comment


          • #45
            Originally posted by johnc View Post
            I agree with you, and one of the head-scratchers is that getting the most out of modern GPUs requires an understanding of their architectures. A low-level yet abstract API seems a bit of a contradiction. How are they going to make an API that works well for all the different GPU architectures out there? What does an Intel IGP have in common with Maxwell? And how will it be optimized for as-yet-unreleased architectures, like Pascal or whatever?
            When they say low-level, it's not so much the vendor-specific hardware state or ISA that they're talking about; it's more a matter of giving applications more control over memory management and draw scheduling.
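
            To make that concrete with what already ships today: OpenGL 4.4's persistent-mapped buffers hand memory management to the application, and indirect multi-draw hands it draw scheduling. A minimal sketch, assuming a current GL 4.4 context and an initialized loader (e.g. GLEW), with index and indirect-command buffers filled elsewhere:

            Code:
            /* Sketch, not a complete program.  In real code the allocation
             * happens once at startup and the draw call once per frame. */
            #include <GL/glew.h>

            void *alloc_persistent_vertex_memory(GLuint vbo, GLsizeiptr size)
            {
                /* Explicit memory management: immutable storage, mapped once
                 * and kept mapped; the app writes through the pointer while
                 * the GPU reads from the same memory. */
                GLbitfield flags = GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT |
                                   GL_MAP_COHERENT_BIT;
                glBindBuffer(GL_ARRAY_BUFFER, vbo);
                glBufferStorage(GL_ARRAY_BUFFER, size, NULL, flags);
                return glMapBufferRange(GL_ARRAY_BUFFER, 0, size, flags);
            }

            void submit_draws(GLuint indirect_buf, GLsizei num_draws)
            {
                /* Explicit draw scheduling: the commands live in a GPU buffer
                 * and a single call submits the whole batch. */
                glBindBuffer(GL_DRAW_INDIRECT_BUFFER, indirect_buf);
                glMultiDrawElementsIndirect(GL_TRIANGLES, GL_UNSIGNED_INT,
                                            (const void *)0, num_draws, 0);
            }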

            Comment


            • #46
              OpenGL "Next" is going to be extremely important, and since it will probably be the only major compatibility breakage for many years we need to do it right, otherwise OpenGL might become "irrelevant".

              The previous revolution in GPU technology consisted of moving some functionality from API calls into the shading language, which allows us to program parts of the GPU pipeline directly, including texturing, vertex manipulation, tessellation and so on. This has allowed us to create lots of amazing effects like water surfaces, bump mapping, etc., but a problem still remains: pipeline stages are still too rigid, and using large numbers of objects still requires lots of API calls.

              Even in OpenGL 4.x and Direct3D 11 there is a large performance difference between rendering one big static mesh and many small dynamic ones. Animation and manipulation are still very expensive and limited. Today's GPUs are capable of rendering millions of polygons at a high framerate, but we never see any games come close to that, since the API cost of manipulating so many detailed objects is simply too high. Even if we virtually eliminated the overhead, it would still be hard to issue enough API calls to keep the GPU busy.
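
              As a sketch of that per-draw cost (GL 3.3+ context, VAO and shader bound; model_loc, models, index_count and num_objects are app-defined names invented here): the naive loop pays driver overhead per object, while instancing pays it once.

              Code:
              #include <GL/glew.h>

              void draw_naive(GLint model_loc, const GLfloat (*models)[16],
                              GLsizei index_count, int num_objects)
              {
                  /* One API round-trip (plus a uniform update) per object:
                   * driver overhead scales with object count. */
                  for (int i = 0; i < num_objects; ++i) {
                      glUniformMatrix4fv(model_loc, 1, GL_FALSE, models[i]);
                      glDrawElements(GL_TRIANGLES, index_count, GL_UNSIGNED_INT, 0);
                  }
              }

              void draw_instanced(GLsizei index_count, GLsizei num_objects)
              {
                  /* Per-object data moves into a buffer the vertex shader
                   * indexes via gl_InstanceID; the API cost is paid once. */
                  glDrawElementsInstanced(GL_TRIANGLES, index_count,
                                          GL_UNSIGNED_INT, 0, num_objects);
              }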

              The logical solution to this is to expand the "shader" programs into a general low-level GPU programming language (C-style, not asm), expanding on the ideas of CUDA. This would allow the programmer to design most of the pipeline, with far more flexibility to do culling, vertex creation, physics interaction, etc. directly within the shader program. It would simplify driver development, and would even allow for a legacy OpenGL implementation on top of it (kind of like we already do in OpenGL with the fixed-function pipeline). Anyone could then add their preferred abstraction on top of it, or use the low-level shading directly. Programming on the CPU and GPU would then be more similar and seamless, unlike today's rigid GPU programming with thousands of different API calls for communication. In addition, there would be no need for separate "OpenGL", "OpenCL", etc. A low-level GPU language would allow us to do compute, graphics, heck even audio processing in theory.
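
              A taste of this direction already exists in GL 4.3 compute shaders. As a hedged sketch (program and buffer setup assumed; visible() is a placeholder for the app's own frustum test): a compute pass culls objects and writes the surviving indirect draw commands on the GPU, so the CPU never touches them.

              Code:
              /* Sketch: GPU-side culling writing indirect draw commands
               * (GL 4.3+).  cull_program is assumed compiled from the GLSL
               * below; command buffers are assumed created and bound. */
              #include <GL/glew.h>

              static const char *cull_src =
                  "#version 430\n"
                  "layout(local_size_x = 64) in;\n"
                  "struct DrawCmd { uint count, instanceCount, firstIndex,\n"
                  "                 baseVertex, baseInstance; };\n"
                  "layout(std430, binding = 0) buffer Cmds { DrawCmd cmd[]; };\n"
                  "bool visible(uint i) { return true; } /* placeholder test */\n"
                  "void main() {\n"
                  "    uint i = gl_GlobalInvocationID.x;\n"
                  "    cmd[i].instanceCount = visible(i) ? 1u : 0u;\n"
                  "}\n";

              void cull_and_draw(GLuint cull_program, GLuint num_objects)
              {
                  glUseProgram(cull_program);
                  glDispatchCompute((num_objects + 63) / 64, 1, 1);
                  glMemoryBarrier(GL_COMMAND_BARRIER_BIT); /* commands ready */
                  glMultiDrawElementsIndirect(GL_TRIANGLES, GL_UNSIGNED_INT,
                                              (const void *)0, num_objects, 0);
              }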
              Last edited by efikkan; 21 August 2014, 10:58 AM.

              Comment


              • #47
                Originally posted by perpetualrabbit View Post
                Originally posted by liam
                Perhaps the intent was to communicate that part of the design reqs are to avoid the pitfalls of Design-by-Committee? That's great, but how does a committee make that happen?
                By splitting off a sub-committee
                Or maybe, as has happened many times in other contexts, by picking an existing implementation and building on it (if not adopting it as-is) rather than really starting from scratch...

                Comment


                • #48
                  Originally posted by sarmad View Post
                  Glad to see all those companies involved. The last thing we need is competition at the API level. I'm quite surprised Apple is on the list. I guess they realize that if their OS doesn't support a standard API, it will have little chance of getting apps ported to it.
                  Ever heard of OpenCL? Maybe you want to look into that...

                  Comment


                  • #49
                    Originally posted by efikkan View Post
                    OpenGL "Next" is going to be extremely important, and since it will probably be the only major compatibility breakage for many years we need to do it right, otherwise OpenGL might become "irrelevant".
                    Sorry, but... by "we", are you implying you have insider knowledge?

                    Also, wouldn't moving everything into shaders cause a whole lot of other problems? A shader running on the GPU wouldn't be able to interact with devices or filesystems to load resources, for example. Some form of standard API (not all of the functionality) is still required, as I see it.
                    Last edited by justmy2cents; 23 August 2014, 09:11 PM.

                    Comment


                    • #50
                      Originally posted by johnc View Post
                      Developers won't have to deal with that kind of mess in DX; score another point for DX.
                      There's already a mess within DX itself, with DX9 vs. DX11. It also isn't portable. Double fail.

                      Comment
