Next-Gen OpenGL To Be Announced Next Month


  • Next-Gen OpenGL To Be Announced Next Month

    Phoronix: Next-Gen OpenGL To Be Announced Next Month

    The Khronos Group has shared details about their BoF sessions to be hosted next month during SIGGRAPH and it includes detailing the next-generation OpenGL / OpenGL ES specifications...

    http://www.phoronix.com/vr.php?view=MTc0MjA

  • #2
    Arguably, Khronos may release *just* minor improvements.

    First, we got the Android Extension Pack. If OpenGL ES 4.0 were around the corner, Google would not have made their bundle, right? (But they did point everybody to their SIGGRAPH booth about next-gen mobile graphics...)

    Second, do we have any new hardware in GPUs? Seriously, something major would require new hardware (AMD is not there yet, Nvidia won't agree with AMD on what's important, and Intel is still playing catch-up, with its own unique needs and plans).

    Third, AZDO seems like an almost-ready package. What we need is explicit caching, an explicit "separate-thread-please-I'm-forwarding-this-in-advance-only" mode in the driver/compiler/linker, and maybe some agreed-upon common preprocessor for shaders. Nothing requiring new hardware (and thus a new major OpenGL).

    And yes, PR may still slap a big fat 5.0 on it, just for fun.

    Any other speculations?
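
    To make the AZDO point concrete, here is a minimal sketch of its persistent-mapped-buffer piece (ARB_buffer_storage, GL 4.4), assuming a context and a loader such as GLEW are already set up; the buffer size and names are purely illustrative, not a definitive implementation.

    Code:
    /* Persistent-mapped buffer, the core AZDO trick: allocate immutable
     * storage once, map it once, and keep writing into it instead of
     * calling glBufferSubData every frame. GL 4.4 / ARB_buffer_storage. */
    #include <GL/glew.h>
    #include <string.h>

    #define FRAME_BYTES (1 << 20)   /* 1 MiB of per-frame data, arbitrary */

    static GLuint buf;
    static void  *mapped;           /* stays mapped for the app's lifetime */

    void init_persistent_buffer(void)
    {
        const GLbitfield flags = GL_MAP_WRITE_BIT |
                                 GL_MAP_PERSISTENT_BIT |
                                 GL_MAP_COHERENT_BIT;

        glGenBuffers(1, &buf);
        glBindBuffer(GL_ARRAY_BUFFER, buf);
        /* Immutable storage: size and placement are fixed up front. */
        glBufferStorage(GL_ARRAY_BUFFER, FRAME_BYTES, NULL, flags);
        mapped = glMapBufferRange(GL_ARRAY_BUFFER, 0, FRAME_BYTES, flags);
    }

    void write_frame_data(const void *src, size_t len)
    {
        /* CPU writes land directly in GPU-visible memory; real code would
         * fence (glFenceSync) so it never overwrites data still in flight. */
        memcpy(mapped, src, len);
    }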

    • #3
      I still can't get a clear answer - do current DX11/OGL4.x cards support DX12 and OGL5? And I still don't understand why this needs a new specification - why can't older versions simply be modified to increase efficiency? The way I see it, there isn't going to be some magic function or variable you define where suddenly CPU efficiency increases; the performance increase, from what I gather, comes from the way the instructions are interpreted and carried out. Python is a good example of this - regular CPython has relatively poor performance, but you can take the EXACT same code and run it under a different interpreter like Jython or PyPy and performance dramatically increases. I understand OpenGL doesn't work the same way as Python; I'm just wondering why exactly this isn't as simple as I think it is.

      • #4
        And AMD still hasn't open-sourced Mantle yet.
        Now that OpenGL 5 with low overhead is coming, perhaps Mantle will be obsolete.

        • #5
          Originally posted by przemoli View Post
          Arguably, Khronos may release *just* minor improvements.

          First, we got the Android Extension Pack. If OpenGL ES 4.0 were around the corner, Google would not have made their bundle, right? (But they did point everybody to their SIGGRAPH booth about next-gen mobile graphics...)

          Second, do we have any new hardware in GPUs? Seriously, something major would require new hardware (AMD is not there yet, Nvidia won't agree with AMD on what's important, and Intel is still playing catch-up, with its own unique needs and plans).

          Third, AZDO seems like an almost-ready package. What we need is explicit caching, an explicit "separate-thread-please-I'm-forwarding-this-in-advance-only" mode in the driver/compiler/linker, and maybe some agreed-upon common preprocessor for shaders. Nothing requiring new hardware (and thus a new major OpenGL).

          And yes, PR may still slap a big fat 5.0 on it, just for fun.

          Any other speculations?
          Yeah, I don't see GL5 coming. They'll probably just run the Nvidia AZDO demo and call it a day.

          • #6
            Originally posted by schmidtbag View Post
            I still can't get a clear answer - do current DX11/OGL4.x cards support DX12 and OGL5? And I still don't understand why this needs a new specification - why can't older versions simply be modified to increase efficiency? The way I see it, there isn't going to be some magic function or variable you define where suddenly CPU efficiency increases; the performance increase, from what I gather, comes from the way the instructions are interpreted and carried out. Python is a good example of this - regular CPython has relatively poor performance, but you can take the EXACT same code and run it under a different interpreter like Jython or PyPy and performance dramatically increases. I understand OpenGL doesn't work the same way as Python; I'm just wondering why exactly this isn't as simple as I think it is.
            The problem is not that the drivers are sending millions of additional opcodes, thrashing the command dispatchers, or failing to use certain uber-optimised paths - nothing like that.

            The actual problem is hardware bandwidth/latency: even though PCIe is really fast, it is not that fast, so every upload to GPU RAM hurts a lot. The efficiency focus is on a standard way to remove the upload process as much as possible and keep data inside GPU RAM, saving PCIe round trips between CPU and GPU. Of course, this will increase GPU RAM usage (you can't have it both ways) and start-up times (you have to upload more data up front to avoid multiple serial uploads). For example:

            Current OpenGL/DX game: upload TextureA, wait for the upload (hurts and hurts and hurts), process, upload TextureB, wait for the upload (hurts and hurts and hurts), process, render.

            Next-gen OpenGL/DX game: upload textures A, B, C, ..., N to buffers A, B, C, ..., N (only once per scene), reference buffer A, process, reference buffer B, process, render.

            Of course, many more factors need to work that way; the example is just a very bastardised way to show part of the problem.
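
            A rough sketch of that second pattern in today's GL terms, using a plain texture array so everything is uploaded once at scene load and draws only pass an index afterwards (sizes and names are made up for illustration):

            Code:
            /* Upload once per scene, then only reference: all textures go
             * into one GL_TEXTURE_2D_ARRAY at load time, so per-draw work
             * is just picking a layer - no further PCIe uploads. Needs
             * GL 4.2 / ARB_texture_storage. */
            #include <GL/glew.h>

            #define TEX_W     1024
            #define TEX_H     1024
            #define NUM_TEXES 64

            static GLuint scene_textures;

            void upload_scene_textures(const void *pixels[NUM_TEXES])
            {
                glGenTextures(1, &scene_textures);
                glBindTexture(GL_TEXTURE_2D_ARRAY, scene_textures);
                glTexStorage3D(GL_TEXTURE_2D_ARRAY, 1, GL_RGBA8, TEX_W, TEX_H, NUM_TEXES);
                for (int i = 0; i < NUM_TEXES; ++i)
                    glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0, 0, 0, i, TEX_W, TEX_H, 1,
                                    GL_RGBA, GL_UNSIGNED_BYTE, pixels[i]);
            }

            void draw_with_texture(GLint layer_uniform, int layer, GLsizei index_count)
            {
                /* The shader samples a sampler2DArray at this layer. */
                glUniform1i(layer_uniform, layer);
                glDrawElements(GL_TRIANGLES, index_count, GL_UNSIGNED_INT, 0);
            }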

            • #7
              Originally posted by schmidtbag View Post
              I still can't get a clear answer - do current DX11/OGL4.x cards support DX12 and OGL5? And I still don't understand why this needs a new specification - why can't older versions simply be modified to increase efficiency? The way I see it, there isn't going to be some magic function or variable you define where suddenly CPU efficiency increases; the performance increase, from what I gather, comes from the way the instructions are interpreted and carried out. Python is a good example of this - regular CPython has relatively poor performance, but you can take the EXACT same code and run it under a different interpreter like Jython or PyPy and performance dramatically increases. I understand OpenGL doesn't work the same way as Python; I'm just wondering why exactly this isn't as simple as I think it is.
              Current DX/GL drivers do a lot of input validation and take care of resource management (allocation, buffering, ...). Assuming DX12 will be like Mantle, this validation and resource management will go away and become the game engine's responsibility, allowing higher performance because the game knows better what to do and when.
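
              One flavour of that shift can already be approximated today: instead of one driver-managed buffer per mesh, the engine grabs a single big allocation and sub-allocates offsets itself. A hedged sketch (plain GL 4.4; the pool size, names, and the bump-allocator policy are all illustrative):

              Code:
              /* Engine-side resource management: one big immutable buffer,
               * sub-allocated by the engine with a trivial bump allocator,
               * instead of one driver-tracked buffer object per mesh. */
              #include <GL/glew.h>

              #define POOL_BYTES (64 << 20)       /* 64 MiB vertex pool, arbitrary */

              static GLuint     pool_buf;
              static GLsizeiptr pool_used;

              void pool_init(void)
              {
                  glGenBuffers(1, &pool_buf);
                  glBindBuffer(GL_ARRAY_BUFFER, pool_buf);
                  glBufferStorage(GL_ARRAY_BUFFER, POOL_BYTES, NULL,
                                  GL_DYNAMIC_STORAGE_BIT);
                  pool_used = 0;
              }

              /* The engine, not the driver, decides where a mesh lives and whether it fits. */
              GLintptr pool_alloc(const void *data, GLsizeiptr size)
              {
                  if (pool_used + size > POOL_BYTES)
                      return -1;                  /* engine's own out-of-memory policy */
                  GLintptr offset = pool_used;
                  glBufferSubData(GL_ARRAY_BUFFER, offset, size, data);
                  pool_used += size;
                  return offset;                  /* later draws reference this offset */
              }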

              • #8
                log0 covered the other part of the answer.

                • #9
                  Originally posted by schmidtbag View Post
                  I still can't get a clear answer - do current DX11/OGL4.x cards support DX12 and OGL5? And I still don't understand why this needs a new specification - why can't older versions simply be modified to increase efficiency? The way I see it, there isn't going to be some magic function or variable you define where suddenly CPU efficiency increases; the performance increase, from what I gather, comes from the way the instructions are interpreted and carried out. Python is a good example of this - regular CPython has relatively poor performance, but you can take the EXACT same code and run it under a different interpreter like Jython or PyPy and performance dramatically increases. I understand OpenGL doesn't work the same way as Python; I'm just wondering why exactly this isn't as simple as I think it is.
                  I don't know OpenGL myself, so I won't talk about that. But the Python example you mentioned explains exactly why you might need a new specification rather than just a new driver/engine/compiler/etc. Your Python code can get a dramatic performance increase just by switching to Jython, but it will still not be anywhere near C performance, because C has static types and lets you handle your memory efficiently, while Python is bound by the garbage collector and dynamic types, which are by design slower than manual memory management and static types no matter what compiler you use.
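
                  To put the static-types point in code: a loop like the one below compiles to a handful of machine instructions because every element's type and layout is known at compile time, whereas a dynamically typed runtime has to box each number and dispatch '+' at run time, which no interpreter swap can fully remove. (Just an illustrative fragment, not from any real engine.)

                  Code:
                  /* Static types, contiguous memory, no boxing: the compiler
                   * knows each element is a plain int, so this is just loads
                   * and adds (and typically vectorises). */
                  #include <stddef.h>

                  long sum_ints(const int *a, size_t n)
                  {
                      long total = 0;
                      for (size_t i = 0; i < n; ++i)
                          total += a[i];          /* no per-element type check */
                      return total;
                  }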

                  • #10
                    Thanks for the info everyone, the explanations helped.

                    • #11
                      Another new standard to not be used or implemented in any meaningful way.

                      • #12
                        It would be a special kind of fail to release ES4 when ES3 is practically nonexistent on user devices.

                        Khronos, don't fuck it up, make it so ES3 devices will run ES4.

                        • #13
                          Originally posted by curaga View Post
                          It would be a special kind of fail to release ES4 when ES3 is practically nonexistent on user devices.

                          Khronos, don't fuck it up, make it so ES3 devices will run ES4.
                          Look at the example of DX11, DX10, and DX10.1 - almost no games with DX10 on the market.

                          • #14
                            Originally posted by log0 View Post
                            Current DX/GL drivers do a lot of input validation and take care of resource management (allocation, buffering, ...). Assuming DX12 will be like Mantle, this validation and resource management will go away and become the game engine's responsibility, allowing higher performance because the game knows better what to do and when.
                            Free memory access for everyone everywhere, hooray! The input validation is there for a good reason. In the best case, you just hang the hardware and have to reset. Worse, the application gets access to regions it's not allowed to touch.
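
                            As a concrete illustration of what gets skipped, here is roughly the sort of per-draw range check a driver performs on the application's behalf; this is a hypothetical helper for illustration, not an actual driver function:

                            Code:
                            /* The kind of bounds check a GL/DX11 driver does
                             * before a draw: do the indices the call will read
                             * actually fit inside the bound index buffer? In a
                             * low-overhead API this per-draw check goes away,
                             * so a bad count reads past the allocation.
                             * Hypothetical helper, purely for illustration. */
                            #include <stdbool.h>
                            #include <stddef.h>

                            bool draw_range_is_valid(size_t index_count,
                                                     size_t index_size,
                                                     size_t index_buffer_bytes)
                            {
                                if (index_size == 0)
                                    return false;
                                /* overflow-safe: count * size <= buffer size */
                                return index_count <= index_buffer_bytes / index_size;
                            }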

                            • #15
                              Originally posted by rikkinho View Post
                              Look at the example of DX11, DX10, and DX10.1 - almost no games with DX10 on the market.
                              You mean DX10.1, right? IIRC it's because AMD was the first one to support it by a long shot. There were a few games that implemented it, and even some that removed the capability because Nvidia paid them off, as DX10.1 made the games run noticeably faster than DX10.
