
Thread: Next-Gen OpenGL To Be Announced Next Month

  1. #1
    Join Date
    Jan 2007
    Posts
    15,389

    Default Next-Gen OpenGL To Be Announced Next Month

    Phoronix: Next-Gen OpenGL To Be Announced Next Month

    The Khronos Group has shared details about their BoF sessions to be hosted next month during SIGGRAPH, and they include detailing the next-generation OpenGL / OpenGL ES specifications...

    http://www.phoronix.com/vr.php?view=MTc0MjA

  2. #2
    Join Date
    Sep 2010
    Posts
    716

    Default

    Arguably Khronos may release *just* minor improvements.

    First, we got the Android Extension Pack. Meaning if OpenGL ES 4.0 were around the corner, Google would not have made their own bundle, right? (But they did point everybody to their SIGGRAPH booth about next-gen mobile graphics...)

    Second, do we have any new hardware in GPUs? Seriously. Something major would require new hardware (AMD is not there yet, Nvidia won't agree with AMD on what's important, and Intel is still playing catch-up, with its own unique needs and plans).

    Third, AZDO seems like an almost-ready package. What we need is explicit caching, plus an explicit "compile this on a separate thread, I'm only forwarding it in advance" path in the driver/compiler/linker. Maybe some agreed-upon common preprocessor for shaders. Nothing requiring new hardware (and thus no new major OpenGL).
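    For reference, here is a minimal sketch of the AZDO side of that (persistent mapped buffers plus multi-draw indirect). It assumes a GL 4.4 context and a loader such as glad; names like cmd_buf and NUM_DRAWS are just illustrative, and shader/VAO setup is omitted.

    Code:
    /* AZDO-style sketch: fill one indirect-command buffer through a
     * persistent mapping, then submit every draw with a single call. */
    #include <glad/glad.h>
    #include <string.h>

    /* Layout required by glMultiDrawArraysIndirect. */
    typedef struct {
        GLuint count;
        GLuint instanceCount;
        GLuint first;
        GLuint baseInstance;
    } DrawArraysIndirectCommand;

    #define NUM_DRAWS 1024   /* illustrative draw count */

    void submit_scene(const DrawArraysIndirectCommand *cmds)
    {
        GLuint cmd_buf;
        glGenBuffers(1, &cmd_buf);
        glBindBuffer(GL_DRAW_INDIRECT_BUFFER, cmd_buf);

        /* Immutable, persistently mapped storage: the CPU writes draw
         * parameters straight into GPU-visible memory, with no per-frame
         * glBufferSubData round trips and no driver-side re-validation. */
        GLbitfield flags = GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT |
                           GL_MAP_COHERENT_BIT;
        GLsizeiptr size = NUM_DRAWS * sizeof(DrawArraysIndirectCommand);
        glBufferStorage(GL_DRAW_INDIRECT_BUFFER, size, NULL, flags);

        void *ptr = glMapBufferRange(GL_DRAW_INDIRECT_BUFFER, 0, size, flags);
        memcpy(ptr, cmds, size);

        /* One call submits all the draws; the driver drops out of the
         * per-object inner loop (assumes a VAO and program are bound). */
        glMultiDrawArraysIndirect(GL_TRIANGLES, 0, NUM_DRAWS, 0);
    }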

    And yes, PR may still slap a big fat 5.0 on it, just for fun.

    Any other speculations?

  3. #3
    Join Date
    Dec 2010
    Location
    MA, USA
    Posts
    1,440

    Default

    I still can't get a clear answer - do current DX11/OGL4.x cards support DX12 and OGL5? And I still don't understand why this needs a new specification - why can't older versions simply be modified to increase efficiency? The way I see it, there isn't going to be some magic function or variable you define where suddenly CPU efficiency increases. The performance increase, from what I gather, comes from the way the instructions are interpreted and carried out. Python is a good example of this: regular CPython has relatively poor performance, but you can take the EXACT same code, run it under a different interpreter like Jython or PyPy, and performance dramatically increases. I understand OpenGL doesn't work the same way as Python; I'm just wondering why exactly this isn't as simple as I think it is.

  4. #4
    Join Date
    Dec 2011
    Posts
    2,153

    Default

    And AMD still hasn't open-sourced Mantle yet.
    Now, once a low-overhead OpenGL 5 comes, perhaps Mantle will be obsolete.

  5. #5
    Join Date
    Jul 2010
    Posts
    520

    Default

    Quote Originally Posted by przemoli View Post
    Arguably Khronos may release *just* minor improvements.

    First, we got the Android Extension Pack. Meaning if OpenGL ES 4.0 were around the corner, Google would not have made their own bundle, right? (But they did point everybody to their SIGGRAPH booth about next-gen mobile graphics...)

    Second, do we have any new hardware in GPUs? Seriously. Something major would require new hardware (AMD is not there yet, Nvidia won't agree with AMD on what's important, and Intel is still playing catch-up, with its own unique needs and plans).

    Third, AZDO seems like an almost-ready package. What we need is explicit caching, plus an explicit "compile this on a separate thread, I'm only forwarding it in advance" path in the driver/compiler/linker. Maybe some agreed-upon common preprocessor for shaders. Nothing requiring new hardware (and thus no new major OpenGL).

    And yes, PR may still slap a big fat 5.0 on it, just for fun.

    Any other speculations?
    Yeah, I don't see GL5 coming. They'll probably just run the Nvidia AZDO demo and call it a day.

  6. #6
    Join Date
    Jun 2009
    Posts
    1,188

    Default

    Quote Originally Posted by schmidtbag View Post
    I still can't get a clear answer - do current DX11/OGL4.x cards support DX12 and OGL5? And I still don't understand why this needs a new specification - why can't older versions simply be modified to increase efficiency? The way I see it, there isn't going to be some magic function or variable you define where suddenly CPU efficiency increases. The performance increase, from what I gather, comes from the way the instructions are interpreted and carried out. Python is a good example of this: regular CPython has relatively poor performance, but you can take the EXACT same code, run it under a different interpreter like Jython or PyPy, and performance dramatically increases. I understand OpenGL doesn't work the same way as Python; I'm just wondering why exactly this isn't as simple as I think it is.
    The problem is not that the drivers are sending millions of additional opcodes, thrashing the command dispatchers, or failing to use certain uber-optimised paths - nothing like that.

    The actual problem is hardware bandwidth/latency. Even though PCIe is really fast, it is not that fast, so every upload to GPU RAM hurts a lot. The efficiency focus is therefore a standard way to remove the upload process as much as possible and keep data inside GPU RAM, saving PCIe trips to the CPU and back. Of course this will increase GPU RAM usage (you can't have it both ways) and start-up times (you have to upload more data up front to avoid multiple serial uploads). For example:

    Current OpenGL/DX game: upload texture A, wait for the upload (hurts and hurts and hurts), process, upload texture B, wait for the upload (hurts and hurts and hurts), process, render.

    Next-gen OpenGL/DX game: upload textures A, B, C, ... N to buffers A, B, C, ... N (only once per scene), reference buffer A, process, reference buffer B, process, render.

    Of course many more factors have to work that way; the example is just a very bastardised way to show part of the problem.
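    As a rough sketch of that "upload once, then only reference" pattern (assuming a GL 4.2+ context, a loader such as glad, and illustrative names such as scene_tex and u_layer; a texture array is just one way to keep everything resident):

    Code:
    /* Upload every scene texture once into a resident texture array,
     * then per frame only tell the shader which layer to sample. */
    #include <glad/glad.h>

    static GLuint scene_tex;

    /* Scene load: one immutable allocation, all uploads happen here. */
    void upload_scene_textures(int count, int w, int h, const void **pixels)
    {
        glGenTextures(1, &scene_tex);
        glBindTexture(GL_TEXTURE_2D_ARRAY, scene_tex);
        glTexStorage3D(GL_TEXTURE_2D_ARRAY, 1, GL_RGBA8, w, h, count);
        for (int i = 0; i < count; ++i)      /* the only PCIe uploads */
            glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0, 0, 0, i, w, h, 1,
                            GL_RGBA, GL_UNSIGNED_BYTE, pixels[i]);
    }

    /* Per frame/object: no uploads, just reference what is already resident. */
    void draw_object(GLuint program, int layer, GLsizei vertex_count)
    {
        glUseProgram(program);
        glBindTexture(GL_TEXTURE_2D_ARRAY, scene_tex);
        glUniform1i(glGetUniformLocation(program, "u_layer"), layer);
        glDrawArrays(GL_TRIANGLES, 0, vertex_count);
    }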

  7. #7
    Join Date
    Jul 2010
    Posts
    520

    Default

    Quote Originally Posted by schmidtbag View Post
    I still can't get a clear answer - do current DX11/OGL4.x cards support DX12 and OGL5? And I still don't understand why this needs a new specification - why can't older versions simply be modified to increase efficiency? The way I see it, there isn't going to be some magic function or variable you define where suddenly CPU efficiency increases. The performance increase, from what I gather, comes from the way the instructions are interpreted and carried out. Python is a good example of this: regular CPython has relatively poor performance, but you can take the EXACT same code, run it under a different interpreter like Jython or PyPy, and performance dramatically increases. I understand OpenGL doesn't work the same way as Python; I'm just wondering why exactly this isn't as simple as I think it is.
    Current DX/GL drivers do a lot of input validation and take care of resource management (allocation, buffering...). Assuming DX12 will be like Mantle, this validation and resource management will go away and become the game engine's responsibility, allowing higher performance because the game knows better what to do and when.
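    To make that concrete, here is today's real GL path with the validation/ownership spelled out in comments; the "explicit API" part is only a hypothetical Mantle/DX12-style flow sketched in comments, not an actual header.

    Code:
    /* In GL today, every update goes through driver-side checks and
     * driver-owned memory management. */
    #include <glad/glad.h>

    void update_uniforms_gl(GLuint ubo, const void *data, GLsizeiptr size)
    {
        /* The driver validates the target, the buffer object, the
         * offset/size, any active mapping, etc., and it alone decides
         * where the memory lives and when to synchronize. */
        glBindBuffer(GL_UNIFORM_BUFFER, ubo);
        glBufferSubData(GL_UNIFORM_BUFFER, 0, size, data);
    }

    /* In an explicit, Mantle/DX12-style API the engine would instead:
     *   1. allocate a GPU heap itself and sub-allocate the buffer from it,
     *   2. write into persistently mapped memory with no per-call checks,
     *   3. insert its own fence/barrier, because it already knows when
     *      the GPU has finished reading.
     * Validation leaves the hot path, but correct synchronization becomes
     * the engine's problem. */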

  8. #8
    Join Date
    Jun 2009
    Posts
    1,188

    Default

    log0 covered the other part of the answer.

  9. #9
    Join Date
    Jul 2013
    Posts
    341

    Default

    Quote Originally Posted by schmidtbag View Post
    I still can't get a clear answer - do current DX11/OGL4.x cards support DX12 and OGL5? And I still don't understand why this needs a new specification - why can't older versions simply be modified to increase efficiency? The way I see it, there isn't going to be some magic function or variable you define where suddenly CPU efficiency increases. The performance increase, from what I gather, comes from the way the instructions are interpreted and carried out. Python is a good example of this: regular CPython has relatively poor performance, but you can take the EXACT same code, run it under a different interpreter like Jython or PyPy, and performance dramatically increases. I understand OpenGL doesn't work the same way as Python; I'm just wondering why exactly this isn't as simple as I think it is.
    I don't know OpenGL myself, so I won't talk about that. But the Python example you mentioned explains exactly why you might need a new specification rather than just a new driver/engine/compiler/etc. Your Python code can get a dramatic performance increase just by switching to Jython, but it will still not be anywhere near C performance, because C has static types and lets you handle your memory efficiently, while Python is bound by the garbage collector and dynamic types, which are by design slower than manual memory management and static types no matter what compiler you use.
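    A tiny C analogue of that point (nothing OpenGL-specific, just an illustration): because the types here are fixed at compile time, the compiler can emit a plain machine-code loop, and no interpreter swap gets a dynamically typed, garbage-collected language to that baseline.

    Code:
    #include <stddef.h>

    /* The compiler knows 'values' holds machine integers, so this becomes
     * a handful of instructions. A dynamic language has to box each value
     * and dispatch on its type at run time, whichever interpreter runs it. */
    long sum(const int *values, size_t n)
    {
        long total = 0;
        for (size_t i = 0; i < n; ++i)
            total += values[i];   /* plain add: no type checks, no GC */
        return total;
    }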

  10. #10
    Join Date
    Dec 2010
    Location
    MA, USA
    Posts
    1,440

    Default

    Thanks for the info everyone, the explanations helped.
