Time and effort to re-code and re-certify the SW, for no advantage whatsoever?
Originally Posted by TheSoulz
By splitting off a sub-committee
Originally Posted by liam
I agree with you and one of the head-scratchers is that modern GPUs, to get the most out of them, require an understanding of their architectures. To have a low-level, abstract API seems a bit of a contradiction. How are they going to make an API that works well for all the different GPU architectures out there? What does an Intel IGP have in common with Maxwell? And how will it be optimized for architectures as of yet unreleased, like Pascal or whatever?
Originally Posted by justmy2cents
Even if you look at something like CUDA, to get maximum performance on each architecture your source code has to account for each architecture and be updated accordingly. After all, Kepler has certain features that Fermi doesn't, not to mention different SM sizes, etc.
This "low-level" talk is always coming from the context of having one piece of target hardware (Xbox or PS, etc.). I'm still not convinced that this is going to work for multiple architectures across multiple vendors on multiple form factors (desktop, mobile, etc.).
As far as the committee goes... I imagine all the big market players, the ones who eventually support it and ship it into our homes, might not agree on a certain way of doing things, and that top-level committee keeps it in check: quality control, planning development stages and such. If I remember correctly, last time they simply couldn't decide how to go about certain things when rewriting OpenGL at 3.0, and because of that they just stayed the course. Now there's enough push and momentum behind it for it to actually happen.
Because those big market players have a say, they will also be more likely to support or even push it.
Last edited by profoundWHALE; 08-21-2014 at 10:09 AM.
When they say low-level, it's not so much the vendor-specific hw state or ISA that they are talking about; it's more a matter of giving applications more control over memory management and draw scheduling.
Originally Posted by johnc
OpenGL "Next" is going to be extremely important, and since it will probably be the only major compatibility breakage for many years we need to do it right, otherwise OpenGL might become "irrelevant".
The previous revolution in GPU technology consisted of moving functionality from API calls into the shading language, which lets us program parts of the GPU pipeline directly, including texturing, vertex manipulation, tessellation and so on. This has allowed us to create lots of amazing effects like water surfaces, bump mapping, etc., but problems still remain: pipeline stages are too rigid, and using large numbers of objects still requires lots of API calls. Even in OpenGL 4.x and Direct3D 11 there is a large performance difference between rendering one big static mesh and many small dynamic ones. Animation and manipulation are still very expensive and limited. Today's GPUs are capable of rendering millions of polygons at a high framerate, but we never see games come close to that, since the API cost of manipulating so many detailed objects is simply too high. Even if we virtually eliminated the overhead, it would still be hard to issue enough API calls to keep the GPU busy.
The logical solution to this is to expand the "shader" programs into a general low-level GPU programming language (C-style, not asm), building on the ideas of CUDA. This would let the programmer design most of the pipeline, with far more flexibility to do culling, vertex creation, physics interaction, etc. directly within the shader program. It would simplify driver development, and would even allow a legacy OpenGL implementation on top of it (much as the fixed pipeline is already implemented on top of shaders in OpenGL). Anyone could then add their preferred abstraction on top of it, or use the low-level shading directly. Programming on the CPU and GPU would then be more similar and seamless, unlike today's rigid GPU programming with thousands of different API calls for communication. There would also be no need for separate "OpenGL", "OpenCL", etc.: a low-level GPU language would let us do compute, graphics, heck, even audio processing in theory.
Last edited by efikkan; 08-21-2014 at 10:58 AM.
or maybe, as has often happened in other contexts, picking an existing implementation and building on it (if not adopting it as-is) rather than really starting from scratch...
Originally Posted by perpetualrabbit
Ever heard of OpenCL? Maybe you want to look into that...
Originally Posted by sarmad
sorry, but... by "we", do you imply you have insider knowledge?
Originally Posted by efikkan
also, wouldn't moving everything into shaders cause a whole lot of other problems? a shader running on the GPU wouldn't be able to interact with devices or filesystems to load resources, for example. some form of standard API (not all the functionality) is still required, as I see it
Last edited by justmy2cents; 08-23-2014 at 09:11 PM.
There's already a mess within DX itself: DX 9 vs. DX 11. And it isn't portable either. Double fail.
Originally Posted by johnc