Next-Gen OpenGL To Be Announced Next Month
Originally posted by blackiwid View PostThey could do both: if vendor == Nvidia, use the better full OpenGL renderer, else use the minimal one...
Comment
-
Originally posted by gamerk2 View PostThe PS3/PS4 has a low level libgcm library, which is used a lot, but even then, a lot of the higher order control is done via PSGL (essentially, OpenGL ES 2.0) for simplicity's sake.
Maybe some small casual games do, but most other games don't.
Also, PSGL is just a library on top of libgcm, so...
Comment
-
Originally posted by johnc View PostJust out of curiosity, from a developer's perspective, what does the "legacy cruft" matter? Other than running into outdated documentation on the web, when would developers ever encounter it?
Comment
-
Originally posted by gamerk2 View PostHigher level API's save a lot more than a few hundred hours. Remember how many different generations of GPUs are still out there. You have Intel IGPs, two different AMD GPU architectures (VLIW4 and GCN), several NVIDIA architectures,
But then there is the other dimension: you say there are several generations, but who cares? You start with the newest and never implement support for the older cards; five years later almost nobody still has such old hardware, and those who do already have extremely bad driver support anyway. It's not the job of graphics companies to bring top-notch driver support to 5-year-old GPUs.
And if you have nearly perfect drivers for one generation and then start supporting the next one, maintaining support for the old GPUs is far less expensive than implementing it initially.
What do we have today? A new driver for each single game that comes out. There are fewer GPUs than different games, so I doubt it's that hard to do.
I heard this argument (about why consoles bring more fps per buck) years ago, and it was exactly this argument: that there are too many different setups for such low-level APIs to support. And what do we have today? An API that supports everything from Jaguar APUs with 128 shader cores that cost 25 euro (even embedded 6 W versions, and 3.5 W tablet versions) up to 1200-euro graphics cards, no matter how much RAM you have, no matter whether you use a hard disk or an SSD, no matter which x86 CPU you combine it with. The advantage differs, and for some setups it may give no advantage at all, but it works on all of them, and if you don't combine your hardware totally stupidly you gain big fps advantages.
By the way, the ARM and x86 gaming markets have very little overlap; ARM doesn't use OpenGL, it uses OpenGL ES as its primary API. That could maybe be a middle way between the OpenGL garbage and Mantle, but only if Khronos would focus on gaming needs, which they don't. So I don't see an API that's designed for everything but gaming succeeding in the gaming market.
And that's fine; why not keep OpenGL forever as the API for maximum compatibility, like a "Windows compatibility mode", and use Mantle for the one or two newest gaming generations?
It's not that 100% of all games need every single fps they can get; there are plenty of games where driver efficiency isn't the big problem, and only a few where no CPU, or only a few CPUs, are fast enough to bring good results.
All f2p games look like 5-year-old games, of course, because otherwise they would exclude 95% of their possible customers. But it's the same on the buy-to-play side: take Blizzard, they don't have a single game with high hardware requirements, and Valve is the same.
But if I upgrade a gaming rig, I don't want to have to buy a new CPU every time I upgrade my graphics card just because these APIs are so extremely bad that you need double the CPU cores for the same speed.
Comment
-
Originally posted by Kivada View PostYou mean DX10.1 right? IIRC it's because AMD was the first one to support it by a long shot. There were a few games that implemented it, and even some that removed the capability because Nvidia paid them off, as DX10.1 made the games run noticeably faster than DX10.
Comment
-
Originally posted by curaga View PostIn GL's case, it means you have a selection of N ways to do any one thing. Also in GL's case, the most obvious one, the perfect fit that is simple to use and only requires a couple of lines, is the slow and deprecated one, whereas the performant one is complicated to use and requires hundreds of lines to implement.
Perhaps you can see the problem now. No matter what they do, they'll anger devs. Remove the deprecated and slow functionality, and you just made programming in GL harder for everyone. Don't remove it, and devs are angry because the obvious way is slow.
Comment
-
Originally posted by profoundWHALE View PostWould a 2-tier system work? By that I mean have a very high-level tier that is fast to code for and gets you going, and then a next tier that would be faster but take longer to code for. The bonus is that it would ship both tiers at the same time, but you specify which to use in case something goes wrong.
Comment