Next-Gen OpenGL To Be Announced Next Month


  • #41
    Originally posted by johnc View Post
    Just out of curiosity, from a developer's perspective, what does the "legacy cruft" matter? Other than running into outdated documentation on the web, when would developers ever encounter it?
    Before you can work on a big project, you have to clear the workbench of junk.


    • #42
      Originally posted by blackiwid View Post
      they could do both: if vendor == Nvidia, use the better full OpenGL renderer, else use the minimal one....
      Not going to happen, I think, at least not yet. The target market/platform is just too small, much smaller than Windows/Mantle. I am actually quite impressed that game devs are seriously starting to target Linux/OpenGL at all; I am not sure it is financially profitable for AAA games/publishers. I could be wrong of course; I don't have the numbers.


      • #43
        Originally posted by gamerk2 View Post
        The PS3/PS4 has a low-level libgcm library, which is used a lot, but even then, a lot of the higher-order control is done via PSGL (essentially OpenGL ES 2.0) for simplicity's sake.
        This isn't actually true. No AAA engine uses PSGL on PS3. Maybe some small casual games do, but most games don't. Besides, PSGL is just a library on top of libgcm, so...


        • #44
          Originally posted by johnc View Post
          Just out of curiosity, from a developer's perspective, what does the "legacy cruft" matter? Other than running into outdated documentation on the web, when would developers ever encounter it?
          I guess it matters for driver developers, but not for game developers. What I think they should do is make OGL 5 simple and clean with low overhead, and then reimplement OGL 4.4 on top of OGL 5, as sketched below. That way hardware vendors only need to focus on implementing the simpler OGL 5 and get OGL 4.4 for free.
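
          To illustrate that layering, here is a minimal sketch of how a legacy-style immediate-mode interface could be reimplemented on top of a lean modern one. The ngl_* functions are purely hypothetical stand-ins for whatever an OGL 5-style API would actually expose:

            #include <stddef.h>

            /* Hypothetical lean API: upload a vertex batch, then draw it.
             * These are stand-ins, not real OpenGL entry points. */
            void ngl_upload(const float *coords, size_t count);
            void ngl_draw_triangles(size_t vertex_count);

            /* Legacy glBegin/glVertex/glEnd semantics rebuilt on top:
             * vertices are batched client-side and flushed in one modern
             * draw call. (No overflow handling in this sketch.) */
            static float batch[3 * 4096];
            static size_t batch_len;

            void legacy_begin(void) { batch_len = 0; }

            void legacy_vertex3f(float x, float y, float z)
            {
                batch[batch_len++] = x;
                batch[batch_len++] = y;
                batch[batch_len++] = z;
            }

            void legacy_end(void)
            {
                ngl_upload(batch, batch_len);
                ngl_draw_triangles(batch_len / 3);
            }

          Vendors would then only have to maintain the lean path; the legacy layer becomes ordinary library code on top of it.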


          • #45
            Originally posted by gamerk2 View Post
            Higher-level APIs save a lot more than a few hundred hours. Remember how many different generations of GPUs are still out there. You have Intel IGPs, two different AMD GPU architectures (VLIW4 and GCN), several NVIDIA architectures,
            And you have as many big companies as there are different GPUs. Especially when you bring in Intel and the big ARM SoCs, enormous numbers of people would benefit if those weak GPUs produced 20-30% better results. Although there the bigger problem is the extremely weak GPUs themselves; at least that's true for Intel, which often pairs them with very strong CPUs, so a Mantle-like effect would not bring much.

            But then there is the other dimension. You say there are several generations? Who cares: you start with the newest and never write support for the older cards. Five years later almost nobody still has such old hardware, and those who do already get very poor driver support anyway. It's not the job of graphics companies to bring top-notch driver support to 5-year-old GPUs.

            And once you have nearly perfect drivers for one generation and begin supporting the next, maintaining support for the old GPUs is far cheaper than implementing it was in the first place.

            What do we have today? A new driver release for every single game that comes out. There are fewer GPUs than there are games, so I doubt it's that hard to do.

            I heard this argument (about why consoles deliver more FPS per buck) years ago, and it was exactly this claim: that there are too many different setups for such low-level APIs to support. And what do we have today? An API that supports everything from 25-euro Jaguar APUs with 128 shader cores, even embedded 6 W versions, even 3.5 W tablet versions, up to 1200-euro graphics cards, no matter how much RAM you have, no matter whether you use a hard disk or an SSD, no matter which x86 CPU you combine it with. The advantages differ, and for some setups there may be no advantage at all, but it works on all of them, and if you don't combine your hardware in a totally stupid way you gain big FPS advantages.

            By the way, the ARM and x86 gaming markets overlap very little; the ARM side doesn't use full OpenGL, it uses OpenGL ES as its primary API. That could perhaps be a middle way between the OpenGL garbage and Mantle, but only if Khronos focused on gaming needs, which it doesn't. So I don't see an API that's designed for everything except gaming succeeding in the gaming market.

            And that's fine. Why not keep OpenGL around forever as the API for maximum compatibility, like a "Windows compatibility mode", and use Mantle for the one or two newest gaming generations?

            It's not as if 100% of games need every single FPS they can get. There are plenty of games where driver efficiency isn't the big problem, and only a few for which no CPU, or only a few CPUs, are fast enough to deliver good results.

            All F2P games look like 5-year-old games, of course, because otherwise they would exclude 95% of their potential customers. But it's the same on the buy-to-play side: take Blizzard, they don't have a single game with high hardware demands; or Valve, same thing.

            But when I upgrade a gaming rig, I don't want to have to buy a new CPU every time I upgrade my graphics card just because these APIs are so extremely bad that you need double the CPU cores for the same speed.


            • #46
              Originally posted by johnc View Post
              Just out of curiosity, from a developer's perspective, what does the "legacy cruft" matter? Other than running into outdated documentation on the web, when would developers ever encounter it?
              In GL's case, it means you have a selection of N ways to do any one thing. Also in GL's case, the most obvious approach, the perfect fit that is simple to use and needs only a couple of lines, is the slow, deprecated one, whereas the performant one is complicated to use and takes hundreds of lines to implement.

              Perhaps you can see the problem now. No matter what they do, they'll anger devs. Remove the deprecated, slow functionality and you've just made programming in GL harder for everyone; don't remove it and devs stay angry because the obvious way is slow.
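
              To make that concrete, compare the two ways of drawing one triangle in C. This is only a sketch; the modern path additionally assumes a GL 3.x context and an already compiled shader program in a variable called program:

                /* The obvious way: immediate mode. A couple of lines, but
                 * slow, deprecated, and removed from core profiles. */
                glBegin(GL_TRIANGLES);
                glVertex2f(-0.5f, -0.5f);
                glVertex2f( 0.5f, -0.5f);
                glVertex2f( 0.0f,  0.5f);
                glEnd();

                /* The performant way: upload once, draw many times. Even this
                 * is abbreviated; a full program also needs shader compilation,
                 * attribute lookup, and error handling. */
                static const GLfloat verts[] = {
                    -0.5f, -0.5f,
                     0.5f, -0.5f,
                     0.0f,  0.5f,
                };
                GLuint vao, vbo;
                glGenVertexArrays(1, &vao);
                glBindVertexArray(vao);
                glGenBuffers(1, &vbo);
                glBindBuffer(GL_ARRAY_BUFFER, vbo);
                glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);
                glEnableVertexAttribArray(0);            /* attribute 0 = position */
                glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, (void *)0);
                glUseProgram(program);                   /* assumed to exist */
                glDrawArrays(GL_TRIANGLES, 0, 3);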


              • #47
                Originally posted by Kivada View Post
                You mean DX10.1, right? IIRC it's because AMD was the first one to support it, by a long shot. There were a few games that implemented it, and even some that removed the capability because Nvidia paid them off, as DX10.1 made the games run noticeably faster than DX10.
                As far as I remember, Nvidia paid for the use of postprocess AA shaders (which back then were faster on DX10 than DX10.1/GL3.2 with MSAA access from shaders). Today every card gives shaders access to MSAA samples (via GL3.2, supported since the GF8800), but nobody cares; everyone uses postprocess AA like FXAA, MLAA, or TXAA (which also works with DX9/GLES) because it's faster.
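
                For reference, the GL 3.2 feature in question is multisample textures, which let a fragment shader fetch individual MSAA samples. A minimal sketch of the setup and a custom 4-sample resolve shader (sizes are placeholders):

                  /* C side: a 4-sample multisample texture that shaders can
                   * read (core in GL 3.2 / ARB_texture_multisample). */
                  const GLsizei width = 1280, height = 720;
                  GLuint msTex;
                  glGenTextures(1, &msTex);
                  glBindTexture(GL_TEXTURE_2D_MULTISAMPLE, msTex);
                  glTexImage2DMultisample(GL_TEXTURE_2D_MULTISAMPLE, 4, GL_RGBA8,
                                          width, height, GL_TRUE);

                  /* GLSL side: average the four samples by hand instead of
                   * using the fixed-function resolve, which is what lets
                   * custom AA run in the shader. */
                  const char *resolve_fs =
                      "#version 150\n"
                      "uniform sampler2DMS msTex;\n"
                      "out vec4 color;\n"
                      "void main() {\n"
                      "    ivec2 p = ivec2(gl_FragCoord.xy);\n"
                      "    vec4 sum = vec4(0.0);\n"
                      "    for (int i = 0; i < 4; ++i)\n"
                      "        sum += texelFetch(msTex, p, i);\n"
                      "    color = sum * 0.25;\n"
                      "}\n";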


                • #48
                  Originally posted by curaga View Post
                  In GL's case, it means you have a selection of N ways to do any one thing. Also in GL's case, the most obvious approach, the perfect fit that is simple to use and needs only a couple of lines, is the slow, deprecated one, whereas the performant one is complicated to use and takes hundreds of lines to implement.

                  Perhaps you can see the problem now. No matter what they do, they'll anger devs. Remove the deprecated, slow functionality and you've just made programming in GL harder for everyone; don't remove it and devs stay angry because the obvious way is slow.
                  Would a two-tier system work? By that I mean a very high-level tier that is fast to code for and gets you going, plus a lower tier that is faster but takes longer to code for. The bonus is that a program could carry both tiers at the same time, and you specify which to use in case something goes wrong.


                  • #49
                    Originally posted by profoundWHALE View Post
                    Would a two-tier system work? By that I mean a very high-level tier that is fast to code for and gets you going, plus a lower tier that is faster but takes longer to code for. The bonus is that a program could carry both tiers at the same time, and you specify which to use in case something goes wrong.
                    This already exists. It's called SDL / SFML / Allegro / LÖVE2D.
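
                    As a rough illustration of that high-level tier, a complete SDL2 program that opens a window and presents a cleared frame fits in a couple dozen lines, with the library choosing the GL/D3D backend itself (error handling mostly omitted):

                      #include <SDL2/SDL.h>

                      int main(void)
                      {
                          /* SDL picks the rendering backend; the caller never
                           * touches GL state directly at this tier. */
                          if (SDL_Init(SDL_INIT_VIDEO) != 0)
                              return 1;

                          SDL_Window *win = SDL_CreateWindow("demo",
                              SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
                              640, 480, 0);
                          SDL_Renderer *ren = SDL_CreateRenderer(win, -1,
                              SDL_RENDERER_ACCELERATED);

                          SDL_SetRenderDrawColor(ren, 32, 32, 32, 255);
                          SDL_RenderClear(ren);
                          SDL_RenderPresent(ren);
                          SDL_Delay(2000);   /* keep the frame visible briefly */

                          SDL_DestroyRenderer(ren);
                          SDL_DestroyWindow(win);
                          SDL_Quit();
                          return 0;
                      }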


                    • #50
                      Originally posted by Ancurio View Post
                      This already exists. It's called SDL / SFML / Allegro / LÖVE2D.
                      The way people were talking here, it sounded like there wasn't. My mistake.

                      So I'm assuming it goes like this:
                      Low Level: Mantle, Metal
                      Mid Level: OpenGL/DirectX
                      High Level: SDL
