Intel, AMD, NVIDIA Working To Reduce OpenGL Overhead


  • #11
    Originally posted by NeoBrain View Post
    Porting HLSL parsing code to Linux is anything but hard, you know.
    I'm pretty sure there's already code for that in Wine, in all those failed DirectX state trackers, etc.

    It's just that there's currently a grand total of 0 applications using Mantle. There may soon be one, but Mantle support keeps being postponed until they get their DirectX version out of beta. A few others are announced, but they're far off.

    On Linux, there are 0 applications and 0 announced. Why would anyone care?



    • #12
      Originally posted by Ferdinand View Post
      This raises a big question: if AMD knew that OpenGL could come close to zero driver overhead, why create Mantle? You would hope the reason is that Mantle is so superior that OpenGL could never catch up, but... I am not sure.
      Well, one reason for Mantle is that it offers more optimisation opportunities on top of eliminating driver overhead. Removing legacy cruft is also part of the decision, I'd reckon. Still, I don't like the idea of Mantle, as it's a potential minefield of game-specific issues that can never get fixed because games are proprietary (as opposed to FOSS drivers, or even blob drivers, which get updated for longer than any individual game does).



      • #13
        Originally posted by rohcQaH View Post
        It's just that there's currently a grand total of 0 applications using Mantle.

        Battlefield 4 and Thief are currently running on Mantle. I'm pretty sure Star Citizen's developers said they will support Mantle as well.



        • #14
          Actually, only BF4 has Mantle support; Thief will get it patched in at a later time...



          • #15
            If Mantle comes to Linux, I think it will be as an OpenGL extension. And even if Mantle itself doesn't take off beyond a few game titles, AMD is pushing for a decent API without all the crappy legacy overhead, and that is something I really appreciate.
            My system still has a "Phenom" 840 (well, it's an Athlon really), and in BF4 multiplayer on big maps I get twice the FPS, and that on a GCN 1.0 card (7870)!



            • #16
              Originally posted by Jan Klesnil View Post
              I guess they will be presenting pretty much the same techniques as NVIDIA presented at Steam Dev Days.

              More interestingly, the key technique for reducing the number of draw calls is the GL_ARB_multi_draw_indirect extension, which was created by AMD as GL_AMD_multi_draw_indirect in 2010 and later approved by Khronos.

              Looking at the Mantle API (or what is known about it), it looks like AMD wrapped all the advanced techniques from OpenGL, switched from GLSL to DirectX shaders (hence no Linux support), and called it Mantle. I guess that was an easier job than fixing their OpenGL drivers and convincing game developers to use OpenGL, with its thousands of versions, profiles, extensions, etc.
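              To show concretely how GL_ARB_multi_draw_indirect cuts draw-call overhead: the app packs many draw commands into one buffer and submits them with a single driver call. A minimal sketch (the struct layout follows the extension's spec; the GL calls appear only in comments since they need a live context, and the counts are made up for illustration):

              ```c
              #include <assert.h>
              #include <stdint.h>

              /* Command layout mandated by GL_ARB_multi_draw_indirect
                 for indexed draws: five tightly packed 32-bit uints.  */
              typedef struct {
                  uint32_t count;         /* indices per draw                */
                  uint32_t instanceCount; /* instances per draw (1 = none)   */
                  uint32_t firstIndex;    /* offset into the index buffer    */
                  uint32_t baseVertex;    /* value added to each index       */
                  uint32_t baseInstance;  /* first instance ID               */
              } DrawElementsIndirectCommand;

              int main(void) {
                  /* Pack three draws into one array -> one driver call. */
                  DrawElementsIndirectCommand cmds[3];
                  for (uint32_t i = 0; i < 3; i++) {
                      cmds[i].count         = 36;      /* e.g. one cube  */
                      cmds[i].instanceCount = 1;
                      cmds[i].firstIndex    = i * 36;
                      cmds[i].baseVertex    = i * 24;
                      cmds[i].baseInstance  = 0;
                  }
                  /* With a GL context, you would upload cmds into a buffer
                     bound to GL_DRAW_INDIRECT_BUFFER and issue a single:
                         glMultiDrawElementsIndirect(GL_TRIANGLES,
                             GL_UNSIGNED_INT, 0, 3, 0);
                     replacing three separate glDrawElements calls.       */
                  assert(sizeof(DrawElementsIndirectCommand) == 20);
                  assert(cmds[2].firstIndex == 72);
                  return 0;
              }
              ```

              The win is that the per-draw CPU cost moves from a full driver entry point per object to filling 20 bytes per object.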
              Only the HLSL point in your statement is correct.

              Mantle does NOT force you to use instancing. (Of course game devs will use it, since it's a nice GPU optimization, but...) It's about NOT having a fat GPU driver acting as a middleman for every request to the GPU.
              An app composing its own command queues? No such thing in OpenGL.
              An app using the shader compiler like any other runtime library? No such thing in OpenGL*.
              Access to every separate execution unit on the GPU? No such thing in OpenGL.
              ....


              *OpenGL ES can unload its shader compiler (glReleaseShaderCompiler), so it works almost as well.
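              To illustrate what "an app composing its own command queues" means: the app records commands into its own buffer and submits the finished buffer, instead of the driver doing hidden batching behind every call. All names below are hypothetical, purely for illustration (Mantle's real API is not public); the "submission" here just tallies results so the sketch is runnable:

              ```c
              #include <assert.h>
              #include <stdint.h>

              /* Hypothetical command-queue model, for illustration only. */
              typedef enum { CMD_BIND_PIPELINE, CMD_DRAW } CmdType;

              typedef struct {
                  CmdType  type;
                  uint32_t arg;  /* pipeline id, or vertex count */
              } Cmd;

              typedef struct {
                  Cmd      cmds[64];
                  uint32_t len;
              } CmdBuffer;

              /* The *application* records commands; no driver middleman. */
              static void record(CmdBuffer *cb, CmdType t, uint32_t arg) {
                  cb->cmds[cb->len].type = t;
                  cb->cmds[cb->len].arg  = arg;
                  cb->len++;
              }

              /* Stand-in for queue submission: returns vertices drawn. */
              static uint32_t submit(const CmdBuffer *cb) {
                  uint32_t verts = 0;
                  for (uint32_t i = 0; i < cb->len; i++)
                      if (cb->cmds[i].type == CMD_DRAW)
                          verts += cb->cmds[i].arg;
                  return verts;
              }

              int main(void) {
                  CmdBuffer cb = {0};
                  record(&cb, CMD_BIND_PIPELINE, 1);
                  record(&cb, CMD_DRAW, 300);
                  record(&cb, CMD_DRAW, 600);  /* recorded up front...   */
                  assert(submit(&cb) == 900);  /* ...submitted in one go */
                  return 0;
              }
              ```

              The point is scheduling control: the app decides when recording happens (e.g. on a worker thread) and when submission happens, instead of the driver doing hidden work at unpredictable times.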


              Yes, some things are being brought into OpenGL (e.g. unified buffer storage). But the main problem with OpenGL/DX is that big, ugly, proprietary piece of code working its black magic so that the GPU can work at all.

              (Yes, Mesa does solve some of the problems. Game devs can participate in Mesa development.)




              Here is an example:

              Game devs make a nice-looking game, but then they find that CrossFire setups do not work well with it.
              They are toast.

              Or they are among the lucky few who can reach AMD's engineering team, and then they can work with AMD on CrossFire profiles. AMD can then update their driver, gamers can download and install those drivers, and voilà.


              With OpenGL/DX, that's how it goes.

              With Mantle?

              Game devs make a nice-looking game, but then they find that CrossFire setups (2 or more GPUs in one PC) do not work well with it.
              They write code supporting CrossFire and push an auto-update to their users. THE (happy) END.

              Nice difference, isn't it?

              By allowing game devs to manually do the GPU driver's job, they can do it better, simply because AMD/NVIDIA/Intel do not have the resources to serve all the game devs out there in the world. And OpenGL/DX would require some serious rewiring to reach that level of independence.



              • #17
                Originally posted by NeoBrain View Post
                Porting HLSL parsing code to Linux is anything but hard, you know.
                Porting HLSL code is easy, but maintaining two versions isn't. Not when you've got thousands of shaders, all getting frequent updates from developers, anyway.

                It means whenever you find a bug in one version, you have to go and update the other as well. Then you've got two different sets of changes that might each introduce regressions, and so on.

                That's why the idea of having something automatically convert shaders is so popular (see MojoShader, Valve's automated tool, etc.).
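                A toy sketch of what such converters do at the simplest level, mapping HLSL type and intrinsic names to their GLSL equivalents. (Real tools like MojoShader work on shader bytecode or full syntax trees; this whitespace-token substitution is only an illustration.)

                ```c
                #include <assert.h>
                #include <stddef.h>
                #include <string.h>

                /* Tiny HLSL -> GLSL name table; real converters do far more. */
                static const char *map_word(const char *w) {
                    static const char *tbl[][2] = {
                        {"float2", "vec2"}, {"float3", "vec3"}, {"float4", "vec4"},
                        {"float4x4", "mat4"}, {"lerp", "mix"}, {"frac", "fract"},
                    };
                    for (size_t i = 0; i < sizeof tbl / sizeof tbl[0]; i++)
                        if (strcmp(w, tbl[i][0]) == 0)
                            return tbl[i][1];
                    return w; /* unknown words pass through unchanged */
                }

                /* Convert a whitespace-separated token stream into `out`. */
                static void convert(const char *src, char *out, size_t outsz) {
                    char buf[256];
                    strncpy(buf, src, sizeof buf - 1);
                    buf[sizeof buf - 1] = '\0';
                    out[0] = '\0';
                    for (char *w = strtok(buf, " "); w; w = strtok(NULL, " ")) {
                        strncat(out, map_word(w), outsz - strlen(out) - 1);
                        strncat(out, " ", outsz - strlen(out) - 1);
                    }
                }

                int main(void) {
                    char out[256];
                    convert("float4 c = lerp ( a , b , t ) ;", out, sizeof out);
                    assert(strcmp(out, "vec4 c = mix ( a , b , t ) ; ") == 0);
                    return 0;
                }
                ```

                Even this toy shows why automation appeals: the mapping lives in one table, so a fix applies to every shader at once instead of being hand-ported twice.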



                • #18
                  Originally posted by przemoli View Post
                  Here is an example:

                  Game devs make a nice-looking game, but then they find that CrossFire setups do not work well with it.
                  They are toast.

                  Or they are among the lucky few who can reach AMD's engineering team, and then they can work with AMD on CrossFire profiles. AMD can then update their driver, gamers can download and install those drivers, and voilà.


                  With OpenGL/DX, that's how it goes.

                  With Mantle?

                  Game devs make a nice-looking game, but then they find that CrossFire setups (2 or more GPUs in one PC) do not work well with it.
                  They write code supporting CrossFire and push an auto-update to their users. THE (happy) END.

                  Nice difference, isn't it?

                  By allowing game devs to manually do the GPU driver's job, they can do it better, simply because AMD/NVIDIA/Intel do not have the resources to serve all the game devs out there in the world. And OpenGL/DX would require some serious rewiring to reach that level of independence.
                  No. What would happen in practice is that they release the auto-update, and suddenly the game completely locks up on all first-generation CrossFire cards because the same functions behave slightly differently there. The game devs have no such hardware, so they can't debug it, and those users are left out in the cold. THE (unhappy) END.

                  That's terrible. Game devs have no clue how to write GPU drivers. It's not their job. The vendors are the ones who actually know how their hardware works under the hood; they have all the documentation, and most importantly, they have the experience. Right now Mantle is very new and only GCN cards support it, so it looks good, but for now that's a small set of graphics cards to deal with. As the set grows, it will become harder and harder to make everything work on all cards; game devs will start targeting the lowest common denominator (or break some cards, in which case Mantle becomes just a shiny extra that works only if you happen to be lucky), and cruft will build up. Suddenly you have something that's worse than OpenGL ever was.

                  The way to make everything work correctly is to cooperate, not to go around. Negotiate with vendors, suggest changes to Khronos, contribute fixes and extensions to Mesa. That way everyone is happy.



                  • #19
                    Are AAA games coming to SteamOS now that Intel/AMD/NVIDIA are working together on OpenGL? (I hope so.)
                    Gabe said in the past that more AAA titles would be announced for SteamOS in 2014... but for the moment, there is... nothing.



                    • #20
                      Originally posted by GreatEmerald View Post
                      No. What would happen in practice is that they release the auto-update, and suddenly the game completely locks up on all first-generation CrossFire cards because the same functions behave slightly differently there. The game devs have no such hardware, so they can't debug it, and those users are left out in the cold. THE (unhappy) END.

                      That's terrible. Game devs have no clue how to write GPU drivers. It's not their job. The vendors are the ones who actually know how their hardware works under the hood; they have all the documentation, and most importantly, they have the experience. Right now Mantle is very new and only GCN cards support it, so it looks good, but for now that's a small set of graphics cards to deal with. As the set grows, it will become harder and harder to make everything work on all cards; game devs will start targeting the lowest common denominator (or break some cards, in which case Mantle becomes just a shiny extra that works only if you happen to be lucky), and cruft will build up. Suddenly you have something that's worse than OpenGL ever was.

                      The way to make everything work correctly is to cooperate, not to go around. Negotiate with vendors, suggest changes to Khronos, contribute fixes and extensions to Mesa. That way everyone is happy.
                      Well, AMD, Oxide, and others disagree.

                      Did you actually watch those videos from the AMD conference?

                      The ones where Oxide listed all the work they DO, knowing that afterwards the GPU driver will DO it all AGAIN?

                      Also, how on earth can the driver know when to perform its tasks, and on which CPU core, so that its work is least disruptive to the game's calculations on the CPU?
