
Next-Gen OpenGL To Be Announced Next Month


  • #61
    Originally posted by Ancurio View Post
    I didn't say any of them abstracted 3D rendering. If you're going to do 3D, you're not going to want to go higher than OGL/D3D in the first place; what I meant was that the people who complain the most that their "convenient" glBegin/glEnd stuff is deprecated/gone in modern GL are the ones only aiming to render simple rectangles (2D games) anyway. For those people, there is no reason to use OGL directly.
    I somehow had the impression this topic would be about high- and low-level 3D APIs, not about people having trouble drawing a friggin rectangle. My apologies.

    Comment


    • #62
      Originally posted by przemoli View Post
      If you mean absolute performance? Then yes, Nvidia did a great job. My speculation is that they have been aiming at these fast paths for some time now, and they have optimised the driver and maybe even some hw to accelerate it further.

      But if you mean that AMD/Intel fail, then it's a big fat NO. F**** NO.

      Both AMD and Intel see 1000% or more improvement by selecting the right way of doing things.

      End performance may be lower than Nvidia's, but it's still much better than the old paths for AMD/Intel.

      No excuse not to adopt AZDO.

      (And while one of those extensions is OpenGL 4.4 core, it can be implemented as an extension without the driver claiming even 4.0, as may happen for Mesa. OpenGL 4.x-level hw is required, but not full OpenGL 4.4.)
      Nope, they didn't optimize things; they just made them work as they should.

      In my simple mind I can imagine it like this: a correctly implemented multidraw_indirect will take a path that completely avoids setting state between draws, as per the spec, and make it damn fast, but that takes a lot of work and testing. A lame developer, on the other hand, can implement it in 2 minutes by simply wrapping the usual render-queue loop around parsing the arrays.

      In the latter case the multidraw_indirect end result will be correct, but with 100% the same abysmal performance as GL 3.3. Sadly, the latter implementation counts as an implementation just as much as the one that did the job as intended.

      If you check the tests people have done on this extension, AMD and Intel perform at the same speed as without it.

      The same lame 2-minute implementation can be done for each and every extension needed for zero overhead.
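      To make that "2-minute implementation" concrete, here is a rough sketch of what such a driver-side fallback amounts to: it just walks the indirect command array and issues one draw per entry, so nothing is saved over plain GL 3.3 draw calls. The struct layout matches the ARB_multi_draw_indirect spec; emit_single_draw is a hypothetical stand-in for the driver's internal per-draw path, which is where the per-draw state/sync overhead lives.

      ```c
      #include <stdint.h>
      #include <stdio.h>

      /* Command layout defined by the ARB_multi_draw_indirect spec. */
      typedef struct {
          uint32_t count;         /* indices per draw             */
          uint32_t instanceCount; /* instances per draw           */
          uint32_t firstIndex;    /* offset into the index buffer */
          int32_t  baseVertex;    /* added to each index          */
          uint32_t baseInstance;  /* first instance ID            */
      } DrawElementsIndirectCommand;

      /* Hypothetical stand-in for the driver's internal single-draw path.
       * In a real driver this is where state validation and syncing happen,
       * i.e. exactly the overhead the extension was supposed to remove. */
      static int draws_issued = 0;
      static void emit_single_draw(const DrawElementsIndirectCommand *cmd)
      {
          (void)cmd;
          draws_issued++;
      }

      /* The lame fallback: parse the array, issue one draw per entry.
       * Correct per the spec, but no faster than N separate draw calls. */
      static void naive_multi_draw_indirect(const DrawElementsIndirectCommand *cmds,
                                            int drawcount)
      {
          for (int i = 0; i < drawcount; i++)
              emit_single_draw(&cmds[i]);
      }

      int main(void)
      {
          DrawElementsIndirectCommand cmds[3] = {
              { 6, 1, 0,  0, 0 },
              { 6, 1, 6,  4, 0 },
              { 6, 1, 12, 8, 0 },
          };
          naive_multi_draw_indirect(cmds, 3);
          printf("%d\n", draws_issued); /* one full state cycle per draw */
          return 0;
      }
      ```

      A correct hardware-backed implementation would instead hand the whole command buffer to the GPU in one submission, which is the entire point of the extension.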

      Comment


      • #63
        A little off topic, but maybe other AMD developers read this:
        where has Graham Sellers disappeared to from Twitter?

        I've been waiting so much to see his comments about the GL future, but there has been no news for about a month. Hope he's alright...

        Comment


        • #64
          Originally posted by justmy2cents View Post
          Nope, they didn't optimize things; they just made them work as they should.

          In my simple mind I can imagine it like this: a correctly implemented multidraw_indirect will take a path that completely avoids setting state between draws, as per the spec, and make it damn fast, but that takes a lot of work and testing. A lame developer, on the other hand, can implement it in 2 minutes by simply wrapping the usual render-queue loop around parsing the arrays.

          In the latter case the multidraw_indirect end result will be correct, but with 100% the same abysmal performance as GL 3.3. Sadly, the latter implementation counts as an implementation just as much as the one that did the job as intended.

          If you check the tests people have done on this extension, AMD and Intel perform at the same speed as without it.

          The same lame 2-minute implementation can be done for each and every extension needed for zero overhead.
          That was actually the first AMD implementation.

          It came some 2 weeks after a GL dev proposed it.

          So this implementation is still good for:
          a) when the hw is not able to execute such a single call and it must be split by the driver
          b) decreasing the need for fallbacks (since the new way is faster on new designs and at least as fast on old ones, ...)

          The end goal is hw execution.

          That is Nvidia's vision, and you can see it in the apitest results for them.

          AMD and Intel both lag behind, but they DO make this a better technique than the "older" OpenGL paths.

          So no excuse not to use it (and little excuse to rant about it :P)

          Comment


          • #65
            Originally posted by przemoli View Post
            That was actually the first AMD implementation.

            It came some 2 weeks after a GL dev proposed it.

            So this implementation is still good for:
            a) when the hw is not able to execute such a single call and it must be split by the driver
            b) decreasing the need for fallbacks (since the new way is faster on new designs and at least as fast on old ones, ...)

            The end goal is hw execution.

            That is Nvidia's vision, and you can see it in the apitest results for them.

            AMD and Intel both lag behind, but they DO make this a better technique than the "older" OpenGL paths.

            So no excuse not to use it (and little excuse to rant about it :P)
            Not ranting, at least I didn't mean to. I only use Nvidia for gaming, so this doesn't affect me. Still, it is way past 2 weeks with no results from any company other than Nvidia. And if people take your proposed mentality about game performance, why not simply write a software renderer so it works everywhere? Of course it will run at <1 fps, but who cares; the important thing is that every card in the world is supported and developers can use the "fast path".

            And even if you tried to use it the way you say, features like fencing, persistent buffers, and texture arrays would impose uncontrollable problems. Part of making a game is controlling how much, and how fast, you feed the GPU and VRAM. So any game that tried to work faster and do more than GL 3.3 was capable of would kill itself by default. That brings the need to limit resources, where one would need to keep two versions of the game: simple and complex.
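            The fencing/persistent-buffer concern above is the resource-budgeting problem of AZDO-style rendering: with a persistently mapped buffer the app, not the driver, must guarantee it never overwrites a region the GPU is still reading. The usual answer is a ring of N buffer regions guarded by fences. Here is a minimal sketch of just the offset bookkeeping, with a hypothetical region size; the actual GL calls (glFenceSync/glClientWaitSync around each region) are left as comments because they need a live context:

            ```c
            #include <stddef.h>
            #include <stdio.h>

            #define REGIONS     3       /* triple-buffer the persistent mapping  */
            #define REGION_SIZE 65536   /* bytes per region, an arbitrary budget */

            /* Byte offset to write this frame's data into.
             * Before writing: wait on the fence inserted when this region was
             * last submitted (glClientWaitSync). After submitting: insert a new
             * fence (glFenceSync). That is the feeding-rate control the post
             * is talking about. */
            static size_t region_offset(unsigned frame)
            {
                return (size_t)(frame % REGIONS) * REGION_SIZE;
            }

            int main(void)
            {
                for (unsigned frame = 0; frame < 5; frame++)
                    printf("frame %u -> offset %zu\n", frame, region_offset(frame));
                return 0;
            }
            ```

            The budget (REGIONS * REGION_SIZE) is exactly the kind of per-tier limit a game would have to tune, which is the "two versions of the game" problem in miniature.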

            Comment


            • #66
              Originally posted by justmy2cents View Post
              Not ranting, at least I didn't mean to. I only use Nvidia for gaming, so this doesn't affect me. Still, it is way past 2 weeks with no results from any company other than Nvidia. And if people take your proposed mentality about game performance, why not simply write a software renderer so it works everywhere? Of course it will run at <1 fps, but who cares; the important thing is that every card in the world is supported and developers can use the "fast path".

              And even if you tried to use it the way you say, features like fencing, persistent buffers, and texture arrays would impose uncontrollable problems. Part of making a game is controlling how much, and how fast, you feed the GPU and VRAM. So any game that tried to work faster and do more than GL 3.3 was capable of would kill itself by default. That brings the need to limit resources, where one would need to keep two versions of the game: simple and complex.
              A driver-only implementation (and I do not know whether AMD still uses one, or for which hw!) does not pose problems for the other things an OpenGL app does.

              With or without MDI, the driver needs to take care of those too.

              Comment


              • #67
                Originally posted by przemoli View Post
                A driver-only implementation (and I do not know whether AMD still uses one, or for which hw!) does not pose problems for the other things an OpenGL app does.

                With or without MDI, the driver needs to take care of those too.
                It is kinda obvious you didn't understand my point.

                Having the "best OpenGL path" is 10% of the problem. In the old days the GPU was simply not capable of rendering as much as you could feed it; at that time a faster GPU meant everything. GPUs evolved, and right now you CAN'T feed one as much as it could render (why do you think the CPU is the gaming bottleneck?). That is why avoiding state setting between operations, and avoiding syncing, matters so much. Waiting for the GPU to be free, setting it up, doing your single action, resetting... 90% of the time was spent waiting and doing useless things.

                If you do an implementation of multidraw_indirect that just parses the arrays and then syncs, sets, draws one, resets, rinse and repeat... welcome, you have just created a nightmare with >1000% randomness. You trade "has_extension" for "does_it_actually_work", and the latter is worse than the former. You really did create a single path for how to code; you just don't have a clue whether it actually performs.

                Well, it is even worse, since some hw can and some can't. That breaks the whole point. It's like letting people on bikes ride on the freeway: utter chaos, because the difference in speed is too big to be controlled.

                And don't misunderstand me, I'll praise the world, AMD, and Intel if it works out for them. They could take a few approaches to lessen the pain, but it still won't beat hw. Also, I love AMD on a simple desktop and I love Intel on servers. I would even be prepared to pay double the price for AMD (by buying in a higher price range) if it performed as well as some 750 or 760 and that meant I could avoid the blob.
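                One practical way to cope with the "does_it_actually_work" problem described above is a startup micro-probe: render a warm-up batch through the MDI path and through plain looped draws, and keep the MDI path only when it actually wins by a margin. A sketch of just the decision logic, with hypothetical probe timings injected as parameters (a real probe would measure with timer queries):

                ```c
                #include <stdbool.h>
                #include <stdio.h>

                /* Take the MDI path only when the probe shows a real win, not
                 * merely because the extension string advertises it. The 1.5x
                 * margin is an arbitrary threshold chosen for illustration. */
                static bool use_mdi_path(double mdi_ms, double looped_ms)
                {
                    return mdi_ms * 1.5 < looped_ms;
                }

                int main(void)
                {
                    /* Hypothetical probe results, in ms per 10k draws. */
                    printf("%s\n", use_mdi_path(0.8, 14.0) ? "mdi" : "looped");  /* fast driver   */
                    printf("%s\n", use_mdi_path(13.5, 14.0) ? "mdi" : "looped"); /* lame fallback */
                    return 0;
                }
                ```

                This does not fix a slow driver, but it keeps a game from silently shipping the single code path onto hardware where it performs no better than GL 3.3.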

                Comment


                • #68
                  Originally posted by justmy2cents View Post
                  It is kinda obvious you didn't understand my point.

                  Having the "best OpenGL path" is 10% of the problem. In the old days the GPU was simply not capable of rendering as much as you could feed it; at that time a faster GPU meant everything. GPUs evolved, and right now you CAN'T feed one as much as it could render (why do you think the CPU is the gaming bottleneck?). That is why avoiding state setting between operations, and avoiding syncing, matters so much. Waiting for the GPU to be free, setting it up, doing your single action, resetting... 90% of the time was spent waiting and doing useless things.

                  If you do an implementation of multidraw_indirect that just parses the arrays and then syncs, sets, draws one, resets, rinse and repeat... welcome, you have just created a nightmare with >1000% randomness. You trade "has_extension" for "does_it_actually_work", and the latter is worse than the former. You really did create a single path for how to code; you just don't have a clue whether it actually performs.

                  Well, it is even worse, since some hw can and some can't. That breaks the whole point. It's like letting people on bikes ride on the freeway: utter chaos, because the difference in speed is too big to be controlled.

                  And don't misunderstand me, I'll praise the world, AMD, and Intel if it works out for them. They could take a few approaches to lessen the pain, but it still won't beat hw. Also, I love AMD on a simple desktop and I love Intel on servers. I would even be prepared to pay double the price for AMD (by buying in a higher price range) if it performed as well as some 750 or 760 and that meant I could avoid the blob.
                  I've linked to another article/benchmark/review before, but anyway: Mantle seems to benefit computers with a CPU bottleneck the most, and not so much the ones with wicked-fast processors, because those start running into a GPU bottleneck instead.

                  Comment


                  • #69
                    Originally posted by profoundWHALE View Post
                    I've linked to another article/benchmark/review before, but anyway: Mantle seems to benefit computers with a CPU bottleneck the most, and not so much the ones with wicked-fast processors, because those start running into a GPU bottleneck instead.
                    Pretty much. Mantle isn't improving the rendering backend much; it's lowering the load on the driver's main thread. That's where the performance benefit comes from.

                    Comment


                    • #70
                      Originally posted by gamerk2 View Post
                      Pretty much. Mantle isn't improving the rendering backend much; it's lowering the load on the driver's main thread. That's where the performance benefit comes from.
                      While that is generally true, such a viewpoint has 2 blind spots:

                      1) Time. Game devs need time to cope with the new situation. Mantle allows things not possible previously (assigning tasks to separate engines on the GPU!!!). So we have not yet seen what dedicated teams of game devs can do with Mantle. (Multi-GPU solutions especially should improve: no more waiting for a GPU vendor driver update for workable SLI/Crossfire.)

                      2) New possibilities. Mantle allows pairing different GPUs (different in terms of power). There is no benchmark for that currently, as nowhere else can you run parts of the graphics pipeline on a second (third, etc.) GPU. That may be something we will only start to see. (Fog, postprocessing, etc. come to mind, which could be executed on the APU while the dGPU works on everything else.)

                      So one cannot dismiss the usefulness of Mantle based on current benchmarks, as those do not push the boundaries far enough (2), and too few devs are involved for us to see how Mantle helps (or harms) devs' ability to produce games WITHOUT vendor involvement (1).

                      Comment


                      • #71
                        My Predictions...

                        Could you guys check this out and tell me what you think about it?

                        http://www.phoronix.com/forums/showt...amp-Android)-!

                        Comment


                        • #72
                          Gallium3D

                          All this talk about reducing API overhead by removing abstractions, and nobody has asked how Gallium3D is affected by this? I'd imagine that Gallium might get in the way of implementing the next-gen OpenGL with low overhead.

                          Comment


                          • #73
                            Originally posted by gigaplex View Post
                            All this talk about reducing API overhead by removing abstractions, and nobody has asked how Gallium3D is affected by this? I'd imagine that Gallium might get in the way of implementing the next-gen OpenGL with low overhead.
                            Gallium3D for now has nothing to do with that.
                            By the time the crapless OpenGL is released, there will be drivers with compatibility. Probably Gallium3D will support both old OGL and OGL Next Gen, at least at first.
                            I would be much more interested in the X/Wayland issue, but it's 99% certain OGL Next will be compatible with both.
                            It will probably take a year at minimum until it's released, so don't worry.

                            Comment


                            • #74
                              Originally posted by maslascher View Post
                              Gallium3D for now has nothing to do with that.
                              By the time the crapless OpenGL is released, there will be drivers with compatibility. Probably Gallium3D will support both old OGL and OGL Next Gen, at least at first.
                              I would be much more interested in the X/Wayland issue, but it's 99% certain OGL Next will be compatible with both.
                              It will probably take a year at minimum until it's released, so don't worry.
                              Why does Gallium have nothing to do with the discussion? It shouldn't be hard to write a state tracker for the next-gen OpenGL on Gallium, but I doubt you'll see the performance benefits compared to a non-Gallium architecture. Intel claimed that the CPU overhead of Gallium is fairly high, which is why they didn't use it for their drivers. If there's any truth to their claim, then Gallium won't be a good foundation for a low-overhead, high-performance API.

                              Comment


                              • #75
                                Originally posted by gigaplex View Post
                                Why does Gallium have nothing to do with the discussion? It shouldn't be hard to write a state tracker for the next-gen OpenGL on Gallium, but I doubt you'll see the performance benefits compared to a non-Gallium architecture. Intel claimed that the CPU overhead of Gallium is fairly high, which is why they didn't use it for their drivers. If there's any truth to their claim, then Gallium won't be a good foundation for a low-overhead, high-performance API.
                                I doubt that will be a problem, at least for the next few years. Gallium3D is only used by the free drivers, and those drivers' graphics performance is pretty bad; for AMD hardware on old games it may differ, but not with new games. Heck, the AMD open-source drivers never even reached OpenGL 4.0 capabilities.

                                And the Intel GPUs are, just by their hardware, extremely weak (compared to dedicated graphics cards at least, and even against AMD APUs they are not on par).

                                So 99% of the time the GPU driver/hardware will be the bottleneck, not whether the CPU is at 20-30% load.

                                And even if that changed, I doubt Gallium3D is really in the way of such stuff. It's not like Microsoft is changing their driver model for DX12 or something; if they do, it's again only to have an excuse to support it only on Windows 9.

                                Comment
