Next-Gen OpenGL To Be Announced Next Month


  • blackiwid
    replied
    Originally posted by gigaplex View Post
    Why does Gallium have nothing to do with the discussion? It shouldn't be hard to write a state tracker for the next gen OpenGL for Gallium, but I doubt you'll see the performance benefits compared to a non-Gallium architecture. Intel claimed that the CPU overhead of Gallium is fairly high which is why they didn't use it for their drivers. If there's any truth to their claim then Gallium won't be a good foundation for a low overhead, high performance API.
    I doubt that will be a problem, at least for the next few years. Gallium3D is only used by the free drivers, and those drivers' graphics performance is pretty bad. For AMD hardware on old games it may differ, but not with new games; heck, the AMD open-source drivers never even reached OpenGL 4.0 capabilities.

    And Intel GPUs are just extremely weak hardware (compared to dedicated graphics cards at least, and even against AMD APUs they are not on par).

    So 99% of the time the GPU driver/hardware will be the bottleneck, not whether the CPU is at 20-30% load.

    And even if that changed, I doubt Gallium3D would really get in the way of such things. It's not like Microsoft is changing their driver model for DX12 or anything; and if they do, it's again only to have an excuse to support it only on Windows 9.



  • gigaplex
    replied
    Originally posted by maslascher View Post
    Gallium3D for now has nothing to do with that.
    By the time the "crapless" OpenGL is released, drivers will support it alongside the old API. Probably Gallium3D will support both old OpenGL and OpenGL Next Gen, at least at first.
    I would be much more interested in the X/Wayland issue, but for 99% of cases OGL Next will be compatible with both.
    It will probably take a year at minimum until it gets released, so don't worry.
    Why does Gallium have nothing to do with the discussion? It shouldn't be hard to write a state tracker for the next gen OpenGL for Gallium, but I doubt you'll see the performance benefits compared to a non-Gallium architecture. Intel claimed that the CPU overhead of Gallium is fairly high which is why they didn't use it for their drivers. If there's any truth to their claim then Gallium won't be a good foundation for a low overhead, high performance API.
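    To make the overhead question concrete, here is a toy Python model (hypothetical names throughout; this is not real Mesa/Gallium code) of the difference between a classic driver that maps API calls straight to hardware commands and a Gallium-style split, where a state tracker first re-encodes every call into a generic "pipe" representation before the hardware driver lowers it:

    ```python
    # Toy sketch, not real Mesa/Gallium code: the layered path pays an
    # extra translation step per API call, which is pure CPU cost.

    class GalliumStyleDriver:
        def __init__(self):
            self.pipe_commands = []  # generic intermediate representation
            self.hw_commands = []
            self.cpu_steps = 0       # crude proxy for per-call CPU work

        def api_call(self, name, arg):
            self.cpu_steps += 1                    # state tracker: re-encode the call
            self.pipe_commands.append((name, arg))
            self.cpu_steps += 1                    # pipe driver: lower to hardware
            self.hw_commands.append((name, arg))

    class ClassicDriver:
        def __init__(self):
            self.hw_commands = []
            self.cpu_steps = 0

        def api_call(self, name, arg):
            self.cpu_steps += 1                    # one direct translation
            self.hw_commands.append((name, arg))

    frame = [("bind_texture", 7), ("set_blend", "add"), ("draw", 100)]
    g, c = GalliumStyleDriver(), ClassicDriver()
    for name, arg in frame:
        g.api_call(name, arg)
        c.api_call(name, arg)

    # Same hardware commands either way; the layered path spent twice
    # the per-call CPU steps.
    print(g.cpu_steps, c.cpu_steps)  # 6 3
    ```

    Whether the real per-call cost is large enough to matter is exactly what Intel's claim disputes; the sketch only shows where the extra work sits in the stack.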



  • maslascher
    replied
    Originally posted by gigaplex View Post
    All this talk about reducing API overhead by removing abstractions, and nobody has asked how Gallium3D is affected by this? I'd imagine that Gallium might get in the way of implementing the next-gen OpenGL with low overhead.
    Gallium3D for now has nothing to do with that.
    By the time the "crapless" OpenGL is released, drivers will support it alongside the old API. Probably Gallium3D will support both old OpenGL and OpenGL Next Gen, at least at first.
    I would be much more interested in the X/Wayland issue, but for 99% of cases OGL Next will be compatible with both.
    It will probably take a year at minimum until it gets released, so don't worry.



  • gigaplex
    replied
    Gallium3D

    All this talk about reducing API overhead by removing abstractions, and nobody has asked how Gallium3D is affected by this? I'd imagine that Gallium might get in the way of implementing the next-gen OpenGL with low overhead.



  • Linuxxx
    replied
    My Predictions...

    Could you guys check this out and tell me what you think about it?

    http://www.phoronix.com/forums/showt...amp-Android)-!



  • przemoli
    replied
    Originally posted by gamerk2 View Post
    Pretty much. Mantle isn't improving the rendering backend much; it's lowering the demand on the driver's main thread. That's where the performance benefit comes from.
    While that is generally true, that viewpoint has two blind spots:

    1) Time. Game devs need time to cope with the new situation. Mantle allows things not possible previously (assigning tasks to separate engines on the GPU!). So we have not yet seen what dedicated teams of game devs can do with Mantle. (Multi-GPU solutions especially should improve: no more waiting for a GPU vendor driver update for workable SLI/Crossfire.)

    2) New possibilities. Mantle allows pairing different GPUs (different in terms of power). There is no benchmark for that currently, as nowhere else can you run parts of the graphics pipeline on a second (third, etc.) GPU. That may be something we are only starting to see. (Fog, post-processing, etc. come to mind, which could be executed on the APU while the dGPU works on everything else.)


    So one cannot dismiss the usefulness of Mantle based on current benchmarks, as those do not push the boundaries far enough (2), and too few devs are involved for us to see how Mantle helps (or harms) devs' ability to produce games WITHOUT vendor involvement (1).



  • gamerk2
    replied
    Originally posted by profoundWHALE View Post
    I've linked to another article/benchmark/review before, but anyway, Mantle seems to benefit most on computers that have a CPU bottleneck, and not so much on the ones with wickedly fast processors, because those start running into a GPU bottleneck instead.
    Pretty much. Mantle isn't improving the rendering backend much; it's lowering the demand on the driver's main thread. That's where the performance benefit comes from.
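    The CPU-bound vs GPU-bound point can be sketched with a toy frame-time model (illustrative numbers, not benchmarks): CPU driver work and GPU rendering overlap, so frame time is roughly max(cpu_ms, gpu_ms), and cutting driver CPU cost, which is what Mantle targets, only helps when the CPU side is the longer one:

    ```python
    # Toy model: frame time ~ max(CPU work, GPU work) per frame,
    # since the two overlap. Numbers are made up for illustration.

    def fps(cpu_ms, gpu_ms):
        return 1000.0 / max(cpu_ms, gpu_ms)

    # CPU-bound machine: 12 ms of driver+game CPU work, fast GPU (6 ms).
    before = fps(cpu_ms=12.0, gpu_ms=6.0)           # ~83 fps
    after  = fps(cpu_ms=12.0 * 0.5, gpu_ms=6.0)     # halved driver cost: ~167 fps

    # GPU-bound machine: fast CPU (4 ms), GPU takes 10 ms regardless.
    before_gpu = fps(cpu_ms=4.0, gpu_ms=10.0)       # 100 fps
    after_gpu  = fps(cpu_ms=4.0 * 0.5, gpu_ms=10.0) # still 100 fps

    print(round(before), round(after), round(before_gpu), round(after_gpu))
    ```

    With the same halving of driver overhead, the CPU-bound box doubles its frame rate while the GPU-bound box gains nothing, matching the benchmark pattern described above.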



  • profoundWHALE
    replied
    Originally posted by justmy2cents View Post
    It is kinda obvious you didn't understand my point.

    Having the "best OpenGL path" is 10% of the problem. In the old days the GPU simply wasn't capable of rendering as much as you could feed it; back then a faster GPU meant everything. GPUs evolved, and right now you CAN'T feed them as much as they could render (why do you think the CPU is the gaming bottleneck?). That is why avoiding state setting between operations, and avoiding syncing, means so much. Waiting for the GPU to be free, setting it up, doing your single action, resetting... 90% of the time you used to wait and do useless things.

    If you do an implementation of multidraw_indirect that just parses the arrays and then syncs, sets, draws one, resets, rinses and repeats... welcome, you just created a nightmare with huge randomness. You avoid "has_extension?" by introducing "does_it_actually_work?", and the latter is worse than the former. You really did create a single path for how to code; you just don't have a clue whether it works.

    Well, it is even worse, since some hardware can and some can't. That breaks the whole point. It's like allowing people on bikes to ride on the freeway: utter chaos, because the difference in speed is too big to be controlled.

    And don't misunderstand me, I'll praise the world, AMD and Intel if it works out for them. They could take a few approaches to lessen the pain, but it still won't beat hardware support. Also, I love AMD on a simple desktop and I love Intel on servers. I would even be prepared to pay double the price for AMD (by buying a higher price range) if it performed as well as some 750 or 760 and that meant I could avoid the blob.
    I've linked to another article/benchmark/review before, but anyway, Mantle seems to benefit most on computers that have a CPU bottleneck, and not so much on the ones with wickedly fast processors, because those start running into a GPU bottleneck instead.



  • justmy2cents
    replied
    Originally posted by przemoli View Post
    A driver-only implementation (and I do not know whether AMD still uses one, or for which hardware!) does not pose problems for the other things an OpenGL app does.

    With or without MDI, the driver needs to take care of those too.
    It is kinda obvious you didn't understand my point.

    Having the "best OpenGL path" is 10% of the problem. In the old days the GPU simply wasn't capable of rendering as much as you could feed it; back then a faster GPU meant everything. GPUs evolved, and right now you CAN'T feed them as much as they could render (why do you think the CPU is the gaming bottleneck?). That is why avoiding state setting between operations, and avoiding syncing, means so much. Waiting for the GPU to be free, setting it up, doing your single action, resetting... 90% of the time you used to wait and do useless things.

    If you do an implementation of multidraw_indirect that just parses the arrays and then syncs, sets, draws one, resets, rinses and repeats... welcome, you just created a nightmare with huge randomness. You avoid "has_extension?" by introducing "does_it_actually_work?", and the latter is worse than the former. You really did create a single path for how to code; you just don't have a clue whether it works.

    Well, it is even worse, since some hardware can and some can't. That breaks the whole point. It's like allowing people on bikes to ride on the freeway: utter chaos, because the difference in speed is too big to be controlled.

    And don't misunderstand me, I'll praise the world, AMD and Intel if it works out for them. They could take a few approaches to lessen the pain, but it still won't beat hardware support. Also, I love AMD on a simple desktop and I love Intel on servers. I would even be prepared to pay double the price for AMD (by buying a higher price range) if it performed as well as some 750 or 760 and that meant I could avoid the blob.
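    For reference, the batching that multidraw_indirect enables can be sketched in Python (the five-field command layout comes from the GL_ARB_multi_draw_indirect extension; the mesh numbers here are made up): the app packs N draw commands into one buffer and the driver consumes the whole batch from a single glMultiDrawElementsIndirect call, instead of N separate draws each paying the sync/set/draw/reset cost described above.

    ```python
    import struct

    # One DrawElementsIndirectCommand is five 32-bit values:
    # count, instanceCount, firstIndex, baseVertex, baseInstance.
    CMD = struct.Struct("<5I")  # 20 bytes per draw

    def pack_commands(draws):
        """draws: list of (count, instances, first_index, base_vertex, base_instance)."""
        return b"".join(CMD.pack(*d) for d in draws)

    draws = [
        (36, 1,  0,  0, 0),   # e.g. a cube
        (24, 8, 36,  8, 0),   # 8 instances of another mesh
        (60, 1, 60, 16, 0),
    ]
    buf = pack_commands(draws)

    # One buffer upload plus one API call replaces len(draws) draw calls;
    # the naive loop the post describes would instead issue three separate
    # state set/draw/reset sequences.
    print(len(buf), CMD.unpack_from(buf, CMD.size))
    ```

    The poster's worry is precisely the driver side of this: if the driver unpacks the buffer and internally performs the naive loop anyway, the app has batched for nothing.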



  • przemoli
    replied
    Originally posted by justmy2cents View Post
    Not ranting, at least I didn't mean to. I only use NVIDIA for gaming, so this doesn't affect me. Still, it is way past two weeks with no results from any company other than NVIDIA, and if people take your proposed mentality into game performance: why not simply make a software renderer, since it will work everywhere? Of course it will run at <1 fps, but who cares; the important thing is that every card in the world is supported and developers can use the "fast path".

    And even if you tried to use it like you say, some features like fencing, persistent buffers, and texture arrays would impose uncontrollable problems. Part of making a game is also controlling how much and how fast you feed the GPU and VRAM. So any game that tried to work faster and do more than 3.3-class hardware was capable of would kill itself by default. That brings the need to limit resources, where one would need to keep two versions of the game: simple and complex.
    A driver-only implementation (and I do not know whether AMD still uses one, or for which hardware!) does not pose problems for the other things an OpenGL app does.

    With or without MDI, the driver needs to take care of those too.

