Next-Gen OpenGL To Be Announced Next Month


  • blackiwid
    replied
    Originally posted by justmy2cents View Post
    I might be wrong here, but zero-overhead GL was presented with up to 4.4 extensions. The only problem is that only Nvidia did their work and actually made them perform as they should.
    Sorry, I don't buy that propaganda. They only showed some theoretical benchmarks; no game running on Nvidia hardware comes anywhere close to the speedups shown by the games that support Mantle.

    So even if they had brought OpenGL up to Mantle's level, it is very strange that several companies implement a completely new API (Mantle) to make games faster on the minor graphics card vendor's hardware, rather than doing some OpenGL tuning for the major graphics card seller with the bigger market share.

    So either OpenGL is such garbage that this tuning would be harder than supporting a second API, or Nvidia is full of crap. I bet on the second.

    Correction:

    OK, I forgot that 99.98999999999% of all games are made with DirectX and not OpenGL, so in both cases they have to support another API. Still, we have not seen any major advantage from Nvidia's OpenGL implementation in even ONE real game, but we have seen big advantages with Mantle in several games... It looks like even the next GTA, a game many people will buy and which is ideal for such optimisations (open world, high CPU load), will support Mantle.
    Last edited by blackiwid; 16 July 2014, 12:24 PM.


  • gamerk2
    replied
    Originally posted by jrch2k8 View Post
    The problem is not that the drivers are sending millions of additional opcodes, thrashing the command dispatchers, failing to use certain über-optimised paths, or anything like that.

    The actual problem is hardware bandwidth/latency. Even though PCIe is really fast, it is not that fast, so every upload to GPU RAM hurts a lot. The efficiency focus is therefore a standard way to remove the upload process as much as possible and keep data inside GPU RAM, saving PCIe round trips between CPU and GPU. Of course this increases GPU RAM usage (you can't have it both ways) and start-up times (you have to upload more data up front to avoid multiple serial uploads later). For example:

    Current OpenGL/DX game: upload TextureA, wait for the upload (hurts and hurts and hurts), process, upload TextureB, wait for the upload (hurts and hurts and hurts), process, render.

    Next-gen OpenGL/DX game: upload TextureA, B, C, ... N to buffers A, B, C, ... N (only once per scene), reference buffer A, process, reference buffer B, process, render.

    Of course many more factors need to work that way; the example is just a very bastardised way to show part of the problem.
    If that were the case, then you'd see a performance increase going from PCIe 2.0 to PCIe 3.0, which you don't. Hell, even PCIe 1.1 x16 is enough for a single mid-range GPU. PCIe bandwidth is NOT the problem.
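
    For illustration, here is a rough C sketch of the two patterns described in the quote above. It assumes an existing GL 4.x context and an initialised function loader (e.g. GLEW); the pixel pointers, sizes and the per-draw plumbing are made-up placeholders, not code from any real engine:

        #include <GL/glew.h>   /* assumes the context and loader are already set up */

        /* "Old" streaming style: one upload per texture, per use. Each
         * glTexSubImage2D is a PCIe transfer the driver may have to wait on. */
        static void draw_streaming(GLuint texA, GLuint texB, int w, int h,
                                   const void *pixelsA, const void *pixelsB)
        {
            glBindTexture(GL_TEXTURE_2D, texA);
            glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                            GL_RGBA, GL_UNSIGNED_BYTE, pixelsA);
            /* ... draw with texA ... */

            glBindTexture(GL_TEXTURE_2D, texB);
            glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                            GL_RGBA, GL_UNSIGNED_BYTE, pixelsB);
            /* ... draw with texB, then render the frame ... */
        }

        /* "Upload once, reference later" style: pack everything into one array
         * texture up front; per draw you only change an index, no PCIe traffic. */
        static void upload_once(GLuint texArray, int w, int h, int layers,
                                const void *const *pixels)
        {
            glBindTexture(GL_TEXTURE_2D_ARRAY, texArray);
            glTexStorage3D(GL_TEXTURE_2D_ARRAY, 1, GL_RGBA8, w, h, layers); /* GL 4.2+ */
            for (int i = 0; i < layers; ++i)
                glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0, 0, 0, i, w, h, 1,
                                GL_RGBA, GL_UNSIGNED_BYTE, pixels[i]);
            /* per draw: set a layer index uniform, then issue the draw call */
        }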


  • jrch2k8
    replied
    Originally posted by Ancurio View Post
    Can't you do that kind of thing by decoding texture image data directly into mapped PBOs and then uploading from those?
    Well, I guess you can, but the Nvidia extensions for buffer storage seem to be more efficient, or at least more direct and easier to use.
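
    For context, the ARB_buffer_storage path (core in GL 4.4) that AZDO leans on looks roughly like this. A minimal sketch, assuming a capable context and loader; the buffer size, texture dimensions and the external decoder are hypothetical placeholders:

        #include <GL/glew.h>

        /* Persistently mapped pixel-unpack buffer: the decoder writes straight
         * into memory the GL can source uploads from, with no extra staging copy. */
        static void *create_persistent_pbo(GLuint *pbo, GLsizeiptr size)
        {
            const GLbitfield flags = GL_MAP_WRITE_BIT |
                                     GL_MAP_PERSISTENT_BIT |
                                     GL_MAP_COHERENT_BIT;
            glGenBuffers(1, pbo);
            glBindBuffer(GL_PIXEL_UNPACK_BUFFER, *pbo);
            glBufferStorage(GL_PIXEL_UNPACK_BUFFER, size, NULL, flags); /* immutable storage */
            return glMapBufferRange(GL_PIXEL_UNPACK_BUFFER, 0, size, flags);
        }

        /* After decoding into the mapped pointer (possibly on another thread),
         * the "upload" sources from the still-bound PBO at offset 0. */
        static void upload_from_pbo(GLuint tex, int w, int h)
        {
            glBindTexture(GL_TEXTURE_2D, tex);
            glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                            GL_RGBA, GL_UNSIGNED_BYTE, (const void *)0);
        }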


  • log0
    replied
    Originally posted by curaga View Post
    Virtual memory doesn't help much when there is no zeroing. Allocate a huge texture without initializing it, read from it -> very high probability of seeing old window contents.
    Just curious: is the probability really that high? I would have expected framebuffer memory to be reused for framebuffers. But then there is compositing and such, of course.

    The zeroing issue is already there regardless of whether you have virtual memory or not, I think.


  • curaga
    replied
    Originally posted by smitty3268 View Post
    Newer hardware implements virtual memory, and the hardware provides validation to make sure your app doesn't get access to something it shouldn't be able to see. Crashes can happen, though.
    Virtual memory doesn't help much when there is no zeroing. Allocate a huge texture without initializing it, read from it -> very high probability of seeing old window contents.
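
    (For what it's worth, on the application side GL 4.4's ARB_clear_texture gives a cheap way to scrub freshly allocated storage. A minimal sketch, with width/height as placeholders:)

        #include <GL/glew.h>

        /* Allocate immutable storage and explicitly clear it, so reading the
         * texture before writing it can't return whatever previously lived
         * in that memory. */
        static GLuint alloc_zeroed_texture(GLsizei width, GLsizei height)
        {
            GLuint tex;
            const GLubyte zero[4] = {0, 0, 0, 0};

            glGenTextures(1, &tex);
            glBindTexture(GL_TEXTURE_2D, tex);
            glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA8, width, height);
            glClearTexImage(tex, 0, GL_RGBA, GL_UNSIGNED_BYTE, zero); /* GL 4.4 / ARB_clear_texture */
            return tex;
        }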


  • mdias
    replied
    Originally posted by stalkerg View Post
    Will OpenGL 5 be async? And have functions with callbacks?
    OpenGL is already async. There are no callbacks that I'm aware of; however, there are fences/sync objects.
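
    A minimal sketch of the fence pattern, assuming a current context and some draw calls to wait on; vertex_count and timeout_ns are placeholders:

        #include <GL/glew.h>

        /* GL calls return immediately; a fence lets the CPU find out (or wait
         * until) the GPU has finished the commands issued before it. */
        static void draw_then_sync(GLsizei vertex_count, GLuint64 timeout_ns)
        {
            glDrawArrays(GL_TRIANGLES, 0, vertex_count);

            GLsync fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
            /* ... do unrelated CPU work here ... */
            GLenum status = glClientWaitSync(fence, GL_SYNC_FLUSH_COMMANDS_BIT, timeout_ns);
            if (status == GL_ALREADY_SIGNALED || status == GL_CONDITION_SATISFIED) {
                /* commands before the fence have completed on the GPU */
            }
            glDeleteSync(fence);
        }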


  • stalkerg
    replied
    Will OpenGL 5 be async? And have functions with callbacks?


  • newwen
    replied
    OpenGL 5 needs a complete clean-sheet API, with drivers maintaining the GL <= 4.4 APIs for compatibility. Contexts and libraries for OpenGL 5 and for earlier versions should be completely separate. For example, in order to create an OpenGL 5 or later context you should have to specify the version explicitly.
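
    For comparison, explicit version/profile selection already works this way today. A minimal SDL2 sketch requesting a 4.4 core-profile context (window title and size are arbitrary):

        #include <SDL.h>

        int main(void)
        {
            SDL_Init(SDL_INIT_VIDEO);
            /* Ask for a specific version and the core profile up front. */
            SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 4);
            SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 4);
            SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_CORE);

            SDL_Window *win = SDL_CreateWindow("gl44", SDL_WINDOWPOS_CENTERED,
                                               SDL_WINDOWPOS_CENTERED, 640, 480,
                                               SDL_WINDOW_OPENGL);
            /* NULL here means the driver could not provide a 4.4 core context. */
            SDL_GLContext ctx = SDL_GL_CreateContext(win);

            if (ctx)
                SDL_GL_DeleteContext(ctx);
            SDL_DestroyWindow(win);
            SDL_Quit();
            return 0;
        }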
    Last edited by newwen; 16 July 2014, 04:10 AM.


  • przemoli
    replied
    Originally posted by justmy2cents View Post
    I might be wrong here, but zero-overhead GL was presented with up to 4.4 extensions. The only problem is that only Nvidia did their work and actually made them perform as they should. Feature-wise, nothing should stop any 4.4 card from employing it, even if 5 were to describe some nicer API. In my opinion, card makers won't do that since they are in the business of selling new cards, not refurbishing old ones.

    Unless you meant some other view of "efficient"?
    If you mean absolute performance, then yes, Nvidia did a great job. My speculation is that they have been aiming at those fast paths for some time now, and they have optimised their driver, and maybe even some hardware, to accelerate it further.

    But if you mean that AMD/Intel fail, then it's a big fat NO. F**** NO.

    Both AMD and Intel see improvements of 1000% or more by picking the right way of doing things.

    End performance may be lower than on Nvidia, but it's still much better than the old ways for AMD/Intel.

    There is no excuse not to adopt AZDO.

    (And while one of those extensions is from OpenGL 4.4 core, it can be implemented as an extension without claiming even 4.0, as may happen for Mesa. OpenGL 4.x-level hardware is required, but not full OpenGL 4.4.)
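
    A minimal sketch of checking for the relevant extension at runtime instead of gating on the reported GL version (uses GL 3.0+ extension enumeration and assumes an initialised loader such as GLEW):

        #include <GL/glew.h>
        #include <string.h>

        /* An implementation can expose GL_ARB_buffer_storage while still
         * reporting a lower core version, so test for the extension itself. */
        static int has_extension(const char *name)
        {
            GLint count = 0;
            glGetIntegerv(GL_NUM_EXTENSIONS, &count);
            for (GLint i = 0; i < count; ++i) {
                const char *ext = (const char *)glGetStringi(GL_EXTENSIONS, i);
                if (ext && strcmp(ext, name) == 0)
                    return 1;
            }
            return 0;
        }

        /* usage: if (has_extension("GL_ARB_buffer_storage")) { ... AZDO path ... } */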


  • liam
    replied
    Originally posted by justmy2cents View Post
    i might be wrong here, but zero-overhead on GL was presented with up to 4.4 extensions. only problem is that only nvidia did their work and actually made them perform as they should. feature wise, nothing should stop any 4.4 card to employ it, even if 5 would describe some nicer api. in my opinion, card makers won't do that since they are in business of selling new cards, not refurbishing old ones

    unless you meant some other view on efficient
    Are you sure about this? AZDO was a joint effort from devs at AMD, Intel and nvidia. It seems odd if their joint effort only ran correctly on one of the platforms.
