Next-Gen OpenGL To Be Announced Next Month


  • #21
    "lover overhead OpenGL"

    Originally posted by phoronix View Post
    FTA: 'Hearing about "lover overhead OpenGL" is hardly a surprise'
    Did it just get... sexy... in here?



  • #22
    I noticed that and it immediately made my day.



  • #23
    Originally posted by Kivada View Post
    You mean DX10.1, right? IIRC it's because AMD was the first one to support it by a long shot. There were a few games that implemented it, and even some that removed the capability because Nvidia paid them off, as DX10.1 made the games run noticeably faster than DX10.
    Huh, that might actually explain part of why Age of Conan went a little bit funky around 2010, after they 'revamped' their GFX engine, which sucked harder than the original.



  • #24
    Originally posted by zxy_thf View Post
    In short, the old APIs were not designed to be efficient.
    select vs. epoll is a good example.
    I might be wrong here, but zero-overhead GL was already presented with the 4.4 extensions. The only problem is that only NVIDIA did their work and actually made them perform as they should. Feature-wise, nothing should stop any 4.4 card from employing it, even if GL 5 describes some nicer API. In my opinion, card makers won't do that, since they are in the business of selling new cards, not refurbishing old ones.

    Unless you meant some other sense of efficient.
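
    For anyone who hasn't run into the select vs. epoll comparison: a rough, untested sketch of the difference is below (my own illustration, not from the article; fds[] and nfds are made up, and error handling is omitted). select() has to be handed the whole fd set again on every call, while epoll registers interest once and then only waits; that "stop re-submitting everything every time" idea is the same one the low-overhead GL work is chasing.

    Code:
    #include <sys/select.h>
    #include <sys/epoll.h>

    /* Old way: the caller rebuilds the whole set and the kernel
     * rescans every fd on every single call. */
    void wait_with_select(int *fds, int nfds)
    {
        fd_set rd;
        FD_ZERO(&rd);
        int maxfd = -1;
        for (int i = 0; i < nfds; i++) {
            FD_SET(fds[i], &rd);
            if (fds[i] > maxfd) maxfd = fds[i];
        }
        select(maxfd + 1, &rd, NULL, NULL, NULL);   /* cost grows with nfds */
    }

    /* New way: describe the fd set once up front ... */
    int epoll_setup(int *fds, int nfds)
    {
        int ep = epoll_create1(0);
        for (int i = 0; i < nfds; i++) {
            struct epoll_event ev = { .events = EPOLLIN, .data.fd = fds[i] };
            epoll_ctl(ep, EPOLL_CTL_ADD, fds[i], &ev);
        }
        return ep;
    }

    /* ... then each wait is cheap and independent of how many fds exist. */
    void wait_with_epoll(int ep)
    {
        struct epoll_event ready[16];
        epoll_wait(ep, ready, 16, -1);
    }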



  • #25
    Originally posted by jrch2k8 View Post
    The problem is not that the drivers are sending millions of additional opcodes, trashing the command dispatchers, or not using certain uber-optimised paths, or anything like that.

    The actual problem is hardware bandwidth/latency: even though PCIe is really fast, it is not that fast, so every upload to GPU RAM is going to hurt a lot. The efficiency focus is therefore a standard way to remove the upload process as much as possible and keep data inside GPU RAM as much as possible, to save PCIe round trips between CPU and GPU. Of course this will increase GPU RAM usage (you can't have it both ways) and start-up times (you have to upload more data up front to avoid multiple/serial uploads). For example:

    Current OpenGL/DX game: upload TextureA, wait for upload (hurts and hurts and hurts), process, upload TextureB, wait for upload (hurts and hurts and hurts), process, render.

    Next-gen OpenGL/DX game: upload TextureA,B,C,N... to buffersA,B,C,N... (<-- only once per scene), reference buffer A, process, reference buffer B, process, render.

    Of course many more factors need to work that way; the example is just a very bastardised way to show part of the problem.
    Can't you do that kind of stuff with decoding texture image data directly to mapped PBOs and then uploading from that?
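
    Roughly, yes, at least on GL 4.4-class hardware. Here's a rough, untested sketch of what I mean (names and sizes are made up, and it assumes a context with ARB_buffer_storage already exists): decode everything once into a persistently mapped unpack PBO at scene load, and later texture updates source from that buffer instead of stalling on a fresh CPU-side upload per draw.

    Code:
    #include <GL/glew.h>
    #include <string.h>

    /* Old style: upload right before each draw and eat the PCIe stall. */
    void draw_old_style(GLuint tex, const void *pixels, int w, int h)
    {
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                        GL_RGBA, GL_UNSIGNED_BYTE, pixels);  /* upload, wait */
        /* ... draw call that samples tex ... */
    }

    /* New style: one persistently mapped PBO, filled once per scene. */
    GLuint scene_pbo;
    void  *scene_staging;

    void upload_scene_once(const void *all_texels, size_t bytes)
    {
        GLbitfield flags = GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT |
                           GL_MAP_COHERENT_BIT;

        glGenBuffers(1, &scene_pbo);
        glBindBuffer(GL_PIXEL_UNPACK_BUFFER, scene_pbo);
        glBufferStorage(GL_PIXEL_UNPACK_BUFFER, bytes, NULL, flags);
        scene_staging = glMapBufferRange(GL_PIXEL_UNPACK_BUFFER, 0, bytes, flags);

        /* Decode/copy all of the scene's texel data exactly once. */
        memcpy(scene_staging, all_texels, bytes);

        /* With the PBO bound, later glTexSubImage2D calls take an offset
         * into this buffer instead of a CPU pointer, so the driver can
         * schedule the DMA without blocking the application thread. */
    }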



  • #26
    Originally posted by justmy2cents View Post
    I might be wrong here, but zero-overhead GL was already presented with the 4.4 extensions. The only problem is that only NVIDIA did their work and actually made them perform as they should. Feature-wise, nothing should stop any 4.4 card from employing it, even if GL 5 describes some nicer API. In my opinion, card makers won't do that, since they are in the business of selling new cards, not refurbishing old ones.

    Unless you meant some other sense of efficient.
    Are you sure about this? AZDO was a joint effort from devs at AMD, Intel and Nvidia. It seems odd if their joint effort only ran correctly on one of the platforms.



  • #27
    Originally posted by justmy2cents View Post
    I might be wrong here, but zero-overhead GL was already presented with the 4.4 extensions. The only problem is that only NVIDIA did their work and actually made them perform as they should. Feature-wise, nothing should stop any 4.4 card from employing it, even if GL 5 describes some nicer API. In my opinion, card makers won't do that, since they are in the business of selling new cards, not refurbishing old ones.

    Unless you meant some other sense of efficient.
    If you mean absolute performance, then yes, Nvidia did a great job. My speculation is that they have been aiming at those fast paths for some time now, and they have an optimised driver and maybe even some hardware to accelerate it further.

    But if you mean that AMD/Intel fail, then it's a big fat NO. F**** NO.

    Both AMD and Intel see 1000% or greater improvements by selecting the right way of doing things.

    End performance may be lower than Nvidia's, but it's still much better than the old ways for AMD/Intel.

    There is no excuse not to adopt AZDO.

    (And while one of those extensions is from OpenGL 4.4 core, it can be implemented as an extension without claiming even 4.0, as may happen for Mesa. OpenGL 4.x-level hardware is required, but not full OpenGL 4.4.)
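
    For anyone wondering what "the right way of doing things" looks like in practice, here's a rough sketch of one of the AZDO building blocks, glMultiDrawElementsIndirect (GL 4.3): the per-draw parameters live in a GPU-side buffer, so hundreds of draws cost one API call instead of hundreds. Untested, and the VAO/indirect buffer setup is assumed to exist already.

    Code:
    #include <GL/glew.h>

    /* Matches the layout GL expects in GL_DRAW_INDIRECT_BUFFER. */
    typedef struct {
        GLuint count;          /* indices per draw */
        GLuint instanceCount;
        GLuint firstIndex;
        GLuint baseVertex;
        GLuint baseInstance;
    } DrawElementsIndirectCommand;

    /* One call submits 'ndraws' draws whose parameters already sit in
     * indirect_buf on the GPU -- the CPU never touches them per frame. */
    void draw_scene_azdo(GLuint vao, GLuint indirect_buf, GLsizei ndraws)
    {
        glBindVertexArray(vao);
        glBindBuffer(GL_DRAW_INDIRECT_BUFFER, indirect_buf);
        glMultiDrawElementsIndirect(GL_TRIANGLES, GL_UNSIGNED_INT,
                                    (const void *)0,   /* offset into buffer */
                                    ndraws,
                                    sizeof(DrawElementsIndirectCommand));
    }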



  • #28
    OpenGL 5 needs a complete clean-sheet API, with drivers maintaining the GL <= 4.4 APIs for compatibility. Contexts and libraries for OpenGL 5 and earlier versions should be completely separated. For example, in order to create an OpenGL 5 or later context you should have to specify the version.
    Last edited by newwen; 16 July 2014, 04:10 AM.
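
    That's more or less how core contexts already work on the creation side: you ask for an explicit version and profile. A rough sketch of the idea using the existing GLX path (the version numbers passed in would be 5.0 in this hypothetical; nothing here is a real GL 5 API):

    Code:
    #include <GL/glx.h>

    /* Ask for a specific core-profile version; the driver refuses if it
     * can't provide it, so an app would never land on an older context. */
    GLXContext create_versioned_context(Display *dpy, GLXFBConfig fbc,
                                        int major, int minor)
    {
        PFNGLXCREATECONTEXTATTRIBSARBPROC glXCreateContextAttribsARB =
            (PFNGLXCREATECONTEXTATTRIBSARBPROC)glXGetProcAddress(
                (const GLubyte *)"glXCreateContextAttribsARB");

        const int attribs[] = {
            GLX_CONTEXT_MAJOR_VERSION_ARB, major,   /* e.g. 5 */
            GLX_CONTEXT_MINOR_VERSION_ARB, minor,   /* e.g. 0 */
            GLX_CONTEXT_PROFILE_MASK_ARB,  GLX_CONTEXT_CORE_PROFILE_BIT_ARB,
            None
        };
        return glXCreateContextAttribsARB(dpy, fbc, NULL, True, attribs);
    }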



  • #29
    Will OpenGL 5 be async? And have functions with callbacks?



  • #30
    Originally posted by stalkerg View Post
    Will OpenGL 5 be async? And have functions with callbacks?
    OpenGL is already async. There are no callbacks that I'm aware of; however, there are fences/sync objects.
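
    To illustrate the fences/sync objects bit, here's a small, untested sketch of the usual pattern (the timeout value is arbitrary): the draw calls return immediately, and the fence is how the CPU later finds out that the GPU actually finished them.

    Code:
    #include <GL/glew.h>

    void submit_and_wait(void)
    {
        /* ... queue draw calls; they return before the GPU executes them ... */

        /* Drop a fence into the command stream after the queued work. */
        GLsync fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);

        /* Wait up to ~16 ms (timeout is in nanoseconds) for the GPU to reach it. */
        GLenum status = glClientWaitSync(fence, GL_SYNC_FLUSH_COMMANDS_BIT,
                                         16 * 1000 * 1000);
        if (status == GL_TIMEOUT_EXPIRED) {
            /* GPU still busy; do other work and check again later. */
        }
        glDeleteSync(fence);
    }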
