Next-Gen OpenGL To Be Announced Next Month

  • #16
    Originally posted by schmidtbag View Post
    I still can't get a clear answer - do current DX11/OGL4.x cards support DX12 and OGL5?
    I believe DX12 is likely going to have similar requirements to Mantle - that is, GCN (RadeonSI) and up for AMD, and I think Fermi is the equivalent on the NVIDIA side. My guess is that GL5 will have the same requirements as well. They can always create a GL4.5 version that provides the parts that will run on older cards, just like they provided GL3.3 when 4.0 came out.
    Last edited by smitty3268; 07-15-2014, 03:10 PM.

  • #17
    Originally posted by PuckPoltergeist View Post
    Free memory access for everyone everywhere, hooray! The input validation is there for a good reason. In the best case, you just hang the hardware and have to reset it. Worse, the application gets access to regions it's not allowed to touch.
    Newer hardware implements virtual memory, and the hardware provides validation to make sure your app doesn't get access to something it shouldn't be able to see. Crashes can still happen, though.

  • #18
    Originally posted by PuckPoltergeist View Post
    Free memory access for everyone everywhere, hooray! The input validation is there for a good reason. In the best case, you just hang the hardware and have to reset it. Worse, the application gets access to regions it's not allowed to touch.
    Virtual memory and command stream checking should take care of gross misbehavior.

    But you are of course free to stick with the current APIs if you need the current level of validation - like validating every draw call for every drawn frame...

    Edit:
    @smitty3268 beat me to it

  • #19
    Originally posted by schmidtbag View Post
    I still can't get a clear answer - do current DX11/OGL4.x cards support DX12 and OGL5?
    Yes and no. Manufacturers can update drivers for older hardware to add support for newer OpenGL/DirectX revisions, but they may choose not to do so.

  • #20
    Originally posted by schmidtbag View Post
    I still can't get a clear answer - do current DX11/OGL4.x cards support DX12 and OGL5? And, I still don't understand why this needs a new specification - why can't older versions just simply be modified to increase efficiency?
    In short, the old APIs were not designed to be efficient; select vs. epoll is a good example.
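
    To make the analogy concrete, here is a minimal Linux sketch of the two interfaces (just to show the shape of the APIs, nothing more): select() makes the kernel re-scan the whole descriptor set on every call, while epoll registers interest once and only reports the descriptors that are actually ready.

      /* select() vs epoll(): same readiness question, different cost model. */
      #include <stdio.h>
      #include <sys/select.h>
      #include <sys/epoll.h>
      #include <unistd.h>

      int main(void)
      {
          int fd = STDIN_FILENO;

          /* select(): the whole set is passed in and scanned on EVERY call. */
          fd_set rfds;
          FD_ZERO(&rfds);
          FD_SET(fd, &rfds);
          struct timeval tv = { 0, 0 };                /* poll, don't block */
          if (select(fd + 1, &rfds, NULL, NULL, &tv) > 0)
              printf("select: stdin readable\n");

          /* epoll: interest is registered ONCE; waits are cheap after that. */
          int ep = epoll_create1(0);
          struct epoll_event ev = { .events = EPOLLIN, .data.fd = fd };
          epoll_ctl(ep, EPOLL_CTL_ADD, fd, &ev);
          struct epoll_event ready[8];
          int n = epoll_wait(ep, ready, 8, 0);         /* returns only ready fds */
          for (int i = 0; i < n; i++)
              printf("epoll: fd %d readable\n", ready[i].data.fd);

          close(ep);
          return 0;
      }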

  • #21
    "lover overhead OpenGL"

    Originally posted by phoronix View Post
    FTA: 'Hearing about "lover overhead OpenGL" is hardly a surprise'
    Did it just get... sexy... in here?

  • #22
    I noticed that and it immediately made my day.

  • #23
    Originally posted by Kivada View Post
    You mean DX10.1 right? IIRC it's because AMD was the first one to support it by a long shot. There were a few games that implemented it, even some that removed the capability because Nvidia paid them off, as DX10.1 made the games run noticeably faster than DX10.
    Huh, that might actually explain part of why Age of Conan went a little bit funky around 2010, after they "revamped" their GFX engine - which sucked harder than the original.

  • #24
    Originally posted by zxy_thf View Post
    In short, the old APIs were not designed to be efficient; select vs. epoll is a good example.
    I might be wrong here, but zero-overhead GL was already presented with extensions up to 4.4 (AZDO). The only problem is that only NVIDIA did their work and actually made them perform as they should. Feature-wise, nothing should stop any 4.4 card from employing it, even if GL5 describes a nicer API. In my opinion, card makers won't do that, since they are in the business of selling new cards, not refurbishing old ones.

    Unless you meant some other sense of "efficient".

  • #25
    Originally posted by jrch2k8 View Post
    The problem is not that the drivers are sending millions of additional opcodes, trashing the command dispatchers, or failing to use certain uber-optimised paths, or anything like that.

    The actual problem is hardware bandwidth/latency: even though PCIe is really fast, it is not that fast, so every upload to GPU RAM hurts a lot. The efficiency focus is therefore a standard way to remove the upload process as much as possible and keep data inside GPU RAM, saving PCIe trips between CPU and GPU. Of course this will increase GPU RAM usage (you can't have it both ways) and start-up times (you have to upload more data up front to avoid multiple serial uploads). For example:

    Current OpenGL/DX game: upload TextureA, wait for upload (hurts and hurts and hurts), process, upload TextureB, wait for upload (hurts and hurts and hurts), process, render.

    Next-gen OpenGL/DX game: upload TextureA,B,C,N... to buffersA,B,C,N... (only once per scene), reference buffer A, process, reference buffer B, process, render.

    Of course many more factors need to work that way; the example is just a very bastardised way to show part of the problem.
    Can't you do that kind of stuff by decoding texture image data directly into mapped PBOs and then uploading from those?
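
    For what it's worth, the staging copy can already be avoided that way today. A rough sketch of the mapped-PBO path (assuming an existing GL 2.1+ context with an extension loader initialised, a pre-created texture tex, known width/height, and a hypothetical decode_image_into() decoder):

      /* Decode into a mapped pixel-unpack buffer (PBO), then upload from it. */
      GLuint pbo;
      glGenBuffers(1, &pbo);
      glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
      glBufferData(GL_PIXEL_UNPACK_BUFFER, width * height * 4, NULL, GL_STREAM_DRAW);

      /* Decode straight into driver-owned memory - no extra CPU staging copy. */
      void *dst = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
      decode_image_into(dst, width, height);   /* hypothetical image decoder */
      glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);

      /* With a PBO bound, the data pointer is an offset into the buffer, so
       * the driver can schedule the transfer to GPU RAM asynchronously. */
      glBindTexture(GL_TEXTURE_2D, tex);
      glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                      GL_RGBA, GL_UNSIGNED_BYTE, (const void *)0);
      glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);

    That hides the copy, but every glTexSubImage2D still goes through full driver validation - which is exactly the per-call overhead the next-gen APIs are trying to drop.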

  • #26
    Originally posted by justmy2cents View Post
    I might be wrong here, but zero-overhead GL was already presented with extensions up to 4.4 (AZDO). The only problem is that only NVIDIA did their work and actually made them perform as they should. Feature-wise, nothing should stop any 4.4 card from employing it, even if GL5 describes a nicer API. In my opinion, card makers won't do that, since they are in the business of selling new cards, not refurbishing old ones.

    Unless you meant some other sense of "efficient".
    Are you sure about this? AZDO was a joint effort from devs at AMD, Intel and NVIDIA. It seems odd if their joint effort only ran correctly on one of the platforms.

  • #27
    Originally posted by justmy2cents View Post
    I might be wrong here, but zero-overhead GL was already presented with extensions up to 4.4 (AZDO). The only problem is that only NVIDIA did their work and actually made them perform as they should. Feature-wise, nothing should stop any 4.4 card from employing it, even if GL5 describes a nicer API. In my opinion, card makers won't do that, since they are in the business of selling new cards, not refurbishing old ones.

    Unless you meant some other sense of "efficient".
    If you mean absolute performance, then yes, NVIDIA did a great job. My speculation is that they have been aiming at those fast paths for some time now, and they have an optimised driver and maybe even some hardware to accelerate it further.

    But if you mean that AMD/Intel fail, then it's a big fat NO. F**** NO.

    Both AMD and Intel see improvements of 1000% or more by selecting the right way of doing things.

    End performance may be lower than on NVIDIA, but it's still much better than the old ways on AMD/Intel.

    There is no excuse not to adopt AZDO.

    (And while one of those extensions is OpenGL 4.4 core, it can be implemented as an extension without the driver claiming even 4.0, as may happen for Mesa. OpenGL 4.x-level hardware is required, but not full OpenGL 4.4.)
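
    For reference, the extension in question is ARB_buffer_storage (core in 4.4), whose persistent mapping is one of the main AZDO building blocks. A minimal sketch, assuming a context that exposes the extension, loaded function pointers, and a caller-supplied size:

      /* AZDO building block: a persistently mapped buffer (ARB_buffer_storage).
       * The buffer stays mapped while the GPU uses it, so per-frame updates
       * need no map/unmap or glBufferSubData validation on the hot path. */
      const GLbitfield flags = GL_MAP_WRITE_BIT
                             | GL_MAP_PERSISTENT_BIT   /* keep mapped during GL use */
                             | GL_MAP_COHERENT_BIT;    /* writes visible w/o flush */
      GLuint buf;
      glGenBuffers(1, &buf);
      glBindBuffer(GL_ARRAY_BUFFER, buf);
      /* Immutable storage: allocated once, never re-specified. */
      glBufferStorage(GL_ARRAY_BUFFER, size, NULL, flags);
      void *cpu_view = glMapBufferRange(GL_ARRAY_BUFFER, 0, size, flags);
      /* Each frame: write into cpu_view, insert a fence, issue the draws. */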

  • #28
    OpenGL 5 needs a complete clean-sheet API, with drivers maintaining the GL <= 4.4 APIs for compatibility. Contexts and libraries for OpenGL 5 and earlier versions should be completely separated. For example, in order to create an OpenGL 5 or later context, you should have to specify the version explicitly.
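
    Today's context creation already has that hook: with GLX_ARB_create_context you request an explicit version, so a hypothetical GL5 context could reuse the same idiom. A rough sketch (assuming an X11 display, an already-chosen GLXFBConfig, and the ARB tokens from glxext.h):

      #include <GL/glx.h>    /* plus <GL/glxext.h> for the ARB tokens */

      GLXContext make_versioned_context(Display *dpy, GLXFBConfig fbc)
      {
          typedef GLXContext (*CreateCtxFn)(Display *, GLXFBConfig, GLXContext,
                                            Bool, const int *);
          CreateCtxFn createContextAttribs = (CreateCtxFn)glXGetProcAddress(
              (const GLubyte *)"glXCreateContextAttribsARB");

          const int attribs[] = {
              GLX_CONTEXT_MAJOR_VERSION_ARB, 4,   /* would become 5 for the new API */
              GLX_CONTEXT_MINOR_VERSION_ARB, 4,
              GLX_CONTEXT_PROFILE_MASK_ARB,  GLX_CONTEXT_CORE_PROFILE_BIT_ARB,
              None
          };
          return createContextAttribs(dpy, fbc, NULL, True, attribs);
      }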
    Last edited by newwen; 07-16-2014, 04:10 AM.

  • #29
    Will OpenGL 5 be async? And will it have functions with callbacks?

  • #30
    Originally posted by stalkerg View Post
    Will OpenGL 5 be async? And will it have functions with callbacks?
    OpenGL is already async. There are no callbacks that I'm aware of; however, there are fences/sync objects.
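
    The fence/sync pattern (GL 3.2 / ARB_sync) looks roughly like this - you poll or block on a fence instead of registering a callback:

      /* Insert a fence after the commands you care about... */
      GLsync fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);

      /* ...then later poll it (timeout 0) instead of getting a callback. */
      GLenum status = glClientWaitSync(fence, GL_SYNC_FLUSH_COMMANDS_BIT, 0);
      if (status == GL_ALREADY_SIGNALED || status == GL_CONDITION_SATISFIED) {
          /* GPU has passed the fence; safe to touch the resources it was using. */
      }
      glDeleteSync(fence);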
