Recapping The OpenGL 4.5 Improvements, NVIDIA Linux Changes


  • Recapping The OpenGL 4.5 Improvements, NVIDIA Linux Changes

    Phoronix: Recapping The OpenGL 4.5 Improvements, NVIDIA Linux Changes

    NVIDIA's Mark Kilgard presented at SIGGRAPH 2014 in Vancouver to cover the changes found in the just-released OpenGL 4.5 specification. He also went over some of NVIDIA's Linux driver changes...

    http://www.phoronix.com/vr.php?view=MTc2MjA

  • #2
    Deslided presentation

    Link to deslided presentation

    • #3
      Reading about Direct State Access makes me wonder why on earth such a thing was not implemented in OpenGL 1. Why the hell were the non-DSA functions implemented in the first place?

      • #4
        Originally posted by sarmad View Post
        Reading about Direct State Access makes me wonder why on earth such a thing was not implemented in OpenGL 1. Why the hell were the non-DSA functions implemented in the first place?
        Perhaps it didn't make sense for OpenGL 1.0 at the time. Also, SGI was controlling the API. There wasn't even consumer hardware that could do OpenGL (or 3D).

        • #5
          Talk replay available

          Originally posted by turol View Post
          A replay of the talk with audio and the live demos can be found here:

          http://www.ustream.tv/recorded/51255959

          • #6
            Originally posted by marek View Post
            Perhaps it didn't make sense for OpenGL 1.0 at the time. Also, SGI was controlling the API. There wasn't even consumer hardware that could do OpenGL (or 3D).
            Actually back in the original OpenGL 1.0 days, there wasn't even the notion of texture objects. Instead there was only a current 1D and 2D texture target. The plan was that display lists could be used to "encapsulate" texture image state. That didn't work out very well and so texture objects were added in OpenGL 1.1 and the glBindTexture selector was introduced to minimize the API impact of supporting texture objects.

            The issue really had nothing to do with "SGI controlling the API". Even in the OpenGL 1.0 days, SGI had "turned over" OpenGL to what was then the Architecture Review Board, so it couldn't dictate or control the API itself.

            Later, when ARB_multitexture functionality was added to OpenGL 1.3, it brought yet another selector, glActiveTexture.

            There wasn't really a conscious decision to be so dependent on selectors in the OpenGL API. It just kind of happened.

            And once texture objects set the stage for a bind-to-edit model of operation, other extensions for buffer objects, framebuffer objects, etc. adopted the same bind-to-edit approach for consistency.

            The EXT_direct_state_access extension finally addressed this about 5 years ago. While the extension obviously required zero new hardware, it took the OpenGL working group a while to simply agree on the approach. Welcome to standards making by committee!

            I hope this helps.

            - Mark
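
            A minimal sketch of the selector-based ("bind-to-edit") pattern described above versus the direct state access style, assuming an OpenGL 4.5 context with a function loader such as GLEW already initialized (texture sizes and parameters are purely illustrative):

            /* Selector-based ("bind-to-edit") style: the active texture unit and the
             * texture binding act as hidden selectors, so editing an object means
             * disturbing whatever happens to be bound. */
            GLuint tex;
            glGenTextures(1, &tex);
            glActiveTexture(GL_TEXTURE0);        /* selector: active texture unit */
            glBindTexture(GL_TEXTURE_2D, tex);   /* selector: current 2D texture  */
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 256, 256, 0,
                         GL_RGBA, GL_UNSIGNED_BYTE, NULL);

            /* Direct state access style (core in OpenGL 4.5 via ARB_direct_state_access):
             * the object is named explicitly, so no binding is disturbed. */
            GLuint tex2;
            glCreateTextures(GL_TEXTURE_2D, 1, &tex2);
            glTextureParameteri(tex2, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            glTextureStorage2D(tex2, 1, GL_RGBA8, 256, 256);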

            • #7
              Originally posted by mark_kilgard View Post
              Actually back in the original OpenGL 1.0 days, there wasn't even the notion of texture objects. Instead there was only a current 1D and 2D texture target. The plan was that display lists could be used to "encapsulate" texture image state. That didn't work out very well and so texture objects were added in OpenGL 1.1 and the glBindTexture selector was introduced to minimize the API impact of supporting texture objects.

              The issue really had nothing to do with "SGI controlling the API". Even in the OpenGL 1.0 days, SGI had "turned over" OpenGL to what was then the Architecture Review Board, so it couldn't dictate or control the API itself.

              Later, when ARB_multitexture functionality was added to OpenGL 1.3, it brought yet another selector, glActiveTexture.

              There wasn't really a conscious decision to be so dependent on selectors in the OpenGL API. It just kind of happened.

              And once texture objects set the stage for a bind-to-edit model of operation, other extensions for buffer objects, framebuffer objects, etc. adopted the same bind-to-edit approach for consistency.

              The EXT_direct_state_access extension finally addressed this about 5 years ago. While the extension obviously required zero new hardware, it took the OpenGL working group a while to simply agree on the approach. Welcome to standards making by committee!

              I hope this helps.

              - Mark
              Thanks for the great explanation. Wow, this is what happens when you think more about the past (backwards compatibility) than the present and the future (proper code design).

              • #8
                Originally posted by sarmad View Post
                Thanks for the great explanation. Wow, this is what happens when you think more about the past (backwards compatibility) than the present and the future (proper code design).
                We could have had a nice and clean overhaul full of immutable textures/buffers, DSA, and C-style OOP with Longs Peak, but unfortunately the CAD folks just said "FU, we don't want to rewrite our old code to get <modern feature>" and it all fell apart. Very unfortunate. And yet at the same time many of them are happy to swallow the D3D way of "version increase = API break" without complaining one bit.
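
                (For context: the "immutable textures/buffers" idea did eventually reach core OpenGL via ARB_texture_storage and ARB_buffer_storage. A minimal sketch of what the term means, assuming an OpenGL 4.4 context; the sizes and flags below are purely illustrative.)

                /* Immutable storage: size and format are fixed at creation time, so the
                 * driver can validate the object once instead of on every draw. */
                GLuint tex, buf;

                glGenTextures(1, &tex);
                glBindTexture(GL_TEXTURE_2D, tex);
                glTexStorage2D(GL_TEXTURE_2D, 4, GL_RGBA8, 512, 512);   /* 4 mip levels, fixed */

                glGenBuffers(1, &buf);
                glBindBuffer(GL_ARRAY_BUFFER, buf);
                glBufferStorage(GL_ARRAY_BUFFER, 64 * 1024, NULL, GL_MAP_WRITE_BIT);   /* size fixed */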

                • #9
                  Originally posted by Ancurio View Post
                  We could have had a nice and clean overhaul full of immutable textures/buffers, DSA, and C-style OOP with Longs Peak, but unfortunately the CAD folks just said "FU, we don't want to rewrite our old code to get <modern feature>" and it all fell apart. Very unfortunate. And yet at the same time many of them are happy to swallow the D3D way of "version increase = API break" without complaining one bit.
                  Whoa there; what a bunch of not-really-accurate history.

                  Longs Peak was in truth an absolute train wreck. Ill-informed web commentators often describe Longs Peak as being some "nice and clean overhaul", but that's so far from the truth. And "CAD folks" didn't have a vote or even any say on Longs Peak. They never even knew what it was, or cared, so it's ludicrous to say they subverted Longs Peak. Longs Peak was bad enough to subvert itself; it was planned to be an incompatible API that was less functional and burdened with a bunch of dubious, unproven, inflexible API constructs. Multiple companies (but no CAD vendors) looked into the abyss that was Longs Peak and put an end to it.

                  Honestly, what's more plausible: there was this wonderfully clean overhaul of OpenGL poised for success, and a few CAD companies not even involved in the process said "we don't want to rewrite our code" and... crash, boom, bam, this amazingly awesome API "just fell apart"... OR multiple rational companies involved in the process looked at what the Longs Peak effort had produced, weighed the unproven benefits against breaking API compatibility, and quite sanely and responsibly realized Longs Peak was unjustified and canned it.

                  The latter isn't just more plausible; it's what happened.

                  - Mark

                  • #10
                    Originally posted by mark_kilgard View Post
                    Whoa there; what a bunch of not-really-accurate history.

                    Longs Peak was in truth an absolute train wreck. Ill-informed web commentators often describe Longs Peak as being some "nice and clean overhaul", but that's so far from the truth. And "CAD folks" didn't have a vote or even any say on Longs Peak. They never even knew what it was, or cared, so it's ludicrous to say they subverted Longs Peak. Longs Peak was bad enough to subvert itself; it was planned to be an incompatible API that was less functional and burdened with a bunch of dubious, unproven, inflexible API constructs. Multiple companies (but no CAD vendors) looked into the abyss that was Longs Peak and put an end to it.

                    Honestly, what's more plausible: there was this wonderfully clean overhaul of OpenGL poised for success, and a few CAD companies not even involved in the process said "we don't want to rewrite our code" and... crash, boom, bam, this amazingly awesome API "just fell apart"... OR multiple rational companies involved in the process looked at what the Longs Peak effort had produced, weighed the unproven benefits against breaking API compatibility, and quite sanely and responsibly realized Longs Peak was unjustified and canned it.

                    The latter isn't just more plausible; it's what happened.

                    - Mark
                    Then why would they keep complete silence over what happened? Seems kind of strange for an open consortium, doesn't it?

                    • #11
                      Originally posted by Ancurio View Post
                      Then why would they keep complete silence over what happened? Seems kind of strange for an open consortium, doesn't it?
                      No, not really. The deliberations within a standards body are known to all the member participants, but dirty laundry isn't typically hung out to dry. Khronos is open in that its standards are published openly, but if you want to join the process (and you are welcome to do so), you have to become a member and abide by the process.

                      It's bad form to discuss those deliberations outside the standards body's own process. That didn't stop some frustrated individuals after the canning of Longs Peak from doing just that. Doesn't make it right. Doesn't make it good.

                      Keep in mind they made all sorts of calamitous predictions about how the canning of Longs Peak was going to lead to the death of OpenGL. Seven years later, those predictions look ridiculous.

                      The truth is that OpenGL's standards making actually got back on track after the Longs Peak debacle. From 2008 on, there has been an OpenGL standard update every single year, sometimes twice a year, all the way up to OpenGL 4.5 this week.

                      - Mark

                      • #12
                        Interesting to hear that, indeed. Well then, let's hope the Next Generation OpenGL Initiative will produce something better this time...

                        About NV_path_rendering: I heard about it some time ago, and that Mozilla was looking into it for Firefox acceleration. If I remember correctly, they were interested but said there were problems too. I still find it strange that on Windows, acceleration with Direct2D has worked quite well for multiple browsers for some time now, while on Linux and elsewhere they are still failing to reach the same level of acceleration (if any) with cairo, Skia, OpenGL, and whatnot.
                        Is Direct2D really that good?
                        Could NV_path_rendering be the savior here, given that it would one day become ARB_path_rendering?
                        Why, after 3 years, is it still a rather "secret" extension?

                        It's probably too naive, but why not make a combined effort with, for example, the Mozilla folks (or Google/Skia) to get around the problems, invite the other IHVs along, and create ARB_path_rendering?

                        • #13
                          Originally posted by Stebs View Post
                          Could NV_path_rendering be the savior here, given that it would one day become ARB_path_rendering?
                          Why, after 3 years, is it still a rather "secret" extension?
                          A secret? Hardly.

                          Jeff Bolz and I published our SIGGRAPH Asia 2012 "GPU-accelerated Path Rendering" paper discussing the approach in quite a lot of technical detail. Publishing a SIGGRAPH paper isn't a good way to keep a secret.

                          Likewise the OpenGL extension specification for NV_path_rendering is registered and public.

                          Adobe's Illustrator CC 2014 shipped in June with GPU-acceleration using OpenGL, particularly NV_path_rendering and NV_blend_equation_advanced. For the first time in its history, Illustrator is now taking advantage of the GPU.

                          There's a Software Development Kit available with code that builds for both Linux and Windows. Windows users can try out the SDK's pre-compiled demos.

                          Google's top-of-tree Skia has NV_path_rendering support enabled.

                          The OpenGL Extension Wrangler (GLEW) and Regal both provide extension loading support.

                          NV_path_rendering ships with NVIDIA's Tegra K1-based Shield Tablet. The GameWorks SDK has a path rendering example, and more are coming.

                          You are correct that Mozilla has experimented with NV_path_rendering. I'm a Firefox user on both Windows and Linux so I'd love to see a multi-OS GPU-acceleration approach for Firefox.

                          - Mark
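
                          A minimal sketch of the "stencil, then cover" usage pattern the paper describes, assuming an OpenGL context with a stencil buffer and GLEW already initialized (the SVG path string and the function name draw_example_path are illustrative, not taken from the SDK):

                          #include <string.h>
                          #include <GL/glew.h>

                          void draw_example_path(void)
                          {
                              if (!GLEW_NV_path_rendering)
                                  return;  /* extension not exposed by this driver/GPU */

                              /* Specify a path object from an SVG path string. */
                              const char *svg = "M100,180 C40,80 150,20 100,100 C50,20 160,80 100,180 Z";
                              GLuint path = glGenPathsNV(1);
                              glPathStringNV(path, GL_PATH_FORMAT_SVG_NV, (GLsizei)strlen(svg), svg);

                              /* Step 1: "stencil" the path's filled coverage into the stencil buffer. */
                              glStencilFillPathNV(path, GL_COUNT_UP_NV, 0x1F);

                              /* Step 2: "cover" the path, shading only the samples marked by step 1;
                               * the cover pass uses whatever fragment shading is currently bound. */
                              glEnable(GL_STENCIL_TEST);
                              glStencilFunc(GL_NOTEQUAL, 0, 0x1F);
                              glStencilOp(GL_KEEP, GL_KEEP, GL_ZERO);
                              glCoverFillPathNV(path, GL_BOUNDING_BOX_NV);

                              glDeletePathsNV(path, 1);
                          }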

                          • #14
                            Originally posted by mark_kilgard View Post
                            A secret? Hardly.
                            No, I did not mean really secret; by "secret" I meant "not really known among common users (like me)", i.e. not already hyped by users who cannot use things like Direct2D but want some modern accelerated path renderer used in their programs.

                            Shortly after my post, I read publications (like the Adobe Illustrator announcement and others from SIGGRAPH 2014) that cleared some things up for me (and are making NV_path_rendering more prominent).
                            I did not think you would read or reply to my post, so I did not follow my first intention to edit it, sorry. And by the way, thanks a lot for the reply!

                            Originally posted by mark_kilgard View Post
                            You are correct that Mozilla has experimented with NV_path_rendering. I'm a Firefox user on both Windows and Linux so I'd love to see a multi-OS GPU-acceleration approach for Firefox.

                            - Mark
                            Mozilla is working on enabling Skia as a cairo alternative for Firefox; it is working right now, albeit not yet ready for release.
                            They have a Bugzilla entry for turning on NV_path_rendering support in Skia: https://bugzilla.mozilla.org/show_bug.cgi?id=978290
                            I would love to see it in action too.
                            Thanks again, Stebs
