
DRI3 Was Just Proposed On The Fly From XDC2012


  • DRI3 Was Just Proposed On The Fly From XDC2012

    Phoronix: DRI3 Was Just Proposed On The Fly From XDC2012

    While Keith Packard and Eric Anholt were enjoying beers and Schweinshaxe last night as the X.Org gathering began, ideas for DRI3 were hashed out. Discussions about this DRI2 successor then continued today at XDC2012...

    http://www.phoronix.com/vr.php?view=MTE4ODE

  • #2
    Honestly, considering the DMA-BUF infrastructure has finally been added and is, hopefully, stable (assuming nobody forgot about a really obvious use case that will require an API break to handle),

    DRI and any other graphics-related infrastructure could probably use an update to take advantage of it. That way we'd have support for buffer sharing across the board, and there'd be no link in the chain where we can't do something because it wasn't updated.
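    To make "buffer sharing across the board" concrete: sharing a DMA-BUF between processes ultimately comes down to passing a file descriptor from one process to another. Here's a minimal sketch of that underlying mechanism, fd passing over a Unix socket with SCM_RIGHTS; since no GPU is assumed, an ordinary pipe fd stands in for a real dma-buf fd (the helper names are mine, not from any real API):

    ```c
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <sys/uio.h>

    /* Send fd over sock as SCM_RIGHTS ancillary data. Returns 0 on success. */
    static int send_fd(int sock, int fd)
    {
        char byte = 'x';
        struct iovec iov = { .iov_base = &byte, .iov_len = 1 };
        union { char buf[CMSG_SPACE(sizeof(int))]; struct cmsghdr align; } u;
        struct msghdr msg = {
            .msg_iov = &iov, .msg_iovlen = 1,
            .msg_control = u.buf, .msg_controllen = sizeof(u.buf),
        };
        struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
        cmsg->cmsg_level = SOL_SOCKET;
        cmsg->cmsg_type = SCM_RIGHTS;
        cmsg->cmsg_len = CMSG_LEN(sizeof(int));
        memcpy(CMSG_DATA(cmsg), &fd, sizeof(int));
        return sendmsg(sock, &msg, 0) == 1 ? 0 : -1;
    }

    /* Receive a file descriptor sent with send_fd(). Returns -1 on error. */
    static int recv_fd(int sock)
    {
        char byte;
        struct iovec iov = { .iov_base = &byte, .iov_len = 1 };
        union { char buf[CMSG_SPACE(sizeof(int))]; struct cmsghdr align; } u;
        struct msghdr msg = {
            .msg_iov = &iov, .msg_iovlen = 1,
            .msg_control = u.buf, .msg_controllen = sizeof(u.buf),
        };
        if (recvmsg(sock, &msg, 0) != 1)
            return -1;
        struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
        int fd;
        memcpy(&fd, CMSG_DATA(cmsg), sizeof(int));
        return fd;
    }

    int main(void)
    {
        int sv[2], p[2];
        socketpair(AF_UNIX, SOCK_STREAM, 0, sv);
        pipe(p);                    /* stand-in for a dma-buf fd */
        write(p[1], "hello", 5);

        send_fd(sv[0], p[0]);       /* "export" the buffer fd */
        int fd = recv_fd(sv[1]);    /* "import" it on the other side */

        char buf[6] = { 0 };
        read(fd, buf, 5);           /* same underlying buffer, new fd */
        printf("read via passed fd: %s\n", buf);
        return 0;
    }
    ```

    The received descriptor refers to the same kernel object as the one sent, which is exactly what makes zero-copy buffer sharing between, say, a client and a compositor possible.
    
    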



    • #3
      "It's a synchronization disaster," as said by Keith Packard today from the SUSE office hosting XDC2012.
      Haha, nice to see that a four-year-old addition to the stack is now 'a disaster'. What happened, and how could it happen?



      • #4
        Will XDC2012 discuss phoronix's teasing of its users by almost exclusively providing links to itself?



        • #5
          Originally posted by Azpegath View Post
          Haha, nice to see that a four-year-old addition to the stack is now 'a disaster'. What happened, and how could it happen?
          That's the natural progress of technology. Four-plus years ago it looked like a good solution for the infrastructure of the time, but today they know better: new techniques have emerged and the graphics stack is much more advanced than it was, so it's natural that the previous solution doesn't fit today's scenario too well.

          It's like how your state-of-the-art '80s gasoline GTO muscle-car carburetor technology is absolutely laughable and inefficient by today's standards of smart injection technology.

          The world moves on and technology advances.



          • #6
            Originally posted by Azpegath View Post
            Haha, nice to see that a four-year-old addition to the stack is now 'a disaster'. What happened, and how could it happen?
            It's not 'a disaster', it's 'a synchronization disaster'. One aspect of the new API didn't work out too well, and that aspect is going to be improved, along with the usual kind of fixes you end up doing once you know you'll break API anyway.

            DRI1 lived for ~7 years; DRI2 may end up living 5. These things aren't on a fixed schedule, but considering the pace at which the OSS graphics stack has improved over the last few years, it's really not unexpectedly early to talk about DRI3.



            • #7
              Originally posted by Azpegath View Post
              Haha, nice to see that a four-year-old addition to the stack is now 'a disaster'. What happened, and how could it happen?
              Back then, devs were just trying to get 3D working with all the fancy new compositors. Experience has shown some of the limitations of that system, and how performance could be improved. Nothing to see here...



              • #8
                Originally posted by Azpegath View Post
                Haha, nice to see that a four-year-old addition to the stack is now 'a disaster'. What happened, and how could it happen?
                Yeah, it may be a disaster, but you should've seen the last one...it was even worse!

                Kidding aside...jrch2k8 said it well. The software's evolving over time. DRI2 solved major limitations of DRI1 (any direct rendered windows went straight to the screen, ignoring your compositor), but didn't necessarily do it in the most efficient way possible.

                Also, the system has become more complex: back in the day, we only had color and depth/stencil buffers, so it seemed reasonable to have the 2D driver (xf86-video-intel/ati/etc) allocate them. Now, we have packed depth/stencil, separate depth/stencil, hierarchical depth (HiZ), MSAA (tiled or sliced) and compressed MSAA buffers. Each generation of hardware needs a different set of buffers, and requires them to be in particular formats. It's just too complicated, and introduces annoying interdependencies between the 2D and 3D driver versions. Mesa has to figure out if the 2D driver is new enough to support HiZ or MSAA (and the necessary bug fixes), and fall back appropriately. It gets messy fast.
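                The version-check dance described there might look something like this sketch (the version numbers, capability names, and function are made up purely to illustrate the fallback logic, not taken from real Mesa code):

                ```c
                #include <stdio.h>

                /* Hypothetical feature bits the 3D driver wants the 2D driver
                 * to allocate buffers for. */
                enum { CAP_HIZ = 1 << 0, CAP_MSAA = 1 << 1 };

                /* Illustration only: map a DDX (2D driver) version to the
                 * buffer types it knows how to allocate.  Mesa has to carry
                 * checks like this per feature, per driver, per version --
                 * exactly the interdependency DRI3 removes by letting Mesa
                 * allocate buffers itself. */
                static unsigned ddx_capabilities(int major, int minor)
                {
                    unsigned caps = 0;
                    if (major > 2 || (major == 2 && minor >= 10))
                        caps |= CAP_HIZ;   /* pretend HiZ allocation landed in 2.10 */
                    if (major > 2 || (major == 2 && minor >= 15))
                        caps |= CAP_MSAA;  /* pretend MSAA allocation landed in 2.15 */
                    return caps;
                }

                int main(void)
                {
                    unsigned caps = ddx_capabilities(2, 12);
                    printf("HiZ:  %s\n", caps & CAP_HIZ  ? "use it" : "fall back");
                    printf("MSAA: %s\n", caps & CAP_MSAA ? "use it" : "fall back");
                    return 0;
                }
                ```

                Multiply this by every buffer type and every driver generation and you can see why keeping allocation in one place is attractive.
                
                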

                Plus, with the advent of EGL, Mesa actually has to solve all of this itself anyway. Consider Wayland...there is no xf86 driver to ask. At that point, we wonder why we don't just consolidate everything in Mesa and rework the DRI protocol to simplify things and avoid a lot of the Mesa<->X server song and dance.

                There are other reasons to make the change too. Keith's point about synchronization and performance is a good one.

                DRI3 is going to be awesome. Long live DRI3!
                Free Software Developer .:. Mesa and Xorg
                Opinions expressed in these forum posts are my own.



                • #9
                  Linux is pretty much the only platform that makes effective use of the latest 3D and 2D acceleration work. Would it be possible to improve that situation with this API breakage? DRI isn't the only part involved, but operating systems like Haiku, the *BSDs, and Solaris could use more acceleration support!



                  • #10
                    XDC summaries

                    May I humbly suggest (as a sorry non-attendee this year) that you provide daily summaries of the interesting discussions at XDC? That would at least allow me to flame people on the phoronix forums, even if I can't do it in person!

                    As for DRI3... why don't we just move to Wayland? <answering myself>The biggest obstacles are (1) the lack of EGL software; most games on Linux use GLX, and I'm not sure how easy it will be to port over the Steam library, and (2) binary drivers that don't do EGL or offer a standalone API that could support Wayland easily.

                    However, things are moving in the right direction. All the work Linaro has been doing is very standards-based, focusing on upstreamable, common APIs that can be supported by both desktop- and SoC-style hardware. So maybe my wishful thinking about Wayland isn't so impractical after all.

                    Thanks,
                    Jesse
