Gallium3D Gets Xorg, DRI2 State Trackers

  • Gallium3D Gets Xorg, DRI2 State Trackers

    Phoronix: Gallium3D Gets Xorg, DRI2 State Trackers

    Gallium3D recently landed in Mesa's mainline code-base and work on it continues to move forward in a steadfast manner. Committed to Mesa's master branch last night for Gallium3D were state trackers for Xorg and DRI2. State trackers are used in Gallium3D to track the state of something (such as Mesa's state) and to then translate that into operations...

    http://www.phoronix.com/vr.php?view=NzExMA

  • #2
    what?
    i always thought gallium was just for rendering or gpgpu stuff and the rest like modesetting was handled by radeonhd for example o_O

    • #3
      So when will Intel start adding gallium support to their driver? Does experimental code already exist somewhere?

      • #4
        Originally posted by Pfanne View Post
        what?
        i always thought gallium was just for rendering or gpgpu stuff and the rest like modesetting was handled by radeonhd for example o_O
        I'm a bit fuzzy on most of it, but from what I understand, Gallium is a general-purpose low-level graphics engine. The state trackers are essentially the plug between a high-level API (be it OpenGL/Mesa, Cairo, Xorg/EXA, or any of a number of others) and Gallium itself. The winsys layer is then the plug between Gallium and the OS, which is what would include the bits to talk to KMS and the kernel bits of DRI2/DRM. The middle layer of Gallium is then the actual "driver" that takes Gallium commands (generated by the state trackers) and converts them into hardware commands and submits them to the hardware (using the winsys layer as appropriate).

        The state trackers are pretty flexible. In OpenGL, for example, you have a drawing context. You don't say "draw a red triangle here," but instead you say, "set the drawing color to red; now add these three coordinates; now connect the coordinates I gave you to render a triangle." The state tracker tracks the 'state' of the OpenGL context and eventually generates real drawing commands (there's a tiny code sketch of this at the end of this post). Cairo uses a similar context-based drawing API, and hence can be accelerated in a very similar fashion. The Cairo state tracker would simply be tracking 2D-only state and be doing some tessellation to generate triangle-based rendering commands for the hardware, but otherwise is really the same as the OpenGL state tracker in general. EXA works less with contexts and more with complete drawing commands (very simplistic ones), so the EXA state tracker would be even simpler -- it would just translate EXA commands into Gallium commands without actually even needing to track much state.

        I'm a little unclear on exactly what a KMS/DRI2 state tracker is good for... I think it's meant to help writing nested window systems. One binary can talk over the same interface to both the actual hardware drivers (drawing to the framebuffer and resizing the actual output) or to an 'emulated' display (drawing to a buffer and resizing that buffer) without needing or even being able to know the difference.
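
        To make that "red triangle" sequence concrete, here is a minimal sketch in plain C using classic immediate-mode OpenGL. Standard GL/GLUT calls only; GLUT is just a convenient way to get a window and a context, not anything specific to Gallium or Mesa internals:

            /* Build on a typical Linux box with: gcc triangle.c -lGL -lglut */
            #include <GL/gl.h>
            #include <GL/glut.h>

            static void display(void)
            {
                glClear(GL_COLOR_BUFFER_BIT);

                glColor3f(1.0f, 0.0f, 0.0f);   /* "set the drawing color to red"     */
                glBegin(GL_TRIANGLES);         /* "now add these three coordinates"  */
                glVertex2f(-0.5f, -0.5f);
                glVertex2f( 0.5f, -0.5f);
                glVertex2f( 0.0f,  0.5f);
                glEnd();                       /* only now is the triangle assembled */

                glFlush();                     /* accumulated state becomes real drawing commands */
            }

            int main(int argc, char **argv)
            {
                glutInit(&argc, argv);
                glutCreateWindow("red triangle");
                glutDisplayFunc(display);
                glutMainLoop();
                return 0;
            }

        The point is that nothing gets drawn at the individual calls; the context carries the state, and the driver (or, in the Gallium world, the state tracker) eventually turns it into actual rendering commands.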

        • #5
          so everything that radeon/radeonhd does, like modesetting, EXA, etc. will be ported to gallium3d, which then handles all this stuff?

          • #6
            the lack of a drawing showing the real structure of the whole thing makes understanding gallium and co. a bit difficult

            • #7
              Originally posted by Pfanne View Post
              so everything that radeon/radeonhd does, like modesetting, EXA, etc. will be ported to gallium3d, which then handles all this stuff?
              Modesetting is going into KMS. The only reason radeon/radeonhd does that now is because they had to before KMS was in mainline and ready to go. All modesetting is going into the kernel and out of the individual Xorg drivers.

              The drawing acceleration is all going into Mesa/OpenGL. Right now the drivers include hardware acceleration in EXA due to a lack of solid OpenGL driver support. The eventual plan was to get Xorg to just have a generic OpenGL-based rendering backend, which is what XGL does and what the glucose project for Xorg is supposed to accomplish.

              Xorg then uses DRI2 and a standardized video memory management API (be it GEM, TTM, or whatever everyone eventually settles on) for allocating buffers and the like.

              At that point I think the only thing left is video playback acceleration. Gallium has some shader-based implementations of that, but I don't know if there are plans to provide a proper video acceleration API in Gallium or what. I imagine the eventual plan is to have a standardized API and driver set for that which is independent of Xorg as well.

              Basically, radeon/radeonhd only exist right now because until very recently Xorg had to have all the hardware support in its drivers. That is all going away, and eventually Xorg will not have any hardware specific drivers in it at all. It's just going to be a layer that interfaces between applications (managing windows and events and protocol and such) and the underlying system (input drivers, KMS, DRI2, Mesa/OpenGL, etc.)

              That is all completely orthogonal to Gallium. Gallium's purpose, as I understand it, is just that it makes implementing many parts of that future graphics stack much easier and more efficient, particularly because it makes it possible to write an Xorg backend or Cairo backend that can talk directly in modern-hardware-friendly operations instead of needing to translate Xorg drawing into OpenGL, which then gets translated into those hardware-friendly operations. Gallium is a modern hardware-accelerated rendering layer that sits between the low-level OS interfaces and the high-level user-friendly APIs.

              • #8
                so radeonhd will basically disappear when everything is ported to gallium3d?

                • #9
                  Thanks elanthis, you told me exactly what I wanted to know about Gallium.

                  • #10
                    Originally posted by Pfanne View Post
                    so radeonhd will basically disappear when everything is ported to gallium3d?
                    The (long term) plan is for all hardware-specific Xorg drivers -- including radeon and radeonhd -- to disappear entirely, with or without Gallium3D.

                    Gallium3D isn't at all necessary to remove the hardware drivers from Xorg. EXA can be implemented as a glucose-like layer on top of Mesa just fine, and Xorg will still talk to KMS/DRI2/etc. like it does already. Add in a standard video acceleration driver architecture and Xorg would lose any need for radeon/radeonhd or other hardware drivers. Gallium3D is purely optional.

                    • #11
                      Does Gallium/Mesa have the "WGL" style limitation of a single OpenGL context, or can I open multiple contexts? If I have multiple cards, can I open multiple OpenGL contexts per card?

                      Frank

                      • #12
                        Theoretically you could have multiple contexts, but I think the driver would have to implement a context-switching mechanism. I'm not sure if it does that now or how it handles multiple clients who want to use the Gallium interface. I'm going to hazard a guess that that hasn't been tested/worked on much because they're focusing on getting basic functionality working first.
                        Last edited by TechMage89; 03-03-2009, 01:33 PM.

                        • #13
                          Does Gallium/Mesa have the "WGL" style limitation of a single OpenGL context, or can I open multiple contexts?
                          To the best of my knowledge, there is no such limitation - where did you get this?

                          The only limitation is that you can only have a single context active *per thread*, which is dictated by the retained-mode OpenGL API (and not by the WGL/GLX/AGL glue). Case in point: I once wrote a test that created up to a hundred OpenGL contexts (roughly along the lines of the sketch at the end of this post).

                          If I have multiple cards, can I open multiple OpenGL contexts per card?
                          The closed-source drivers provide GLX extensions that support this, but I doubt this will ever appear on the open-source ones.
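
                          In case it helps, a many-contexts test of that sort can look roughly like the sketch below. Plain Xlib/GLX; the attribute list, the unmapped scratch window, and the count of 100 are arbitrary illustrative choices, and error handling is minimal:

                              /* Build with: gcc manyctx.c -lGL -lX11 */
                              #include <stdio.h>
                              #include <X11/Xlib.h>
                              #include <GL/glx.h>

                              #define NCTX 100

                              int main(void)
                              {
                                  Display *dpy = XOpenDisplay(NULL);
                                  if (!dpy) { fprintf(stderr, "no X display\n"); return 1; }

                                  int attribs[] = { GLX_RGBA, GLX_DOUBLEBUFFER, None };
                                  XVisualInfo *vis = glXChooseVisual(dpy, DefaultScreen(dpy), attribs);
                                  if (!vis) { fprintf(stderr, "no usable GLX visual\n"); return 1; }

                                  Window win = XCreateSimpleWindow(dpy, DefaultRootWindow(dpy),
                                                                   0, 0, 64, 64, 0, 0, 0);

                                  GLXContext ctx[NCTX];
                                  for (int i = 0; i < NCTX; i++) {
                                      ctx[i] = glXCreateContext(dpy, vis, NULL, True);
                                      if (!ctx[i]) { fprintf(stderr, "context %d failed\n", i); return 1; }
                                  }

                                  /* Only one context is current per thread; glXMakeCurrent switches. */
                                  for (int i = 0; i < NCTX; i++)
                                      glXMakeCurrent(dpy, win, ctx[i]);

                                  glXMakeCurrent(dpy, None, NULL);
                                  for (int i = 0; i < NCTX; i++)
                                      glXDestroyContext(dpy, ctx[i]);

                                  printf("created and bound %d contexts\n", NCTX);
                                  XCloseDisplay(dpy);
                                  return 0;
                              }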

                          • #14
                            Hehe.

                            This stuff kicks ass.

                            Here is my understanding of the driver structure:


                            Old Fashioned (the way most people are currently doing things)
                            --------------------------------------------------------------

                            The Linux video driver model is very convoluted. X grew up in a time when all a video card was, was a framebuffer. A framebuffer is a region of memory that you simply write data out to, and the video card then puts that data on the display. It's a very quick and direct way of accessing the video card, but you get no acceleration.
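
                            Just to show how direct that is, here is a rough sketch of scribbling on the console framebuffer through the standard Linux fbdev interface. It assumes /dev/fb0 exists and is in a 32 bits-per-pixel mode, and it skips most error handling:

                                /* Paint a grey square in the top-left corner of the console. */
                                #include <fcntl.h>
                                #include <linux/fb.h>
                                #include <stdint.h>
                                #include <stdio.h>
                                #include <sys/ioctl.h>
                                #include <sys/mman.h>
                                #include <unistd.h>

                                int main(void)
                                {
                                    int fd = open("/dev/fb0", O_RDWR);
                                    if (fd < 0) { perror("open /dev/fb0"); return 1; }

                                    struct fb_var_screeninfo var;
                                    struct fb_fix_screeninfo fix;
                                    ioctl(fd, FBIOGET_VSCREENINFO, &var);   /* resolution, bits per pixel */
                                    ioctl(fd, FBIOGET_FSCREENINFO, &fix);   /* bytes per scanline         */

                                    size_t size = (size_t)fix.line_length * var.yres;
                                    uint8_t *fb = mmap(NULL, size, PROT_READ | PROT_WRITE,
                                                       MAP_SHARED, fd, 0);
                                    if (fb == MAP_FAILED) { perror("mmap"); return 1; }

                                    /* Writing memory is the whole "driver": no acceleration involved. */
                                    for (unsigned y = 0; y < 100 && y < var.yres; y++)
                                        for (unsigned x = 0; x < 100 && x < var.xres; x++)
                                            if (var.bits_per_pixel == 32)
                                                *(uint32_t *)(fb + y * fix.line_length + x * 4) = 0x00808080;

                                    munmap(fb, size);
                                    close(fd);
                                    return 0;
                                }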

                            As time moved on and people started accelerating bits and pieces of the display, new drivers popped up to provide that functionality.

                            So now we have several drivers that drive a single video card. Typically:

                            1. VGA or Framebuffer driver. -- kernel driver

                            This provides console access. That is, when you're not in X Windows and you have stuff on your screen, it is using the in-kernel VGA or framebuffer driver.

                            2. Xorg DDX. -- userspace driver

                            This provides 2D acceleration and other features. The DDX is 'Device Dependent X'. The Xorg driver is developed separately from Linux and has its own way to directly access the hardware. It also fiddles around with PCI registers and other dangerous things outside of the Linux kernel.

                            For example you have EXA and XAA. XAA is the old way of providing 2D acceleration and EXA is a newer way that tends to work better. People are currently using both in Linux; most DDX drivers support both APIs and default to one or the other.

                            It also provides for mode setting (setting the resolution and display outputs), some limited video playback acceleration, and such things.

                            Oh, and the DIX, that is Device Independent X, is all the stuff that isn't part of the hardware drivers: things like the X libraries.


                            3. DRM Driver -- kernel driver

                            The Linux DRM driver is the in-kernel driver for providing controlled access to video cards. DRM stands for 'Direct Rendering Manager'.

                            The Linux DRM code then implements a stable userspace API/ABI, called DRI, that video drivers can talk to the hardware through.



                            4. DRI Driver -- userspace driver

                            The DRI driver is an OpenGL hardware acceleration driver that is created by accelerating bits and pieces of Mesa, and it interfaces with the hardware through the DRI protocol and the Linux DRM driver.


                            Mesa is the default OpenGL stack provided by distributions and other open source projects. OpenGL itself is a large, generic API designed for programming 3D applications. It is mostly independent of the hardware and may or may not be hardware accelerated. This is very different from DirectX, which is tied very closely to the hardware. With OpenGL only a part of the API is ever hardware accelerated: 'consumer' grade video cards typically only accelerate the portions of the API used by video games and such, whereas 'professional' video cards tend to accelerate a bit more.

                            So basically they create the DRI driver by taking Mesa and then accelerating as much as they can in hardware.
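
                            As an aside, you can see at runtime which stack you actually landed on: the GL renderer string reports the hardware driver, while pure software Mesa typically reports something like a 'Software Rasterizer'. A tiny sketch, again just using GLUT to get a context:

                                /* Build with: gcc glinfo.c -lGL -lglut */
                                #include <stdio.h>
                                #include <GL/gl.h>
                                #include <GL/glut.h>

                                int main(int argc, char **argv)
                                {
                                    glutInit(&argc, argv);
                                    glutCreateWindow("gl-info");   /* creates and binds a GL context */

                                    printf("GL_VENDOR   : %s\n", (const char *)glGetString(GL_VENDOR));
                                    printf("GL_RENDERER : %s\n", (const char *)glGetString(GL_RENDERER));
                                    printf("GL_VERSION  : %s\n", (const char *)glGetString(GL_VERSION));
                                    return 0;
                                }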

                            It sucks
                            ---------

                            How all of this works together, more or less, is that these various drivers are forced to work together in a piecemeal fashion. Since they are all developed by different folks and different projects (Xorg, Linux, and Mesa) they don't really get along that well and a great deal of time and effort is spent in just getting them happy and stable.

                            For example, usually the X.org DDX is given free rein over the display. But if you need an OpenGL application on there, then the DDX draws a blank square in the framebuffer that is then handed over to the DRI driver. Something like that.



                            The future way
                            ----------------


                            1. Linux DRM driver. --- kernel space driver.

                            Some low-level functionality, like mode setting, is handed over to the Linux kernel.

                            Also you have things like GEM in the Linux kernel, where you have a central memory management scheme in which the Linux kernel is put in charge of controlling the video card memory, paging data in and out of the video card, and things like that. Previously each API had its own memory management scheme, which made it very difficult for them to work together, something that is required for composited desktops.

                            An updated DRI2 protocol is created to allow userspace drivers to access the hardware.
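
                            To give a feel for what talking to KMS from userspace looks like, here is a rough sketch that just asks the kernel which connectors and modes it knows about, using libdrm. It assumes /dev/dri/card0 and a KMS-capable kernel driver; build with the libdrm headers and link with -ldrm:

                                /* List the connectors and modes the kernel's KMS code reports. */
                                #include <fcntl.h>
                                #include <stdio.h>
                                #include <unistd.h>
                                #include <xf86drm.h>
                                #include <xf86drmMode.h>

                                int main(void)
                                {
                                    int fd = open("/dev/dri/card0", O_RDWR);
                                    if (fd < 0) { perror("open /dev/dri/card0"); return 1; }

                                    drmModeRes *res = drmModeGetResources(fd);
                                    if (!res) { fprintf(stderr, "KMS not available here\n"); return 1; }

                                    for (int i = 0; i < res->count_connectors; i++) {
                                        drmModeConnector *conn = drmModeGetConnector(fd, res->connectors[i]);
                                        if (!conn)
                                            continue;
                                        printf("connector %u: %s, %d modes\n", conn->connector_id,
                                               conn->connection == DRM_MODE_CONNECTED ? "connected"
                                                                                      : "disconnected",
                                               conn->count_modes);
                                        for (int m = 0; m < conn->count_modes; m++)
                                            printf("  %s @ %u Hz\n", conn->modes[m].name,
                                                   conn->modes[m].vrefresh);
                                        drmModeFreeConnector(conn);
                                    }

                                    drmModeFreeResources(res);
                                    close(fd);
                                    return 0;
                                }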

                            2. Gallium3D ---- userspace driver.

                            The new DRI2 driver, which replaces the OpenGL-specific DRI/DRI2 Mesa drivers.

                            With the old DRI userspace stuff each driver had way too much hardware-specific code in it. It was very difficult for improvements in one driver to translate well to other drivers. So Gallium3D is divided up so that it isolates as much hardware-specific code as possible and keeps that part as small as possible. It is also extended to support multiple different APIs, so it doesn't just provide OpenGL: it can provide the EXA API, video playback acceleration, or support for OpenCL.


                            So that's the new model. I don't understand the internal structure of Gallium well enough to be able to talk about what exactly the state trackers are, or what the hardware-specific portion of the driver is called, and whatnot.

                            One of the bonuses is that since you're dividing up Gallium in a useful manner, you can make the "hardware" be anything. It can provide access to your CPU (aka a software driver) or Cell or whatever. After all, modern video acceleration isn't hardware acceleration anymore... it's all software. Just software that is optimized to run on both your CPU and GPU rather than just your CPU.


                            What happened to the Xorg DDX?
                            ----------------------------

                            Well, with Gallium3D being able to provide lots of different APIs, having a 2D-specific driver is redundant (not to mention most video card companies are doing away with 2D-specific silicon altogether).

                            And with the Linux kernel completely taking over memory management, input detection, and mode setting, there is no need for Xorg to do that anymore, either.

                            So there is virtually no reason for X to have any direct access to hardware at all.

                            So your X Server just becomes another application. No more twiddling with your PCI bits. It'll run under your user account and it'll get hardware acceleration the same way any other application can get it.

                            And this is, by far, for the best.


                            VGA/Framebuffer
                            ------------------

                            I dunno. Gallium can probably provide for that also. Maybe the kernel will keep separate drivers for it. I don't know. Not really that important anymore.


                            What about other OSes?
                            -----------------------

                            Well, one of the arguments in favor of the Xorg drivers is that the DDX stuff is mostly OS independent. Since X twiddles with the bits directly, there isn't much in the way of Linux-specific code to deal with.

                            With everything going through the Linux DRM stuff, that means your drivers need to have Linux to run.

                            But the flip side is that as long as other OSes have kernels smart enough to support the DRI2 protocol, then they can support all the same hardware acceleration features. More or less.

                            And since Gallium itself is modular, it should be possible to design a Gallium driver to use existing Windows or OS X APIs/drivers to accelerate itself. Something like that... way over my head.

                            • #15
                              Originally posted by drag View Post
                              VGA/Framebuffer
                              ------------------

                              I dunno. Gallium can probably provide for that also. Maybe the kernel will keep separate drivers for it. I don't know. Not really that important anymore.
                              VESA will always be necessary as a fail-safe option.
