Gallium3D Gets Xorg, DRI2 State Trackers


  • #11
    Originally posted by elanthis View Post
    The state trackers are pretty flexible. In OpenGL, for example, you have a drawing context. You don't say "draw a red triangle here," but instead you say, "set the drawing color to red; now add these three coordinates; now connect the coordinates I gave you to render a triangle." The state tracker tracks the 'state' of the OpenGL context and eventually generates real drawing commands. Cairo uses a similar context-based drawing API, and hence can be accelerated in a very similar fashion. The Cairo state tracker would simply be tracking 2D-only state and be doing some tesselation to generate triangle-based rendering commands for the hardware, but otherwise is really the same as the OpenGL state tracker in general. EXA works less with contexts and more with complete drawing commands (very simplistic ones), so the EXA state tracker would be even simpler -- it would just translate EXA commands into Gallium commands without actually even needing to track much state.
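
    (To make the pattern described above concrete: a minimal immediate-mode sketch, assuming a context has already been created and made current via WGL/GLX.)

    #include <GL/gl.h>

    /* "Set the drawing color to red; now add these three coordinates;
     * now connect them into a triangle." The context carries the
     * current color as state between calls. */
    void draw_red_triangle(void)
    {
        glColor3f(1.0f, 0.0f, 0.0f);   /* state change: current color */
        glBegin(GL_TRIANGLES);         /* start collecting vertices   */
        glVertex2f(-0.5f, -0.5f);
        glVertex2f( 0.5f, -0.5f);
        glVertex2f( 0.0f,  0.5f);
        glEnd();                       /* triangle is assembled here  */
    }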
    Does Gallium/Mesa have the "WGL" style limitation of a single OpenGL context, or can I open multiple contexts? If I have multiple cards, can I open multiple OpenGL contexts per card?

    Frank

    Comment


    • #12
      Theoretically you could have multiple contexts, but I think the driver would have to implement a context-switching mechanism. I'm not sure if it does that now or how it handles multiple clients who want to use the Gallium interface. I'm going to hazard a guess that that hasn't been tested/worked on much because they're focusing on getting basic functionality working first.
      Last edited by TechMage89; 03 March 2009, 02:33 PM.

      Comment


      • #13
        Does Gallium/Mesa have the "WGL" style limitation of a single OpenGL context, or can I open multiple contexts?
        To the best of my knowledge, there is no such limitation - where did you get this?

        The only limitation is that you can only have a single context active *per thread*, which is dictated by the OpenGL API's current-context model (and not by the WGL/GLX/AGL glue). Case in point: I once wrote a test that created up to a hundred OpenGL contexts.
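
        For instance, a rough GLX sketch (error handling omitted; dpy, vis, and win are assumed to be set up already) that juggles two contexts in a single thread:

        #include <GL/glx.h>
        #include <X11/Xlib.h>

        void two_contexts(Display *dpy, XVisualInfo *vis, GLXDrawable win)
        {
            GLXContext a = glXCreateContext(dpy, vis, NULL, True);
            GLXContext b = glXCreateContext(dpy, vis, NULL, True);

            glXMakeCurrent(dpy, win, a);  /* 'a' is current in this thread */
            /* ... draw with 'a' ... */
            glXMakeCurrent(dpy, win, b);  /* switching releases 'a' first  */
            /* ... draw with 'b' ... */

            glXMakeCurrent(dpy, None, NULL);
            glXDestroyContext(dpy, a);
            glXDestroyContext(dpy, b);
        }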

        If I have multiple cards, can I open multiple OpenGL contexts per card?
        The closed-source drivers provide GLX extensions that support this, but I doubt this will ever appear on the open-source ones.

        Comment


        • #14
          Hehe.

          This stuff kicks ass.

          Here is my understanding of the driver structure:


          Old Fashioned (the way most people are currently doing things)
          --------------------------------------------------------------

          The Linux video driver model is very convoluted. X grew up in a time when a video card was nothing more than a framebuffer. A framebuffer is a region of memory that you simply write pixel data to, and the video card then scans that data out to the display. It's a very quick and direct way of accessing the video card, but you get no acceleration.
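
          (As a minimal sketch of what that model looks like against the Linux fbdev API; error handling omitted, and it assumes a 32bpp mode:)

          #include <fcntl.h>
          #include <linux/fb.h>
          #include <stdint.h>
          #include <sys/ioctl.h>
          #include <sys/mman.h>
          #include <unistd.h>

          int main(void)
          {
              int fd = open("/dev/fb0", O_RDWR);
              struct fb_fix_screeninfo finfo;  /* memory layout */
              struct fb_var_screeninfo vinfo;  /* current mode  */
              ioctl(fd, FBIOGET_FSCREENINFO, &finfo);
              ioctl(fd, FBIOGET_VSCREENINFO, &vinfo);

              /* Map the framebuffer and write pixels straight into it;
               * the CPU does all the work, nothing is accelerated. */
              uint32_t *fb = mmap(NULL, finfo.smem_len,
                                  PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
              for (unsigned x = 0; x < vinfo.xres; x++)
                  fb[x] = 0x00ffffff;  /* paint the top scanline white */

              munmap(fb, finfo.smem_len);
              close(fd);
              return 0;
          }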

          As time moved on and people started accelerating bits and pieces of the display, new drivers popped up to provide that functionality.

          So now we have several drivers that drive a single video card. Typically:

          1. VGA or Framebuffer driver. -- kernel driver

          This provides console access. That is, when you're not in X and you have stuff on your screen, it is using the in-kernel VGA or framebuffer driver.

          2. Xorg DDX. -- userspace driver

          This provides 2D acceleration and other features. DDX stands for 'Device Dependent X'. The Xorg driver is developed separately from Linux and has its own way to directly access the hardware. It also fiddles around with PCI registers and other dangerous things outside of the Linux kernel.

          For example you have EXA and XAA. XAA is the old way of providing 2D acceleration and EXA is a newer way that tends to work better. People are currently using both in Linux; most DDX drivers support both APIs and default to one or the other.

          It also provides for mode setting (setting the resolution and display outputs), some limited video playback acceleration, and such things.

          Oh, and the DIX ('Device Independent X') is all the stuff that isn't part of the hardware drivers: things like the X libraries.


          3. DRM Driver -- kernel driver

          The Linux DRM driver is the in-kernel driver that provides controlled access to video cards. DRM stands for 'Direct Rendering Manager'.

          The Linux DRM code then exposes a stable userspace API/ABI (the kernel side of the DRI, the Direct Rendering Infrastructure) that video drivers talk to the hardware through.
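
          (A tiny libdrm sketch of that userspace/kernel boundary; link with -ldrm, error handling omitted. It just opens the device node the DRM driver exposes and asks the kernel driver to identify itself:)

          #include <fcntl.h>
          #include <stdio.h>
          #include <unistd.h>
          #include <xf86drm.h>

          int main(void)
          {
              int fd = open("/dev/dri/card0", O_RDWR);  /* DRM device node   */
              drmVersionPtr v = drmGetVersion(fd);      /* DRM_IOCTL_VERSION */
              printf("kernel DRM driver: %s %d.%d.%d\n", v->name,
                     v->version_major, v->version_minor, v->version_patchlevel);
              drmFreeVersion(v);
              close(fd);
              return 0;
          }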



          4. DRI Driver -- userspace driver

          The DRI driver is an OpenGL hardware acceleration driver created by accelerating bits and pieces of Mesa; it interfaces with the hardware through the DRI protocol and the Linux DRM driver.


          Mesa is the default OpenGL stack provided by distributions and other open-source projects. OpenGL itself is a large, generic API designed for programming 3D applications. It is mostly independent of the hardware and may or may not be hardware accelerated. This is very different from DirectX, which is tied very closely to the hardware. With OpenGL, only a part of the API is ever hardware accelerated. 'Consumer' grade video cards typically only accelerate the portions of the API used by video games and such, whereas 'professional' video cards tend to accelerate a bit more.

          So basically they create the DRI driver by taking Mesa and then accelerating as much as they can in hardware.

          It sucks
          ---------

          How all of this works together, more or less, is that these various drivers are forced to work together in a piecemeal fashion. Since they are all developed by different folks and different projects (Xorg, Linux, and Mesa) they don't really get along that well and a great deal of time and effort is spent in just getting them happy and stable.

          For example, usually the X.org DDX is given free rein over the display. But if you need an OpenGL application on there, the DDX draws a blank square in the framebuffer that is then handed over to the DRI driver. Something like that.



          The future way
          ----------------


          1. Linux DRM driver. --- kernel space driver.

          Some low-level functionality, like mode setting, is handed over to the Linux kernel.

          Also you have things like GEM in the Linux kernel: a central memory management scheme where the kernel is put in charge of the video card's memory, paging data in and out of the card and so on. Previously each API had its own memory management scheme, which made it very difficult for them to work together, and working together is required for composited desktops.
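
          (Roughly what that looks like from userspace: ask the kernel for a buffer, get back an opaque handle, and let the kernel decide where the memory actually lives. This sketch uses the Intel-specific create ioctl as an example; other drivers have their own, and error handling is omitted.)

          #include <fcntl.h>
          #include <stdio.h>
          #include <string.h>
          #include <sys/ioctl.h>
          #include <unistd.h>
          #include <drm/i915_drm.h>

          int main(void)
          {
              int fd = open("/dev/dri/card0", O_RDWR);

              struct drm_i915_gem_create create;
              memset(&create, 0, sizeof(create));
              create.size = 4096;  /* one page of kernel-managed memory */
              ioctl(fd, DRM_IOCTL_I915_GEM_CREATE, &create);
              printf("GEM handle: %u\n", create.handle);

              /* The kernel decides whether this buffer lives in VRAM,
               * the GTT, or system memory, and pages it as needed. */
              struct drm_gem_close gclose = { .handle = create.handle };
              ioctl(fd, DRM_IOCTL_GEM_CLOSE, &gclose);
              close(fd);
              return 0;
          }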

          An updated protocol, DRI2, is created to allow userspace drivers to access the hardware.

          2. Gallium3D ---- userspace driver.

          The new userspace driver framework, which replaces the OpenGL-specific Mesa DRI drivers.

          With the old DRI userspace stuff, each driver had way too much hardware-specific code in it, so it was very difficult for improvements in one driver to carry over to the others. Gallium3D is divided up so that the hardware-specific code is isolated and kept as small as possible. It is then extended to support multiple different APIs: it doesn't just provide OpenGL, it can provide the EXA API, video playback acceleration, or support for OpenCL.
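
          I'm not sure of the real interface names, but conceptually the split looks something like this (simplified, hypothetical names; not the actual Gallium headers):

          /* The small hardware-facing interface. Each piece of hardware
           * implements this once; every API front-end shares it. */
          struct pipe_like_context {
              void (*set_constant_color)(struct pipe_like_context *ctx,
                                         const float rgba[4]);
              void (*draw_triangles)(struct pipe_like_context *ctx,
                                     const float *verts, unsigned count);
          };

          /* An EXA-style 2D solid fill built on top of the 3D interface:
           * the front-end tessellates the rectangle into two triangles,
           * so no 2D-specific hardware code is needed at all. */
          void fill_rect(struct pipe_like_context *ctx, float x0, float y0,
                         float x1, float y1, const float rgba[4])
          {
              const float verts[] = { x0, y0, x1, y0, x1, y1,
                                      x0, y0, x1, y1, x0, y1 };
              ctx->set_constant_color(ctx, rgba);
              ctx->draw_triangles(ctx, verts, 6);
          }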


          So that's the new model. I don't understand the internal structure of Gallium well enough to talk about what exactly the state trackers do or what the hardware-specific portion of the driver is called and whatnot.

          One of the bonuses is that since you're dividing up Gallium in a useful manner, you can make the 'hardware' be anything. It can provide access to your CPU (aka a software driver) or Cell or whatever. After all, modern video acceleration isn't hardware acceleration anymore... it's all software. Just software that is optimized to run on both your CPU and GPU rather than just your CPU.


          What happened to the Xorg DDX?
          ----------------------------

          Well, with Gallium3D able to provide lots of different APIs, having a 2D-specific driver is redundant (not to mention most video card companies are doing away with 2D-specific silicon altogether).

          And with the Linux kernel completely taking over memory management, input detection, and mode setting, there is no need for Xorg to do that anymore, either.

          So there is virtually no reason for X to have any direct access to hardware at all.

          So your X Server just becomes another application. No more twiddling with your PCI bits. It'll run under your user account and it'll get hardware acceleration the same way any other application can get it.

          And this is, by far, for the best.


          VGA/Framebuffer
          ------------------

          I dunno. Gallium can probably provide for that also. Maybe the kernel will keep separate drivers for it. I don't know. Not really that important anymore.


          What about other OSes?
          -----------------------

          Well, one of the arguments in favor of the Xorg DDX drivers is that they are mostly OS-independent. Since X twiddles with the bits directly, there isn't much Linux-specific code to deal with.

          With everything going through the Linux DRM, your drivers need Linux to run.

          But the flip side is that as long as other OSes have kernels smart enough to support the DRI2 protocol, they can support all the same hardware acceleration features. More or less.

          And since Gallium itself is modular, it should be possible to design a Gallium driver that uses existing Windows or OS X APIs/drivers to accelerate itself. Something like that... way over my head.

          Comment


          • #15
            Originally posted by drag View Post
            VGA/Framebuffer
            ------------------

            I dunno. Gallium can probably provide for that also. Maybe the kernel will keep separate drivers for it. I don't know. Not really that important anymore.
            VESA will always be necessary as a fail-safe option

            Comment


            • #16
              Originally posted by drag View Post
              VGA/Framebuffer - I dunno. Gallium can probably provide for that also. Maybe the kernel will keep separate drivers for it. I don't know. Not really that important anymore.
              I think the idea is to update the current FB drivers so that they will use KMS+GEM if present.

              Comment


              • #17
                Originally posted by some-guy View Post
                VESA will always be necessary as a fail-safe option
                VESA is not fail-safe. Both of my last two video card/monitor combinations were totally unusable with the VESA driver. In one case the driver simply refused to use any mode the monitor would accept, and in the latter case I had the bottom 1/6th of the screen blank and no mouse cursor visible. For the nvidia hardware, the nv driver wouldn't even work right (I literally had to use the binary driver, which worked flawlessly), and for the ati hardware I had to use the radeon driver (the radeonhd driver did the same thing the VESA driver did).

                Comment


                • #18
                  Originally posted by bridgman View Post
                  I think the idea is to update the current FB drivers so that they will use KMS+GEM if present.
                  This is something that I've found a bit confusing. What's the difference between a "kernel modesetting" driver and a "framebuffer device" driver? Different userland API?

                  Comment


                  • #19
                    Originally posted by Ex-Cyber View Post
                    This is something that I've found a bit confusing. What's the difference between a "kernel modesetting" driver and a "framebuffer device" driver? Different userland API?
                    KMS is just code for setting the actual modes (resolution, depth, etc.). The framebuffer driver is a very limited drawing API (I think just for pushing pixels to video memory, more or less) and a very limited VESA/VGA mode selector. It is used by the text console and by Xorg when no other driver is available. Right now the framebuffer driver does its own mode setting, as does Xorg, which is what makes VT-switching from X to console (or even X to X) such a bitch.

                    The framebuffer interface (and possibly the individual drivers, not sure) is not going away. The kernel needs it to render text consoles and oops messages and the like. The plan is just to remove the mode-setting portion of the framebuffer drivers and make them use KMS, so that there is only a single mode-setting framework in the kernel.
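
                    (To make that concrete, a small libdrm/KMS sketch; link with -ldrm, error handling omitted. The mode list now lives in the kernel, and userspace just queries it and picks one:)

                    #include <fcntl.h>
                    #include <stdio.h>
                    #include <unistd.h>
                    #include <xf86drm.h>
                    #include <xf86drmMode.h>

                    int main(void)
                    {
                        int fd = open("/dev/dri/card0", O_RDWR);
                        drmModeRes *res = drmModeGetResources(fd);

                        for (int i = 0; i < res->count_connectors; i++) {
                            drmModeConnector *c =
                                drmModeGetConnector(fd, res->connectors[i]);
                            if (c->connection == DRM_MODE_CONNECTED && c->count_modes)
                                printf("connector %u: %dx%d@%u\n", c->connector_id,
                                       c->modes[0].hdisplay, c->modes[0].vdisplay,
                                       c->modes[0].vrefresh);
                            /* An actual modeset is then one call:
                             * drmModeSetCrtc(fd, crtc_id, fb_id, 0, 0,
                             *                &c->connector_id, 1, &c->modes[0]); */
                            drmModeFreeConnector(c);
                        }
                        drmModeFreeResources(res);
                        close(fd);
                        return 0;
                    }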

                    Comment
