Theoretically you could have multiple contexts, but I think the driver would have to implement a context-switching mechanism. I'm not sure if it does that now or how it handles multiple clients who want to use the Gallium interface. I'm going to hazard a guess that that hasn't been tested/worked on much because they're focusing on getting basic functionality working first.
To the best of my knowledge, there is no such limitation - where did you get this?
Quote:
Does Gallium/Mesa have the "WGL" style limitation of a single OpenGL context, or can I open multiple contexts?
The only limitation is that you can only have a single context active *per thread*, which is dictated by OpenGL's current-context model (and not by the WGL/GLX/AGL glue). Case in point, I once wrote a test that created up to a hundred OpenGL contexts.
The closed-source drivers provide GLX extensions that support this, but I doubt this will ever appear in the open-source ones.
Quote:
If I have multiple cards, can I open multiple OpenGL contexts per card?
This stuff kicks ass.
Here is my understanding of the driver structure:
Old Fashioned (the way most people are currently doing things)
The Linux video driver model is very convoluted. X grew up in a time when a video card was nothing more than a framebuffer. A framebuffer is a region of memory that you simply write data to, and the video card then writes that data to the display. It is a very quick and direct way of accessing the video card, but you get no acceleration.
As time moved on and people started accelerating bits and pieces of the display you have had new drivers pop up to provide that functionality.
So now we have several drivers that drive a single video card. Typically:
1. VGA or Framebuffer driver. -- kernel driver
This provides console access. That is, when you're not in X and you still have stuff on your screen, it is using the in-kernel VGA or framebuffer driver.
2. Xorg DDX. -- userspace driver
This provides 2D acceleration and other features. DDX stands for 'Device Dependent X'. The Xorg driver is developed separately from Linux and has its own way to directly access the hardware. It also fiddles around with PCI registers and other dangerous things outside of the Linux kernel.
For example you have EXA and XAA. XAA is the old way of providing 2D acceleration and EXA is a newer way that tends to work better. People are currently using both in Linux, and most DDXes support both APIs and default to one or the other.
It also provides for mode setting (setting the resolution and display outputs), some limited video playback acceleration, and such things.
Oh, and the DIX, that is 'Device Independent X', is all the stuff that isn't part of the hardware drivers - things like the X libraries.
3. DRM Driver -- kernel driver
The Linux DRM driver is the in-kernel driver for providing controlled access to video cards. DRM stands for 'Direct Rendering Manager'.
The Linux DRM stuff then implements a stable user space API/ABI called DRI that video drivers can talk to the hardware through.
4. DRI Driver -- userspace driver
The DRI driver is an OpenGL hardware acceleration driver that is created by accelerating bits and pieces of Mesa, and it talks to the hardware through the DRI protocol and the Linux DRM driver.
Mesa is the default OpenGL stack provided by distributions and other open source projects. OpenGL is itself a large generic API designed for programming 3D applications. The API itself is mostly independent of the hardware and may or may not be hardware accelerated. This is very different from DirectX, where DirectX is tied very closely to the hardware. With OpenGL, only part of the API is ever hardware accelerated. 'Consumer' grade video cards typically only accelerate the portions of the API used by video games and such, whereas 'professional' video cards tend to accelerate a bit more.
So basically they create the DRI driver by taking Mesa and then accelerating as much as they can in hardware.
How all of this works together, more or less, is that these various drivers are forced to work together in a piecemeal fashion. Since they are all developed by different folks and different projects (Xorg, Linux, and Mesa) they don't really get along that well and a great deal of time and effort is spent in just getting them happy and stable.
For example, usually the X.org DDX is given free rein over the display. But if you need an OpenGL application on there, the DDX draws a blank square in the framebuffer that is then handed over to the DRI driver. Something like that.
The future way
1. Linux DRM driver. --- kernel space driver.
Some low-level functionality, like mode setting, is handed over to the Linux kernel.
Also you have things like GEM, in the Linux kernel, where you have a central memory management scheme: the Linux kernel is put in charge of controlling the video card's memory, paging data in and out of the video card, and things like that. Previously each API had its own memory management scheme, which made it very difficult for them to work together - and working together is required for composited desktops.
An updated DRI2 protocol was created to allow userspace drivers to access the hardware.
2. Gallium3D ---- userspace driver.
The new driver framework, built on DRI2, which replaces the OpenGL-specific DRI Mesa drivers.
With the old DRI userspace stuff, each driver had way too much hardware-specific code in it. It was very difficult for improvements in one driver to translate well to other drivers. So Gallium3D is divided up so that it isolates as much of the hardware-specific code as possible and keeps it as small as possible. It is also extended to support multiple different APIs: it doesn't just provide OpenGL, it can provide the EXA API, video playback acceleration, or OpenCL support.
So that's the new model. I don't understand the internal structure of Gallium well enough to be able to say exactly what state trackers are, or what the hardware-specific portion of the driver is called, and whatnot.
One of the bonuses is that since you're dividing up Gallium in a useful manner, you can make the "hardware" be anything. It can provide access to your CPU (aka a software driver) or Cell or whatever. After all, modern video acceleration isn't hardware acceleration anymore... it's all software. Just software that is optimized to run on both your CPU and GPU rather than just your CPU.
What happened to the Xorg DDX?
Well, with Gallium3D able to provide lots of different APIs, having a 2D-specific driver is redundant. (Not to mention most video card companies are doing away with 2D-specific silicon altogether.)
And with the Linux kernel completely taking over memory management, input detection, and mode setting, there is no need for Xorg to do that anymore, either.
So there is virtually no reason for X to have any direct access to hardware at all.
So your X Server just becomes another application. No more twiddling with your PCI bits. It'll run under your user account and it'll get hardware acceleration the same way any other application can get it.
And this is, by far, for the best.
I dunno. Gallium can probably provide for that also. Maybe the kernel will keep separate drivers for it. I don't know. Not really that important anymore.
What about other OSes?
Well, one of the arguments in favor of the Xorg DDX drivers is that the DDX stuff is mostly OS independent. Since X twiddles with the bits directly, there isn't much in the way of Linux-specific code to deal with.
With everything going through the Linux DRM stuff, that means your drivers need Linux to run.
But the flip side is that as long as other OSes have kernels smart enough to support the DRI2 protocol, then they can support all the same hardware acceleration features. More or less.
And since Gallium itself is modular then it should be possible to design a Gallium driver to use existing Windows or OS X APIs/drivers to accelerate itself. Something like that... way over my head.
The framebuffer interface (and possibly the individual drivers, not sure) are not going away. The kernel needs them to render text consoles and to render OOPS messages and the like. The plan is to just remove the mode-setting portion of the framebuffer drivers and make them use KMS so that we have only a single mode setting framework in the kernel.