Testing Out AMD's DRI2 Driver Stack
-
Meandering a little way off-topic...
Originally posted by drag:
<snip>
(a decent Pentium 4 can play a DVD-resolution video using raw unaccelerated X11 and software scaling... but even my dual cores choke on 1080p HD stuff.)
<snip>
-
I think the "ATI/AMD" remark at the start of the article was directed at the post-buyout company - not the pure "ATi" drivers.
I used to have a 9800 Pro, was VERY disappointed that the Linux drivers sucked so badly (and the installer back then was hell).
With Nvidia I've had fine binary drivers - although I wish they'd grow up and help out the Nouveau project with hardware docs at least (erm, what part of the hardware command structure is DRM'd then, Nvidia?).
But, anyhoo, VERY nice article.
Correct performance stats (nice for a change), it wasn't condemning of the performance, and it explained the various aspects of KMS/DRI/Gallium/etc. very well.
Phoronix, give this guy more stuff to write about!!
-
Originally posted by drag:
What this gets you is that the APIs are going to be much more unified between drivers and application compatibility should improve quite a bit, if not performance.
-
Can anyone point me to some articles/discussions for background on TTM/GEM/KMS? This whole development direction seems to me to be contrary to good design principles - there are more moving parts, and putting more into the kernel means more context switches required to get any useful work done. Back when I worked on X (I wrote the X11R1 port for Apollo DOMAIN/OS) we would have had our heads chewed off for trying to push any of this work into the kernel...
-
Most of the real discussion was a few years ago, but the key point is that there were some real serious problems with the current architecture (possibly worse than the ones you were dealing with) which had to be addressed.
The main problems were:
- multiple graphics drivers, some in user space and some in the kernel, overwriting each other's settings during common use cases
- inability to share memory between the 2D and 3D drivers, which made any kind of desktop composition inefficient and slow
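To make the memory-sharing point concrete, here's a toy Python model of the GEM idea — a single kernel-owned buffer manager handing out opaque handles that both the 2D and 3D drivers can resolve to the same storage. This is purely illustrative (the real GEM interface is ioctls on `/dev/dri/card0`, and the class and method names here are invented for the sketch):

```python
# Toy model of GEM-style kernel buffer sharing (illustrative only;
# real GEM uses ioctls, not Python objects).

class KernelBufferManager:
    """Kernel-owned allocator: hands out opaque integer handles."""
    def __init__(self):
        self._next_handle = 1
        self._buffers = {}          # handle -> bytearray (the "VRAM")

    def create(self, size):
        handle = self._next_handle
        self._next_handle += 1
        self._buffers[handle] = bytearray(size)
        return handle

    def write(self, handle, offset, data):
        buf = self._buffers[handle]
        buf[offset:offset + len(data)] = data

    def read(self, handle, offset, length):
        return bytes(self._buffers[handle][offset:offset + length])

# Without a shared manager, the DDX and Mesa each kept private
# allocations, so composition meant copying pixels between them.
# With one kernel manager, both sides name the same buffer by handle:
kernel = KernelBufferManager()
scanout = kernel.create(16)                  # 3D driver renders here...
kernel.write(scanout, 0, b"rendered frame!")
print(kernel.read(scanout, 0, 15))           # ...2D side reads the same bytes
```

The design point is that only the handle crosses the user/kernel boundary; the backing storage never has to be copied between the two drivers.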
Both the 2D and 3D drivers need context switches to access the hardware anyway, since the direct rendering architecture uses the DRM to arbitrate between the DDX and Mesa drivers, and to manage the shared DMA buffers (aka the ring buffer) used to feed commands and data into the graphics processor. I don't think these changes really introduce more context switches so much as change the dividing line between user and kernel responsibilities.
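The arbitration point can be sketched the same way — a hypothetical Python model (not the real DRM ioctl interface; names invented for illustration) of the kernel serializing command submissions from multiple user-space drivers into one ring:

```python
# Toy model of DRM-style command arbitration (illustrative only --
# the real interface is ioctls on /dev/dri/*, not Python method calls).
# Every user-space driver (DDX, Mesa) already crosses into the kernel
# to submit work; KMS/GEM move the dividing line between user and
# kernel responsibilities rather than adding a new crossing.

from collections import deque

class KernelRing:
    """Kernel-side command queue: serializes submissions from all clients."""
    def __init__(self, size=8):
        self.ring = deque(maxlen=size)

    def submit(self, client, cmd):
        # Single entry point: the kernel interleaves batches, so neither
        # driver can clobber the other's hardware state mid-stream.
        self.ring.append((client, cmd))

ring = KernelRing()
ring.submit("ddx", "BLIT rect")         # 2D driver's command
ring.submit("mesa", "DRAW triangles")   # 3D driver's command
print(list(ring.ring))
```

Both command streams end up in one kernel-managed queue, which is exactly the arbitration role the DRM plays for the shared DMA buffer.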
I'm a bit rusty on the history (my hands-on X experience was before X11), but my understanding is that user modesetting is a relatively new addition to X, and that the KMS initiative is arguably going back to the way modesetting was handled in earlier versions of X which were presumably built on existing kernel drivers. I haven't had much luck finding online references to support this, but I have been told this by a number of people who have worked on X for a very long time.
If you're saying that the DRI architecture is fundamentally flawed and that all graphics should go through a single userspace stack (presumably in the X server) then that's a different discussion of course.
Jesse Barnes wrote a good summary of the rationale for moving modesetting into the kernel, and there is a good discussion (plus the odd rant) in the subsequent comments: http://kerneltrap.org/node/8242
Thomas's original TTM proposal is a pretty good summary of the goals related to memory management: http://www.tungstengraphics.com/mm.pdf
Last edited by bridgman; 13 May 2009, 09:02 PM.
-
Originally posted by Melcar:
Even the binaries were "usable". I remember running my old 9800 and 9600 cards on the old drivers; performance was lacking, but they got the job done. I honestly could never understand what all the fuss was about with ATI drivers even back then.
-
I would have to disagree with the author.
I have had a lot of problems with my ATI card on Linux.
In fact it has caused me many system re-installs.
I have the 4870 X2 and no 3D support on Jaunty Jackalope. I tried installing Catalyst 9.4, and when I reboot my machine it locks up and the colors are all messed up. I wish I had bought an Nvidia card. I used to use Windows, and the Vista drivers were not much better.
-
Originally posted by bridgman:
<snip>
Most of the real discussion was a few years ago, but the key point is that there were some real serious problems with the current architecture (possibly worse than the ones you were dealing with) which had to be addressed.
<snip>
Yeah, I guess a lot of things were simpler on the Apollo workstations: they had no hardware text mode, and really only one supported graphics mode per machine configuration. The one wrinkle was that they already had their own native graphics/windowing system, unrelated to X. There were two porting efforts going on: one to simply layer the X APIs on top of the native APIs, and one to drive the hardware directly. I think ultimately we had to accept the overhead and layer on top of the Apollo APIs, to allow native apps to continue running alongside X apps. Otherwise we would have had the same issue - multiple drivers talking to the same graphics hardware...
As for DRI ... we obviously wouldn't need to fret over "redirected direct-rendering" if everything was going through the X server...
-
Originally posted by cliff:
In fact it has caused me many system re-installs.
I have the 4870 X2 and no 3D support on Jaunty Jackalope. I tried installing Catalyst 9.4 and when I reboot my machine it locks up and the colors are all messed up.
Last edited by bridgman; 14 May 2009, 12:03 AM.