Testing Out AMD's DRI2 Driver Stack


  • #16
    Originally posted by crumja View Post
    One nice thing to see would be the same results for fglrx (or at least the fglrx results with the last working kernel/xorg that supports the hardware). This would give the community a baseline for where we are now and how much more we have to do.

    Otherwise, all we have to go on is bridgman's pronouncement of 3D performance in the 70% range of current fglrx.

    From my own testing it's more like 20-50%. I can't wait to get my hands on a finalized Gallium3D driver and bench it against fglrx.


    • #17
      Originally posted by crumja View Post
      From my own testing it's more like 20-50%. I can't wait to get my hands on a finalized Gallium3D driver and bench it against fglrx.
      "finalized" is a odd term to use. Development of the drivers will stop right around the time people stop using them..

      (that is one of the nice things about OSS drivers is that as long as people are using them then they'll be supported. My oldest laptop uses a ancient ATI mobility chipset that had it's supporte dropped years and years ago by the flglx drivers, but the OSS stuff works decently and is still gettting occasional bug fixes.

      But I would not expect any massive improvements in benchmarks from Gallium. Any performance improvements will be a slow small build up of performance over time.

      -------------

      The thing I am hoping for most from Gallium right now is consistency.

      OSS drivers are plagued by inconsistent reliability, performance, and API support. This means that games and applications that take advantage of Linux have to put a lot of effort into driver-specific behavior and fixes to support a wide variety of users.

      Most projects don't have the resources and just concentrate on Nvidia first and then fglrx second, if that. The OSS video drivers may get a few bug workarounds in applications, but those are few and far between.

      So users of OSS drivers not only have to deal with confusing and oddball behavior, they are also likely to run into problems when trying to play games or do other things that rely on GPU acceleration.

      For example, say you're using MythTV... the XvMC acceleration stuff is very spotty. With the Intel driver you have a very different setup for getting XvMC on the older 810-style devices vs. the newer 9xx GMA devices, and XvMC is not really useful for accelerating much beyond MPEG-2. So a user has to go 'low-level': edit the Xorg configuration to enable it, point things at a *.so file, and edit a couple of files before it works. And even after they put in that effort, it's not something that is enabled by default, so it's not widely used, so it's not widely tested, and it's not much help for accelerating HD video (which is usually going to be H.264 for most people), which is what people are primarily interested in accelerating. (A decent Pentium 4 can play a DVD-resolution video with raw unaccelerated X11 and software scaling, but even my dual cores choke on 1080p HD stuff.)
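      Just to show the flavor of the 'low-level' editing involved, here is a rough sketch of what the 9xx-era Intel setup looked like. The option and library names are from memory, so treat them as illustrative rather than gospel (the 810-class parts used a different XvMC library and an "XvMCSurfaces" option instead):

          # /etc/X11/xorg.conf -- enable XvMC in the intel driver's Device section
          Section "Device"
              Identifier "Intel GMA"
              Driver     "intel"
              Option     "XvMC" "true"
          EndSection

          # /etc/X11/XvMCConfig -- a single line naming the XvMC library
          # that client applications (like MythTV) should load
          libIntelXvMC.so.1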

      This leads to bad usability and a reputation that Linux is buggy and hard to use. People who do not necessarily care about having the best performance are still forced to install proprietary drivers for compatibility reasons, and this, again, is not really easy to do and leads to bad user experiences, an unproductive platform, and a bad reputation.

      Sooo....

      Gallium is nice because its design tries to minimize, or at least isolate, the hardware-specific code in the driver/API stack. In the current model, while the DRI and DDX drivers are all based on the same sort of design and the same code bases, they are all very different and have hardware-specific code spread all over the place. Gallium tries to isolate that, so you get a much simpler OpenGL or EXA implementation that connects through standardized interfaces to the hardware-specific code.

      What this gets you is that the APIs are going to be much more unified between drivers, and application compatibility should improve quite a bit, if not performance. And since the hardware-specific code is isolated, the design is much easier for driver developers to deal with, so they can get better hardware support done faster.
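      As a deliberately simplified illustration of the idea (this is hypothetical C, not the real Gallium headers -- the actual interface is pipe_screen/pipe_context with far more hooks), the hardware-independent state tracker drives any chip through one table of function pointers:

          /* Hypothetical, stripped-down "pipe driver" interface. */
          struct pipe_screen;   /* per-device object (placeholder) */

          struct pipe_context {
              /* Hooks each hardware driver fills in once: */
              void (*set_shader)(struct pipe_context *ctx, const void *tokens);
              void (*set_vertex_data)(struct pipe_context *ctx,
                                      const void *verts, unsigned bytes);
              void (*draw)(struct pipe_context *ctx,
                           unsigned start, unsigned count);
              void (*flush)(struct pipe_context *ctx);
          };

          /* Each piece of hardware supplies one constructor... */
          struct pipe_context *r300_create_context(struct pipe_screen *s);
          struct pipe_context *nv50_create_context(struct pipe_screen *s);

          /* ...and a single shared OpenGL or EXA state tracker can then
             drive any of them without knowing which chip is underneath: */
          void draw_triangle(struct pipe_context *ctx, const float verts[12])
          {
              ctx->set_vertex_data(ctx, verts, 12 * sizeof(float));
              ctx->draw(ctx, 0, 3);
              ctx->flush(ctx);
          }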

      ..........


      I figure once they get to the point where browsers stop locking up X, 3D games and applications stop showing ugly artifacts and crashing all the time, and suspend and video modesetting are very reliable... that is when they'll start putting effort into optimizing performance.
      Last edited by drag; 05-13-2009, 03:51 PM.


      • #18
        It's impressive how fast and how well the work is being done.
        Thanks for the good benchmarks.
        Could you do some more benchmarks of 3D games that already run on DRI2 (for example Nexuiz and OpenArena), and maybe of Compiz/KDE window-effects performance?


        • #19
          I meant finalized in terms of "usable by end-users".
          As for Gallium, I won't bitch and moan about performance. Stability and feature support are my main concerns; 30-60% of the 3D performance of fglrx is fine with me. Getting applications to run in a stable manner is more important to me than squeezing out extra frames. Optimization can come later.


          • #20
            Folks, a bit off-topic, but I still can't get 3D accel on my AMD/ATI Radeon HD 3200 IGP in Kubuntu 9.04... does anyone know what happened with the Catalyst 9.5 release? It was rumored to be out today (just Google for Catalyst 9.5).

            Thanks!


            • #21
              Originally posted by drag View Post
              But I would not expect any massive improvements in benchmarks from Gallium. Any performance improvements will be a slow small build up of performance over time.
              Actually, it tends to go in spurts and bursts, especially with my coding style. :3


              • #22
                Meandering a little way off-topic...

                Originally posted by drag View Post
                <snip>
                (a decent Pentium 4 can play a DVD-resolution-sized video with doing raw unaccelerated x11 and software scaling.. but even my dual cores choke on 1080p HD stuff.)
                <snip>
                I think I've stated it elsewhere, but the mplayerhq.hu homepage gives instructions on how to compile mplayer-mt. It lets me play 1080p on an Athlon X2 4600+ CPU with Xv. While it's not GPU acceleration, it's a heck of a lot better than nothing. My MythTV system with an HD3200 is awesome with the open source drivers. The only thing that could make it better is if I could start doing some 3D gaming on it as well.


                • #23
                  I think the "ATI/AMD" remark at the start of the article was directed at the post-buyout company - not the pure "ATi" drivers.

                  I used to have a 9800 Pro and was VERY disappointed that the Linux drivers sucked so badly (and the installer back then was hell).

                  With Nvidia I've had fine binary drivers - although I wish they'd grow up and help out the Noauvouvouioviuoi project with hardware docs at least (ermm, what part of the hardware command structure is DRM'd then Nvidia?).

                  But, anyhoo, VERY nice article.
                  Correct performance stats (nice for a change), it wasn't condemning of the performance, and it explained the various aspects of KMS/DRI/Gallium/etc. very well.

                  Phoronix, give this guy more stuff to write about!!


                  • #24
                    Originally posted by drag View Post
                    What this gets you is that the APIs are going to be much more unified between drivers and application compatibility should improve quite a bit, if not performance.
                    Which means you will get better performance too, because as you unify things, testing is easier and improvements affect more hardware, etc. Unifying/standardizing more means more focus on goodies like performance, and less time wasted, which will yield a better Linux experience. Can't wait.


                    • #25
                      Can anyone point me to some articles/discussions for background on TTM/GEM/KMS? This whole development direction seems to me to be contrary to good design principles - there are more moving parts, and putting more into the kernel means more context switches required to get any useful work done. Back when I worked on X (I wrote the X11R1 port for Apollo DOMAIN/OS) we would have had our heads chewed off for trying to push any of this work into the kernel...


                      • #26
                        Most of the real discussion was a few years ago, but the key point is that there were some real serious problems with the current architecture (possibly worse than the ones you were dealing with) which had to be addressed.

                        The main problems were:

                        - multiple graphics drivers, some in user space and some in the kernel, over-writing each other's settings during common use cases

                        - inability to share memory between 2D and 3D drivers, which made any kind of desktop composition inefficient and slow

                        Both the 2D and 3D drivers need context switches in order to access the hardware anyways, since the direct rendering architecture uses drm to arbitrate between the ddx and mesa drivers, and to manage the shared DMA buffers (aka ring buffer) used to feed commands and data into the graphics processors. I don't think these changes really introduce more context switches as much as change the dividing line between user and kernel responsibilities.
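                        (To make "context switch to access the hardware" concrete: every DRM request from user space is an ioctl on the DRM device node, so each command submission already crosses into the kernel. A minimal sketch in C, assuming a /dev/dri/card0 node and the kernel's drm.h header are present:)

                            #include <fcntl.h>      /* open */
                            #include <stdio.h>
                            #include <sys/ioctl.h>
                            #include <drm/drm.h>    /* DRM_IOCTL_VERSION, struct drm_version */

                            int main(void)
                            {
                                /* The DDX and Mesa drivers both talk to the hardware through
                                   this node; the kernel drm module arbitrates between them. */
                                int fd = open("/dev/dri/card0", O_RDWR);
                                if (fd < 0) { perror("open /dev/dri/card0"); return 1; }

                                /* Each request, like this version query, is one ioctl --
                                   i.e. one user/kernel context switch. */
                                struct drm_version v = {0};
                                if (ioctl(fd, DRM_IOCTL_VERSION, &v) == 0)
                                    printf("DRM driver version %d.%d.%d\n",
                                           v.version_major, v.version_minor,
                                           v.version_patchlevel);
                                return 0;
                            }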

                        I'm a bit rusty on the history (my hands-on X experience was before X11), but my understanding is that user modesetting is a relatively new addition to X, and that the KMS initiative is arguably going back to the way modesetting was handled in earlier versions of X which were presumably built on existing kernel drivers. I haven't had much luck finding online references to support this, but I have been told this by a number of people who have worked on X for a very long time.

                        If you're saying that the DRI architecture is fundamentally flawed and that all graphics should go through a single userspace stack (presumably in the X server) then that's a different discussion of course.

                        Jesse Barnes wrote a good summary of the rationale for moving modesetting into the kernel, and there is a good discussion (plus the odd rant) in the subsequent comments: http://kerneltrap.org/node/8242

                        Thomas's original TTM proposal is a pretty good summary of the goals related to memory management: http://www.tungstengraphics.com/mm.pdf
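                        (And for anyone who wants to poke at what KMS actually exposes, the kernel interface can be driven directly through libdrm. A small illustrative probe -- the device path and the libdrm headers/pkg-config name are assumptions about a typical Linux setup:)

                            /* build: gcc kms_probe.c -o kms_probe $(pkg-config --cflags --libs libdrm) */
                            #include <fcntl.h>
                            #include <stdio.h>
                            #include <xf86drm.h>
                            #include <xf86drmMode.h>

                            int main(void)
                            {
                                int fd = open("/dev/dri/card0", O_RDWR);
                                if (fd < 0) { perror("open /dev/dri/card0"); return 1; }

                                /* With KMS, connectors, CRTCs, and modes live in the kernel
                                   and any client can query them -- no X server required. */
                                drmModeRes *res = drmModeGetResources(fd);
                                if (!res) { fprintf(stderr, "no KMS on this device\n"); return 1; }

                                for (int i = 0; i < res->count_connectors; i++) {
                                    drmModeConnector *c = drmModeGetConnector(fd, res->connectors[i]);
                                    if (c && c->connection == DRM_MODE_CONNECTED && c->count_modes > 0)
                                        printf("connector %u: first mode %s (%ux%u)\n",
                                               c->connector_id, c->modes[0].name,
                                               c->modes[0].hdisplay, c->modes[0].vdisplay);
                                    if (c) drmModeFreeConnector(c);
                                }
                                drmModeFreeResources(res);
                                return 0;
                            }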
                        Last edited by bridgman; 05-13-2009, 09:02 PM.


                        • #27
                          Originally posted by Melcar View Post
                          Even the binaries were "usable". I remember running my old 9800 and 9600 cards on the old drivers; performance was lacking, but they got the job done. I honestly could never understand what all the fuss was about with ATI drivers even back then.
                          Well, I too used a 9800 np and it was, to say the least, unstable. I would get driver crashes just by hitting Ctrl+Alt+F1 or any other F key. There were also quite a few times when video applications would give me a black screen. On top of all that, the community usually had to patch the drivers themselves because a lot of the time they would not compile. They also had issues with VIA motherboards, so the drivers wouldn't even load without some secret patch (seriously, I had to look for 3-5 hours before I could find a patch for my KT333 motherboard).


                          • #28
                            I would have to disagree with the author.
                            I have had a lot of problems with my ATI card on Linux.
                            In fact it has caused me many system re-installs.
                            I have the 4870 X2 and no 3D support on Jaunty Jackalope. I tried installing Catalyst 9.4, and when I reboot, my machine locks up and the colors are all messed up. I wish I had bought an Nvidia card. I used to use Windows, and the Vista drivers were not much better.


                            • #29
                              Originally posted by bridgman View Post
                              Most of the real discussion was a few years ago, but the key point is that there were some real serious problems with the current architecture (possibly worse than the ones you were dealing with) which had to be addressed.
                              <snip>
                              Thanks, those references were pretty interesting.

                              Yeah, I guess a lot of things were simpler on the Apollo workstations; they had no hardware text mode, and they really only had one supported graphics mode per machine configuration. The one wrinkle is that they already had their own native graphics/windowing system that was not related to X. There were two porting efforts going on: one to simply layer the X APIs on top of the native APIs, and one to drive the hardware directly. I think ultimately we had to accept the overhead and just layer on top of the Apollo APIs, to allow native apps to continue to run alongside X apps. Otherwise we would have had the same issues - multiple drivers talking to the same graphics hardware...

                              As for DRI ... we obviously wouldn't need to fret over "redirected direct-rendering" if everything was going through the X server...


                              • #30
                                Originally posted by cliff View Post
                                In fact it has caused me many system re-installs.
                                I have the 4870 X2 and no 3D support on Jaunty Jackalope. I tried installing Catalyst 9.4, and when I reboot, my machine locks up and the colors are all messed up.
                                A few people have reported problems with the X2 boards and Jaunty, although they didn't show up in our "early look" testing on server 1.6. Did you see the same problems with Intrepid? If not, it might be worth staying on 8.10 until we finish QA testing & bug fixing on Jaunty, and announce support in the release notes.
                                Last edited by bridgman; 05-14-2009, 12:03 AM.
