ATI R300 Mesa, Gallium3D Compared To Catalyst


  • #21
    I just hope the OSS drivers don't get stuck in a for(;;) loop trying to fix the infrastructure / architecture indefinitely. The extremely frequent hardware generation jumps have made it nearly impossible for the open source 3D stack to settle on an architecture and give driver developers time to work on the more difficult optimizations. The driver developers are constantly stuck fighting a battle between four things demanding their time: new hardware support, optimizations, infrastructure work for the next architecture, and porting old hardware to the new architecture. There aren't enough developers to cover all four of those adequately, so you have two options: either increase the number of man-hours dedicated to OSS driver development, or, lacking that, eliminate one of the tasks demanding their time. I propose the latter. Specifically, eliminate the time spent working on new architectures. Just draw a line in the sand at Gallium3D, and stop rewriting stuff all the time. Commit to a stable API and then optimize, so people can actually get some good use out of their hardware without resorting to fglrx or Windows.

    I don't think Gallium3D will be the be-all and end-all of 3D architectures. Either its internal APIs will eventually change in such major ways that existing drivers will need to be practically rewritten, or some different 3D architecture from another company will crop up and take its place. This is a practical necessity: new GPUs bring new demands on the software stack, and these simply can't be worked into the existing architecture without breaking existing code.

    That's understandable, but the existence of new demanding hardware shouldn't spell the end of optimization potential for old cards with the old architecture.

    Personally I think r300g is an exception to this rule: moving from classic Mesa to Gallium is a completely different kind of architecture shift, and one that makes a lot of sense. But when Gallium starts bumping its internal API to v1.0, v2.0, v3.0 to support new hardware, optimization should continue on r300g against Gallium 0.4, unless upgrading it to work with later Gallium versions is appreciably easy (which is definitely not something I think can be taken for granted).

    Either an increase in salaried company manpower or a sea of new, complete, public documentation would ease concerns about there being enough time for all four of the developers' tasks to be completed efficiently, but I'm pretty sure that both improved manpower and documentation are being stonewalled indefinitely by Linux's relative insignificance.

    I just think it would do a lot of good to sit down and optimize at least one of the drivers so that it is competitive with Catalyst and renders correctly 99% of the time. These tasks go hand in hand; they both require careful attention to the OpenGL implementation, with an eye towards desktop / consumer use, 3D gaming, and real-time 3D visualization apps. But these more difficult optimizations will never be invested in if the developers feel that the platform they write their code on is about to evaporate, which I think is why OSS graphics drivers have remained so poor performance-wise for so many years.


    • #22
      I think what you are asking for has already happened, in the sense that the clump of architectural initiatives discussed at XDS 2007 is now implemented for at least a couple of GPU hardware families, and is running by default for at least one GPU family in the latest round of distros. I haven't seen any new architectural initiatives added to the clump.

      That said, it seems like the Gallium3D API is going to continue to evolve, but I don't *think* it's "supporting new hardware at the expense of old hardware". What seems to be happening is more along the lines of "evolving to better support the state trackers which were hypothetical a few years ago but which are now being implemented".

      • #23
        The old adage that "release the docs and the community will write the drivers" was true 5-10 years ago, but not so much today. New hardware, especially GPUs, is just too complex. We've now released docs for several generations of hardware, and there have only been a handful of people submitting bug fixes, much less major feature contributions. It's not just GPUs. There are docs and source code available for a number of other ASICs (WiFi, NICs, USB, etc.), but many of them are still unsupported on Linux because no one has written drivers, or because the vendor drivers are deemed "too ugly" for upstream inclusion and no one wants to clean them up.

        I don't really see more and better docs making much difference. At some point you just have to sit down and learn about GPUs. The fundamental ideas are the same across all vendors. Between the docs and the source code for several generations of chips from multiple vendors, there's plenty of info out there to get a solid baseline understanding. No one who writes graphics drivers read one good set of docs and then suddenly knew how to write drivers.

        As far as I can tell, there hasn't been any sort of constant, never-ending infrastructure improvement or code rewriting; the XFree86/Xorg/drm infrastructure was essentially unchanged from 1995 until 2007/2008-ish. Over the last couple of years there were basically two major projects that required a lot of code shuffling: KMS and Gallium. Unfortunately, both were required to even begin to think about the kind of optimizations needed to reach the maximum performance levels of the hardware. The Xorg/drm stack was basically stuck in 1995 until KMS; the proprietary drivers all had a 15-year head start.

        The latest and greatest 3D features (and even the not-so-latest-or-greatest: framebuffer objects and pbuffers, for example) require a unified memory manager. We have one, but only just now. Performance tuning a memory manager is a whole task in itself; consider some of the Linux memory manager upheavals in the past. While, extension-wise, the driver stacks (open and closed) are pretty close, there were a lot of low-level functions that were never implemented as fast paths until recently (glTexImage and glReadPixels, for example) due to the lack of a unified memory manager. Gallium is not quite so fundamental, but it is really needed to get the most out of shaderful hardware. It's not that you can't write a decent driver on classic Mesa; it's just a lot more messy work.
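
        To make the glReadPixels point concrete, here is a minimal sketch (mine, not from the post above) of the kind of fast path a unified memory manager enables: reading back through a pixel buffer object so the driver can do the copy GPU-side instead of stalling the CPU. It assumes Mesa-style GL headers with GL_GLEXT_PROTOTYPES and PBO support; the function name and usage pattern are illustrative, not taken from any driver.

        #define GL_GLEXT_PROTOTYPES
        #include <GL/gl.h>
        #include <GL/glext.h>

        /* Sketch: asynchronous readback via a pixel buffer object. With a
         * PACK buffer bound, the last argument to glReadPixels is a byte
         * offset into that buffer, so the driver can blit into a buffer
         * object it manages instead of copying straight into CPU memory. */
        void readback_async(int width, int height)
        {
            GLuint pbo;
            glGenBuffers(1, &pbo);
            glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
            glBufferData(GL_PIXEL_PACK_BUFFER, width * height * 4, NULL,
                         GL_STREAM_READ);

            glReadPixels(0, 0, width, height, GL_BGRA, GL_UNSIGNED_BYTE,
                         (void *)0); /* offset 0 into the PBO */

            /* Ideally map a frame or two later, once the GPU copy is done. */
            void *pixels = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
            if (pixels) {
                /* ... consume width * height * 4 bytes of BGRA data ... */
                glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
            }
            glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
            glDeleteBuffers(1, &pbo);
        }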

        The classic Mesa drivers for r100, r200, and r300/r400 cards had been around for years before all the recent code shuffling, and we still have about the same amount of community participation.

        We are just talking about 3D here. There are also lots of other similarly complex parts of the graphics stack that no one thinks about but that also require a lot of work, and whose infrastructure was stuck in 1995 until recently: modesetting, power management, event generation, interrupts, ASIC initialization, suspend/resume, etc. Much of this work also requires KMS, but for different reasons. For most of these, you need a single driver with a unified view of the hardware that isn't shared with two other drivers. For some perspective, the KMS DRM kernel drivers alone are among the largest, if not the largest, drivers in the kernel. They are larger than some whole subsystems, and that doesn't count the userspace 2D/3D/Xv drivers.


        • #24
          Originally posted by agd5f View Post
          If someone would like to donate money for improving the OSS ATI drivers, where would it be best donated?


          • #25
            Originally posted by TeoLinuX View Post
            is there room for performance improvement once 2D stability is reached and gallium3D has matured?
            There is, namely:
            - color tiling (implemented, but it seems to be disabled in this article, WHY??? see the xorg.conf sketch after this list)
            - Hyper-Z
            - compiler optimizations
            - DRI2 Swap
            - reducing CPU overhead everywhere
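
            (Not from the post above, just for reference: on the radeon DDX of that era, color tiling could usually be forced on from xorg.conf. This is a sketch; the option name is as documented in radeon(4), the Identifier is made up, and the actual default depends on the driver and kernel versions.)

            Section "Device"
                Identifier "ATI R300"
                Driver     "radeon"
                Option     "ColorTiling" "on"
            EndSection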

            Michael should really be running xf86-video-ati and libdrm from git, plus the latest kernel-rcX; otherwise the performance will continue to suck in his articles.

            There are performance gains and there will be more as time goes by, but the majority of them will be disabled with old kernels and DDX drivers.

            In other words, another crappy article.


            • #26
              I don't understand all these complaints. It's not like "hey, the driver for card X sucks compared to driver Y"... It's more like "the architecture for the drivers to fit into isn't even done yet."

              The beauty of Gallium is that a shitload of stuff that normally had to be done separately for each card is now card- and driver-agnostic.

              I'd say use whatever works for you now, because the foundation the drivers are meant to build on isn't finished yet and is the primary concern today. Keep your cool, and massive respect to AMD for their part in laying this new architecture, and of course to everyone else as well.


              • #27
                Does using the latest kernel, say 2.6.35.7, and xorg-edgers (the Ubuntu PPA) really leave you completely out of sync with the latest development features / performance of Xorg?


                • #28
                  What would it take to code a Gallium3D backend for, let's choose, the Quake 3D engine? Namely, no OpenGL.


                  • #29
                    Originally posted by allquixotic View Post
                    But when gallium starts bumping its internal API to v1.0, v2.0, v3.0 to support new hardware, optimization should continue on r300g vs gallium 0.4, unless upgrading it to work with later gallium versions is appreciably easy (which is definitely not something I think can be taken for granted).
                    The Gallium versioning really doesn't matter and doesn't reflect anything; it evolves and is different all the time. There are no big jumps, just a lot of very small changes, and the developer making those changes is usually responsible for fixing all drivers.
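
                    To illustrate the scale of these changes (a made-up example; none of these are real Gallium names or commits): a typical change is on the order of one callback in the shared context structure gaining a parameter, with the same patch updating every driver that implements it.

                    /* Hypothetical illustration only; these are not real Gallium types. */
                    struct example_scissor_state { int minx, miny, maxx, maxy; };

                    struct example_context {
                        /* A callback that used to take a single scissor state now takes a
                         * slot range; the same commit updates r300g, nouveau, softpipe, etc.,
                         * so no driver is left behind on an old interface revision. */
                        void (*set_scissor_states)(struct example_context *ctx,
                                                   unsigned start_slot,
                                                   unsigned num_scissors,
                                                   const struct example_scissor_state *states);
                    };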


                    Originally posted by MaestroMaus View Post
                    If someone would like to donate money for improving the OSS ATI drivers, where would it be best donated?
                    Me.

                    Seriously, manpower is an issue, not money. GSoC is just one way to get some good money, yet still only one guy applied this year to work on the ATI drivers, I think, and it's not so hard for him. If there were like three guys, I would say the community seems to care about the quality of graphics drivers, but I can't say that now.

                    Originally posted by sylware View Post
                    What would it take to code a Gallium3D backend for, let's choose, the Quake 3D engine? Namely, no OpenGL.
                    Waste of time. First we need to get most hardware support into Gallium, which will take forever; then we may think about Gallium backends. So it's really out of the question.


                    • #30
                      Originally posted by marek View Post
                      Seriously, manpower is an issue, not money.
                      ... and for some hardware there is *no* hardware programming manual at all. ATI/AMD/Intel are on the right track.

                      I speak for myself: I really wanted to help, but it's GNU (L)GPL-only code for me, so my code would have been thrown out. In any case, I haven't had the brain time to code anything; if one day I do, I will fork, as I don't agree at all with the design goals.
