
ATI R300 Mesa, Gallium3D Compared To Catalyst


  • #11
Michael should test with DRI2/KMS disabled, since DRI2/KMS currently hurts 3D performance. Just add "nomodeset" to the boot options.



    • #12
      Originally posted by dm0rais View Post
Michael should test with DRI2/KMS disabled... Just add "nomodeset" to boot options.
      I really doubt that he does NOT know that. ((((((((:


The OSS drivers are doomed to be at least 40-50% slower.

And I understand very well why nVIDIA does not want to invest (money, documentation, time, etc.) in OSS drivers.
nVIDIA does not want its name connected, in any way, with such disastrous 3D performance.



      • #13
        I have to disagree with all the doomsayers.
All of the open source hardware-accelerated Mesa drivers are still young projects.
        Given time, and the ability to work together between nouveau, radeon, and intel, these drivers will approach the performance of their closed counterparts until the difference is small enough that no one cares.
        Nobody cries about gcc being less efficient than icc even though icc is better optimized for intel products. icc is a niche product which has real value, but only to a small number of users.

        That is where gallium is headed. We're not there yet, no, but that is where the road leads.



        • #14
          Originally posted by TeoLinuX View Post
          I was thinking exactly the same thing.
It's sad though that 3D performance on the open source side is so abysmal... I understand with nVidia, a company that dislikes open source... but AMD has been thoroughly involved in open source drivers for years, yet results are slow to come (in the 3D field, I mean).
          Nearly all of the open source work so far (including both community and vendor developers) has been (a) adding driver support for previously unsupported GPU generations, (b) moving the driver stack onto a new low-level architecture where modesetting and memory management were done in the kernel driver, (c) moving the 3D driver from the classic mesa HW interface to Gallium3D. None of those tasks were expected to do anything for performance, but most of them had to be done before investing in performance work made much sense.

          I don't think the developers are surprised that performance hasn't gone up, and it's probably fair to say they worked really hard to make sure that performance didn't go *down* any more than it did as a consequence of moving to a more flexible architecture that could support desired features like higher levels of GL support and higher system performance in the future.

          Originally posted by TeoLinuX View Post
          I know that FGLRX has tons of IP they cannot expose, I know it shares lots of code with the windows drivers... but come on, a question rises to me: "is oss ati-driver wrong from the basement?" I'm not being polemic. Just asking an opinion...
          Yes and no - there are probably small pieces of the open source stack which will need to be tossed and re-implemented in order to get big performance gains. Even so, I don't think that would have much effect on the other >90% of the stack.

          Before you ask, I don't know if anyone has had time to do any real performance analysis work yet to identify where the weak points are (although the need to retransmit lots of state information under DRI2 is an obvious suspect). The performance numbers suggest that there are a small number of resolution-independent bottlenecks dragging down 3D performance, and that most of the code does not "have a performance problem".

          One of the solutions being discussed is to store relatively more state information in the kernel driver and let the kernel driver decide if the state info currently programmed into the GPU registers is still valid. That seems likely to make a big honkin' difference in performance, and would probably eliminate the performance delta between KMS and UMS. It's not a trivial change, however.

          A nastier question is how big and complex the open source driver can become before it starts to have the same challenges as the proprietary driver. Our early estimate was that the open source stack could probably get to ~60-70% of the 3D performance of fglrx without having to get "scary complicated", and I haven't seen anything to change that view yet.

          It seems like all the right things are being done and in the right sequence... the frustrating part for users is that the initial 2/3 of the work is mostly "migrate onto newer better architecture without making things too much worse" kind of stuff, and the "go faster" stuff only comes near the end.

          Originally posted by TeoLinuX View Post
I know that it's code under development, I know that efforts are put into giving it features and stable 2D first. But I'm asking the coders involved: is there room for performance improvement once 2D stability is reached and Gallium3D has matured? I mean: right now the blobs are 3 to 5 times faster! Can we expect OSS drivers to be, let's say, 60% as fast as fglrx in the next 18 months? Or is it wishful thinking?
          I'm not an active coder but I have a window into both proprietary and open source development, and it seems to me that there is definitely an opportunity for significant performance improvement. The question is whether performance work should be higher priority than stability and core features... current thinking is "no" and that seems like the right choice to me.

          Originally posted by sundown View Post
          .... MESA!??
          Yeah, Gallium3D doesn't replace Mesa, it only replaces the classic Mesa HW driver interface. 90% or more of the Mesa code is still being used.



          • #15
Optimizations are still not in place. People have already repeated, like parrots, that the priority for the OSS drivers right now is to get everything supported and working. I don't see why everyone is bitching about performance right now.
            I tell you what though. My RS480 works beautifully with radeon and allows me to enjoy a level of computing comfort that fglrx never could. A fast driver is worth crap if it crashes your entire system every 30 minutes. Gallium has not given me any issues whatsoever either. I have a few games running on WINE and a couple of native games as well, things which I couldn't really do with the classic mesa drivers.



            • #16
              Originally posted by bridgman View Post
              Nearly all of the open source work so far (including both community and vendor developers) has been (a) adding driver support for previously unsupported GPU generations, (b) moving the driver stack onto a new low-level architecture where modesetting and memory management were done in the kernel driver, (c) moving the 3D driver from the classic mesa HW interface to Gallium3D. None of those tasks were expected to do anything for performance, but most of them had to be done before investing in performance work made much sense.
I've been saying this for a while... And fglrx ever so slowly seems to be getting better. Mind, everyone's mileage may vary, but I discovered that one of the annoyances I'd always had with fglrx disappeared with the latest and greatest bundled with Ubuntu 10.04: I switched users and it didn't hang the box. I don't know what else is right this go-around, but it was something that made me smile a bit all the same.

              I don't think the developers are surprised that performance hasn't gone up, and it's probably fair to say they worked really hard to make sure that performance didn't go *down* any more than it did as a consequence of moving to a more flexible architecture that could support desired features like higher levels of GL support and higher system performance in the future.
              I'm utterly un-surprised at this and the glacial pace. The infrastructure is getting put into place to scoop up that 70% that you mentioned. But you need to get the infrastructure there to take advantage of the optimizations and remove a few idiot things that stupidly bottleneck your drivers. That work takes time. LOTS of it.

              Yes and no - there are probably small pieces of the open source stack which will need to be tossed and re-implemented in order to get big performance gains. Even so, I don't think that would have much effect on the other >90% of the stack.
Yep. All it takes is one thing you shouldn't be doing to stall a pipeline badly, and you DON'T want stalls; when you can't avoid them, you want them to be interframe, not intraframe.

              Before you ask, I don't know if anyone has had time to do any real performance analysis work yet to identify where the weak points are (although the need to retransmit lots of state information under DRI2 is an obvious suspect). The performance numbers suggest that there are a small number of resolution-independent bottlenecks dragging down 3D performance, and that most of the code does not "have a performance problem".
              I'm suspecting they're not worrying about it just yet, but will be soon.

              One of the solutions being discussed is to store relatively more state information in the kernel driver and let the kernel driver decide if the state info currently programmed into the GPU registers is still valid. That seems likely to make a big honkin' difference in performance, and would probably eliminate the performance delta between KMS and UMS. It's not a trivial change, however.
              No, it's not. But it'd be a pretty decent speed boost all the same.

              A nastier question is how big and complex the open source driver can become before it starts to have the same challenges as the proprietary driver. Our early estimate was that the open source stack could probably get to ~60-70% of the 3D performance of fglrx without having to get "scary complicated", and I haven't seen anything to change that view yet.
              Well, we've got to learn to do more than crawl before we learn to fly, right? I'm seeing us go reaching for the "scary complicated" over time- just not for a while yet to come.

              I'm not an active coder but I have a window into both proprietary and open source development, and it seems to me that there is definitely an opportunity for significant performance improvement. The question is whether performance work should be higher priority than stability and core features... current thinking is "no" and that seems like the right choice to me.
Speed's nice. Stability and robustness are more of a requirement: you can always add speed later once you've got it all working. Worrying about brute performance first strikes me as a case of entirely-too-early optimization, if you want my take on it.



              • #17
The thing is, ATI legacy hardware with OSS drivers is a joke for gaming. NVIDIA legacy hardware keeps getting drivers for longer, so you can really run games on it; the latest driver even supports old GeForce 6 GPUs... The OSS driver is nice, but not really for games. Just enough for Compiz.



                • #18
                  People stop torturing yourselves with fglrx!

It's either OSS Radeon or binary NVIDIA! There is no such thing as fglrx; it is a lie!



                  • #19
fglrx usually works with native Linux games. Some need workarounds added, however (Wine too). Video acceleration is more problematic than games.



                    • #20
                      I definitely have to agree that the focus has been on stability and features first, with performance second. And rightly so.

                      I want drivers that primarily won't crash my machine, will run Compiz well, and will play back video full-screened on one monitor without tearing while I'm working on the second monitor. Over the last few years I've seen the OSS Radeon drivers get to this point, and I am very grateful for the effort that has been put into these drivers.

I don't run too many games in Linux (I've got a Win7 install for most of my games), and the ones that I do run are mostly casual games (World of Goo and similar stuff) and MMOs (Eve/WoW). Still, I'll be glad when the developers can move beyond stability/features (I'd especially love clover to get finished some day) and start looking into performance tuning.

