ATI dropping support for <R600 - wtf!?


  • #31
The fglrx driver also contains code shared across multiple OSes, and all the OSes except Linux have strong DRM (Digital Rights Management, not Direct Rendering Manager) requirements. It's certainly possible to pick the driver apart and separate out the bits which can be safely exposed, but the result would look a lot more like ground beef than like a steak.

Note that Intel doesn't release docs on all the hardware in their chips either. That is not meant as criticism; we are all dealing with the same constraints here. I don't think IBM ever supported open source driver development for their graphics chips (our FireGL guys might remember -- they used IBM GPUs before switching to ATI).

    In all seriousness, I don't think documentation or sample code is a factor for power management or for higher levels of OpenGL support.

    Power management is primarily waiting for things to settle down in the command submission portions of the driver stack so that power management code can dynamically adjust to drawing workload and display modes etc...

    OpenGL 2.1 theoretically comes for free once Gallium3D drivers are running on 3xx-5xx. Gallium3D in turn is currently built over DRI2 and GEM/TTM, which are also in progress.

For anyone wondering why we expect the open source drivers to make you happy even if fglrx did not, the reason is simple. The open source drivers are aimed directly at the mix of functionality and performance that most of you are asking for and *only* contain code for that functionality. Fglrx is still aimed primarily at professional workstation users on a small set of enterprise Linux distributions, and includes well over 10x as much hardware-specific code as the open source drivers.

    You really need all that code to get every last bit of 3D performance out of the GPU, but a *much* smaller driver can provide all the functionality most of you expect along with perhaps 70% of the performance -- and can be tweaked to work well on a wide variety of distros and systems much more readily than our workstation driver.
    Last edited by bridgman; 03-05-2009, 03:03 PM.



    • #32
      Originally posted by bridgman View Post
      In all seriousness, I don't think documentation or sample code is a factor for power management or for higher levels of OpenGL support.
      Great then! So we can expect open source 3D rendering as fast as fglrx?
      Originally posted by bridgman View Post
      Power management is primarily waiting for things to settle down in the command submission portions of the driver stack so that power management code can dynamically adjust to drawing workload and display modes etc...
      So, I guess this is WIP?
      Originally posted by bridgman View Post
      OpenGL 2.1 theoretically comes for free once Gallium3D drivers are running on 3xx-5xx. Gallium3D in turn is currently built over DRI2 and GEM/TTM, which are also in progress.
      What about OpenGL 3.0?

      Originally posted by bridgman View Post
      You really need all that code to get every last bit of 3D performance out of the GPU, but a *much* smaller driver can provide all the functionality most of you expect along with perhaps 70% of the performance -- and can be tweaked to work well on a wide variety of distros and systems much more readily than our workstation driver.
WTF??? Only 70%? So I guess I won't be playing Nexuiz at high resolution with all effects turned on with an open source driver?

      I'd understand 90+%, but 70%?? That's just not enough, or is it?



      • #33
I don't really get why people complain about this move. You can't maintain those things forever, especially when you have a good alternative. It is just proof that AMD's open source strategy pays off. I mean, effectively, dropping Catalyst support for <R600 means that many Windows users will have to either switch to Linux or buy a new card in order to use the latest technologies. Linux users, on the other hand, will continue to profit from the development of the FOSS driver(s), backed by AMD or community-driven. Also, as someone here pointed out, more people moving to the open source drivers will help squash bugs etc. So the ones who'll really profit from this move are the Linux users -- I'd be more pissed off if I were using Windows and knew Catalyst had some bug that's never going to get fixed.

        Honestly, I'd have them completely drop Catalyst support for Linux and have a few more people working on FOSS drivers.



        • #34
          Hold on a minute here bridgman, I was kind of under the impression that fglrx shares most of its code with the Windows drivers and you had some "Unified Driver Architecture" type middleware in the mix. Is that simply not true? Is the fglrx driver a completely different codebase specifically written for professional applications, in comparison to the gaming-oriented Windows drivers? And what's this 70% number? Are you telling me that Linux open source solutions will only have just over two thirds the performance of the current fglrx driver, which is already a fair distance behind the Windows blobs? What are we looking at, half the framerate? Excuse me if I find that hard to swallow.



          • #35
The performance of the driver will depend entirely on how much optimization the open source devs are willing and/or able to put into the drivers.



            • #36
              Originally posted by roothorick View Post
              Hold on a minute here bridgman, I was kind of under the impression that fglrx shares most of its code with the Windows drivers and you had some "Unified Driver Architecture" type middleware in the mix. Is that simply not true? Is the fglrx driver a completely different codebase specifically written for professional applications, in comparison to the gaming-oriented Windows drivers?
              The fglrx driver shares big chunks of code with other OSes, and includes code paths for both workstation and consumer products. I call it a "workstation driver" because both the development and test focus of the Linux-specific bits are biased towards professional workstation use, particularly in terms of distro support (RHEL and SLED are not the most common consumer distros).

              Originally posted by roothorick View Post
              And what's this 70% number? Are you telling me that Linux open source solutions will only have just over two thirds the performance of the current fglrx driver, which is already a fair distance behind the Windows blobs? What are we looking at, half the framerate? Excuse me if I find that hard to swallow.
              AFAIK the Windows and Linux binary driver performance should be pretty much identical today; I think Linux stopped being "a fair distance behind" around the end of 2007.

              I asked our architects what performance levels they felt could be obtained with a "clean, simple, well written driver" but without any application-specific performance optimization and their estimate was an average of between 60 and 70% of proprietary driver performance.

We have released enough programming info to get to 100%, but the driver code size and effort grow exponentially as you go for that last 30%. Getting the last bit of performance optimization out of a driver/GPU combination is extremely time-consuming and just plain hard work -- and none of the devs I have spoken with feel it will be necessary.

              This was based on the assumption of identical performance between Windows and Linux; if you are seeing something different (other than running through Wine) please let me know.
              Last edited by bridgman; 03-05-2009, 03:43 PM.



              • #37
                Originally posted by bridgman View Post
                I asked our architects what performance levels they felt could be obtained with a "clean, simple, well written driver" but without any application-specific performance optimization and their estimate was an average of between 60 and 70% of proprietary driver performance.

We have released enough programming info to get to 100%, but the driver code size and effort grow exponentially as you go for that last 30%. Getting the last bit of performance optimization out of a driver/GPU combination is extremely time-consuming and just plain hard work -- and none of the devs I have spoken with feel it will be necessary.
So, this means only 2100 out of 3000 FPS with glxgears on my Mobility Radeon X1600 (in its 3rd power state). I'm not impressed. Sorry. FGLRX gave me 2500+ FPS in the 2nd power state and over 3000 FPS in the 3rd power state.

                But, one day I'll learn how to make drivers and I'll optimize my copy of radeon to get 100% of my card. That's the beauty of open source.
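As a quick sketch of that arithmetic (the 60-70% range is the architects' estimate bridgman quoted; the 3000 FPS figure is just the fglrx number from this post):

```python
def estimated_open_fps(fglrx_fps, low=0.60, high=0.70):
    """FPS range implied by the 60-70%-of-proprietary-performance estimate."""
    return fglrx_fps * low, fglrx_fps * high

# Mobility Radeon X1600, 3rd power state, fglrx at ~3000 FPS in glxgears:
lo, hi = estimated_open_fps(3000)
print(lo, hi)  # 1800.0 2100.0
```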



                • #38
                  Originally posted by DoDoENT View Post
So, this means only 2100 out of 3000 FPS with glxgears on my Mobility Radeon X1600 (in its 3rd power state). I'm not impressed. Sorry. FGLRX gave me 2500+ FPS in the 2nd power state and over 3000 FPS in the 3rd power state.

                  But, one day I'll learn how to make drivers and I'll optimize my copy of radeon to get 100% of my card. That's the beauty of open source.

Glxgears is not a benchmark. And you can't really blame ATI for the state of the open source drivers... not anymore, at least. The info is available. Developers have the task of making the drivers and optimizing 3D performance.



                  • #39
Yeah, I wasn't really thinking about glxgears in my posts; I didn't want to have to explain to the architects why everyone uses a benchmark which hardly uses any of the GPU functionality.

                    The dev community is already working on the top priorities for improving 3D performance in the open drivers :

                    #1 - memory manager (needed for GL 1.5 and higher) - GEM/TTM

                    #2 - redo the command submission and buffer management code (current driver stack doesn't pipeline CPU and GPU operation as much as it could) - bufmgr, radeon-rewrite

                    #3 - shift to a driver model designed around shader-based GPUs rather than fixed-function GPUs (ie Gallium3D)

                    Once those are done (and all are making great progress) I think you'll see driver development move back to the incremental improvements you are used to seeing. Right now there is perhaps 18 months of work accumulated in branches and alternate code paths, and all of that should start to show up in releases over the next few months.



                    • #40
                      Originally posted by bridgman View Post
Yeah, I wasn't really thinking about glxgears in my posts; I didn't want to have to explain to the architects why everyone uses a benchmark which hardly uses any of the GPU functionality.
                      So, glxgears actually doesn't fully utilize GPU?

                      Originally posted by bridgman View Post
                      The dev community is already working on the top priorities for improving 3D performance in the open drivers :

                      #1 - memory manager (needed for GL 1.5 and higher) - GEM/TTM

                      #2 - redo the command submission and buffer management code (current driver stack doesn't pipeline CPU and GPU operation as much as it could) - bufmgr, radeon-rewrite

                      #3 - shift to a driver model designed around shader-based GPUs rather than fixed-function GPUs (ie Gallium3D)

                      Once those are done (and all are making great progress) I think you'll see driver development move back to the incremental improvements you are used to seeing. Right now there is perhaps 18 months of work accumulated in branches and alternate code paths, and all of that should start to show up in releases over the next few months.
OK, now I understand: the latest open source drivers are actually in good shape regarding performance, but those latest bits of code still aren't included in most distributions (what users actually see). So, could we see good open source 3D performance and power management by the end of this year (I mean in Ubuntu 9.10/Fedora 12)?



                      • #41
                        Originally posted by DoDoENT View Post
                        So, glxgears actually doesn't fully utilize GPU?
Right. The glxgears program uses only fixed-function graphics (no shaders), and draws shaded triangles with no textures. Between shaders and textures that's maybe 80% of a modern GPU sitting idle. Vertex shaders get used to emulate the fixed-function transform and lighting, but that's about it. The only block that works hard is the ROP/RBE, ie the part that handles depth compare (Z-buffer) and writes pixels into video memory.

                        Even worse, since older chips used relatively more of their silicon area for ROP/RBE than new chips, it's not unusual for an old GPU to outperform a new GPU on glxgears.
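A toy model of why that happens (all numbers here are hypothetical, purely to illustrate that fill rate, not shader power, sets the glxgears score):

```python
def glxgears_fps(fill_rate_mpix_per_s, pixels_per_frame):
    """Upper bound on FPS if the ROP/RBE block is the only bottleneck.

    Ignores vertex work and CPU overhead entirely -- a deliberate
    simplification, since glxgears barely touches anything else.
    """
    return fill_rate_mpix_per_s * 1e6 / pixels_per_frame

# Hypothetical: a 300x300 window with ~3x overdraw is ~270k pixels/frame.
# An older GPU with 1200 Mpix/s fill rate beats a newer, shader-heavy GPU
# that only manages 1000 Mpix/s:
print(glxgears_fps(1200, 270_000) > glxgears_fps(1000, 270_000))  # True
```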

                        Originally posted by DoDoENT View Post
                        OK, now I understand: latest open source drivers are actually in good shape regarding performance, but those latest bits of code still aren't included in most of distributions (what users actually see). So, could we see good open source 3D performance and power management by the end of this year (I mean in Ubuntu 9.10/Fedora 12)?
                        I think so. The end-of-year distros are probably the first ones which will be able to ship with all this new stuff, but it should be available for download & build sooner.

                        Power management is the only part that hasn't really been implemented yet (everything else has been implemented but not integrated), but for 5xx and below I *think* it should come together fairly quickly once the invasive drm changes start to settle down.



                        • #42
DX10 cards even have those "unified shaders" anyway, where the distinction between geometry, vertex and pixel processors becomes fuzzy. I think glxgears doesn't have any access to this feature, so it uses only the small (fixed) percentage of shaders that are assigned to it by the driver. I think ATI cards give about 30% of their processors to vertex/geometry. The other 70% is dedicated to pixel shading and is not accessible to glxgears at all. So in other words, glxgears only uses 30% of a modern GPU.

                          I think this 30%/70% ratio started with the X1900.

                          (I hope I got the above right :P)



                          • #43
Right now the current fglrx driver gives me much better performance (2D and 3D) on my 9600 XT/TV (rv350) card than the open source driver does.

                            But from reading this thread I understand that fglrx will not support my 9600 card anymore and I am forced to switch to the open source driver?

                            Is that correct?



                            • #44
                              Yes. Though I preferred the open driver on my X1950XT anyway. Catalyst was faster, but bugged like hell. With the open driver, everything was working correctly, even though 3D was slower.



                              • #45
                                Originally posted by RealNC View Post
                                (I hope I got the above right :P)
                                You are not far off the mark. The unified architecture was introduced with R600 (the 2000 series). R500 is still old-style vertex & fragment shaders, which is why it can be accelerated using the same code as R300/R400.

                                Other than that, unified shaders means that the drivers and/or hardware can dynamically balance resources according to the utilization. In other words, if your program is 90% vertex shader and 10% fragment shader computation, a unified architecture will be able to allocate the hardware resources to match that. An old-style architecture, like R500, would leave the fragment processing hardware underutilized, in this 90/10 scenario.
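That 90/10 scenario can be put in numbers (the 30/70 hardware split is RealNC's guess from post #42, used here only for illustration):

```python
def fixed_split_throughput(vertex_load, fragment_load,
                           vertex_units=0.30, fragment_units=0.70):
    """Fraction of peak shader throughput a fixed vertex/fragment split
    reaches for a given workload mix; a unified pool can reach 1.0 by
    reallocating units to match the load."""
    # Whichever pool runs out of capacity first limits the whole pipeline.
    return min(vertex_units / vertex_load, fragment_units / fragment_load)

# 90% vertex / 10% fragment workload on an R500-style fixed split:
print(round(fixed_split_throughput(0.90, 0.10), 2))  # 0.33
```

So in this scenario the fixed-split hardware would be stuck at roughly a third of its peak, while a unified R600-class part could use everything.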

                                But from reading this thread I understand that fglrx will not support my 9600 card anymore and I am forced to switch to the open source driver?

                                Is that correct?
                                Yes. You'll be able to use fglrx with the 2.6.28 kernel and XServer 1.5 (and maybe 1.6), but you'll have to change to radeon/radeonhd once new versions arrive.

