AMD R600/700 2D Performance: Open vs. Closed Drivers


  • #16
    This brings me to a question.
    It seems that R300-series cards and older have good 2D and 3D OSS support. Maybe we need an overall chart / database of what level of support vs. performance each graphics card has.

    E.g. R300 **** four-star OSS
    R700/800 *** three-star OSS?

    That's my 5 cents.. =)

    • #17
      Originally posted by b15hop View Post
      This brings me to a question.
      It seems that R300-series cards and older have good 2D and 3D OSS support. Maybe we need an overall chart / database of what level of support vs. performance each graphics card has.

      E.g. R300 **** four-star OSS
      R700/800 *** three-star OSS?

      That's my 5 cents.. =)
      http://wiki.x.org/wiki/RadeonFeature
      http://wiki.x.org/wiki/RadeonProgram

      Unfortunately server is down today.

      • #18
        I think the OSS performance is pretty impressive considering how young the driver is. I don't think they have even done any optimizing yet?

        • #19
          I'm interested in whether the open-source drivers downclock the GPU properly, as they should, to save power?

          • #20
            Originally posted by greg View Post
            Well, it's a bit strange. In synthetic benchmarks (like done by Phoronix), the NVidia drivers perform extremely well. A few quick benches (w/ JXRenderMark) show that XRender performance is a lot better (than a Radeon 3850 w/ OSS drivers) on a GeForce 8600 GT despite the hardware being a lot slower.

            Just a few XRender operations are slow on NVidia; gradients aren't accelerated, for example. Maybe KDE depends on these operations being fast?
            The software renderer for X.org is going to be very efficient.

            "Hardware acceleration" != Faster

            I don't know the particulars of this driver, but it is common to run into OSS drivers that perform well in these benchmarks yet are much slower on real-world desktops.

            The deal is that if you have a mixture of software rendering and hardware rendering, it means you're doing lots of context switching and moving memory objects back and forth between video RAM and main RAM.

            So in synthetic benchmarks like this, where most of the operations run in software, you will actually get impressive performance.. but as soon as you start mixing hardware-accelerated features in, performance and efficiency dive.

            So as a result, as OSS drivers improve, performance on these benchmarks will actually likely _go_down_ as they move from mixed software/hardware rendering to performing all the operations on the GPU, which in many cases is slower than running them on the CPU.

            • #21
              The only thing this article should make us all understand is what a piece of shit fglrx is. But still, thanks AMD for recognizing your inability to deliver a Linux driver, which "forced" you to release your hardware specs (if you hadn't, all Linux users would slowly have moved to NVIDIA or Intel, and you know this).

              And don't believe the bullshit about Linux and its 1% market share. It's not a percentage problem; the problem is that if I speak badly about ATI, then you speak badly about it -> a lot of people will start talking badly about ATI. The final consequence is: ATI is a bad company. This is exactly what happened with Windows Vista. 10 people talked badly about Vista, after that everyone talked badly about it, even people who had never tried it. This forced Microsoft to build a new OS in record time, even though Vista is infinitely more stable and, believe it or not, statistically more secure than Windows XP.

              Don't believe the bullshit, people, but still be happy that AMD released specs, so next year AMD cards in theory will rock on Linux.
              So what is my point? Very simple. Be grateful to AMD, but be more grateful to NVIDIA, because they are the only ones who have delivered a more or less stable and full-featured driver for Linux since 2001. We must not forget this. Back in 2004 the only card with working 3D on Linux was NVIDIA; I will never forget that.
              Last edited by bulletxt; 09-30-2009, 06:55 PM.

              • #22
                Originally posted by drag View Post
                The deal is that if you have a mixture of software rendering and hardware rendering, it means you're doing lots of context switching and moving memory objects back and forth between video RAM and main RAM.
                That's exactly my point, and that's why the missing gradient acceleration might be a problem. With the NVidia example, if you have some gradient rendering in your pipeline, everything slows down because pixmaps are constantly copied between the host system and the GPU. It's not the software rendering itself causing most of the slowdown.

                So in synthetic benchmarks like this, where most of the operations run in software, you will actually get impressive performance.. but as soon as you start mixing hardware-accelerated features in, performance and efficiency dive.
                Huh? The OSS Radeon drivers have pretty full-featured XRender acceleration, so ideally your whole processing pipeline is accelerated. Same for NVidia, with a few exceptions as well. As far as I know, fglrx is completely missing any sensible XRender acceleration.

                • #23
                  Originally posted by Zajec View Post
                  My God! No wonder they say X.Org is bloated ...!

                  Actually, now that I've had a quick look, I notice Gallium is looking quite far away. Almost every Gallium feature is a long way off. So if that is miles away, then OSS OpenCL will be absolutely ages away. Thinking at least 3 years away..
                  Last edited by b15hop; 09-30-2009, 08:03 PM.

                  • #24
                    Originally posted by greg View Post
                    That's exactly my point, and that's why the missing gradient acceleration might be a problem. With the NVidia example, if you have some gradient rendering in your pipeline, everything slows down because pixmaps are constantly copied between the host system and the GPU. It's not the software rendering itself causing most of the slowdown.

                    Huh? The OSS Radeon drivers have pretty full-featured XRender acceleration, so ideally your whole processing pipeline is accelerated. Same for NVidia, with a few exceptions as well. As far as I know, fglrx is completely missing any sensible XRender acceleration.
                    I wonder if the gradient stuff can be done in hardware? I'm thinking it could well be, since GPUs these days are very capable. The constant switching might be the cause of this, but that just means picking a side matters: either go all out with hardware or all out with software. Though I still prefer hardware over software, especially according to NVIDIA's marketing:

                    YouTube - CPU vs. GPU (funny myth busters video)

                    • #25
                      Yes, sure, gradient rendering can be accelerated. In the worst case the fixed-function hardware doesn't support it and you need to use shaders, but that isn't a problem; it just means more work.

                      • #26
                        Originally posted by greg View Post
                        Yes, sure, gradient rendering can be accelerated. In the worst case the fixed-function hardware doesn't support it and you need to use shaders, but that isn't a problem; it just means more work.
                        God, if shaders were used to colour gradients, they could potentially be rendered so fast it would be off the charts. Hmm =) Though, in saying that, less is more.. I think as long as it's done simply in hardware it should be fine.

                        • #27
                          @chrisr

                          My card is just a 3450, which is definitely very slow with OpenGL. But the only things it needs to be capable of are playing movies and surfing the web.

                          • #28
                            Originally posted by Kano View Post
                            @chrisr

                            My card is just a 3450, which is definitely very slow with OpenGL. But the only things it needs to be capable of are playing movies and surfing the web.
                            That's the problem, Kano: I wanted more than that. I wanted OpenGL 3D as well, not just fast 2D. If you use that same 3450 in Windows XP you will realise that it's not really that slow; it's about as fast as a Radeon 9600 Pro for 2D and movie stuff. In fact, with good drivers it should be faster.

                            • #29
                              Originally posted by bulletxt View Post
                              The only thing this article should make us all understand...
                              Yes, everyone knows that AMD is now slowly starting to catch up and is using the community to help it do so, but thanks for the reminder.

                              It's really good news to hear the OSS effort is doing so well, which is something you currently cannot say about NVIDIA, but yes, yes, NVIDIA takes the closed-source cake. Ultimately the goal is for OSS to surpass NVIDIA's closed-source quality, and in many ways it already has, due to the nature of OSS. (Like possibly not being able to implement a non-NVIDIA GUI front-end for their proprietary driver? Maybe, maybe not, but regardless OSS gives devs and users a lot more freedom.)
                              Last edited by Yfrwlf; 10-01-2009, 01:24 AM.

                              • #30
                                There is an additional reason to use the fglrx driver instead of the OSS ones: power management. While this might not be a huge problem on desktops, on notebooks it is one of fglrx's main features. With the OSS driver my laptop gets extremely hot and uses tons of power. So if you don't want to burn your fingers (or don't like sweating a lot while typing) and want to run your notebook on battery, you are stuck with the fglrx driver at the moment.
