AMD R600/700 2D Performance: Open vs. Closed Drivers

  • #21
    The only thing this article should make us all understand is what a piece of shit fglrx is. Still, thanks to AMD for recognising its inability to deliver a decent Linux driver, which "forced" it to release its hardware specs (had it not, Linux users would slowly have drifted towards NVIDIA or Intel, and AMD knows this).

    And don't believe the bullshit about Linux and its 1% market share. It's not a percentage problem; the problem is word of mouth: if I speak badly about ATI, then you speak badly about it, and soon a lot of people are talking badly about ATI. The end result is that ATI gets branded a bad company, and that is exactly what happened with Windows Vista. Ten people talked Vista down, then everyone did, even people who had never tried it. That forced Microsoft to produce a new OS in record time, even though Vista is far more stable and, believe it or not, statistically more secure than Windows XP.

    Don't believe the bullshit, but still be happy that AMD released its specs, so that next year AMD cards should, in theory, rock on Linux.
    So what is my point? Very simple: be grateful to AMD, but be even more grateful to NVIDIA, because they are the only ones who have delivered a more or less stable and full-featured Linux driver since 2001. We must not forget this. Back in 2004 the only card with working 3D on Linux was NVIDIA, and I will never forget that.
    Last edited by bulletxt; 30 September 2009, 06:55 PM.

    • #22
      Originally posted by drag View Post
      The deal is that if you have a mixture of software rendering and hardware rendering, it means you're doing lots of context switching and moving memory objects back and forth between video RAM and main RAM.
      That's exactly my point, and that's why the missing gradient acceleration might be a problem. Take the NVIDIA example: if you have some gradient rendering in your pipe, everything slows down because pixmaps are constantly copied between the host system and the GPU. It's not the software rendering itself causing most of the slowdown.

      So in synthetic benchmarks like this, where most of the operations run in software, you will actually get impressive performance.. but as soon as you start mixing hardware acceleration features in, performance and efficiency dive.
      Huh? The OSS Radeon drivers have pretty full-featured XRender acceleration, so ideally your whole processing pipe is accelerated. The same goes for NVIDIA, with only a few exceptions. fglrx is completely missing any sensible XRender acceleration, as far as I know.
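
      To make the fallback scenario concrete, here is a minimal XRender sketch; the 256x256 size, the colours and the build line are illustrative only, not taken from the article or the benchmarks. It queues a solid fill (the kind of operation drivers typically accelerate) and then a linear-gradient composite which, on a driver without gradient acceleration, the server has to render on the CPU after migrating the destination pixmap out of video RAM.

      /* Hypothetical example: a short Render pipeline.
       * Build with: cc demo.c -lX11 -lXrender */
      #include <X11/Xlib.h>
      #include <X11/extensions/Xrender.h>

      int main(void)
      {
          Display *dpy = XOpenDisplay(NULL);
          if (!dpy)
              return 1;

          Window root = DefaultRootWindow(dpy);
          XRenderPictFormat *argb = XRenderFindStandardFormat(dpy, PictStandardARGB32);
          if (!argb)
              return 1;

          /* Off-screen surface the driver will normally keep in video RAM. */
          Pixmap pix = XCreatePixmap(dpy, root, 256, 256, 32);
          Picture dst = XRenderCreatePicture(dpy, pix, argb, 0, NULL);

          /* A solid fill: an operation drivers typically accelerate. */
          XRenderColor blue = { 0x0000, 0x0000, 0xffff, 0xffff };
          XRenderFillRectangle(dpy, PictOpSrc, dst, &blue, 0, 0, 256, 256);

          /* The contested case: a two-stop linear gradient source.  A driver
           * without gradient acceleration renders this on the CPU, pulling
           * the destination out of video RAM first. */
          XLinearGradient grad = { { 0, 0 }, { XDoubleToFixed(256), 0 } };
          XFixed stops[2] = { 0, XDoubleToFixed(1.0) };
          XRenderColor colors[2] = { { 0xffff, 0, 0, 0xffff },
                                     { 0, 0xffff, 0, 0xffff } };
          Picture src = XRenderCreateLinearGradient(dpy, &grad, stops, colors, 2);

          /* Composite the gradient over the previously filled pixmap. */
          XRenderComposite(dpy, PictOpOver, src, None, dst,
                           0, 0, 0, 0, 0, 0, 256, 256);

          XRenderFreePicture(dpy, src);
          XRenderFreePicture(dpy, dst);
          XFreePixmap(dpy, pix);
          XCloseDisplay(dpy);
          return 0;
      }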

      • #23
        My God! No wonder they say X.Org is bloated ...!

        Actually, now that I've had a quick look, I notice Gallium is still quite far off. Almost every Gallium feature is a long way from done, and if that is miles away then OSS OpenCL will take absolutely ages, at least three years I'd think.
        Last edited by b15hop; 30 September 2009, 08:03 PM.

        • #24
          Originally posted by greg View Post
          That's exactly my point, and that's why the missing gradient acceleration might be a problem. Take the NVIDIA example: if you have some gradient rendering in your pipe, everything slows down because pixmaps are constantly copied between the host system and the GPU. It's not the software rendering itself causing most of the slowdown.

          Huh? The OSS Radeon drivers have pretty full-featured XRender acceleration, so ideally your whole processing pipe is accelerated. The same goes for NVIDIA, with only a few exceptions. fglrx is completely missing any sensible XRender acceleration, as far as I know.
          I wonder if the gradient stuff can be done in hardware? I'd think it could, since GPUs these days are very capable. The constant switching might be the cause of the slowdown, but that just means picking a side is important: either go all out with hardware or all out with software. Though I still prefer hardware over software, especially going by NVIDIA's marketing:

          YouTube - CPU vs. GPU (funny myth busters video)

          • #25
            Yes, sure, gradient rendering can be accelerated. In the worst case the fixed-function hardware doesn't support it and you need to use shaders, but that isn't a problem; it just means more work.
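
            As a rough illustration of why this is cheap work for a GPU, here is a small CPU-side sketch of what a two-stop linear gradient costs per pixel; the function name, the 256x256 size and the red-to-green stops are made up for the example. Each pixel is projected onto the gradient axis and the result interpolated between the stops, a handful of multiply-adds that a fragment shader handles trivially.

            /* Hypothetical example: compute a red-to-green linear gradient on
             * the CPU, pixel by pixel, the same math a shader would run per
             * fragment. */
            #include <stdint.h>
            #include <stdio.h>

            /* Colour of one pixel of a gradient running from (x1,y1) red to (x2,y2) green. */
            static uint32_t gradient_pixel(float x, float y,
                                           float x1, float y1, float x2, float y2)
            {
                float dx = x2 - x1, dy = y2 - y1;
                float t = ((x - x1) * dx + (y - y1) * dy) / (dx * dx + dy * dy);
                if (t < 0.0f) t = 0.0f;
                if (t > 1.0f) t = 1.0f;

                uint8_t r = (uint8_t)((1.0f - t) * 255.0f);   /* stop 0: red   */
                uint8_t g = (uint8_t)(t * 255.0f);            /* stop 1: green */
                return 0xff000000u | ((uint32_t)r << 16) | ((uint32_t)g << 8);
            }

            int main(void)
            {
                /* Fill a small ARGB buffer the way a shader would, one pixel at a time. */
                static uint32_t image[256 * 256];
                for (int y = 0; y < 256; y++)
                    for (int x = 0; x < 256; x++)
                        image[y * 256 + x] =
                            gradient_pixel((float)x, (float)y, 0.0f, 0.0f, 255.0f, 0.0f);

                printf("corner colours: 0x%08x .. 0x%08x\n",
                       (unsigned)image[0], (unsigned)image[255]);
                return 0;
            }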

            • #26
              Originally posted by greg View Post
              Yes, sure, gradient rendering can be accelerated. In the worst case the fixed-function hardware doesn't support it and you need to use shaders, but that isn't a problem; it just means more work.
              God, if shaders were used to colour gradients, they could potentially be rendered so fast it would be off the charts. Hmm =) Then again, less is more.. I think as long as it's done simply in hardware it should be fine.

              • #27
                @chrisr

                My card is just a 3450, which is definitely very slow with OpenGL. But all it needs to be capable of is playing movies and surfing the web.

                • #28
                  Originally posted by Kano View Post
                  @chrisr

                  My card is just a 3450, which is definitely very slow with OpenGL. But all it needs to be capable of is playing movies and surfing the web.
                  That's the problem, Kano, I wanted more than that. I wanted OpenGL 3D as well, not just fast 2D. If you use that same 3450 in Windows XP you will realise that it's not really that slow; it's about as fast as a Radeon 9600 Pro for 2D and movie stuff. In fact, with good drivers it should be faster.

                  • #29
                    Originally posted by bulletxt View Post
                    The only thing this article should make us all understand...
                    Yes, everyone knows that AMD is now slowly starting to catch up and is using the community to help it do so, but thanks for the reminder.

                    It's really good news to hear the OSS drivers are doing so well, which is something you currently cannot say about NVIDIA, though yes, yes, NVIDIA takes the closed-source cake. Ultimately the goal is for OSS to surpass NVIDIA's closed-source quality, and in many ways it already has thanks to the nature of OSS. (Like possibly not being able to implement a non-NVIDIA GUI front-end for their proprietary driver? Maybe, maybe not, but regardless OSS gives devs and users a lot more freedom.)
                    Last edited by Yfrwlf; 01 October 2009, 01:24 AM.

                    • #30
                      There is an additional reason to use the fglrx driver instead of the OSS one: power management. While this might not be a huge problem on desktops, on notebooks it is one of fglrx's main selling points. With the OSS driver my laptop gets extremely hot and draws a lot of power. So if you don't want to burn your fingers (or sweat a lot while typing) and want to run your notebook on battery, you are stuck with the fglrx driver at the moment.
