AMD's Catalyst Evolution For The Radeon HD 7000 Series


  • AMD's Catalyst Evolution For The Radeon HD 7000 Series

    Phoronix: AMD's Catalyst Evolution For The Radeon HD 7000 Series

    It used to be -- at least when using the Windows Catalyst drivers -- that within the first few months of AMD releasing new Radeon graphics hardware, Catalyst driver optimizations would deliver measurable improvements in that short span. For the Radeon HD 7000 series, which is built upon the entirely new GCN architecture, is this still the case? Here are benchmarks of all the AMD Catalyst Linux drivers released this year, run on an AMD Radeon HD 7950 graphics card.

    http://www.phoronix.com/vr.php?view=17592
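
    For anyone wanting to reproduce this kind of driver-revision comparison, a rough sketch of the workflow with the Phoronix Test Suite might look like the code below. The test profiles named and the glxinfo-based result tagging are illustrative assumptions, not the article's actual setup.

    # Hypothetical sketch: one batch-benchmark pass per installed Catalyst release,
    # with the results tagged by the driver version glxinfo reports.
    import os
    import subprocess

    def gl_driver_version():
        """Read the reported OpenGL/driver version string from glxinfo (needs mesa-utils)."""
        out = subprocess.run(["glxinfo"], capture_output=True, text=True).stdout
        for line in out.splitlines():
            if "OpenGL version string" in line:
                return line.split(":", 1)[1].strip()
        return "unknown"

    def run_pass(tests, result_name):
        """Run the given PTS test profiles in batch mode, tagged with the driver version."""
        env = dict(os.environ,
                   TEST_RESULTS_NAME=result_name,
                   TEST_RESULTS_IDENTIFIER=gl_driver_version())
        subprocess.run(["phoronix-test-suite", "batch-benchmark", *tests],
                       env=env, check=True)

    if __name__ == "__main__":
        # Profile names are illustrative; any GPU-bound profiles will do.
        run_pass(["pts/unigine-heaven", "pts/nexuiz"], "catalyst-evolution-hd7950")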

  • #2
    This is why I hate those "the drivers are not mature yet" guys. You can actually judge a card based on its initial showing: that's pretty much the performance you'll get. If there are serious bugs (like in this review), they stand out by themselves.



    • #3
      Where can we find fglrx 9.0.0?



      • #4
        I might be completely wrong here, but wasn't it said that GCN is much easier to target with shader compiler optimizations than previous architectures?
        If so, that may explain why the performance figures do not differ significantly between driver revisions.



        • #5
          Originally posted by entropy View Post
          I might be completely wrong here, but wasn't it said that GCN is much easier to target with shader compiler optimizations than previous architectures?
          If so, that may explain why the performance figures do not differ significantly between driver revisions.
          Maybe it's because the tested games are far from demanding for this graphics card.

          I would strongly suggest rethinking this kind of test. What does a result of >400 fps tell us? Nothing! I know there aren't many demanding current games out there, so at the very least quality-enhancing features should be enabled by default for such tests -- I mean supersampling anti-aliasing and the like. The cards must be pushed to their capacity! In general, it would be worth investigating whether more demanding OpenGL benchmarks are available for Linux. WINE is not an option because of this project's fast pace of development, which may also bring performance changes due to optimizations in WINE itself rather than in the display driver. Overall it's not easy, but testing an eight-year-old game like Doom 3 doesn't tell us anything. At least there are very demanding graphics mods for Doom 3 which may be compatible with the Linux version; that would be an option. (A quick frame-time calculation below shows why those huge fps numbers say so little.)

          Here is a link to an interesting Doom 3 Mod which enhances the graphics: http://www.moddb.com/mods/cverdzislav
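
          To put some numbers behind the >400 fps complaint, here is a quick frame-time calculation (the fps figures are made up purely for illustration): the same 10% driver-side gain that is worth over 2 ms per frame at 40 fps shrinks to about a quarter of a millisecond at 400 fps, which disappears into CPU and measurement noise.

          # Why huge fps numbers hide driver improvements: the fps values are invented,
          # only the arithmetic matters.
          def frame_time_ms(fps):
              """Frame time in milliseconds for a given frame rate."""
              return 1000.0 / fps

          for base_fps in (40, 400):
              faster_fps = base_fps * 1.10  # a 10% driver-side speedup
              saved = frame_time_ms(base_fps) - frame_time_ms(faster_fps)
              print(f"{base_fps:>4} fps -> {faster_fps:.0f} fps saves {saved:.2f} ms per frame")

          # Prints:
          #   40 fps -> 44 fps saves 2.27 ms per frame
          #  400 fps -> 440 fps saves 0.23 ms per frame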



          • #6
            Originally posted by bug77 View Post
            This is why I hate those "the drivers are not mature yet" guys. You can actually judge a card based on its initial showing: that's pretty much the performance you'll get. If there are serious bugs (like in this review), they stand out by themselves.
            It's probably much more true on Windows for this card. There they have all those per-application optimizations built into their drivers -- Catalyst A.I. swapping out shaders for more optimized versions, and so on -- which aren't necessary or present for the simple OSS games Michael tests on Linux. Although they might have some of that for Unigine.



            • #7
              Maybe Unigine Heaven would be a good test?



              • #8
                Has the OpenCL performance changed much on these cards?



                • #9
                  Oh yeah, Unigine Heaven is available for Linux and seems to be a good choice. @Michael: please include this in your next benchmarks.



                  • #10
                    Phoronix must never test the official closed drivers again; you seem ridiculous doing that. The Radeon 7000 has 4 TFLOPS at 32-bit, while Kepler has 3.5 TFLOPS at 64-bit or 7 TFLOPS at 32-bit (one CUDA core has 2 ALUs). Kepler is 70+% faster than the Radeon 7000 and twice as fast as Fermi. There is a reason why in benchmarks it is only 20% faster than the Radeon 7000 and 40% faster than Fermi: when you have 2 GPUs you only get +50% performance, and that's not because they don't scale well (that's idiotic and impossible for stream processing), but because the driver switches into a quality and precision mode. The losing company uses this trick against Unigine, for example, to gain more frames while losing quality. The benchmark cannot measure the quality difference, because it talks to the driver and not directly to the hardware. Nvidia did in the past exactly what AMD does today, back when the GTX 7800 (24 pixel processors) was 20% faster than the X1900 (48 pixel processors). Then all the benchmarks changed and the GTX 7950 (2x GTX 7800) was 20% faster, while Nvidia threatened Microsoft that they would develop their own API because D3D helps Radeon.



                    • #11
                      Originally posted by artivision View Post
                      Phoronix must never test the official closed drivers again; you seem ridiculous doing that. The Radeon 7000 has 4 TFLOPS at 32-bit, while Kepler has 3.5 TFLOPS at 64-bit or 7 TFLOPS at 32-bit (one CUDA core has 2 ALUs). Kepler is 70+% faster than the Radeon 7000 and twice as fast as Fermi. There is a reason why in benchmarks it is only 20% faster than the Radeon 7000 and 40% faster than Fermi: when you have 2 GPUs you only get +50% performance, and that's not because they don't scale well (that's idiotic and impossible for stream processing), but because the driver switches into a quality and precision mode. The losing company uses this trick against Unigine, for example, to gain more frames while losing quality. The benchmark cannot measure the quality difference, because it talks to the driver and not directly to the hardware. Nvidia did in the past exactly what AMD does today, back when the GTX 7800 (24 pixel processors) was 20% faster than the X1900 (48 pixel processors). Then all the benchmarks changed and the GTX 7950 (2x GTX 7800) was 20% faster, while Nvidia threatened Microsoft that they would develop their own API because D3D helps Radeon.
                      Making up facts as we go along, are we?



                      • #12
                        If you have any deeper knowledge than mine, you're welcome to discuss it.



                        • #13
                          Originally posted by artivision View Post
                          If you have any deeper knowledge than mine, you're welcome to discuss it.
                          You realize that the number of stream processors doesn't directly equal the final performance, right? That's like saying a 3 GHz CPU will always be the same speed, whether it was made by Intel, AMD, or based on an ARM design.

                          And theoretical performance is just that - theoretical. There are all kinds of reasons hardware never reaches those kinds of numbers in practice - the caches might be too small, not enough bandwidth to feed the processors, etc. There are hundreds of possible reasons, and it will be different for each and every design.

                          As far as the precision goes -- ha, I still remember the 9700 vs. FX days, when NVidia was insisting that 16 bits was all you ever needed and no one could tell the difference between that and those fancy 24-bit precision ATI cards.

                          Re: a grand conspiracy by MS to help AMD and hurt NVidia -- uh, OK. Whatever, dude.
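
                          To make the "theoretical is just theoretical" point concrete, peak single-precision throughput is usually quoted as shader count × clock × 2 (one fused multiply-add per ALU per cycle). A rough sketch with approximate launch specs -- the numbers are ballpark figures from memory, not an authoritative spec sheet:

                          # Peak single-precision GFLOPS ~ shaders * clock (GHz) * 2 (one FMA per ALU per clock).
                          # The specs below are approximate launch figures, only meant as ballpark values.
                          def peak_sp_gflops(shaders, clock_ghz):
                              return shaders * clock_ghz * 2

                          cards = {
                              "Radeon HD 7950 (1792 SPs @ 0.800 GHz)": (1792, 0.800),
                              "Radeon HD 7970 (2048 SPs @ 0.925 GHz)": (2048, 0.925),
                              "GeForce GTX 680 (1536 cores @ ~1.006 GHz)": (1536, 1.006),
                          }

                          for name, (shaders, clock) in cards.items():
                              print(f"{name}: ~{peak_sp_gflops(shaders, clock):.0f} GFLOPS peak")

                          # No card reaches its peak in real workloads: memory bandwidth, cache sizes,
                          # occupancy and the shader compiler all get in the way.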



                          • #14
                            Originally posted by Nasenbaer View Post
                            WINE is not an option because of this project's fast pace of development, which may also bring performance changes due to optimizations in WINE itself rather than in the display driver.
                            That is hardly relevant. All the benchmarking would be done with a single Wine version, so any optimisations don't matter - they are either there or not there, the relative performance is the same.



                            • #15
                              Originally posted by GreatEmerald View Post
                              That is hardly relevant. All the benchmarking would be done with a single Wine version, so any optimisations don't matter - they are either there or not there, the relative performance is the same.
                              Theoretically this would be possible, of course. But long-term comparisons would still be hard, because distributions switch over to new versions, so you would have to compile Wine on your own. Compiling the same version consistently would mean that every dependency also has to be built with the same GCC version, and so on. OK, you could build one binary with all the other dependencies statically linked into the executable, but that would be quite a lot of effort.
                              And then the performance gap between the two companies could be affected by bugs in the D3D<->OpenGL translation which one driver handles better than the other, etc.

                              Native benchmarks would be better in my opinion. A benchmark like this one, where you want to see the differences between driver revisions, would still be possible if you use a single WINE version. But for long-term comparisons it's not so good, I think.
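
                              One low-effort way to keep such long-term numbers comparable would be to record the exact Wine and GCC version strings next to every result. A minimal sketch follows; the commands queried and the JSON layout are just my assumptions:

                              # Minimal sketch: store the toolchain versions next to each benchmark result,
                              # so later runs can be compared -- or ruled out -- on equal terms.
                              import json
                              import subprocess

                              def first_line(cmd):
                                  """First line of a command's output, or 'unavailable' if it cannot run."""
                                  try:
                                      out = subprocess.run(cmd, capture_output=True, text=True).stdout
                                      return out.splitlines()[0].strip() if out else "unavailable"
                                  except FileNotFoundError:
                                      return "unavailable"

                              record = {
                                  "wine": first_line(["wine", "--version"]),  # e.g. "wine-1.5.9"
                                  "gcc": first_line(["gcc", "--version"]),    # toolchain used to build Wine
                                  "result_fps": 123.4,                        # placeholder for the measured value
                              }
                              # The driver's OpenGL version string (from glxinfo) could be recorded here too.

                              with open("benchmark-environment.json", "w") as fh:
                                  json.dump(record, fh, indent=2)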

