AMD's Catalyst Evolution For The Radeon HD 7000 Series

  • #11
    Originally posted by artivision View Post
    Phoronix should never test the official closed drivers again; you look ridiculous doing that. The Radeon 7000 has 4 TFLOPS at 32-bit, while Kepler has 3.5 TFLOPS at 64-bit or 7 TFLOPS at 32-bit (one CUDA core has two ALUs). Kepler is 70+% faster than the Radeon 7000 and twice as fast as Fermi. There is a reason it is only 20% faster than the Radeon 7000 and 40% faster than Fermi in benchmarks: when you have two GPUs you only get +50% performance, and that's not because they don't scale well (that would be idiotic and impossible for stream processing), but because the driver switches into a quality-and-precision mode. The losing company uses this trick against Unigine, for example, to gain more frames while losing quality. The benchmark cannot measure the quality difference, because the benchmark talks to the driver and not directly to the hardware. Nvidia did in the past exactly what AMD does today, when the GTX 7800 (24 pixel processors) was 20% faster than the X1900 (48 pixel processors). Then all the benchmarks changed and the GTX 7950 (2x GTX 7800) was only 20% faster, while Nvidia threatened Microsoft that it would develop its own API because D3D helps Radeon.
    Making up facts as we go along, are we?
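    For context, the headline single-precision numbers fall straight out of shader count x clock x 2 FLOPs per cycle (one fused multiply-add per ALU). A quick sketch, using approximate launch specs for the two flagship parts (the core counts and clocks below are ballpark figures, not measurements):

    # Rough sketch: theoretical peak single-precision throughput.
    # peak = shader_cores * clock * 2 FLOPs (one fused multiply-add per cycle).
    # Specs below are approximate launch figures, used only for illustration.

    def peak_gflops(shader_cores, clock_mhz, flops_per_cycle=2):
        """Theoretical peak in GFLOPS: cores * MHz * FLOPs issued per cycle / 1000."""
        return shader_cores * clock_mhz * flops_per_cycle / 1000.0

    print(peak_gflops(2048, 925))   # Radeon HD 7970: ~3789 GFLOPS, i.e. ~3.8 TFLOPS
    print(peak_gflops(1536, 1006))  # GeForce GTX 680 (Kepler): ~3090 GFLOPS, ~3.1 TFLOPS

    Under those (assumed) specs, neither card lands anywhere near 7 TFLOPS at 32-bit.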



    • #12
      If you have any deeper knowledge than mine, you're welcome to discuss it.



      • #13
        Originally posted by artivision View Post
        If you have any deeper knowledge than mine, you're welcome to discuss it.
        You realize that the number of stream processors doesn't directly equal the final performance, right? That's like saying a 3 GHz CPU will always be the same speed, whether it was made by Intel, AMD, or based on an ARM design.

        And theoretical performance is just that - theoretical. There are all kinds of reasons hardware never reaches those kinds of numbers in practice - the caches might be too small, not enough bandwidth to feed the processors, etc. There are hundreds of possible reasons, and it will be different for each and every design.
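        As a back-of-the-envelope illustration of the bandwidth point, a roofline-style sketch: attainable throughput is capped by the smaller of the compute peak and (arithmetic intensity x memory bandwidth). The figures here are illustrative placeholders, roughly in the range of a 2012 high-end card, not measurements:

        # Roofline-style sketch: achievable FLOPS are limited either by raw
        # compute or by how fast memory can feed the ALUs. Illustrative numbers only.

        def attainable_gflops(peak_gflops, bandwidth_gbs, flops_per_byte):
            """min(compute roof, memory roof); the kernel's arithmetic intensity
            (FLOPs per byte of traffic) decides which limit applies."""
            return min(peak_gflops, bandwidth_gbs * flops_per_byte)

        for intensity in (1, 4, 16):                       # FLOPs per byte moved
            print(intensity, attainable_gflops(3800.0, 264.0, intensity))
        # A low-intensity shader (1 FLOP/byte) tops out near 264 GFLOPS,
        # far below the 3800 GFLOPS theoretical peak.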

        As far as the precision goes - ha, I still remember the 9700 vs. FX days, when NVidia was insisting that 16 bits was all you ever needed, and no one could tell the difference between that and those fancy 24-bit precision ATI cards.

        Re: a grand conspiracy by MS to help AMD and hurt NVidia - uh, OK. Whatever, dude.



        • #14
          Originally posted by Nasenbaer View Post
          WINE is not an option because of the fast pace of development of that project, which may also introduce performance changes due to optimizations in WINE itself rather than in the display driver.
          That is hardly relevant. All the benchmarking would be done with a single Wine version, so any optimisations don't matter: they are either there or not, and the relative performance stays the same.



          • #15
            Originally posted by GreatEmerald View Post
            That is hardly relevant. All the benchmarking would be done with a single Wine version, so any optimisations don't matter: they are either there or not, and the relative performance stays the same.
            Theoretically this would be possible, of course. But long-term comparisons would still be hard, because distributions switch over to new versions, so you would have to compile it yourself. Compiling with the same version would mean that every dependency should also be built with the same GCC version, and so on. OK, you could build one binary and statically link all the other dependencies into that executable, but that would be quite a lot of effort.
            And then the performance comparison between the two companies could be affected by bugs in the D3D-to-OpenGL translation, which one driver may handle better than the other, etc.

            Native benchmarks would be better in my opinion. A benchmark like this one, where you want to see the differences between driver revisions, would still be possible if you use a single WINE version. But for long-term comparisons it's not so good, I think.
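            To make the confound concrete with invented numbers: if the Wine build changes between two driver runs, the measured delta mixes both effects, and only holding Wine fixed separates them. (The FPS values below are made up.)

            # Invented numbers: if the Wine build changes between runs, the
            # observed delta mixes driver and Wine effects.

            fps_old_driver_wine_a = 50.0   # older driver on Wine build A
            fps_new_driver_wine_b = 58.0   # newer driver on Wine build B
            total = fps_new_driver_wine_b / fps_old_driver_wine_a - 1
            print(f"observed: +{total:.0%}")                  # +16%, but from what?

            # Holding Wine fixed attributes the gain correctly:
            fps_new_driver_wine_a = 54.0   # newer driver on the same Wine build A
            driver_only = fps_new_driver_wine_a / fps_old_driver_wine_a - 1
            wine_only = fps_new_driver_wine_b / fps_new_driver_wine_a - 1
            print(f"driver: +{driver_only:.0%}, Wine: +{wine_only:.0%}")   # +8% and +7%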



            • #16
              I still don't see your point. In Ubuntu, for instance, we already have precompiled binaries. If they are not updated during the whole testing period, then the test results are valid, especially in relative terms. And even looking at absolute values, they would represent real-life performance, even if slightly dated. The differences between optimisations for different cards don't matter in that regard either, because that's the performance you get. Tests like that don't say that card X is better than card Y, but rather that card X performs better on Wine than card Y.
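              A trivial sketch of that relative reading, i.e. reporting how cards compare under one pinned Wine build rather than claiming anything absolute (the card labels and FPS numbers are invented):

              # With Wine pinned to one version, only the card/driver differs between
              # runs, so the per-title ratios are meaningful. Values are invented.

              results = {
                  "card_x": {"game_a": 58.0, "game_b": 41.0},
                  "card_y": {"game_a": 47.0, "game_b": 44.0},
              }

              for game in ("game_a", "game_b"):
                  ratio = results["card_x"][game] / results["card_y"][game]
                  print(f"{game}: card_x is {ratio:.2f}x card_y under this Wine build")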



              • #17
                Originally posted by GreatEmerald View Post
                I still don't see your point. In Ubuntu, for instance, we already have precompiled binaries. If they are not updated during the whole testing period, then the test results are valid, especially in relative terms. And even looking at absolute values, they would represent real-life performance, even if slightly dated. The differences between optimisations for different cards don't matter in that regard either, because that's the performance you get. Tests like that don't say that card X is better than card Y, but rather that card X performs better on Wine than card Y.
                Well, OK, you're right. You persuaded me (just like one of those Syndicate agents, LOL) - you're absolutely right that you could freeze updates to see the difference between driver versions. And for real-life tests you could, no, you should use the latest version to get realistic results. Then it's up to the user whether he needs better WoW-on-WINE performance or more FPS in some native application.



                • #18
                  Originally posted by Nasenbaer View Post
                  And then the performance comparison between the two companies could be affected by bugs in the D3D-to-OpenGL translation, which one driver may handle better than the other, etc.

                  Native benchmarks would be better in my opinion. A benchmark like this one, where you want to see the differences between driver revisions, would still be possible if you use a single WINE version. But for long-term comparisons it's not so good, I think.
                  There still exist some demanding Windows OpenGL-based games: Rage (OpenGL only) and Serious Sam 3: BFE (supports both D3D and OpenGL). The second one killed my 6670 (10-20 FPS).



                  • #19
                    Originally posted by kwahoo View Post
                    There still exist some demanding Windows OpenGL-based games: Rage (OpenGL only) and Serious Sam 3: BFE (supports both D3D and OpenGL). The second one killed my 6670 (10-20 FPS).
                    But it would be much better if Carmack gave us a native Linux version of Rage.



                    • #20
                      Originally posted by smitty3268 View Post
                      It's probably much more true on Windows for this card. They have all those per-application optimizations built into their drivers there - Catalyst A.I. swapping out shaders for more optimized versions, etc. - which aren't necessary or present for the simple OSS games Michael tests on Linux. Although they might have some of that for Unigine.
                      Even so, you're looking at maybe a 10% improvement over the lifetime of the card (sometimes you get something like a 15-20% performance improvement in game X at resolution Y with SLI and AA - but that's rare and definitely doesn't change the overall picture). Meanwhile the competition will manage something similar, too. So in the end, the picture you get at launch is pretty telling.
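                      In toy numbers (FPS values invented, only the ratios matter): if both vendors manage roughly the same ~10% over a card's lifetime, the relative picture from launch barely moves.

                      # Toy numbers: comparable lifetime gains on both sides leave the
                      # launch-day ratio essentially unchanged.
                      card_a_launch, card_b_launch = 60.0, 50.0
                      card_a_later = card_a_launch * 1.10    # ~10% from driver updates
                      card_b_later = card_b_launch * 1.10    # the competition keeps pace
                      print(card_a_launch / card_b_launch)   # 1.2 at launch
                      print(card_a_later / card_b_later)     # still 1.2 years later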

