AMD's Catalyst Evolution For The Radeon HD 7000 Series


  • #16
    I still don't see your point. In Ubuntu, for instance, we already have precompiled binaries. If they are not updated during the whole testing period, then the test results are valid, especially in relative terms. And even looking at absolute values, they would represent real-life performance, even if slightly dated. The differences between optimisations for different cards don't matter in that regard either, because that's the performance you get. Tests like that don't say that card X is better than card Y, but rather that card X performs better on Wine than card Y.



    • #17
      Originally posted by GreatEmerald View Post
      I still don't see your point. In Ubuntu, for instance, we already have precompiled binaries. If they are not updated during the whole testing period, then the test results are valid, especially in relative terms. And even looking at absolute values, they would represent real-life performance, even if slightly dated. The differences between optimisations for different cards don't matter in that regard either, because that's the performance you get. Tests like that don't say that card X is better than card Y, but rather that card X performs better on Wine than card Y.
      Well, OK, you're right. You persuaded me (just like one of those Syndicate agents, LOL), and you're absolutely right that you could freeze updates to see the difference between driver versions. For real-life tests you could, no, you should use the latest version to get realistic results. Then it's up to the user whether he needs better WoW-on-Wine performance or more FPS in some native application.
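
      On Ubuntu, one way to freeze the driver for the whole testing period is to put the package on hold; a minimal sketch, assuming the proprietary driver is installed as the fglrx package (the exact package name varies by release):
      Code:
      # keep the driver at its current version while the benchmark run is in progress
      sudo apt-mark hold fglrx
      # ...run the tests over the following weeks...
      # then return to normal updates
      sudo apt-mark unhold fglrx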



      • #18
        Originally posted by Nasenbaer View Post
        And then the performance comparison between the two companies could be affected by bugs in the D3D<->OpenGL translation, which one driver may handle better than the other, etc.

        Native benchmarks would be better in my opinion. But a benchmark like this one, where you want to see the differences between driver revisions, would still be possible if you use a single Wine version. For long-term comparisons it's not so good, I think.
        There still exist some demanding Windows OpenGL-based games: Rage (OpenGL only) and Serious Sam 3: BFE (supports both D3D and OpenGL). The second one killed my 6670 (10-20 FPS).



        • #19
          Originally posted by kwahoo View Post
          There still exist some demanding Windows OpenGL-based games: Rage (OpenGL only) and Serious Sam 3: BFE (supports both D3D and OpenGL). The second one killed my 6670 (10-20 FPS).
          But it would be much better if Carmack gave us a native Linux version of Rage.



          • #20
            Originally posted by smitty3268 View Post
            It's probably much more true on Windows for this card. They have all those per-application optimizations built into their drivers there, with Catalyst A.I. swapping out shaders for more optimized versions and so on, which aren't necessary or present for the simple OSS games Michael tests on Linux. Although they might have some of that for Unigine.
            Even so, you're looking at maybe a 10% improvement over the lifetime of the card (sometimes you get something like a 15-20% performance improvement in game X at resolution Y with SLI and AA, but that's rare and definitely doesn't change the overall picture). Meanwhile the competition will manage something similar, too. So in the end, the picture you get at launch is pretty telling.



            • #21
              @kwahoo

              I would not say that Rage is that demanding. I played it with my old NV 8800 GTS 512 at 1920x1200 without AA using Wine and it was fast enough. Games that got console ports are often NOT as demanding as PC-only games. Of course, if you want to enable CUDA mode you need to compile the needed Wine library as well; I did not do that for this game. You cannot play it with the Intel OSS drivers, however; Windows with an HD 4000 would work.

              fglrx sucked badly with Wine. I did not check it with that game, but fglrx has got some weird Wine "optimisations". When you compare glxinfo -l with the output you get after copying the binary to one named wine (basically any name with a w in front had this effect for me), the reported vendor switches from AMD to ATI. A few limits also report lower numbers, most likely due to a bug. Feel free to compare yourself; the diff in the bug report does not show the AMD -> ATI change.

              http://ati.cchtml.com/show_bug.cgi?id=528

              Maybe somebody wants to rename the Wine BINARY to zine or something similar. It does not apply to scripts.
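
              A quick way to check whether the quirk really keys on the leading "w" rather than on the exact name wine; a rough sketch, where the copy names wfoo and zine are just made-up examples:
              Code:
              cd /tmp
              cp /usr/bin/glxinfo wfoo    # name starting with "w"
              cp /usr/bin/glxinfo zine    # control name without "w"
              ./wfoo | grep "OpenGL vendor"    # per Kano's report, the vendor flips to ATI here
              ./zine | grep "OpenGL vendor"    # should still show the normal AMD vendor string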



              • #22
                Well Said

                Originally posted by Nasenbaer View Post
                Maybe because the tested games are faaaar from demanding for this graphics card.

                I would highly suggest rethinking this kind of test. >400 FPS, what on earth does such a result tell us? Nothing!
                Well said, lol.

                Thanks for the post.

                Be real, be sober.



                • #23
                  @artivision

                  I think I see your point, but if the tests are run at a high enough resolution the slower card will fail, yes? No?

                  I think D3D is a huge conflict of interest with the D3D-based console game sales.

                  Thanks for the post.

                  A contribution share ratio gives context to the contribution and credits the contributor.

                  Frustration makes the wine taste sweetest, the truth most bitter.
                  Last edited by WSmart; 07-10-2012, 06:20 AM. Reason: Reply didn't appear to have an address; added.



                  • #24
                    Originally posted by Kano View Post
                    @kwahoo

                    I would not say that Rage is that demanding. I played it with my old NV 8800 GTS 512 at 1920x1200 without AA using Wine and it was fast enough. Games that got console ports are often NOT as demanding as PC-only games.
                    Carmack claims Rage is not a console port. A manually tweaked Rage should be more demanding.
                    Last edited by kwahoo; 07-10-2012, 06:40 AM. Reason: I missed a point



                    • #25
                      Originally posted by kwahoo View Post
                      Carmack claims Rage is not a console port. A manually tweaked Rage should be more demanding.
                      One could argue it's not a game either, it's a tech demo. Good enough for a benchmark, though.



                      • #26
                        Originally posted by smitty3268 View Post
                        You realize that the # of stream processors doesn't directly equal the final performance, right? That's like saying a 3Ghz CPU will always be the same speed, whether it was made by Intel, AMD, or based off an ARM design.

                        And theoretical performance is just that - theoretical. There are all kinds of reasons hardware never reaches those kinds of numbers in practice - the caches might be too small, not enough bandwidth to feed the processors, etc. There are hundreds of possible reasons, and it will be different for each and every design.

                        As far as the precision goes - ha, I still remember the 9700 vs FX days, when NVidia was insisting that 16 bits was all you ever needed, and no one could tell the difference between that and those fancy 24-bit precision ATI cards.

                        Re: a grand conspiracy by MS to help AMD and hurt NVidia - uh, ok. whatever dude.
                        I hope you realize that the kind of execution unit does not matter for stream processing; only the bit rate matters. Kepler does 3.5 TFLOPS at 64-bit FMAC or 7 TFLOPS at 32-bit FMAC, while the Radeon 7000 does 4 TFLOPS at 32-bit FMAC. Second, there is no way for someone to produce a GPU that cannot fill its shaders because of, say, memory limits; if there were, where is the proof? (Why would the card still reach its maximum thermals and utilisation?) As I said before, that is impossible for stream processing. Kepler is 70+% faster than the Radeon 7000 and twice as efficient, and it is also twice as fast as Fermi. In benchmarks it is only 20% faster than the Radeon 7000 and 40% faster than Fermi, because the driver switches into a quality and precision mode. That is also why, with two GPUs, you only get +50% framerate. Today's benchmarks can't measure quality and precision, and they can't force the hardware to a specific precision, because they talk to the driver and not directly to the hardware.
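
                        For reference, theoretical single-precision peaks are usually estimated as shader count x clock x 2 FLOPs per FMA. A rough sketch using the commonly published reference clocks (treat the exact figures as assumptions, since board clocks vary):
                        Code:
                        # peak SP GFLOPS ~= shaders * clock (GHz) * 2 (one FMA counts as 2 FLOPs)
                        echo "2048 * 0.925 * 2" | bc -l    # Radeon HD 7970 -> ~3789 GFLOPS
                        echo "1536 * 1.006 * 2" | bc -l    # GeForce GTX 680 -> ~3090 GFLOPS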



                        • #27
                          Originally posted by Kano View Post
                          but fglrx has got some weird Wine "optimisations".
                          And you made sure to turn off Catalyst AI?



                          • #28
                            Try it yourself.



                            • #29
                              Originally posted by Kano View Post
                              Try it yourself.
                              Renaming the executable? I'm pretty sure that's not healthy in a package-based environment...



                              • #30
                                You're overcomplicating it.
                                Code:
                                # copy glxinfo to a file named "wine" and compare what fglrx reports
                                mkdir -p /tmp/fglrx-test
                                cd /tmp/fglrx-test
                                cp /usr/bin/glxinfo wine
                                ./wine -l > w.l      # limits as reported to a binary called "wine"
                                glxinfo -l > g.l     # limits as reported to the normal binary
                                diff g.l w.l         # any difference comes from the name-based detection

