AMD Radeon HD 6000 Series Open-Source Driver Becomes More Competitive


  • #11
    Originally posted by krasnoglaz View Post
    I don't understand why test target for drivers are decade old shaderless games or opensource relatively light games like Xonotic. Why not Team Fortress 2 and Dota 2?
    See: http://www.phoronix.com/scan.php?pag...tem&px=MTQxMzY
    Michael Larabel
    https://www.michaellarabel.com/

    Comment


    • #12
      Originally posted by krasnoglaz View Post
      I don't understand why test target for drivers are decade old shaderless games or opensource relatively light games like Xonotic. Why not Team Fortress 2 and Dota 2?
      Yes please, absolutely no reason not to try the source benchmark imo.

      Comment


      • #13
        more Team Fortress 2 benchmarks please

        Comment


        • #14
          Very interesting article. Would be very interested to see how the HD4000 (r600) compares vs. Catalyst. Lots of people are still running HD3000/4000 hardware.

          Comment


          • #15
            Originally posted by verde View Post
            more Team Fortress 2 benchmarks please
            See: http://www.phoronix.com/scan.php?pag...tem&px=MTQxMzY
            Michael Larabel
            https://www.michaellarabel.com/

            Comment


            • #16
              I can't believe you seriously had to repeat yourself on the same page of this thread... shows how little attention people really pay.


              Anyways, it's pretty exciting to see these test results. I find it interesting how, in terms of GPU performance, it forms a sort of sine wave, where the very low-end cards and the very high-end cards perform the worst. I get the impression the devs focus the most on the mainstream GPUs, since the low-end GPUs aren't good for gaming and, if you want your money's worth for the high-end parts, you're better off using Catalyst.

              Comment


              • #17
                Originally posted by Michael View Post
                No, tests are always done with it disabled, as can be seen from the system logs.
                Huh, then the triangle test result is a bit odd. I thought disabling it improved the results in that test a lot?

                Comment


                • #18
                  Since Michael won't provide what everyone wants to see, can somebody here on the forums run the tests on Steam or WINE apps?

                  Also, YNOR600SB?

                  Comment


                  • #19
                    Originally posted by schmidtbag View Post
                    Anyways, it's pretty exciting to see these test results. I find it interesting how, in terms of GPU performance, it forms a sort of sine wave, where the very low-end cards and the very high-end cards perform the worst. I get the impression the devs focus the most on the mainstream GPUs, since the low-end GPUs aren't good for gaming and, if you want your money's worth for the high-end parts, you're better off using Catalyst.
                    It's not so much about focusing on mid-range GPUs, it's just that the mid-range GPUs have the least need for hand-tweaking optimization.

                    Low end parts tend to run into memory bandwidth and "tiny shader core" bottlenecks (requiring a lot of complex heuristics), high end parts are so fast that they often get CPU limited before they get GPU limited (requiring a lot of tuning to reduce CPU overhead in the driver), while midrange parts tend to be more balanced and less likely to get badly bottlenecked in a single area.
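                    [Editor's note] The CPU-limited point above can be illustrated with a toy model (my sketch, not from the post): if per-frame time is roughly the maximum of the CPU-side driver work and the GPU-side render work, then past a certain point a faster GPU stops raising the frame rate, because the fixed CPU overhead becomes the bottleneck. The 8 ms CPU cost below is an arbitrary illustrative number.

                    ```python
                    # Toy model: frame time ~ max(CPU driver work, GPU render work).
                    def frame_time_ms(cpu_ms, gpu_ms):
                        return max(cpu_ms, gpu_ms)

                    cpu_overhead = 8.0  # hypothetical fixed CPU/driver cost per frame
                    for gpu_ms in (20.0, 10.0, 8.0, 4.0, 2.0):  # faster and faster GPUs
                        fps = 1000.0 / frame_time_ms(cpu_overhead, gpu_ms)
                        print(f"gpu work {gpu_ms:5.1f} ms -> {fps:6.1f} fps")
                    # Once gpu_ms drops below cpu_overhead, fps plateaus at 125:
                    # the high-end card is CPU-limited, matching the explanation above.
                    ```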

                    Comment


                    • #20
                      Originally posted by bridgman View Post
                      It's not so much about focusing on mid-range GPUs, it's just that the mid-range GPUs have the least need for hand-tweaking optimization.

                      Low end parts tend to run into memory bandwidth and "tiny shader core" bottlenecks (requiring a lot of complex heuristics), high end parts are so fast that they often get CPU limited before they get GPU limited (requiring a lot of tuning to reduce CPU overhead in the driver), while midrange parts tend to be more balanced and less likely to get badly bottlenecked in a single area.
                      Is Radeon then going to become a mess of ifs and #ifdefs, Bridgman? All that hand-tuning to get every last ounce of performance out of every card, or do the devs think it's best to keep the code as clean as possible and just go for the 'middle of the road, good for most but not perfect for all' approach?
                      All opinions are my own not those of my employer if you know who they are.

                      Comment
