AMD Radeon HD 6000 Gallium3D Attempts To Compete With Catalyst

  • AMD Radeon HD 6000 Gallium3D Attempts To Compete With Catalyst

    Phoronix: AMD Radeon HD 6000 Gallium3D Attempts To Compete With Catalyst

    Open-source code supporting the AMD Radeon HD 6000 "Northern Islands" GPU hardware has been available since January, but only in the past few days has this Linux code matured to the point of being stable and useful for testing. In this article are our first benchmarks of the AMD Northern Islands and Cayman graphics processors using the open-source Mesa Gallium3D driver and comparing its performance to AMD's proprietary Catalyst driver.


  • #2
    Couldn't these new Gallium3D drivers have been compiled with the new Pathscale compiler (Dirndl)?

    • #3
      Originally posted by sabriah View Post
      Couldn't these new Gallium3D drivers have been compiled with the new Pathscale compiler (Dirndl)?
      I'd say that if your graphics driver gets much faster with the Pathscale compiler, you're doing it wrong.

      If it gets (much) faster, the bottleneck is the CPU, when it should be the GPU.

      • #4
        Originally posted by sabriah View Post
        Couldn't these new Gallium3D drivers have been compiled with the new Pathscale compiler (Dirndl)?
        Someone did it here some time ago and it didn't make that much of a difference:

        • #5
          Originally posted by [Knuckles] View Post
          I'd say that if your graphics driver gets much faster with the Pathscale compiler, you're doing it wrong.

          If it gets (much) faster, the bottleneck is the CPU, when it should be the GPU.
          No shit, Sherlock...

          The CPU actually is the bottleneck, and it will remain a big part of the driver unless those cocks at HTC/SGI release some stupid patent licenses....

          • #6
            WOW!
            I'm impressed.
            I didn't think the drivers would be at around 50% of Catalyst in pretty much every benchmark (even faster in Urban Terror).
            Last edited by Pfanne; 14 July 2011, 03:41 AM.

            • #7
              50% of the performance is the difference between lower midrange and absolute high end... so it's like wasting half of your money.

              But the real problem here is the missing power management! That's waaaay more important! It makes your card quieter and it's easier on your battery as well as your electricity bill!

              For me, looking at the open-source drivers, missing power management has been the biggest problem for the last two years.

              • #8
                I too think that the games may be CPU-limited. Testing at Eyefinity resolutions (5760x1080) could have given more meaningful results.

                • #9
                  Originally posted by V!NCENT View Post
                  No shit, Sherlock...

                  The CPU actually is the bottleneck, and it will remain a big part of the driver unless those cocks at HTC/SGI release some stupid patent licenses....
                   The CPU bottleneck is less about the code that's running in the driver, and more about making unnecessary kernel calls that result in slow context switches, unnecessary flushes of data between the GPU and CPU, passing around huge data structures that aren't very cache friendly, etc.

                   None of those things are likely to be affected very much by a change of compiler - they need to be fixed algorithmically.

                   A software rasterizer, on the other hand, could see benefits, as could something like the old hardware that does vertex shaders in software and the rest on the GPU. Then again, these days that's mostly done through LLVM anyway, so the generated code probably wouldn't be much different either.
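
                   As a rough illustration of the kernel-call point (a toy sketch in plain C, not actual Mesa/Gallium3D code; the structures and the submit function are hypothetical), batching work in user space and flushing once is the kind of algorithmic fix that no compiler switch can give you:

                       /* Hypothetical sketch: accumulate draw commands in user space and
                        * hand them to the kernel in one submission, instead of paying a
                        * kernel call (context switch + flush) per draw. */
                       #include <stdio.h>
                       #include <stddef.h>

                       #define BATCH_CAPACITY 256

                       struct draw_cmd { unsigned first_vertex, vertex_count; };

                       struct cmd_batch {
                           struct draw_cmd cmds[BATCH_CAPACITY];
                           size_t count;
                       };

                       /* Stand-in for an expensive ioctl() into the kernel driver. */
                       static void kernel_submit(const struct cmd_batch *b)
                       {
                           printf("kernel submission: %zu draws in one flush\n", b->count);
                       }

                       static void batch_draw(struct cmd_batch *b, unsigned first, unsigned count)
                       {
                           if (b->count == BATCH_CAPACITY) {   /* flush only when the buffer is full */
                               kernel_submit(b);
                               b->count = 0;
                           }
                           b->cmds[b->count].first_vertex = first;
                           b->cmds[b->count].vertex_count = count;
                           b->count++;
                       }

                       int main(void)
                       {
                           struct cmd_batch batch = { .count = 0 };

                           /* 1000 application draw calls, but only a handful of kernel round trips. */
                           for (unsigned i = 0; i < 1000; i++)
                               batch_draw(&batch, i * 3, 3);

                           if (batch.count)                    /* final flush at the end of the frame */
                               kernel_submit(&batch);
                           return 0;
                       }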

                   Oh, and what patents are you referring to? The only two I know about are the floating-point textures one (which is a feature, not anything performance-related) and the S3TC one (again, just a feature, and not anything that would impact performance one way or the other).
                  Last edited by smitty3268; 14 July 2011, 04:00 AM.

                  • #10
                    It would be interesting to see what the performance would be like at the high power levels; it might make the numbers a lot closer.
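
                    For anyone wanting to try that, here is a minimal sketch of forcing the radeon KMS driver's profile-based power management to its "high" profile through sysfs (whether these files are exposed, and which card index applies, depends on the kernel and system; run as root):

                        /* Hypothetical helper: select the static "profile" power method and
                         * the "high" profile via the radeon driver's sysfs interface.
                         * Assumes the GPU is card0; adjust the paths if needed. */
                        #include <stdio.h>

                        static int write_sysfs(const char *path, const char *value)
                        {
                            FILE *f = fopen(path, "w");
                            if (!f) {
                                perror(path);
                                return -1;
                            }
                            int ok = (fputs(value, f) >= 0);
                            fclose(f);
                            return ok ? 0 : -1;
                        }

                        int main(void)
                        {
                            if (write_sysfs("/sys/class/drm/card0/device/power_method", "profile"))
                                return 1;
                            if (write_sysfs("/sys/class/drm/card0/device/power_profile", "high"))
                                return 1;
                            puts("radeon power profile set to high");
                            return 0;
                        }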
