
AMD Fusion On Gallium3D Leaves A Lot To Be Desired


  • AMD Fusion On Gallium3D Leaves A Lot To Be Desired

    Phoronix: AMD Fusion On Gallium3D Leaves A Lot To Be Desired

    It's been a few months since last running any AMD Fusion tests under Linux, so here's a look at the AMD A8-3870K "Llano" APU performance under both the latest Catalyst driver and the open-source Radeon Gallium3D stack with Ubuntu 12.04. Besides the open-source driver being handily beaten by the Catalyst binary driver, the power efficiency is also a disappointment.

    http://www.phoronix.com/vr.php?view=17255

  • #2
    VLIW... Without a better shader compiler, the radeon driver doesn't have any chance.

    And AMD is not planning to build a shader compiler based on an obsolete technique.

    The HD 7970 will get a proper shader compiler.

    In other words, the open-source drivers need another 3-4 years to catch up.

    RIP, VLIW...



    • #3
      OK, I don't mean this as criticism of the driver developers. I am sure they are doing their best, and I've got no idea about writing device drivers. But I am wondering how it is possible that one implementation is an order of magnitude slower than another. Is it the complex hardware interface? Or is OpenGL so broken that it is difficult to write fast, efficient drivers? Is the nouveau approach of reverse-engineering a well-performing driver maybe the better approach (assuming that a faster driver exists)?



      • #4
        Originally posted by Qaridarium View Post
        VLIW... Without a better shader compiler, the radeon driver doesn't have any chance.

        And AMD is not planning to build a shader compiler based on an obsolete technique.

        The HD 7970 will get a proper shader compiler.

        In other words, the open-source drivers need another 3-4 years to catch up.

        RIP, VLIW...
        Don't be so sure. Tom Stellard is integrating an LLVM backend for r600g as we speak, and once it is done and the LLVM->VLIW packetizer is finished (it has been started), we can all enjoy faster shaders for both graphics and compute. 3-4 years is awfully pessimistic.



        • #5
          Originally posted by log0 View Post
          OK, I don't mean this as criticism of the driver developers. I am sure they are doing their best, and I've got no idea about writing device drivers. But I am wondering how it is possible that one implementation is an order of magnitude slower than another. Is it the complex hardware interface? Or is OpenGL so broken that it is difficult to write fast, efficient drivers? Is the nouveau approach of reverse-engineering a well-performing driver maybe the better approach (assuming that a faster driver exists)?
          The thing is that NVIDIA had, until Kepler, a scheduler in hardware, so the GPU optimizes shaders itself rather than relying on driver code to do that (as is the case with AMD's VLIW). With GCN, AMD integrated a hardware scheduler, so the performance gap will shrink.



          • #6
            The shader compiler *isn't* the culprit: desktop cards reach ~50-60% of Catalyst performance, while this shit is two orders of magnitude slower.
            ## VGA ##
            AMD: X1950XTX, HD3870, HD5870
            Intel: GMA45, HD3000 (Core i5 2500K)



            • #7
              Aaah, Phoronix forgot to set the GPU clock to "low", which is AMD's advice for power-management issues on the open-source stack!
              Look at this thread also...



              • #8
                Originally posted by Death Knight View Post
                Aaah, Phoronix forgot to set the GPU clock to "low", which is AMD's advice for power-management issues on the open-source stack!
                Look at this thread also...
                I guess it's the opposite case here: Phoronix probably used the default state, which is usually the low one on APUs. Tip: take a look at the power-usage chart.

                I suggest re-doing all the tests, forcing Catalyst to low or radeon to high.
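
                In case anyone wants to try it, here is a minimal sketch of forcing the radeon profile from a script, assuming the power_method/power_profile sysfs files the radeon driver exposes on kernels of this era (the card0 path is an assumption; needs root):

                    import os

                    CARD = "/sys/class/drm/card0/device"  # assumed card path

                    def set_radeon_profile(profile):
                        # Switch from dynamic PM ("dynpm") to a static profile...
                        with open(os.path.join(CARD, "power_method"), "w") as f:
                            f.write("profile")
                        # ...then pin the clocks: "low", "mid", "high", "auto" or "default".
                        with open(os.path.join(CARD, "power_profile"), "w") as f:
                            f.write(profile)

                    set_radeon_profile("high")  # the "radeon to high" case suggested above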



                • #9
                  Michael, what is the USB watt-meter that you use? I would like to buy one in order to do some tests, because I think fps-per-watt is a very interesting way to measure progress in the git drivers. Thank you.
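
                  For anyone computing it by hand, fps-per-watt is just the average frame rate divided by the average power draw, which works out to frames per joule. A minimal sketch with made-up sample readings:

                      def fps_per_watt(fps_samples, watt_samples):
                          # (frames/second) / (joules/second) = frames per joule.
                          avg_fps = sum(fps_samples) / len(fps_samples)
                          avg_watts = sum(watt_samples) / len(watt_samples)
                          return avg_fps / avg_watts

                      # e.g. ~60 fps at ~90 W gives ~0.66 frames per joule.
                      print(fps_per_watt([58.0, 61.0, 60.0], [89.0, 91.0, 90.0]))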



                  • #10
                    Originally posted by log0 View Post
                    OK, I don't mean this as criticism of the driver developers. I am sure they are doing their best, and I've got no idea about writing device drivers. But I am wondering how it is possible that one implementation is an order of magnitude slower than another. Is it the complex hardware interface? Or is OpenGL so broken that it is difficult to write fast, efficient drivers? Is the nouveau approach of reverse-engineering a well-performing driver maybe the better approach (assuming that a faster driver exists)?
                    The problem is a lack of manpower. r600g needs something like another five developers working full-time to make sure the driver works at its best: adding new features, fixing bugs, profiling to identify the bottlenecks, and optimizing the driver. So far, developers have mostly been adding new features and fixing bugs when they had time. Optimizations must be done across the entire stack, including shared components like core Mesa.

                    I wonder if Michael enabled 2D tiling (see the sketch below).
                    Last edited by marek; 04-16-2012, 07:33 AM.
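
                    For anyone wanting to check that on their own setup: a sketch of enabling color tiling in the xorg.conf Device section, assuming the ColorTiling/ColorTiling2D option names from the xf86-video-ati DDX (2D tiling also needs a recent enough kernel):

                        Section "Device"
                            Identifier "Radeon"
                            Driver     "radeon"
                            Option     "ColorTiling"   "on"   # 1D color tiling
                            Option     "ColorTiling2D" "on"   # 2D color tiling
                        EndSection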

