
Thread: AMD Fusion On Gallium3D Leaves A Lot To Be Desired

  1. #1
    Join Date
    Jan 2007
    Posts
    15,660

    Default AMD Fusion On Gallium3D Leaves A Lot To Be Desired

    Phoronix: AMD Fusion On Gallium3D Leaves A Lot To Be Desired

    It's been a few months since last running any AMD Fusion tests under Linux, so here's a look at the AMD A8-3870K "Llano" APU performance under both the latest Catalyst driver and the open-source Radeon Gallium3D stack with Ubuntu 12.04. Besides the open-source driver being handily beaten by the Catalyst binary driver, the power efficiency is also a disappointment.

    http://www.phoronix.com/vr.php?view=17255

  2. #2
    Join Date
    Nov 2008
    Location
    Germany
    Posts
    5,411

    Default

    VLIW... without a better shader compiler, the radeon driver doesn't have any chance.

    And AMD is not planning to build a shader compiler for an obsolete technique.

    The HD 7970 will get a proper shader compiler.

    In other words, the open-source drivers need 3-4 more years to catch up.


    RIP, VLIW...

  3. #3
    Join Date
    Jul 2010
    Posts
    543

    Default

    OK, I don't mean this as criticism of the driver developers. I am sure they are doing their best, and I've got no idea about writing device drivers. But I am wondering how it is possible that one implementation is an order of magnitude slower than another. Is it the complex hardware interface? Or is OpenGL so broken that it makes writing fast, efficient drivers difficult? Is the nouveau approach of reverse-engineering a well-performing driver maybe the better one (assuming a faster driver exists)?

  4. #4
    Join Date
    Aug 2009
    Location
    Russe, Bulgaria
    Posts
    543

    Default

    Quote Originally Posted by Qaridarium View Post
    VLIW... without a better shader compiler, the radeon driver doesn't have any chance.

    And AMD is not planning to build a shader compiler for an obsolete technique.

    The HD 7970 will get a proper shader compiler.

    In other words, the open-source drivers need 3-4 more years to catch up.


    RIP, VLIW...
    Don't be so sure. Tom Stellard is integrating an LLVM backend for r600g as we speak, and once it is done and the LLVM->VLIW packetizer is finished (it has been started), we can all enjoy faster shaders for both graphics and compute. 3-4 years is awfully pessimistic.

  5. #5
    Join Date
    Aug 2009
    Location
    Russe, Bulgaria
    Posts
    543

    Default

    Quote Originally Posted by log0 View Post
    OK, I don't mean this as criticism of the driver developers. I am sure they are doing their best, and I've got no idea about writing device drivers. But I am wondering how it is possible that one implementation is an order of magnitude slower than another. Is it the complex hardware interface? Or is OpenGL so broken that it makes writing fast, efficient drivers difficult? Is the nouveau approach of reverse-engineering a well-performing driver maybe the better one (assuming a faster driver exists)?
    The thing is that NVIDIA, up until Kepler, had the scheduler in hardware, so it optimizes shaders itself rather than relying on driver code to do that (as AMD's VLIW parts do). With GCN, AMD integrated a hardware scheduler, so the performance gap will shrink.
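    To make that concrete, here is a purely illustrative toy in Python (nothing to do with the actual r600 compiler): a greedy packer that has to find independent operations to fill each VLIW bundle. Every slot it can't fill is lost throughput, and finding those independent operations is exactly the job the shader compiler has to do ahead of time on VLIW, whereas a hardware scheduler can sort much of it out at run time.

    Code:
    # Illustrative sketch only: pack a toy instruction stream into VLIW4 bundles.
    # Assumes a well-formed dependency graph; not real GPU ISA or compiler code.

    def pack_vliw(ops, deps, width=4):
        """ops: op ids in program order; deps: {op: set of ops it reads from}.
        Returns bundles of mutually independent ops, preserving dependencies."""
        bundles, done = [], set()
        remaining = list(ops)
        while remaining:
            bundle = []
            for op in list(remaining):
                if len(bundle) == width:
                    break
                # Schedulable once all producers have retired in earlier bundles.
                if deps.get(op, set()) <= done:
                    bundle.append(op)
                    remaining.remove(op)
            done.update(bundle)
            bundles.append(bundle)
        return bundles

    if __name__ == "__main__":
        # Toy shader: d depends on a and b, e depends on d -> later bundles go half empty.
        print(pack_vliw(["a", "b", "c", "d", "e"],
                        {"d": {"a", "b"}, "e": {"d"}}))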

  6. #6
    Join Date
    Nov 2009
    Location
    Italy
    Posts
    1,000

    Default

    The shader compiler *isn't* the culprit; the desktop cards reach ~50-60% of Catalyst, while this shit is two orders of magnitude slower.

  7. #7
    Join Date
    Mar 2008
    Location
    Istanbul
    Posts
    135

    Exclamation

    Aaah, Phoronix forgot to set the GPU clock to LOW, which is AMD's advice for PM issues on the open-source stack!
    Look at this thread too...

  8. #8
    Join Date
    Oct 2008
    Location
    Poland
    Posts
    185

    Default

    Quote Originally Posted by Death Knight View Post
    Aaah, Phoronix forgot to set the GPU clock to LOW, which is AMD's advice for PM issues on the open-source stack!
    Look at this thread too...
    I guess it's the opposite case here. Phoronix probably used the default state, which is usually the low one on APUs. Tip: take a look at the power usage chart.

    I suggest re-doing all the tests, forcing Catalyst to low or radeon to high.
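    For anyone wanting to try that, a minimal sketch of pinning the open-source radeon driver to a fixed clock profile, assuming the classic profile/dynpm sysfs interface of kernels from that era (the card index and which profiles are exposed vary per system; needs root):

    Code:
    #!/usr/bin/env python3
    # Sketch: force the radeon KMS driver to a static power profile via sysfs.
    # Assumes the pre-DPM "profile" power-management interface; adjust card0 if
    # the GPU is enumerated differently.
    from pathlib import Path

    DEVICE = Path("/sys/class/drm/card0/device")

    def set_radeon_profile(profile="high"):
        """Switch to the static 'profile' method and pick low/mid/high/auto/default."""
        (DEVICE / "power_method").write_text("profile\n")
        (DEVICE / "power_profile").write_text(profile + "\n")

    if __name__ == "__main__":
        set_radeon_profile("high")  # or "low" to match Catalyst forced to its low state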

  9. #9
    Join Date
    Jan 2012
    Location
    Italy
    Posts
    52

    Default

    Michael, what is the USB watt-meter that you use? I would like to buy one in order to do some tests, because I think that FPS-per-watt is a very interesting way to measure progress in the git drivers. Thank you.
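    Not an answer on the meter itself, but once you have FPS and power logged over the same run the metric is simple to compute; a rough sketch with invented sample values (real numbers would come from the benchmark log and whatever format the meter writes):

    Code:
    # Rough sketch: average FPS divided by average power draw over the same interval.

    def fps_per_watt(fps_samples, watt_samples):
        avg_fps = sum(fps_samples) / len(fps_samples)
        avg_watts = sum(watt_samples) / len(watt_samples)
        return avg_fps / avg_watts

    if __name__ == "__main__":
        print(round(fps_per_watt([58.2, 61.0, 59.4], [32.5, 33.1, 31.8]), 2))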

  10. #10
    Join Date
    Jan 2009
    Posts
    630

    Default

    Quote Originally Posted by log0 View Post
    OK, I don't mean this as criticism of the driver developers. I am sure they are doing their best, and I've got no idea about writing device drivers. But I am wondering how it is possible that one implementation is an order of magnitude slower than another. Is it the complex hardware interface? Or is OpenGL so broken that it makes writing fast, efficient drivers difficult? Is the nouveau approach of reverse-engineering a well-performing driver maybe the better one (assuming a faster driver exists)?
    The problem is a lack of manpower. r600g needs something like another 5 developers working full-time to make the driver work its best: adding new features, fixing bugs, profiling, identifying the bottlenecks, and optimizing the driver. So far the developers have mostly been adding new features and fixing bugs when they had time. Optimizations must be done across the entire stack, including shared components like core Mesa.

    I wonder if Michael enabled 2D tiling.
    Last edited by marek; 04-16-2012 at 08:33 AM.
