
Thread: AMD Radeon R600 GPU LLVM 3.3 Back-End Testing

  1. #1
    Join Date
    Jan 2007
    Posts
    14,303

    Default AMD Radeon R600 GPU LLVM 3.3 Back-End Testing

    Phoronix: AMD Radeon R600 GPU LLVM 3.3 Back-End Testing

    One of the exciting features of LLVM 3.3, due out next month, is the final integration of the AMD R600 GPU back-end. This LLVM back-end is needed for supporting Gallium3D OpenCL on AMD Radeon graphics hardware and the "RadeonSI" HD 7000/8000 series support, and it can optionally be used as the Radeon Gallium3D driver's shader compiler. In this article are some benchmarks of the AMD R600 GPU LLVM back-end from LLVM 3.3-rc1, using several different AMD Radeon HD graphics cards to see how the LLVM compiler back-end affects OpenGL graphics performance.

    http://www.phoronix.com/vr.php?view=18709

  2. #2
    Join Date
    Mar 2011
    Posts
    26

    Default

    Gains are more noticeable in Unigine Heaven and Lightsmark (Xonotic/Doom/Warsow aren't really shader-limited).

  3. #3
    Join Date
    Dec 2011
    Posts
    144

    Default Should have been a 3-way comparison

    No offence, but this should have been a 3-way comparison: the default shader compiler, LLVM, and Vadim Girlin's sb (sorry if I got the name wrong). Knowing that these different backends are runtime-selectable, there is no excuse; a sketch of how selection works is below.

    Also, does anyone know if there are piglit result differences between all these backends? (I don't have the time to run it right now.) In any case, it's nice to see this LLVM-based compiler work. More code sharing, that's always good.
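
    Since selection is runtime, it should just be a matter of environment variables set before the driver initializes. A minimal sketch in C, with the caveat that the variable names (R600_LLVM, R600_DEBUG=sb) are from memory for the Mesa of this era and should be treated as assumptions; check your Mesa version's docs. Normally you would just export these in the shell that launches the benchmark.

    #define _POSIX_C_SOURCE 200112L
    #include <stdlib.h>

    int main(void)
    {
        /* Assumed knobs: opt into the LLVM shader compiler and enable the
         * sb post-compile optimizer before the GL context is created. */
        setenv("R600_LLVM", "1", 1);
        setenv("R600_DEBUG", "sb", 1);

        /* ...create the GL context / exec the benchmark from here... */
        return 0;
    }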
    Serafean

  4. #4
    Join Date
    Sep 2012
    Posts
    277

    Default OpenCL

    Does this mean fully working OpenCL is near? I would really like to use Blender Cycles with OpenCL, even though I'm tempted to buy an Nvidia graphics card just for CUDA.

  5. #5
    Join Date
    Nov 2008
    Location
    Madison, WI, USA
    Posts
    861

    Default

    Quote Originally Posted by wargames View Post
    Does this mean fully working OpenCL is near? I would really like to use Blender Cycles with OpenCL, even though I'm tempted to buy an Nvidia graphics card just for CUDA.
    We've got bfgminer working (minus a lock-up issue on some Evergreen cards, possibly due to some flushing issues), and I believe that many of the GIMP GEGL operations are supported. Cycles may be in the cards, but the last time I tried to compile their shaders (several months ago), they were not working and looked like they broke parts of the CL standard... but it was hard to tell, given that it was all CUDA code ported/translated through preprocessor macros.
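
    In case it helps anyone poking at the same kernels: when clBuildProgram fails, the build log usually tells you whether the kernel source or the implementation is at fault. A rough self-contained sketch with error handling trimmed (build with gcc file.c -lOpenCL, assuming the OpenCL headers and an ICD loader are installed):

    #include <stdio.h>
    #include <stdlib.h>
    #include <CL/cl.h>

    int main(void)
    {
        /* Trivial stand-in kernel; substitute the source you are testing. */
        const char *src =
            "__kernel void fill(__global float *out) {"
            "    out[get_global_id(0)] = 1.0f;"
            "}";

        cl_platform_id plat;
        cl_device_id dev;
        clGetPlatformIDs(1, &plat, NULL);
        clGetDeviceIDs(plat, CL_DEVICE_TYPE_ALL, 1, &dev, NULL);

        cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
        cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);

        if (clBuildProgram(prog, 1, &dev, NULL, NULL, NULL) != CL_SUCCESS) {
            /* Fetch and print the compiler's diagnostics. */
            size_t len = 0;
            clGetProgramBuildInfo(prog, dev, CL_PROGRAM_BUILD_LOG, 0, NULL, &len);
            char *log = malloc(len + 1);
            clGetProgramBuildInfo(prog, dev, CL_PROGRAM_BUILD_LOG, len, log, NULL);
            log[len] = '\0';
            fprintf(stderr, "build log:\n%s\n", log);
            free(log);
            return 1;
        }
        puts("kernel built OK");
        return 0;
    }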

  6. #6
    Join Date
    Dec 2010
    Location
    MA, USA
    Posts
    1,202

    Default

    Quote Originally Posted by wargames View Post
    Does this mean fully working OpenCL is near? I would really like to use Blender Cycles with OpenCL, even though I'm tempted to buy an Nvidia graphics card just for CUDA.
    If you've got more than one PCIe slot, you could just add the Nvidia card alongside. I've done that before, using my HD 5750 for gaming with an 8400 GS for PhysX.

  7. #7
    Join Date
    Oct 2012
    Location
    Washington State
    Posts
    406

    Default

    Quote Originally Posted by Veerappan View Post
    We've got bfgminer working (minus a lock-up issue on some Evergreen cards, possibly due to some flushing issues), and I believe that many of the GIMP GEGL operations are supported. Cycles may be in the cards, but the last time I tried to compile their shaders (several months ago), they were not working and looked like they broke parts of the CL standard... but it was hard to tell, given that it was all CUDA code ported/translated through preprocessor macros.
    Blender has no qualified OpenCL staff to make their code work correctly. They keep blaming AMD when it's on them.

  8. #8
    Join Date
    Nov 2011
    Posts
    267

    Default

    Quote Originally Posted by Serafean View Post
    No offence, but this should have been a 3-way comparison: the default shader compiler, LLVM, and Vadim Girlin's sb (sorry if I got the name wrong). Knowing that these different backends are runtime-selectable, there is no excuse.

    Also, does anyone know if there are piglit result differences between all these backends? (I don't have the time to run it right now.) In any case, it's nice to see this LLVM-based compiler work. More code sharing, that's always good.
    Serafean
    Nitpick: That's a four-way comparison, since sb is not a shader compiler but a post-compile shader optimizer that can be used with both backends.

  9. #9
    Join Date
    Jul 2010
    Posts
    444

    Default

    Quote Originally Posted by Marc Driftmeyer View Post
    Blender has no qualified OpenCL staff to make their code work correctly. They keep blaming AMD when it's on them.
    Yeah, right... but wait, the issue "does not appear with NVidia GPU OpenCL implementation neither on Intel/AMD CPU OpenCL implementations": http://www.youtube.com/watch?v=LbEZ6OnpWHA. BTW, the AMD guys seem to be working on a fix: http://devgurus.amd.com/message/1285984

  10. #10
    Join Date
    Apr 2010
    Posts
    13

    Default

    I've emerged the latest related packages from Gentoo's x11 overlay to enable OpenCL support on my 6970M with the radeon driver, and everything built cleanly. Is there a simple way to test whether OpenCL support is actually present, or to get info on what's enabled?

    Thanks..
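
    One minimal check is to ask the runtime which platforms and devices it can see; if the Gallium3D OpenCL ("Clover") platform and your 6970M show up, the stack is at least wired together. A sketch, assuming the OpenCL headers and an ICD loader are installed (build with gcc file.c -lOpenCL; a clinfo-style utility reports the same information, if one is packaged for you):

    #include <stdio.h>
    #include <CL/cl.h>

    int main(void)
    {
        cl_platform_id plats[8];
        cl_uint nplat = 0;
        if (clGetPlatformIDs(8, plats, &nplat) != CL_SUCCESS || nplat == 0) {
            puts("no OpenCL platforms found");
            return 1;
        }

        for (cl_uint i = 0; i < nplat; i++) {
            /* Print each platform, then the devices it exposes. */
            char pname[256];
            clGetPlatformInfo(plats[i], CL_PLATFORM_NAME, sizeof(pname), pname, NULL);
            printf("platform %u: %s\n", i, pname);

            cl_device_id devs[8];
            cl_uint ndev = 0;
            if (clGetDeviceIDs(plats[i], CL_DEVICE_TYPE_ALL, 8, devs, &ndev) != CL_SUCCESS)
                continue;
            for (cl_uint j = 0; j < ndev; j++) {
                char dname[256];
                clGetDeviceInfo(devs[j], CL_DEVICE_NAME, sizeof(dname), dname, NULL);
                printf("  device %u: %s\n", j, dname);
            }
        }
        return 0;
    }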
