AMD Radeon R600 GPU LLVM 3.3 Back-End Testing


  • AMD Radeon R600 GPU LLVM 3.3 Back-End Testing

    Phoronix: AMD Radeon R600 GPU LLVM 3.3 Back-End Testing

    One of the exciting features of LLVM 3.3, due out next month, is the final integration of the AMD R600 GPU LLVM back-end. This back-end is needed for Gallium3D OpenCL support on AMD Radeon graphics hardware and for the "RadeonSI" HD 7000/8000 series support, and it can optionally be used as the Radeon Gallium3D driver's shader compiler. This article presents benchmarks of the AMD R600 GPU LLVM back-end from LLVM 3.3-rc1 on several different AMD Radeon HD graphics cards, looking at how the LLVM compiler back-end affects OpenGL graphics performance.


  • #2
    Gains are more noticeable in Unigine Heaven and Lightsmark. (Xonotic/Doom/Warsow aren't really shader-limited.)



  • #3
    Should have been a 3-way comparison

    No offence, but this should have been a 3-way comparison: default shader compiler, LLVM, and Vadim Girlin's sb. (Sorry if I got the name wrong.) Knowing that these different backends are runtime selectable, there is no excuse.

    Also, does anyone know if there are piglit result differences between all these backends? (I don't have the time to run it right now.) In any case, it's nice to see this LLVM-based compiler work. More code sharing, that's always good.
    Serafean



  • #4
    OpenCL

    Does this mean fully working OpenCL is near? I would really like to use Blender Cycles with OpenCL, even though I'm tempted to buy an Nvidia graphics card only for CUDA.



  • #5
    Originally posted by wargames
    Does this mean fully working OpenCL is near? I would really like to use Blender Cycles with OpenCL, even though I'm tempted to buy an Nvidia graphics card only for CUDA.
    We've got bfgminer working (minus a lock-up issue on some Evergreens, possibly due to some flushing issues), and I believe that many of the GIMP GEGL operations are supported. Cycles may be in the cards, but the last time I tried to compile their shaders (several months ago), they were not working and looked like they broke some of the CL standard... but it was hard to tell, given that it was all CUDA code that was ported/translated through preprocessor macros.
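
    (For reference, below is a minimal sketch of this kind of check using the standard OpenCL C API: hand a source string to clBuildProgram and print the build log to see what, if anything, the implementation's compiler rejects. The kernel here is a placeholder, not Cycles code.)

    /* build_check.c: compile an OpenCL source string and print the build log.
     * Build with: gcc build_check.c -lOpenCL */
    #include <stdio.h>
    #include <stdlib.h>
    #include <CL/cl.h>

    static const char *src =
        "__kernel void add(__global float *a, __global const float *b) {\n"
        "    size_t i = get_global_id(0);\n"
        "    a[i] += b[i];\n"
        "}\n";

    int main(void)
    {
        cl_platform_id platform;
        cl_device_id device;
        cl_int err;

        if (clGetPlatformIDs(1, &platform, NULL) != CL_SUCCESS ||
            clGetDeviceIDs(platform, CL_DEVICE_TYPE_ALL, 1, &device, NULL) != CL_SUCCESS) {
            fprintf(stderr, "no OpenCL platform/device found\n");
            return 1;
        }

        cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, &err);
        cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, &err);

        /* clBuildProgram fails if the compiler rejects the source; the build
         * log explains why. */
        err = clBuildProgram(prog, 1, &device, "", NULL, NULL);

        size_t log_size = 0;
        clGetProgramBuildInfo(prog, device, CL_PROGRAM_BUILD_LOG, 0, NULL, &log_size);
        char *log = malloc(log_size + 1);
        clGetProgramBuildInfo(prog, device, CL_PROGRAM_BUILD_LOG, log_size, log, NULL);
        log[log_size] = '\0';

        printf("build %s\n%s\n", err == CL_SUCCESS ? "succeeded" : "failed", log);
        free(log);
        clReleaseProgram(prog);
        clReleaseContext(ctx);
        return err == CL_SUCCESS ? 0 : 1;
    }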



  • #6
    Originally posted by wargames
    Does this mean fully working OpenCL is near? I would really like to use Blender Cycles with OpenCL, even though I'm tempted to buy an Nvidia graphics card only for CUDA.
    If you've got more than one PCIe slot, you could just get the Nvidia card. I've done that before, using my HD 5750 for gaming with an 8400GS for PhysX.



  • #7
    Originally posted by Veerappan
    We've got bfgminer working (minus a lock-up issue on some Evergreens, possibly due to some flushing issues), and I believe that many of the GIMP GEGL operations are supported. Cycles may be in the cards, but the last time I tried to compile their shaders (several months ago), they were not working and looked like they broke some of the CL standard... but it was hard to tell, given that it was all CUDA code that was ported/translated through preprocessor macros.
    Blender has no qualified OpenCL staff to make their code work correctly. They keep blaming AMD when it's on them.



  • #8
    Originally posted by Serafean
    No offence, but this should have been a 3-way comparison: default shader compiler, LLVM, and Vadim Girlin's sb. (Sorry if I got the name wrong.) Knowing that these different backends are runtime selectable, there is no excuse.

    Also, does anyone know if there are piglit result differences between all these backends? (I don't have the time to run it right now.) In any case, it's nice to see this LLVM-based compiler work. More code sharing, that's always good.
    Serafean
    Nitpick: that's a four-way comparison, since sb is not a shader compiler but a post-compile shader optimizer that can be used with both backends.



  • #9
    Originally posted by Marc Driftmeyer
    Blender has no qualified OpenCL staff to make their code work correctly. They keep blaming AMD when it's on them.
    Yeah, right... but wait: the issue "does not appear with NVidia GPU OpenCL implementation neither on Intel/AMD CPU OpenCL implementations" (http://www.youtube.com/watch?v=LbEZ6OnpWHA). By the way, the AMD guys seem to be working on a fix: http://devgurus.amd.com/message/1285984



  • #10
    I've emerged the latest related packages from Gentoo's x11 overlay in order to enable OpenCL support on my 6970M using the radeon driver, and it all emerged cleanly. Is there a simple way to test whether the OpenCL support exists, or simply get info on what's enabled?

    Thanks.
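
    (One quick way to answer that, besides running the clinfo utility where it is packaged, is a small probe against the ICD loader; a minimal sketch follows. It only enumerates platforms and devices and prints their names, so if Gallium3D's Clover OpenCL support is installed correctly it should appear as a platform, and an empty list means nothing usable is registered.)

    /* cl_probe.c: list whatever OpenCL platforms and devices the ICD loader
     * can find. Build with: gcc cl_probe.c -lOpenCL */
    #include <stdio.h>
    #include <CL/cl.h>

    int main(void)
    {
        cl_platform_id platforms[8];
        cl_uint nplat = 0;

        if (clGetPlatformIDs(8, platforms, &nplat) != CL_SUCCESS || nplat == 0) {
            printf("No OpenCL platforms found.\n");
            return 1;
        }

        for (cl_uint p = 0; p < nplat; ++p) {
            char name[256], version[256];
            clGetPlatformInfo(platforms[p], CL_PLATFORM_NAME, sizeof(name), name, NULL);
            clGetPlatformInfo(platforms[p], CL_PLATFORM_VERSION, sizeof(version), version, NULL);
            printf("Platform %u: %s (%s)\n", p, name, version);

            cl_device_id devices[8];
            cl_uint ndev = 0;
            if (clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL, 8, devices, &ndev) != CL_SUCCESS)
                continue;

            for (cl_uint d = 0; d < ndev; ++d) {
                char dname[256];
                clGetDeviceInfo(devices[d], CL_DEVICE_NAME, sizeof(dname), dname, NULL);
                printf("  Device %u: %s\n", d, dname);
            }
        }
        return 0;
    }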

