In Road To OpenCL, R600g LLVM Back-End Arrives

  • In Road To OpenCL, R600g LLVM Back-End Arrives

    Phoronix: In Road To OpenCL, R600g LLVM Back-End Arrives

    Before calling it a week, Tom Stellard at AMD published a Git branch that offers up an LLVM shader back-end for the AMD R600 Gallium3D driver. This is one of the steps in bringing Compute/OpenCL support to the open-source AMD Radeon Linux graphics drivers...

    http://www.phoronix.com/vr.php?view=MTAyNTg

  • #2
    Originally posted by phoronix View Post
    Phoronix: In Road To OpenCL, R600g LLVM Back-End Arrives

    Before calling it a week, Tom Stellard at AMD published a Git branch that offers up an LLVM shader back-end for the AMD R600 Gallium3D driver. This is one of the steps in bringing Compute/OpenCL support to the open-source AMD Radeon Linux graphics drivers...

    http://www.phoronix.com/vr.php?view=MTAyNTg
    Very cool. I'll be anxiously awaiting more news about this.

    Comment


    • #3
      This is the stuff that will make Clover HW accelerated, right?

      Comment


      • #4
        Originally posted by 89c51 View Post
        This is the stuff that will make Clover HW accelerated, right?
        This will make any gallium interface accelerated. Clover needs to support HW acceleration on a gallium interface first though, but this is a step in the right direction.

        Comment


        • #5
          Originally posted by Laughing1 View Post
          This will make any gallium interface accelerated. Clover needs to support HW acceleration on a gallium interface first though, but this is a step in the right direction.
          If I get the whole G3D thing right (which I probably don't), the order is somehow like: App (graphics or something) >> OpenGL >> TGSI >> HW >> magic on the screen

          With Tom's work: App (graphics or something) >> OpenGL >> LLVM to TGSI >> HW >> magic on the screen

          So if Clover has (or gets) an LLVM backend, it will be accelerated.



          Yes, the question might be stupid or trivial to some, so I apologize.

          Comment


          • #6
            Originally posted by 89c51 View Post
            With Tom's work: App (graphics or something) >> OpenGL >> LLVM to TGSI >> HW >> magic on the screen
            Well, that would actually look like this:

            App(graphics or something) >> OpenGL (G3D state tracker) >> TGSI (G3D's IR of choice) >> LLVM IR >> LLVM backend for AMD cards >> HW >> magic on the screen

            I'm just speaking about the code, no runtime or anything.

            Comment


            • #7
              I think the idea is to let clover feed LLVM IR directly into the driver while continuing to support TGSI for graphics. Anyway, the cool thing is that once the code can go from LLVM IR to hardware instructions, all kinds of interesting things can be built on top.

              Comment


              • #8
                I don't get all this hype about OpenCL. I'm sure that there are some specialized niche applications making use of OpenCL but as a consumer I'd rather see AMD devoting some manpower to finally get video decoding working on the open drivers.

                Comment


                • #9
                  It currently looks something like this:

                  App -> OpenGL -> GLSL IR -> TGSI -> hardware

                  The new stack allows this:

                  App -> OpenGL -> GLSL IR -> TGSI -> LLVM -> hardware

                  But in the long run, it would make this possible:

                  App -> OpenCL -> LLVM -> hardware,

                  as well as

                  App -> OpenGL -> GLSL IR -> LLVM -> hardware

                  Comment


                  • #10
                    If I remember correctly, clover already generates LLVM IR which it then usually compiles down to x86/x86_64. Hopefully it won't be too difficult to get clover to use the r600g LLVM back-end to compile the CL kernels for execution on a graphics card. Then the kernels just have to be copied to video memory and scheduled (yes, I'm simplifying a bit).
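
                    For anyone curious what that hand-off looks like from the application side, here is a minimal host-side sketch using the standard OpenCL C API (error handling omitted; this illustrates where the compile happens, not clover's internals):

                    ```c
                    #include <CL/cl.h>

                    int main(void)
                    {
                        /* Trivial kernel source handed to the CL runtime as a string. */
                        const char *src = "__kernel void noop(void) { }";

                        cl_platform_id plat;
                        cl_device_id dev;
                        clGetPlatformIDs(1, &plat, NULL);
                        clGetDeviceIDs(plat, CL_DEVICE_TYPE_DEFAULT, 1, &dev, NULL);

                        cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
                        cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);

                        /* This is the step where clover runs the source through LLVM --
                         * today compiling down to x86/x86_64, and with the r600g
                         * back-end eventually to GPU instructions. */
                        clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);

                        cl_kernel k = clCreateKernel(prog, "noop", NULL);
                        /* ... set args, enqueue with clEnqueueNDRangeKernel ... */

                        clReleaseKernel(k);
                        clReleaseProgram(prog);
                        clReleaseContext(ctx);
                        return 0;
                    }
                    ```

                    It needs an OpenCL implementation (ICD and headers) installed to build and run.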

                    Comment
