OpenCL Support In GCC?

  • OpenCL Support In GCC?

    Phoronix: OpenCL Support In GCC?

    In early December the OpenCL specification was unveiled. OpenCL is an open framework, initially conceived by Apple, for harnessing the power of graphics processors to handle GPGPU computing in a unified way. Both ATI/AMD and NVIDIA are working on bringing Open Computing Language support to their proprietary Linux drivers, while nothing has yet been started on the open-source side to integrate the support within Mesa...


  • #2
    If I want to write an application that uses OpenCL and compile that application with GCC, does GCC need to support OpenCL in that case?



    • #3
      Only if the OpenCL implementation in the drivers makes use of gcc to parse the C99 code and feed it into the lower-level compiler stack. Normally the app makes OpenCL calls to compile the compute kernel, and the driver can use whatever C99 implementation it wants.

      Note that the driver itself could be *compiled* with gcc and that would still not require OpenCL support in gcc. The only thing that matters is what the driver does when an OpenCL app says "compile this!".
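
      To make that concrete, here is a minimal host-side sketch of the "compile the kernel at run time" step. This is illustrative only: "ctx" and "device" are assumed to have been created earlier, and error checking is trimmed.

        #include <CL/cl.h>

        /* The kernel source is handed to the driver as a plain string. */
        const char *src =
            "__kernel void scale(__global float *buf, float k) {"
            "    buf[get_global_id(0)] *= k;"
            "}";

        cl_int err;
        cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, &err);

        /* This is the "compile this!" moment: the driver's own C99 compiler
           runs here. gcc is only involved if the driver chooses to use it. */
        err = clBuildProgram(prog, 1, &device, NULL, NULL, NULL);

        cl_kernel kern = clCreateKernel(prog, "scale", &err);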
      Last edited by bridgman; 01 February 2009, 11:44 AM.


      • #4
        I have seen OpenCL Python bindings, and OpenCL C-like code. In both cases, is all of the code executed on the CPU the entire time, or is the code compiled into GPU bytecode and loaded onto the GPU, so that the GPU executes the OpenCL code?

        There are pointers in the C-like OpenCL language. Does that mean that OpenCL code can see and modify all of the GPU's memory?



        • #5
          Good point - there may be some work (libs, bindings, etc.) required in gcc in order to simply *use* OpenCL. I wasn't thinking about that side of it (driver-centric world view, I guess).

          The kernel program provided by the OpenCL app would normally be run on GPU, DSP, Cell, whatever -- but I expect many implementations will also allow execution on the CPU.

          I imagine that pointers could access the entire GPU address space unless blocked by memory management or by checking code generated by the compiler, but presumably the main purpose of them is to navigate predefined data structures.
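
          As a minimal sketch of what I mean (illustrative OpenCL C, not from any real driver): the kernel's __global pointers can only walk buffers the host application handed it, which is exactly the "predefined data structures" case.

            typedef struct { float x, y, z, pad; } point;

            __kernel void scale_points(__global point *pts, float k)
            {
                size_t i = get_global_id(0);   /* which element am I? */
                __global point *p = &pts[i];   /* pointer into the buffer */
                p->x *= k; p->y *= k; p->z *= k;
            }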


          • #6
            Originally posted by bridgman:
            The kernel program provided by the OpenCL app would normally be run on GPU, DSP, Cell, whatever -- but I expect many implementations will also allow execution on the CPU.
            So a GPU can have processes running? Does that mean someone could make a "ps" and "top" for GPU processes?

            It would be so cool to have a Gnome/KDE GPU load monitor.

            It sounds quite complex that the OpenCL kernel will run on the GPU while the Mesa OpenCL driver runs on the CPU.

            Does that mean that if the Mesa OpenCL driver crashes, the GPU will continue doing its OpenCL calculations?

            Originally posted by bridgman:
            I imagine that pointers could access the entire GPU address space unless blocked by memory management or by checking code generated by the compiler, but presumably the main purpose of them is to navigate predefined data structures.
            Quite a cool feature.



            • #7
              Running a kernel under OpenCL is pretty similar to running a pixel shader program under OpenGL -- the app says "for every pixel run this program", then throws triangles or rectangles at the GPU. The GPU then runs the appropriate shader program on every pixel, and on modern GPUs that involves running hundreds of threads in parallel (an RV770 can execute 160 instructions in parallel, each doing up to 5 floating-point MADs, or 10 FLOPs per instruction).

              The per-pixel output from the shader program usually goes to the screen, but it could go into a buffer which gets used elsewhere or read back into system memory. The Mesa driver runs on the CPU but the shader programs run on the GPU.

              Same with OpenCL; driver runs on the CPU but a bunch of copies of the kernel run in parallel on the GPU. The key point is that the GPU is only working on one task at a time, but within that task it can work on hundreds of data items in parallel. That's why GPUs are described as data-parallel rather than task-parallel.
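
              A hedged host-side sketch of that "bunch of copies" launch (assuming a command queue "queue" and a built kernel "kern"):

                size_t global_size = 1048576;  /* one work-item per data element */

                /* One call from the CPU-side driver; the GPU then runs
                   global_size copies of the kernel, scheduling hundreds of
                   them in parallel at any instant. */
                clEnqueueNDRangeKernel(queue, kern,
                                       1,             /* 1-D index space */
                                       NULL,          /* no offset */
                                       &global_size,  /* total work-items */
                                       NULL,          /* driver picks group size */
                                       0, NULL, NULL);
                clFinish(queue);  /* the CPU just waits; the GPU did the work */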

              The data-parallel vs task-parallel distinction is also why the question of "how many cores does a GPU have?" is so tricky to answer. Depending on your criteria, an RV770 can be described as single-core, 10-core, 160-core or 800-core (10 SIMD engines, each with 16 stream units executing 5-wide instructions: 10 × 16 × 5 = 800 ALUs). The 10-core answer is probably the most technically correct, while the 160-core answer probably gives the most accurate idea of throughput relative to a conventional CPU.

              Anyways, since a GPU fundamentally works on one task at a time and the driver time-slices between different tasks, it should be possible to hook into the driver and track what percentage of the time is being used by each of the tasks. That hasn't been useful in the past (since all the GPU workload typically comes from whatever app you are running at the moment), but as we start juggling multiple tasks on the GPU that will probably become more important (and more interesting to watch).
              Last edited by bridgman; 01 February 2009, 04:08 PM.


              • #8
                My head just exploded.



                • #9
                  It's supposed to hurt. That means you're starting to understand. Congratulations.

                  Ever since the introduction of programmable shaders GPU drivers have included an on-the-fly compilation step (going from, say, GLSL to GPU shader instructions) and the GPU hardware has run many copies of those compiled shader programs in parallel to get acceptable performance.
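
                  The OpenGL version of that step looks roughly like this (a sketch, assuming "src" points at GLSL source text):

                    GLuint sh = glCreateShader(GL_FRAGMENT_SHADER);
                    glShaderSource(sh, 1, &src, NULL);
                    glCompileShader(sh);  /* the driver's compiler runs here,
                                             on the fly, producing GPU shader
                                             instructions */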

                  GPU vendors did a good job of hiding that complexity from the application -- but with OpenCL you get to see all the scary stuff behind the scenes.

                  Back in 2002 the R300 (aka 9700) was running 8 copies of the pixel shader program in parallel, each working on a different pixel. The RV730 is comparable in terms of pixel throughput but can run 64 copies of a shader program in parallel, i.e. the ratio of shader power to pixel-pushing power is 4-8 times higher on the RV730. This is why modern chips can run so much *faster* on complex 3D applications even if they run *slower* on glxgears.

                  Unified shader GPUs use multiple shader blocks in order to handle the mix of vertex, geometry and pixel shader work that comes with a single drawing task. In principle the blocks could be designed to work on totally different tasks but that would require a lot more silicon (more $$) and the added complexity would probably *reduce* overall performance.

                  The most important concept to grasp is that with conventional programming you have a single task, executing a program which steps through an array and calculates the results for each element. With data-parallel programming you write a program that calculates the value of ONE element, then the OpenCL / Stream / CUDA runtime executes a copy of the program for each element in the array, using parallel hardware as much as possible.
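
                  A side-by-side sketch of the two styles (illustrative code, using a simple array addition):

                    /* Conventional, task-serial C: one thread steps
                       through the whole array. */
                    for (size_t i = 0; i < n; i++)
                        out[i] = a[i] + b[i];

                    /* Data-parallel OpenCL C: the kernel computes ONE
                       element; the runtime launches one copy per element
                       and maps them onto the parallel hardware. */
                    __kernel void vadd(__global const float *a,
                                       __global const float *b,
                                       __global float *out)
                    {
                        size_t i = get_global_id(0);
                        out[i] = a[i] + b[i];
                    }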

                  Having the runtime take care of parallelism (rather than the application) makes it possible for an application to run on anything from a single-core CPU to a stack of GPUs without recompilation.
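
                  For example (a hedged host-side sketch), the same binary can ask for a GPU and fall back to the CPU without the kernel source changing at all:

                    cl_platform_id plat;
                    cl_device_id dev;
                    clGetPlatformIDs(1, &plat, NULL);
                    if (clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL)
                            != CL_SUCCESS)
                        clGetDeviceIDs(plat, CL_DEVICE_TYPE_CPU, 1, &dev, NULL);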
                  Last edited by bridgman; 01 February 2009, 04:20 PM.


                  • #10
                    bridgman: any thoughts on getting OpenMPI up and running on ATI GPUs, possibly with the help of OpenCL?

                    A lot of scientific projects use MPI. In the FLOSS Weekly interview (http://twit.tv/floss50) with Jeff Squyres of OpenMPI, he mentioned that some NVIDIA guy had confronted him about this.

