OpenCL Support In GCC?


  • OpenCL Support In GCC?

    Phoronix: OpenCL Support In GCC?

    In early December the OpenCL specification was unveiled. OpenCL is an open framework, initially conceived by Apple, for extending the power of graphics processors to better handle GPGPU computing in a unified way. Both ATI/AMD and NVIDIA are working on bringing Open Computing Language support to their proprietary Linux drivers, while on the open-source side no work has yet begun to integrate the support within Mesa...

    http://www.phoronix.com/vr.php?view=NzAzMA

  • #2
    If I want to write an application that uses OpenCL and compile that application with GCC, does GCC need to support OpenCL?

    • #3
      Only if the OpenCL implementation in the drivers makes use of gcc to parse the C99 code and feed it into the lower-level compiler stack. Normally the app makes OpenCL calls to compile the compute kernel, and the driver can use whatever C99 implementation it wants.

      Note that the driver itself could be *compiled* with gcc and that would still not require OpenCL support in gcc. The only thing that matters is what the driver does when an OpenCL app says "Compile This!!".
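
      To make that concrete, here is a minimal sketch of the host-side calls (standard OpenCL API; context/device setup and error handling are omitted, and the trivial kernel is just an illustration). The driver's own compiler runs inside clBuildProgram; gcc would only be involved if the vendor happened to build its compiler stack on it:

          #include <CL/cl.h>

          /* Kernel source is handed to the driver as a plain string. */
          static const char *src = "__kernel void noop(void) { }";

          void build_example(cl_context ctx, cl_device_id dev)
          {
              cl_int err;
              cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, &err);

              /* The "Compile This!!" moment: the driver's own compiler
                 (not necessarily gcc) turns the C99-like source into GPU code. */
              err = clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
              cl_kernel kernel = clCreateKernel(prog, "noop", &err);

              clReleaseKernel(kernel);
              clReleaseProgram(prog);
          }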
      Last edited by bridgman; 02-01-2009, 10:44 AM.

      • #4
        I have seen OpenCL Python bindings and OpenCL C-like code. In both cases, is all the code executed on the CPU during the entire process, or is the code compiled into GPU bytecode that is loaded onto the GPU, so that the GPU executes the OpenCL code?

        There are pointers in the C-like OpenCL language. Does that mean that OpenCL can see and modify all of the GPU's memory?

        • #5
          Good point - there may be some work (libs, bindings, etc.) required on the gcc side in order to simply *use* OpenCL. I wasn't thinking about that side of it (driver-centric world view, I guess).

          The kernel program provided by the OpenCL app would normally be run on GPU, DSP, Cell, whatever -- but I expect many implementations will also allow execution on the CPU.
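
          For example (a rough sketch, assuming a single OpenCL platform), the app just asks for a device type at setup time, so falling back from GPU to CPU is a one-liner:

              #include <CL/cl.h>

              /* Prefer a GPU, but fall back to a CPU device if the
                 implementation provides one. */
              cl_device_id pick_device(cl_platform_id platform)
              {
                  cl_device_id dev;
                  if (clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &dev, NULL) != CL_SUCCESS)
                      clGetDeviceIDs(platform, CL_DEVICE_TYPE_CPU, 1, &dev, NULL);
                  return dev;
              }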

          I imagine that pointers could access the entire GPU address space unless blocked by memory management or by checking code generated by the compiler, but presumably the main purpose of them is to navigate predefined data structures.

          • #6
            Originally posted by bridgman View Post
            The kernel program provided by the OpenCL app would normally be run on GPU, DSP, Cell, whatever -- but I expect many implementations will also allow execution on the CPU.
            So a GPU can have processes running? Does that mean someone could make a "ps" and "top" for GPU processes?

            It would be so cool to have a Gnome/KDE GPU load monitor

            It sounds quite complex that the OpenCL kernel will run on the GPU while the Mesa OpenCL driver runs on the CPU.

            Does that mean that if the Mesa OpenCL driver crashes, the GPU will continue to do its OpenCL calculations?

            Originally posted by bridgman View Post
            I imagine that pointers could access the entire GPU address space unless blocked by memory management or by checking code generated by the compiler, but presumably the main purpose of them is to navigate predefined data structures.
            Quite cool feature

            • #7
              Running a kernel under OpenCL is pretty similar to running a pixel shader program under OpenGL -- the app says "for every pixel run this program", then throws triangles or rectangles at the GPU. The GPU then runs the appropriate shader program on every pixel, and on modern GPUs that involves running hundreds of threads in parallel (an RV770 can execute 160 instructions in parallel, each doing up to 5 floating point MADs, or 10 FLOPs per instruction).

              The per-pixel output from the shader program usually goes to the screen, but it could go into a buffer which gets used elsewhere or read back into system memory. The Mesa driver runs on the CPU but the shader programs run on the GPU.

              Same with OpenCL; driver runs on the CPU but a bunch of copies of the kernel run in parallel on the GPU. The key point is that the GPU is only working on one task at a time, but within that task it can work on hundreds of data items in parallel. That's why GPUs are described as data-parallel rather than task-parallel.
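
              In host-code terms that "bunch of copies" is a single enqueue call, with the element count as the global work size (a hedged sketch; the queue and kernel are assumed to be created already):

                  #include <CL/cl.h>

                  /* Ask the runtime to run one copy of the kernel per data
                     element; the driver maps the n work-items onto the GPU's
                     parallel hardware. */
                  void run_over_array(cl_command_queue queue, cl_kernel kernel, size_t n)
                  {
                      size_t global_size = n;
                      clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &global_size,
                                             NULL, 0, NULL, NULL);
                  }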

              The data-parallel vs task-parallel distinction is also why the question of "how many cores does a GPU have?" is so tricky to answer. Depending on your criteria, an RV770 can be described as single-core, 10-core, 160-core or 800-core. The 10-core answer is probably most technically correct, while the 160-core answer probably gives the most accurate idea of throughput relative to a conventional CPU.

              Anyways, since a GPU fundamentally works on one task at a time and the driver time-slices between different tasks, it should be possible to hook into the driver and track what percentage of the time is being used by each of the tasks. That hasn't been useful in the past (since all the GPU workload typically comes from whatever app you are running at the moment), but as we start juggling multiple tasks on the GPU that will probably become more important (and more interesting to watch).
              Last edited by bridgman; 02-01-2009, 03:08 PM.

              • #8
                My head just exploded.

                • #9
                  It's supposed to hurt. That means you're starting to understand. Congratulations

                  Ever since the introduction of programmable shaders GPU drivers have included an on-the-fly compilation step (going from, say, GLSL to GPU shader instructions) and the GPU hardware has run many copies of those compiled shader programs in parallel to get acceptable performance.

                  GPU vendors did a good job of hiding that complexity from the application -- but with OpenCL you get to see all the scary stuff behind the scenes.

                  Back in 2002 the R300 (aka 9700) was running 8 copies of the pixel shader program in parallel, each working on a different pixel. The RV730 is comparable in terms of pixel throughput but can run 64 copies of a shader program in parallel, ie the ratio of shader power to pixel-pushing power is 4-8 times higher on the RV730. This is why modern chips can run so much *faster* on complex 3D applications even if they run *slower* on glxgears.

                  Unified shader GPUs use multiple shader blocks in order to handle the mix of vertex, geometry and pixel shader work that comes with a single drawing task. In principle the blocks could be designed to work on totally different tasks but that would require a lot more silicon (more $$) and the added complexity would probably *reduce* overall performance.

                  The most important concept to grasp is that with conventional programming you have a single task, executing a program which steps through an array and calculates the results for each element. With data-parallel programming you write a program that calculates the value of ONE element, then the OpenCL / Stream / CUDA runtime executes a copy of the program for each element in the array, using parallel hardware as much as possible.

                  Having the runtime take care of parallelism (rather than the application) makes it possible for an application to run on anything from a single-core CPU to a stack of GPUs without recompilation.
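
                  As a toy illustration of the difference (my example, not from the spec): the conventional version loops over the array, while the OpenCL C kernel computes ONE element and leaves the looping to the runtime:

                      #include <stddef.h>

                      /* Conventional, task-serial C: one thread walks the array. */
                      void scale_serial(const float *in, float *out, size_t n)
                      {
                          for (size_t i = 0; i < n; i++)
                              out[i] = 2.0f * in[i];
                      }

                      /* Data-parallel OpenCL C (lives in the kernel source the
                         driver compiles): each copy handles the single element
                         whose index the runtime hands it. */
                      __kernel void scale(__global const float *in,
                                          __global float *out)
                      {
                          size_t i = get_global_id(0);
                          out[i] = 2.0f * in[i];
                      }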
                  Last edited by bridgman; 02-01-2009, 03:20 PM.

                  • #10
                    bridgman: any thoughts on getting OpenMPI up and running on ATI GPUs, possibly with the help of OpenCL?

                    A lot of scientific projects use MPI. In the Floss Weekly interview (http://twit.tv/floss50) with Jeff Squyres of OpenMPI, he mentioned that some Nvidia guy had confronted him about this.

                    • #11
                      Originally posted by bridgman View Post
                      Running a kernel under OpenCL is pretty similar to running a pixel shader program under OpenGL -- the app says "for every pixel run this program" [...]
                      Okay, so a pixel shader is more or less an infinite while-loop?

                      So with OpenCL in play, does that mean that the OpenGL and OpenCL drivers schedule whose turn it is to get data processed, since the GPU can only handle one task at a time?

                      Let's say I write an OpenCL program that simulates a flow. Is that program the kernel for the GPU? Or is the kernel something Mesa would write to intercept my flow simulation program?

                      How many kernels can the GPU have running?

                      • #12
                        Originally posted by bridgman View Post
                        It's supposed to hurt. That means you're starting to understand. [...]
                        I am beginning to get the feeling that GPU vendors start off by shooting down a UFO, and use its technology to make GPUs.

                        I hope you don't have small green antennas

                        • #13
                          Originally posted by Louise View Post
                          Okay, so a pixel shader is more or less an infinite while-loop?
                          Sort of.. more like one pass through the loop, but with many copies running in parallel each on a separate piece of the answer.

                          Originally posted by Louise View Post
                          So if we have OpenCL into play, does that mean, that the OpenGL and OpenCL driver schedule which turn it is to get data processed, as the GPU only can handle one task at a time?
                          Yep. This is how it works today when both the X driver and the Mesa driver want to use the chip. The drm driver arbitrates between multiple clients and uses a lock so that only one of them can have the GPU at a time.

                          Originally posted by Louise View Post
                          Let's say I write a OpenCL program that simulates a flow. Is that program the kernel for the GPU? Or is the kernel something Mesa would write to intercept my flow simulation program?
                          Can I use a simpler example 'cause I slept through too many physics classes? Let's say we have a couple of arrays, and we want to run a complex program against those arrays to create a third array. The kernel would contain the code required to generate the result for one element of that third array... then OpenCL would run a separate copy of that program for each element in the third array, passing it the appropriate parameters so that the proper portions of the first two arrays would be used in the calculation.

                          You would write the kernel program in C, following the OpenCL guidelines. The OpenCL runtime (mostly the driver) would compile that program on the fly to hardware-specific instructions (see the r600_isa doc for details) and then run a bazillion copies of that compiled program in parallel.
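
                          For the two-array example above, the kernel source could look something like this (a hypothetical sketch; the actual math is whatever your problem needs):

                              /* Computes ONE element of the result array c; the
                                 OpenCL runtime runs a separate copy of this for
                                 every element index. */
                              __kernel void combine(__global const float *a,
                                                    __global const float *b,
                                                    __global float *c)
                              {
                                  size_t i = get_global_id(0);
                                  c[i] = a[i] * b[i];   /* placeholder calculation */
                              }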

                          Originally posted by Louise View Post
                          How many kernels can the GPU have running?
                          It varies from chip to chip, but I think it's in the "thousands" range. It depends a bit on how complex the kernel program is, specifically how many different registers it uses. The GPU is built around a big honkin' register file -- the more registers required by an individual thread, the fewer threads you can run at the same time.

                          The GPU won't actually execute thousands of threads at the same time; many threads may be waiting for memory accesses and so the hardware scheduler runs only the threads which are not waiting for anything. The RV770 has enough stream processors to actually *execute* 160 threads at a time (10 16-way SIMD blocks), with each thread performing up to 5 floating point instructions per clock. That's where the 1.2 teraflop number comes from -- 160 threads x 5 operations per thread per clock x 2 FLOPs per operation (Multiply-Add) x 750 MHz clock rate.

                          Originally posted by Louise View Post
                          I am beginning to get the feeling that GPU vendors start off by shooting down an UFO, and use their technology to make GPU's.
                          It's possible, but nobody is talking. The Terminator explanation seems more believable to me.

                          Originally posted by Louise View Post
                          I hope you don't have small green antennas
                          No, but I'm not the hardware designer
                          Last edited by bridgman; 02-01-2009, 06:16 PM.

                          • #14
                            Originally posted by bridgman View Post
                            Sort of.. more like one pass through the loop, but with many copies running in parallel each on a separate piece of the answer.
                            Reading your entire answer I think I understand now

                            Originally posted by bridgman View Post
                            Can I use a simpler example 'cause I slept through too many physics classes ? Let's say we have a couple of arrays, and we want to run a complex program against those arrays to create a third array. The kernel would contain the code required to generate the result for one element of that third array... then OpenCL would run a separate copy of that program for each element in the third array, passing it the appropriate parameters so that the proper portions of the first two arrays would be used in the calculation.
                            Excellent. So my program becomes the kernel.

                            On Windows there is a lot of malware, e.g. the proof-of-concept Blue Pill virtual machine, which is very hard to detect while it is running.

                            So I am just thinking about security. Should malware running in the GPU be a concern for Windows users?

                            Originally posted by bridgman View Post
                            You would write the kernel program in C, following the OpenCL guidelines. The OpenCL runtime (mostly the driver) would compile that program on the fly to hardware-specific instructions (see the r600_isa doc for details) and then run a bazillion copies of that compiled program in parallel.
                            If only it worked the same way for CPUs

                            Originally posted by bridgman View Post
                            It varies from chip to chip, but I think it's in the "thousands" range. It depends a bit on how complex the kernel program is, specifically how many different registers it uses. The GPU is built around a big honkin' register file -- the more registers required by an individual thread, the fewer threads you can run at the same time.
                            It is impressive how much technology the customer gets for only $100 nowadays! What a great time we are living in!

                            Originally posted by bridgman View Post
                            It's possible, but nobody is talking. The Terminator explanation seems more believable to me.
                            Scary

                            Originally posted by bridgman View Post
                            No, but I'm not the hardware designer
                            So the surviving aliens are only used for designing the hardware?

                            I guess that justifies it

                            • #15
                              Originally posted by Louise View Post
                              So I am just thinking about security. Should malware running in the GPU be a concern for Windows users?
                              It should be a concern for all users, but since GPUs can't really have long-running processes on them (yet) the main concern is malware running on the CPU but using the GPU to gain access to areas of memory which are blocked for CPU access. There are a number of safeguards in place (in all OSes, not just Windows) to prevent this.

                              One of the recurring debates on both #radeon and #dri-devel is exactly how much checking the drm should do before passing a command buffer to the GPU, e.g. validating memory addresses, register offsets, etc.
