OpenCL 1.0 Specification Released!

  • OpenCL 1.0 Specification Released!

    Phoronix: OpenCL 1.0 Specification Released!

    Khronos Group has today announced the ratification of the OpenCL 1.0 specification! The 1.0 specification of the Open Computing Language is backed by Apple, AMD, NVIDIA, Intel, and other industry leaders as a new open standard to exploit graphics processors for general-purpose computational needs. What OpenCL 1.0 defines is a C99 programming language with extensions geared for parallel programming, an API for coordinating data and task-based parallel computation across a wide range of heterogeneous processors, numeric requirements based on the IEEE 754 standard, and efficient interoperability with OpenGL, OpenGL ES, and other graphics APIs. The press release announcing the release of the OpenCL 1.0 specification can be found in the Khronos news area. NVIDIA has already announced today as well that the OpenCL 1.0 specification will be added to their GPU computing toolkit...
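
    To give a sense of what that C99-based kernel language looks like, here is a minimal illustrative sketch of a data-parallel kernel; the vec_add name and its arguments are made up for this example, not taken from the specification.

    /* Minimal sketch of an OpenCL kernel in the spec's C99-based language.
     * The vec_add name and its arguments are illustrative only.
     * Each work-item computes one output element in parallel. */
    __kernel void vec_add(__global const float *a,
                          __global const float *b,
                          __global float *c)
    {
        size_t i = get_global_id(0);   /* index of this work-item */
        c[i] = a[i] + b[i];
    }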


  • #2
    Heterogeneous computing FTW!



    • #3
      According to Tim Sweeney, GPGPUs will make graphics programming interfaces obsolete. So how does the adoption of OpenCL affect Linux's capabilities as a gaming platform?

      I realize that it's all up to game developers as to what middleware they pick up, and the Linux Standard Base will probably be a factor too. But would anyone have insights to share about probable and possible outcomes?



      • #4
        Great that the competing companies are using one standard. And that the standard is open, yeah, that's great!



        • #5
          Originally posted by OpenCL press release
          OpenCL is being created by the Khronos Group with the participation of many industry-leading companies and institutions including [...] Takumi [...].
          Are they talking about Takumi Corporation, developer of e.g. Mars Matrix and Gigawing? I don't know of any other organization called "Takumi", let alone one that might have a compelling interest in OpenCL, but it's sort of strange seeing that name there without Taito, Capcom, Konami, or other such heavyweights of the Japanese arcade industry...



          • #6
            Originally posted by Pahanilmanlintu View Post
            According to Tim Sweeney, GPGPUs will make graphics programming interfaces obsolete. So how does the adoption of OpenCL affect Linux's capabilities as a gaming platform?
            Well I don't know what Tim Sweeney said or whatever.

            But on all modern video cards, acceleration is all software. There isn't any real 'OpenGL hardware engine' or anything like that anymore. Now everything is 'GPUs'. GPUs are, more and more, just general-purpose processors whose design is very highly optimized for the sort of workloads you see with graphics. Highly parallel and that sort of thing.

            So 'drivers' are a mixture of an OpenGL or DirectX software rendering stack that is tailored for that specific GPU, plus the normal commands and controls for settings on the video card (like mode setting).

            Not like I am an expert or anything like that. This is just my understanding.

            It's probably not quite like that today... but this is the trend.

            With things like Intel's Larrabee and AMD's Fusion you're going to see GPUs become more generalized and gain standardized interfaces.

            A standardized interface for hardware processing is called an 'ISA' (at least that's one term for it). There are the x86 ISA, AMD64 ISA, PowerPC ISA, etc. What this is, is a standard set of interfaces the processor must support in order to be compatible with software.

            A modern Intel x86 processor, for example, is VERY different from the old CISC designs. The actual part of the CPU that does the processing is in fact a high speed RISC processor, but there is a great deal of hardware logic to turn the x86 CISC-style commands into something that the modern CPU can deal with.

            This is how Intel is able to maintain machine code-level compatibility back into DOS days.

            So the future seems to be an ISA for GPU hardware. Right now the code interfaces and whatnot are very specific to a particular generation of GPUs and a particular manufacturer. Having a GPU ISA means that code optimized to run on one generation of graphics acceleration will run on the next.

            As you know, there are discussions underway about the scalability of processors and hardware. People are figuring out that the practical limit for the number of processing cores on a single chip is going to be around 16 or so. Beyond that you run into severe packaging issues... there simply aren't enough pins to provide enough memory bandwidth to a processor with 32 cores or whatever.

            (Sure, there are Linux machines right now with 512 processors, but that is spread out over a NUMA architecture, not 512 cores on a single processor.)

            So it seems to me the logical way is to have cores optimized for different workloads. So you have some cores for x86 compatibility, but other cores for doing Disk I/O and Network I/O and other cores for GPU-style workloads.

            And that, it seems to me, is where OpenCL comes in.

            Originally posted by Pahanilmanlintu View Post
            I realize that it's all up to game developers as to what middleware they pick up, and the Linux Standard Base will probably be a factor too. But would anyone have insights to share about probable and possible outcomes?

            Well hopefully if OpenCL takes off then Linux will have a standardized way to do graphics acceleration and accelerated video playback without resorting to vendor-specific APIs.

            So let's say FFmpeg decides they want very fast encoding and decoding for media playback by taking advantage of GPUs. Right now there are specific APIs that differ based on whether you're using a VIA system vs. Nvidia vs. AMD vs. Intel (well, it only really exists for Nvidia on Linux, and requires the proprietary driver). With OpenCL they should be able to code it once and not worry too much about card specifics.
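
            As a rough illustration of that "code it once" idea, here is a hedged sketch against the OpenCL 1.0 host API (assuming the standard Khronos CL/cl.h header; error handling trimmed): the same calls find a GPU regardless of which vendor's platform sits underneath.

            #include <stdio.h>
            #include <CL/cl.h>   /* standard Khronos header */

            int main(void)
            {
                cl_platform_id platform;
                cl_device_id device;
                char name[256];

                /* Ask the runtime for whatever platform and GPU it exposes;
                 * no vendor-specific code path is involved. */
                clGetPlatformIDs(1, &platform, NULL);
                clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);
                clGetDeviceInfo(device, CL_DEVICE_NAME, sizeof(name), name, NULL);
                printf("OpenCL device found: %s\n", name);

                /* From here the same kernels, built with clBuildProgram,
                 * would run on whichever device was found. */
                return 0;
            }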



            • #7
              Originally posted by drag View Post
              A modern Intel x86 processor, for example, is VERY different from the old CISC designs. The actual part of the CPU that does the processing is in fact a high speed RISC processor, but there is a great deal of hardware logic to turn the x86 CISC-style commands into something that the modern CPU can deal with.

              This is how Intel is able to maintain machine code-level compatibility back into DOS days.
              This is what worries me a little about Larrabee. I'm certainly no expert, but what I've read about x86 is that it's not the most efficient architecture out there, yet Intel is pushing for it to become the standard ISA for graphics & stream processing as well as general computing. Even if x86 is easier to code for today, wouldn't it be better in the long run if a better architecture was chosen? If we're essentially starting fresh anyway, why waste valuable die space converting x86 commands to RISC when we can just code in RISC in the first place?



              • #8
                What I'm wondering about: how does this work exactly? Is there some kind of scheduler for these programs? Or will one OpenCL program 'claim' the entire GPU exclusively?

                If there were some kind of scheduling, wouldn't the driver have to replicate a very large part of the Linux kernel's functionality?



                • #9
                  Originally posted by chaos386 View Post
                  This is what worries me a little about Larrabee. I'm certainly no expert, but what I've read about x86 is that it's not the most efficient architecture out there, yet Intel is pushing for it to become the standard ISA for graphics & stream processing as well as general computing. Even if x86 is easier to code for today, wouldn't it be better in the long run if a better architecture was chosen? If we're essentially starting fresh anyway, why waste valuable die space converting x86 commands to RISC when we can just code in RISC in the first place?
                  Well, despite the baggage, Intel and AMD are able to make their systems outperform PowerPC by a wide margin, price-wise. Most problems I see are with very low-power CPUs, a space that is now dominated by ARM-style CPUs.

                  So I don't think it's nearly as big of a deal as people make it out to be.


                  What I'm wondering about: how does this work exactly? Is there some kind of scheduler for these programs? Or will one OpenCL program 'claim' the entire GPU exclusively?

                  If there were some kind of scheduling, wouldn't the driver have to replicate a very large part of the Linux kernel's functionality?

                  Well, this is probably going to be driver-specific for the time being, until the GPGPU folks figure out how to create a new architecture and standardized ISA so that Linux can simply treat it as a new type of computer.

                  A huge part of it, for Linux and open source, will be Gallium.

                  Gallium is, essentially, a modularized DRI2 driver. Mesa-derived DRI-style drivers currently only support OpenGL and are extremely complex. HOWEVER, only a relatively small part of a driver of that type is actually very hardware-specific. So Gallium's goal is to talk to the in-kernel DRM driver via the DRI2 interfaces, separate out the hardware-specific portions of the driver into a winsys (I think) portion, and then add support for running many different types of APIs.

                  So how much of your GPU is consumed by running OpenCL will depend on the Linux kernel's scheduling and memory management facilities. Hence the need for GEM-type advanced memory management in the DRM driver.

                  The Linux kernel should be able to effectively manage applications' time slices on the GPU and manage video memory access, balancing everything else.

                  This leads to quite a bit more overhead than just running a pure graphics or pure compute workload, but allows for multitasking and whatnot.
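
                  To make that concrete, here is a hedged sketch (mine, not from any spec or driver) of how an application submits work through the OpenCL 1.0 host API: it only enqueues commands on its own queue, and when they actually run relative to other users of the card is up to the runtime/driver, and in the open source stack ultimately the kernel DRM/GEM code. The run_once name and its parameters are hypothetical.

                  #include <CL/cl.h>

                  /* The kernel and buffers are assumed to have been created
                   * elsewhere; the point is that the application only enqueues
                   * work, it never "owns" the GPU. */
                  static void run_once(cl_context ctx, cl_device_id dev,
                                       cl_kernel kernel, cl_mem in, cl_mem out,
                                       size_t n_items)
                  {
                      cl_command_queue queue =
                          clCreateCommandQueue(ctx, dev, 0, NULL);

                      clSetKernelArg(kernel, 0, sizeof(cl_mem), &in);
                      clSetKernelArg(kernel, 1, sizeof(cl_mem), &out);

                      /* Asynchronous: this returns once the command is queued,
                       * not once the GPU has executed it; scheduling it against
                       * other work is the driver's job. */
                      clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &n_items,
                                             NULL, 0, NULL, NULL);

                      clFinish(queue);   /* block until the queued work is done */
                      clReleaseCommandQueue(queue);
                  }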



                  -----------------------

                  Somewhat perversely, I expect that one of the reasons AMD and Intel have taken such an interest in making sure that Linux has open source drivers is to provide an effective clustering and workstation OS for taking advantage of their new GPGPU-style platforms.

                  -------------------------------


                  Of course it's important to keep in mind that Nvidia and Apple are the ones spearheading this OpenCL stuff, and Linux is far behind (and Nvidia gets significant financial benefits from high-end graphics stuff on Linux).

                  I am hoping, however, that Linux devs will be able to take aggressive advantage of this sort of thing as soon as the Gallium stuff starts working out.
                  Last edited by drag; 09 December 2008, 06:47 PM.



                  • #10
                    Ars Technica has a nice little write-up on OpenCL today.
