IWOCL 2016 Slides Posted To Learn More About The Latest OpenCL Tech


    Phoronix: IWOCL 2016 Slides Posted To Learn More About The Latest OpenCL Tech

    Taking place last month in Vienna was the International Workshop on OpenCL (IWOCL), where much could be learned about this open computing language specification from The Khronos Group...


  • #2
    Developments in OpenCL are interesting in an academic sort of way. Practically, it doesn't much matter what they do as long as vendors conspire not to implement it.



    • #3
      Originally posted by quaz0r View Post
      as long as vendors conspire to not implement it.
      Huh? There's no conspiracy, and there are certainly vendors implementing it. Intel, AMD, and Qualcomm all have implementations of v2.0+. I don't know what version Xilinx and Altera support, but they both had papers at this conference. In general, the new features introduced by each version of the standard correspond to new capabilities in the hardware. So, chip makers often can't support newer standards (2.0, in particular) on old chips.
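
      For anyone who wants to check what their own stack actually reports, a minimal sketch along these lines will print the OpenCL version string each driver advertises per platform and per device. It uses only the standard clGetPlatformInfo/clGetDeviceInfo queries, nothing vendor-specific, with error handling kept to a minimum; build with something like gcc query_cl.c -lOpenCL.

      /* Minimal sketch: report the OpenCL version each installed
       * driver advertises, per platform and per device. */
      #include <stdio.h>
      #include <CL/cl.h>

      int main(void)
      {
          cl_platform_id platforms[16];
          cl_uint num_platforms = 0;

          if (clGetPlatformIDs(16, platforms, &num_platforms) != CL_SUCCESS)
              return 1;

          for (cl_uint p = 0; p < num_platforms; p++) {
              char version[128];
              /* e.g. "OpenCL 2.0 ..." or "OpenCL 1.2 CUDA ..." */
              clGetPlatformInfo(platforms[p], CL_PLATFORM_VERSION,
                                sizeof(version), version, NULL);
              printf("Platform %u: %s\n", p, version);

              cl_device_id devices[16];
              cl_uint num_devices = 0;
              if (clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL,
                                 16, devices, &num_devices) != CL_SUCCESS)
                  continue;

              for (cl_uint d = 0; d < num_devices; d++) {
                  char dev_version[128];
                  /* The device version can be lower than the platform's,
                   * which is the old-chip caveat mentioned above. */
                  clGetDeviceInfo(devices[d], CL_DEVICE_VERSION,
                                  sizeof(dev_version), dev_version, NULL);
                  printf("  Device %u: %s\n", d, dev_version);
              }
          }
          return 0;
      }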

      If you want to point fingers, I think NVidia deserves the most blame, for remaining stuck at v1.2. I'm also disappointed in Apple and Google, for their use of proprietary frameworks (Metal and RenderScript, respectively). They might've been the only ones with enough clout to force NVidia to stay up-to-date.

      AMD and Intel should be funding open source developers to do OpenCL ports of popular HPC libraries and apps. It seems like they (AMD, in particular) took the attitude that if we build it, developers will support it. But CUDA had too much momentum and could add new features more quickly. I was sad to see AMD cave and offer CUDA support, but they were probably getting frozen out of too many opportunities that involved CUDA-only code.
      Last edited by coder; 04 May 2016, 08:03 PM.
