Open-Source OpenCL Adoption Is Sadly An Issue In 2017


  • Open-Source OpenCL Adoption Is Sadly An Issue In 2017

    Phoronix: Open-Source OpenCL Adoption Is Sadly An Issue In 2017

    While most of the talks at the annual X.Org Developers' Conference are around the exciting progress being made across the Linux graphics landscape, at XDC2017 taking place this week at Google, the open-source GPGPU / compute talk was rather a letdown due to the less-than-desirable state of the open-source OpenCL ecosystem...

    http://www.phoronix.com/scan.php?pag...7-OpenCL-GPGPU

  • #2
    >On the tooling side,

    Is there anything about Jan Hubicka and Martin Jambor and their work? It's the anniversary today:


    An Early Port Of GCC To AMD's GCN Architecture
    Written by Michael Larabel in Compiler on 22 September 2016
    https://www.phoronix.com/scan.php?pa...-AMD-GCN-Early
    SUSE Developers Publish Radeon GCN Backend Code For GCC Compiler
    Written by Michael Larabel in Radeon on 16 March 2017

    This GCN back-end for GCC is primarily focused on compute capabilities rather than compiling graphics shaders.
    https://www.phoronix.com/scan.php?pa...For-GCC-Branch

    No mention of Cauldron 2017, which dealt with:
    AMD Graphics Core Next
      • New and clean instruction set used by AMD GPUs
      • First specification released in 2011, Generation 3 in 2015
      • Primarily designed for graphics cards, but with parallel computation in mind
      • GCN code generator is needed to complete the HSA infrastructure (currently we rely on a proprietary finalizer)


    Similar to traditional CPUs (unlike its predecessors)!!!



    https://gcc.gnu.org/wiki/cauldron201...GCC+to+GCN.pdf

    Or will it be discussed at SUSECON 2017?

    PRAGUE, CZECHIA

    25–29 September 2017
    https://www.susecon.com/



    • #3
      I too would prefer to see more OpenCL, but it doesn't surprise me why CUDA became more of a success. Consider the following:
      * Even though Intel's support is decent, their GPUs aren't good enough to be worth considering to any major developers. So they don't have much of an impact. Obviously Nvidia isn't going to push for OpenCL, so that just leaves AMD as the primary option.
      * AMD hardware is pretty great with OpenCL, but drivers are a problem. The Windows drivers are decent, but there isn't much demand for GPU compute in Windows. There could be better demand in Linux, but the open-source drivers are too incomplete. Meanwhile, the closed drivers are too picky about which kernel version you're using. This could be a real turnoff to many developers.
      * Nvidia, for obvious reasons, seems to intentionally cripple OpenCL performance versus running similar tasks in CUDA. Not only is Nvidia hardware more common (which already makes CUDA the more obvious choice), but this can lead misinformed people to conclude that CUDA is hands-down the better API, which arguably it isn't.
      * Most hardware currently in use isn't compatible with OpenCL 2.0+ (though most/all new hardware is). If developers are going to invest in a technology they aren't currently using, they may as well use the latest version, but their existing hardware might not support it.

      I really want to see more OpenCL - I find it really exciting, and it's the only way to really increase productivity without the need for expensive hardware. But I can see why it isn't doing so great.



      • #4
        Running OpenCL over Vulkan could potentially solve a lot of the issues we have in the graphics stack right now; it would be great to see some improvements in that area.



        • #5
          The fact that libraries like TensorFlow currently run only on CUDA means the ecosystem is heavily tilted towards Nvidia. This wiki page shows how far ahead CUDA is: https://en.wikipedia.org/wiki/Compar...rning_software I hope the open-source ecosystem (kernel + GCC, LLVM) eventually helps AMD catch up, and fast. Another generation or two and Nvidia will be charging $250 for a CUDA-enabled card. It's disabled on the current GT1030, and I wonder why.



          • #6
            The problem is the lack of development tools.
            CodeXL is very good for debugging and optimizing programs, but I still haven't managed to find equivalent tools for Intel and Nvidia on Linux.



            • #7
              So sad! It's really trivial to use OpenMP in C/C++ code. So many programs could get amazing low-hanging-fruit speed boosts with minimal effort.

              Just for fun, download the source code of your favorite C/C++ workhorse library and spend 30 minutes OpenMP-optimizing some for loops. I got a 100x speedup on one (with my GTX 980).
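              To make the suggestion above concrete, here is a minimal sketch of the kind of loop parallelization being described (the array contents and sizes are illustrative, not from any particular library). With GCC or Clang, compile with -fopenmp; without that flag the pragmas are simply ignored and the code runs serially with the same result:

              ```c
              #include <stdio.h>

              int main(void) {
                  enum { N = 1000000 };
                  static double a[N];
                  double sum = 0.0;

                  /* Independent iterations: one pragma spreads the loop
                   * across all available CPU cores. */
                  #pragma omp parallel for
                  for (int i = 0; i < N; i++)
                      a[i] = (double)i * 0.5;

                  /* Accumulations need a reduction clause so each thread
                   * keeps a private partial sum that is combined at the end. */
                  #pragma omp parallel for reduction(+:sum)
                  for (int i = 0; i < N; i++)
                      sum += a[i];

                  printf("%.1f\n", sum);
                  return 0;
              }
              ```

              Note the second loop: naively parallelizing a loop that writes to a shared variable is a data race, which is why the reduction clause exists; that is the main "gotcha" in the 30-minute exercise described above.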



              • #8
                It's not the lack of libraries, but the abysmal state of AMD's OpenCL stack, which in turn gives Nvidia no reason to bother, as they will always push CUDA.



                • #9
                  Well, no duh. We could all have guessed that one. Clover is dead. Beignet is just as good as the proprietary stack. And because AMD mixes different chip generations each year, ROCm only works on some of AMD's hardware.



                  • #10
                    Originally posted by audi100quattro
                    Libraries like Tensorflow currently only running on CUDA means the ecosystem is heavily tilted towards Nvidia. This wiki page shows how far ahead CUDA is: https://en.wikipedia.org/wiki/Compar...rning_software
                    The answer to cuDNN is MIOpen.

                    https://gpuopen.com/compute-product/miopen/

                    To use it, you need ROCm support. So, we just need that to get upstreamed, and hopefully the various forks of deep learning frameworks that are already using MIOpen get merged into the trunks of their respective projects.

                    You can see their forks of Caffe, Caffe2, TensorFlow, etc. on their GitHub page:

                    https://github.com/ROCmSoftwarePlatform

                    Originally posted by audi100quattro
                    Another generation or two and Nvidia will be charging $250 for a CUDA enabled card. It's disabled on the current GT1030, and I wonder why.
                    That's completely bizarre. They should want CUDA to run on all their cards, so that the user base is as large as possible and casual developers can get their feet wet. The 1030 is so weak that there's already plenty of reason to upgrade for those wanting more performance. There's no need to artificially kneecap it.

                    Anyway, I checked and it's not listed here:

                    https://developer.nvidia.com/cuda-gpus


                    Maybe the GP108 has some hardware bug.
                    Last edited by coder; 09-22-2017, 09:53 PM.

