Open-Source OpenCL Adoption Is Sadly An Issue In 2017


  • #11
    Originally posted by schmidtbag View Post
    I too would prefer to see more OpenCL, but it doesn't surprise me why CUDA became more of a success. Consider the following:
    * Even though Intel's support is decent, their GPUs aren't good enough to be worth considering to any major developers. So they don't have much of an impact. Obviously Nvidia isn't going to push for OpenCL, so that just leaves AMD as the primary option.
    That's not true. Intel GPUs are fast enough to provide plenty of speedup vs. CPU. Just because they're not in the same league as dGPUs doesn't mean they're not interesting.

    Originally posted by schmidtbag View Post
    * AMD hardware is pretty great with OpenCL, but drivers are a problem. ... the closed drivers are too picky about which kernel version you're using. This could be a real turnoff to many developers.
    Their new software stack is partly intended to address the issue of limited kernel support.

    Meanwhile, their HIP layer is providing a CUDA transition path that should make it easier to add AMD support for apps and libraries that are currently CUDA-only.
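    To make the HIP point concrete: much of the CUDA-to-HIP transition is mechanical API renaming (cudaMalloc becomes hipMalloc, and so on), which is what AMD's hipify tools automate. As a toy illustration only — this is not the real hipify tool, just a sketch of the renaming pass using a handful of the genuine CUDA-to-HIP name mappings:

```python
import re

# A small, illustrative subset of the real CUDA -> HIP runtime API
# renames that a hipify-style source translator performs.
CUDA_TO_HIP = {
    "cudaMalloc": "hipMalloc",
    "cudaFree": "hipFree",
    "cudaMemcpy": "hipMemcpy",
    "cudaMemcpyHostToDevice": "hipMemcpyHostToDevice",
    "cudaMemcpyDeviceToHost": "hipMemcpyDeviceToHost",
    "cudaDeviceSynchronize": "hipDeviceSynchronize",
}

def toy_hipify(source: str) -> str:
    """Rename CUDA runtime calls to their HIP equivalents (toy version)."""
    for cuda_name, hip_name in CUDA_TO_HIP.items():
        # \b keeps "cudaMemcpy" from matching inside the longer
        # "cudaMemcpyHostToDevice" identifier.
        source = re.sub(r"\b%s\b" % cuda_name, hip_name, source)
    return source

cuda_snippet = "cudaMalloc(&buf, n); cudaMemcpy(buf, h, n, cudaMemcpyHostToDevice);"
print(toy_hipify(cuda_snippet))
# hipMalloc(&buf, n); hipMemcpy(buf, h, n, hipMemcpyHostToDevice);
```

    The real tools also rewrite kernel launches and header includes, but the sketch shows why the porting cost is low for straightforward CUDA code.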

    • #12
      Intel GPUs are great with OpenCL and they get the job done just fine, IMO. They certainly have no issues on Windows, where many programs utilize their computing power. The real problem is the lack of end-user, non-enterprise Linux programs using OpenCL in the first place: many media-creation programs that could use it don't, and won't.

      • #13
        That boils down to the fact that you can't guarantee a working OpenCL implementation on Linux.
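        For context on why that guarantee is hard: on Linux, the OpenCL ICD loader discovers vendor implementations through small `.icd` files in `/etc/OpenCL/vendors`, each naming a driver library. If that directory is empty, `clGetPlatformIDs()` finds nothing. A minimal sanity-probe sketch — the helper name is my own invention, not a standard tool:

```python
import pathlib

def installed_opencl_icds(vendors_dir="/etc/OpenCL/vendors"):
    """Return the driver library names registered with the OpenCL ICD loader.

    Each *.icd file in the vendors directory holds one line naming the
    vendor's OpenCL library (e.g. "libamdocl64.so"). An empty result
    means no implementation is registered on this machine.
    """
    d = pathlib.Path(vendors_dir)
    if not d.is_dir():
        return []
    return sorted(p.read_text().strip() for p in d.glob("*.icd"))

# On a machine with no OpenCL driver installed this prints [],
# which is exactly the "no working implementation" case above.
print(installed_opencl_icds())
```

        Even when an ICD is registered, the library it points at must still match the installed kernel driver — which is where the AMD complaints in this thread come in.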

        • #14
          Clearly, systemd needs to be involved. jk

          • #15
            The vast majority of desktop apps have no need for massively parallel computation, so it's hardly surprising that they don't support OpenCL.

            • #16
              Whatever your feelings about the ecosystem and why things are the way they are, OpenCL is stuck for at least two more years, barring some awesome Clang-based project that makes everything 2.0+ work on both NVIDIA and AMD hardware. Why two more years? To give the industry a chance to adopt OpenCL through Vulkan and compute shaders — though that may warp how we expect things to work as a compute platform. As far as I can tell, through a series of unfortunate events, OpenCL is presently the standard that was anything but, and everybody on the committee has an agenda (reminds me of the US Congress). I honestly think it's a better technology than CUDA, but right now HIP/CUDA are the way to go. AMD and the community just need to figure out a better way to bring in first-class CUDA support (hipify needs to go away).

              • #17
                OpenCL 2.2 is a BIG step forward. But without drivers, an SDK, and the rest of the tooling, it doesn't amount to much.

                P.S. A curiosity: the chair of the OpenCL working group at the Khronos Group is an NVIDIA guy. :-O

                • #18
                  Originally posted by nevion View Post
                  everybody on the committee has an agenda
                  That's pretty typical of industry standards bodies. It would work better if it were driven more by strong users of the technology, like Apple and Google, rather than being mostly vendor-dominated. Sadly, Apple, Google, and Microsoft all have their own proprietary standards, so nobody is forcing the vendors into line or pushing them to deliver conformant implementations in a timely fashion.

                  • #19
                    Well, it's AMD. They have good engineers, but their management is awful, and has been for a while. Creating OpenCL was a great idea, and it even got traction among open-source devs, who were eager to adopt it instead of proprietary CUDA. That even includes some fairly large compute markets, e.g. coin-mining software, where people buy loads of high-end GPUs. But there was one small problem: it works like crap on AMD cards under Linux, which is where the most enthusiastic devs dwell and where miners run their rigs. AMD was "wise" enough to let Clover go right when it had started working, at least somewhat. So they went with ROCm instead — but it SUCm, in that it only supports a handful of the newest cards "for teh greater good", and everyone else is out of luck and will never see anything close to what's advertised on the GPU box under Linux (hardware support that great is usually called a "ripoff").

                    As if that weren't enough, AMD made sure their brand-new GPUs are a pain in the rear on Linux too: they got entangled in the never-ending DC/DAL saga and basically screwed everything up once again. So no, you can't just plug in an AMD GPU and compute — especially on Linux, which is otherwise well suited to batch-mode workloads. Either your GPU is "too old" and unsupported, or it's too new, requires the out-of-tree DAL, and is unsupported once again. It never works right. Needless to say, it's a SAD state of affairs. Maybe AMD management should be awarded an Ig Nobel prize, if applicable, because they're amazingly good at screwing everything up.

                    • #20
                      Of course it's an issue, and AMD is mostly to blame. They created the standard, and it got some traction among open-source devs as an alternative to CUDA, including in rather popular areas like mining rigs, which mostly run Linux. Then, just as Clover barely started working, AMD decided to ditch it — so most AMD hardware will never see proper Linux support for whatever was advertised on its box, because it's "too old". Then they went for ROCm, but they managed to screw that up as well, since the new GPUs got hopelessly entangled in the DAL/DC saga, so you can't expect reasonable out-of-the-box support on Linux there either. It's well below what devs and users could anticipate, to say the least. You can buy a "too old" GPU or a "too new" GPU, but you can't just plug the thing in and get it to compute without weird issues.

                      Isn't it ironic when the company that created a standard utterly fails to support that standard itself, doing a bunch of weird, stupid things instead? The whole story of AMD GPU compute under Linux is one big FACEPALM.
