
Blender's "Cycles X" Showing Nice Performance But Dropping OpenCL Support


  • #21
    Originally posted by Sdar View Post
    OpenCL was killed by AMD because their compiler was a buggy piece of ####, complex kernels just didn't work, so why use opencl if it's not going to work on amd anyway?
    Imagine being AMD at that time. Your direct competitor has something like 80% market share and does not support a specific API version people would need for basic needs. Yeah, sure. You put ALL your resources into THAT API because you think that when you finally support it, everyone will ditch the market leader and move to your GPU.

    Yeah... sure...

    Comment


    • #22
      Originally posted by Jumbotron View Post
      And to think coder and others berated me when I rightfully said on another thread that the market had rejected OpenCL for OneAPI and CUDA.
      Don't misquote me. What I said is that some hardware and platform vendors rejected it.

      There was a ton of interest, but users can only put up with so much. If you read what LightPathVertex quoted, that's exactly what happened here. They couldn't continue, more for lack of vendor commitment than anything else.

      Originally posted by Jumbotron View Post
      AMD at this point should just immediately and wholesale abandon ROCm and move to OneAPI.
      This just shows that you don't even understand what ROCm is! If AMD implemented oneAPI, it would be atop ROCm.

      Originally posted by Jumbotron View Post
      ROCm at this point is AMD's 3dNow! of GPU compute.
      If you're trying to make yourself look like an idiot, you're not doing a half-bad job.

      Comment


      • #23
        Originally posted by GruenSein View Post
        Just to make sure I haven't misunderstood anything: They are voluntarily going proprietary only when it comes to hardware acceleration?
        Hopefully they would at least start on HIP so that they can test and run on AMD and NVidia from the start, and that would also get them pretty close to being ready for Intel.
        Last edited by bridgman; 23 April 2021, 04:37 PM.

        Comment


        • #24
          Originally posted by -MacNuke- View Post

          The main Cycles developers are NVidia fanboys. So no surprise here. It was always "we implement it in CUDA, the rest is up to other people".
          They can't support AMD/Intel when those vendors' GPU compute stacks either don't work at all or make no sense to use.

          Comment


          • #25
            Originally posted by tildearrow View Post

            OpenCL was killed by Apple because they decided to deprecate both CL and GL in macOS Mojave, plus it was heavy and hard to set up...
            OpenCL is a DOA standard: painfully clunky to use compared with CUDA.

            For example:
            To initialize CUDA:
            Code:
            cudaSetDevice()
            To initialize OpenCL: there is not enough space in the margin.
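For the curious, a rough sketch of the boilerplate being joked about — assuming a single GPU and OpenCL 1.x-style host APIs, with all error checking omitted:

```c
#include <CL/cl.h>

void init_opencl(void) {
    /* CUDA equivalent is a single call: cudaSetDevice(0); */

    /* OpenCL: enumerate a platform, pick a device, then build a
       context and a command queue before any kernel can run. */
    cl_platform_id platform;
    cl_device_id   device;
    cl_int         err;

    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);

    cl_context       ctx   = clCreateContext(NULL, 1, &device,
                                             NULL, NULL, &err);
    cl_command_queue queue = clCreateCommandQueue(ctx, device, 0, &err);

    /* ...and a real program still has to load, build, and check the
       kernel source at runtime before launching anything. */
    (void)queue;
}
```

Even this is the short version; production code also walks multiple platforms and devices and checks every `err`.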

            Comment


            • #26
              Originally posted by zxy_thf View Post
              To initialize OpenCL: there is not enough space in the margin.
              That's what the C++ header is for.



              Anyway, if you're doing anything substantial, a bit of initialization code is nothing to sweat.
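A sketch of what the C++ bindings buy you — assuming the Khronos `CL/opencl.hpp` header and a usable default device:

```cpp
#include <CL/opencl.hpp>

int main() {
    // The C++ bindings lazily construct a default platform, device,
    // and context, collapsing the C boilerplate into a couple of lines.
    cl::Context      context = cl::Context::getDefault();
    cl::CommandQueue queue   = cl::CommandQueue::getDefault();

    // From here, buffers and kernels can use the defaults directly.
    cl::Buffer buf(context, CL_MEM_READ_WRITE, 1024);
    return 0;
}
```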
              Last edited by coder; 23 April 2021, 05:51 PM.

              Comment


              • #27
                Originally posted by coder View Post
                Don't misquote me. What I said is that some hardware and platform vendors rejected it.

                There was a ton of interest, but users can only put up with so much. If you read what LightPathVertex quoted, that's exactly what happened here. They couldn't continue, more for lack of vendor commitment than anything else.


                This just shows that you don't even understand what ROCm is! If AMD implemented oneAPI, it would be atop ROCm.


                If you're trying to make yourself look like an idiot, you're not doing a half-bad job.
                Well, I do agree ROCm and oneAPI sound nice on paper, but both are a PoS at the implementation level, and this is the main reason they won't even get near CUDA for a long, long time.

                I hate nVidia as much as the next guy, and I've been an AMD-only guy since the first GCN GPUs, but after dealing with ROCm I lost all hope in the near term. That PoS barely works on a few GPUs with the proprietary drivers, because building it from source is nearly impossible, and it's as stable as a house of cards in a tornado. And oneAPI is a mess: to this day, even building oneTBB is a whole challenge on its own.

                And this is where CUDA's design shines: the API is really neat and the installation is extremely simple. The only way I see Intel and AMD having something to compete with nVidia on compute is using NIR compute on Mesa and implementing the APIs on top as state trackers (CUDA, CL, SYCL, etc.), but they both seem on a war path atm, and somehow each needs to reinvent the wheel from scratch with their own API brand.

                Comment


                • #28
                  Originally posted by coder View Post
                  That's what the C++ header is for.
                  Anyway, if you're doing anything substantial, a bit of initialization code is nothing to sweat.
                  The initialization is just the beginning; almost every design decision in OpenCL makes it harder to learn and program.
                  Another example: cudaMemcpy vs. clEnqueueCopyBuffer (and this time the header file won't save you).

                  Most of the time, CUDA simply works with the default settings and parameters.
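To illustrate the comparison being made — a hedged sketch of the same device-to-device copy in both APIs (buffer handles assumed already created, error checks omitted):

```c
#include <CL/cl.h>

/* CUDA: one call, only a direction flag to choose.
 *
 *   cudaMemcpy(dst, src, nbytes, cudaMemcpyDeviceToDevice);
 */

void copy_opencl(cl_command_queue queue,
                 cl_mem src_buf, cl_mem dst_buf, size_t nbytes) {
    /* OpenCL: the queue, explicit byte offsets, and an (optional)
       event wait list are all part of the signature. */
    clEnqueueCopyBuffer(queue, src_buf, dst_buf,
                        0, 0,      /* src / dst offsets */
                        nbytes,
                        0, NULL,   /* event wait list   */
                        NULL);     /* completion event  */
}
```

The extra parameters do enable finer-grained async control, but the defaults-just-work experience is clearly on CUDA's side.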
                  Last edited by zxy_thf; 23 April 2021, 06:53 PM.

                  Comment


                  • #29
                    Originally posted by blacknova View Post
                    Cause it just works? Most people prefer predictable results after all.
                    Spending 6 months and the two best Cycles developers (one of them its creator) on improving a single-vendor proprietary solution is idiotic when the entire industry is moving to Vulkan as a full-stack replacement for OpenGL, DirectX, CUDA, and OpenCL. Even Nvidia is fully supporting Vulkan, as they realize that when the rest of the tech companies finally back a single solution, they will be screwed if they don't.

                    And Blender will replace OpenGL with Vulkan, starting with EEVEE, and obviously Blender will have Vulkan hardware-accelerated rendering support for Cycles as well. They could be one of the first packages out of the gate with stellar Vulkan support; instead they will be spending precious time and their best rendering developers on improving a proprietary single-vendor framework which already works well for Blender.

                    I used to think that the one big problem for Blender was funding; now that this is no longer the case, I see that there are unfortunately other glaring problems. Spending these 6 months on just improving CPU rendering would have been a more worthwhile investment than what they are doing. Talk about dropping the ball.

                    Comment


                    • #30
                      Originally posted by jrch2k8 View Post
                      And this is where CUDA's design shines: the API is really neat and the installation is extremely simple. The only way I see Intel and AMD having something to compete with nVidia on compute is using NIR compute on Mesa and implementing the APIs on top as state trackers (CUDA, CL, SYCL, etc.), but they both seem on a war path atm, and somehow each needs to reinvent the wheel from scratch with their own API brand.
                      The ROCm stack literally has a function-for-function drop-in for CUDA in HIP, so in that sense it "shines" just as bright as CUDA. All of the issues you raise around implementation quality and support still exist, of course. IMO the more fundamental issue for AMD is that they lack sufficient software coverage for their hardware, especially the newer cards that now constitute a large proportion of their consumer marketshare. Likewise, Intel's biggest problem is that they lack compelling hardware to run their (somewhat more coherent) software stack on. I do wish we could have a mature Vulkan compute or Mesa-native compute solution, but they seem unlikely to materialize for a few years yet.
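The "function-for-function drop-in" claim can be sketched concretely — a minimal HIP host program where each call mirrors its CUDA counterpart name-for-name (the `hipify` tools perform this `cuda*` → `hip*` translation mechanically); this assumes a working ROCm install:

```cpp
#include <hip/hip_runtime.h>

// In CUDA this kernel and its launch would be byte-for-byte identical
// apart from the header and the cuda*/hip* prefixes.
__global__ void scale(float* x, float s) { x[threadIdx.x] *= s; }

int main() {
    float h[64];
    for (int i = 0; i < 64; ++i) h[i] = 1.0f;

    float* d = nullptr;
    hipMalloc(&d, sizeof(h));                                // cudaMalloc
    hipMemcpy(d, h, sizeof(h), hipMemcpyHostToDevice);       // cudaMemcpy
    hipLaunchKernelGGL(scale, dim3(1), dim3(64), 0, 0,       // <<<1, 64>>>
                       d, 2.0f);
    hipMemcpy(h, d, sizeof(h), hipMemcpyDeviceToHost);
    hipFree(d);                                              // cudaFree
    return 0;
}
```

The API surface matching is the easy part; as noted above, the hard part is which GPUs the runtime underneath actually supports.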

                      Comment
