Blender's "Cycles X" Showing Nice Performance But Dropping OpenCL Support

  • #31
    Originally posted by StillStuckOnSI View Post

    The ROCm stack literally has a function-for-function drop-in for CUDA in HIP, so in that sense it "shines" just as bright as CUDA. All of the issues you raise around implementation quality and support still exist, of course.
    This is the funniest part of the story. After more than a decade the others still cannot figure out any design that's cleaner than CUDA.
    (oneAPI is too C++-ish and may not please everyone)
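    The "function-for-function drop-in" point can be illustrated with a toy rename pass in the spirit of AMD's hipify tools, which translate CUDA source to HIP largely by mapping API names one-for-one (a simplified sketch; the real tools also handle headers, kernel-launch syntax, and library calls):

    ```python
    import re

    # A few of the one-for-one CUDA-runtime -> HIP name mappings.
    CUDA_TO_HIP = {
        "cudaMalloc": "hipMalloc",
        "cudaMemcpy": "hipMemcpy",
        "cudaMemcpyHostToDevice": "hipMemcpyHostToDevice",
        "cudaFree": "hipFree",
        "cudaDeviceSynchronize": "hipDeviceSynchronize",
    }

    def hipify(source: str) -> str:
        """Replace CUDA runtime identifiers with their HIP counterparts."""
        # Longest names first so cudaMemcpyHostToDevice isn't partially
        # matched as cudaMemcpy.
        names = sorted(CUDA_TO_HIP, key=len, reverse=True)
        pattern = re.compile(r"\b(" + "|".join(names) + r")\b")
        return pattern.sub(lambda m: CUDA_TO_HIP[m.group(1)], source)

    print(hipify("cudaMalloc(&buf, n); cudaMemcpy(buf, h, n, cudaMemcpyHostToDevice);"))
    # -> hipMalloc(&buf, n); hipMemcpy(buf, h, n, hipMemcpyHostToDevice);
    ```

    That near-mechanical mapping is exactly why HIP "shines as bright" as CUDA at the API-design level, whatever the implementation-quality issues.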



    • #32
      Originally posted by tildearrow View Post

      OpenCL was killed by Apple because they decided to deprecate both CL and GL in macOS Mojave, plus it was heavy and hard to set up...
      My understanding was that OpenGL was killed by the heavy influence of large CAD vendors on the standards board, who didn't want OpenGL improved in the way Apple wanted. Effectively, the changes needed to improve performance and add features to make OpenGL competitive couldn't be made.

      As for this move by the Blender people, I can't say I agree with it, but I'm not a user so it isn't a big deal to me. If anybody is really put off by the decision, they can always port Blender. As for AMD, it will likely be a couple of years until they have a good compute solution. They are focused on the supercomputing contracts, and as such I only expect a trickle of improvements in the meantime. They have to get that code base right, and once it is good I suspect we will see it released to the wider market.

      The funny thing here is that AMD, if they get CDNA right and get their other compute ducks in a row, should come out on top of NVidia. Honestly, it doesn't look like people are looking all that far into the future in making this decision.



      • #33
        Originally posted by zxy_thf View Post
        This is the funniest part of the story. After more than a decade the others still cannot figure out any design that's cleaner than CUDA.
        (oneAPI is too C++-ish and may not please everyone)
        oneAPI Level Zero is arguably cleaner than the CUDA C API and has the advantage of working with standard toolchains (as does oneAPI SYCL). The main reason AMD made HIP is to ease migration for CUDA users. I highly doubt they'd have designed something like it if they weren't trying for that.



        • #34
          Originally posted by wizard69 View Post
          The funny thing here is that AMD, if they get CDNA right and get their other compute ducks in a row, should come out on top of NVidia. Honestly, it doesn't look like people are looking all that far into the future in making this decision.
          If we are looking that far into the future, either NVidia is building 128-core equivalents to Apple's M1 or we are all running quantum computers in the cloud from our cell phones.



          • #35
            Originally posted by tildearrow View Post

            ...at the cost of pushing the non-greedy standard out of the market? No thanks....

            Unless they start on a Vulkan renderer or a CUDA on AMD wrapper I am not interested.
            This forum is just full of entitled people who don't contribute but just complain. I would hate developing anything for free.



            • #36
              Originally posted by StillStuckOnSI View Post

              oneAPI Level Zero is arguably cleaner than the CUDA C API and has the advantage of working with standard toolchains (as does oneAPI SYCL). The main reason AMD made HIP is to ease migration for CUDA users. I highly doubt they'd have designed something like it if they weren't trying for that.
              AMD's intention may be good ("ease migration for CUDA users"), but the reality is that (1) CUDA-based software still needs migration, and (2) the HIP stack isn't available for every AMD GPU, nor is it installed by default or integrated into Linux distros' official repositories. The success rate is hit-and-miss. Why would one spend their own time porting their CUDA-based software when the destination is unstable and limited? HIP is open-source in name, but in reality it offers a narrower hardware choice than the proprietary CUDA. The rationale for using an open standard, aside from ideology, is to give users a broader choice of past, present, and future hardware and software. HIP and ROCm fail in this regard.



              • #37
                I'd say more "vital" than "good" (though it may well be both). The main target was always HPC users who had a butt-ton of CUDA code to translate to HIP.

                Now that they apparently are stable enough cash-wise to revisit consumer hardware support, I think the real test will be how nice the software stack is in 2-3 years.



                • #38
                  Originally posted by wizard69 View Post
                  My understanding was that OpenGL was killed by the heavy influence of large CAD vendors on the standards board, who didn't want OpenGL improved in the way Apple wanted. Effectively, the changes needed to improve performance and add features to make OpenGL competitive couldn't be made.
                  I'm curious what this is referring to, since OpenGL got bindless textures and direct state access well before Vulkan hit the scene. But Apple was probably already committed to Metal by then.



                  • #39
                    Originally posted by coder View Post
                    Don't misquote me. What I said is that some hardware and platform vendors rejected it.

                    There was a ton of interest, but users are only able to put up with so much. If you read what LightPathVertex quoted, that's exactly what happened here. They couldn't continue, more for lack of vendor commitment than anything else.


                    This just shows that you don't even understand what ROCm is! If AMD implemented oneAPI, it would be atop ROCm.


                    If you're trying to make yourself look like an idiot, you're not doing a half-bad job.

                    Yeah... just like I was called an idiot for going on my favorite AMD fansite of the day, AMDZone, and proclaiming that as much as I loved AMD and 3DNow!, the sad truth was that the developers of that day were not going to accept writing two sets of SIMD code to match the advantages of two different x86 CPU vendors, and that AMD would eventually abandon it as the market had.

                    I stand by my statement. ROCm is the 3DNow! of compute.



                    • #40
                      Folks here calling the Blender devs "CUDA fanboys" is hilarious.

                      The only API fanboys are right here in this thread, flipping out.

                      Just to offer a different perspective: I'm a Blender user, I have an AMD GPU, I fully understood their blog post, and I'm not concerned in the slightest.

                      The Blender devs are reworking things from the ground up for better speed and to make it easier to implement new features over the next 10 years, freeing themselves of 10 years' worth of technical debt in the process, because the current architecture is slowing them down and making it difficult or impossible to implement new features or performance improvements. They need a clean slate.

                      The changes they are talking about are unlikely to land in Blender until at least version 3.1, possibly/probably 3.2 or later. So realistically, at the very end of this year or early next year.

                      Since a lot of you seem to be blind, re-read this part of the blog post a few times:

                      We can only make the kinds of bigger changes we are working on now by starting from a clean slate.
                      We are working with AMD and Intel to get the new kernels working on their GPUs, possibly using different APIs. This will not necessarily be ready for the first release, the implementation needs to reach a higher quality bar than what is there now. Long term, supporting all major GPU hardware vendors remains an important goal.
                      They're not announcing that Blender is now an NVIDIA exclusive.

                      They're reworking Cycles, and as part of that process they've started with a clean slate, and that clean slate does not have AMD support YET. Most likely because they've only been working on Cycles X for 2 months. Both the Blender Foundation and AMD are well aware of the importance of Cycles GPU acceleration on AMD GPUs, and AMD will work hard with the Blender devs with the goal of having a solution before these major changes are merged, or as soon as possible.

                      So chill out!

