AMD Posts Patch Enabling Vega APU/GPU Support For Blender's HIP Backend

  • AMD Posts Patch Enabling Vega APU/GPU Support For Blender's HIP Backend

    Phoronix: AMD Posts Patch Enabling Vega APU/GPU Support For Blender's HIP Backend

    With the AMD Radeon "HIP" acceleration for the Cycles back-end in Blender 3.2, an unfortunate early limitation is that it is restricted to AMD RDNA2 (Radeon RX 6000 series) graphics processors, while prior-generation RDNA1 GPUs have issues with some textures, like those used in the benchmarks. This week, though, AMD posted a new patch for Blender enabling HIP support on Windows and Linux for Vega/GFX9 graphics...

    https://www.phoronix.com/scan.php?pa...-Blender-Patch
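For anyone wanting to check whether their Vega/GFX9 part is even visible to the HIP runtime before trying a patched Blender build, a quick sanity check might look like the following. This is a sketch assuming a working ROCm install; `rocminfo` and `hipconfig` ship with the ROCm stack.

```shell
# List the agents the ROCm runtime can see; a Vega GPU shows up as a
# gfx9xx target (e.g. gfx900 for Vega 56/64, gfx906 for Radeon VII).
rocminfo | grep -i gfx

# Confirm which HIP platform and version a Blender build would link against.
hipconfig --platform
hipconfig --version
```

If no gfx9 agent appears here, the Blender patch won't help — the runtime itself isn't seeing the GPU.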

  • #2
    I hope distro maintainers can backport this patch. But I wonder how many distros actually enable HIP support in their Blender builds; it might be only the Arch AUR.



    • #3
      I wonder how fast this will be on an APU compared to the host CPU.



      • #4
        Originally posted by loganj View Post
        I wonder how fast this will be on an APU compared to the host CPU.
        That doesn't really matter. AMD needs to make HIP work on as many GPUs as possible, so that people actually start adopting and testing it. For example, I am working on packaging HIP, and I only have a Vega GPU.
        AMD first and foremost needs to make the ROCm stack work on as many GPUs as possible, like CUDA works on any Nvidia GPU. Performance can come a bit later.
        Right now the problem for CUDA's competitors is not performance; it's that few programs support them, and even when we get software support, the hardware support is awful.
        Last edited by JacekJagosz; 22 June 2022, 07:51 AM.



        • #5
          Originally posted by loganj View Post
          I wonder how fast this will be on an APU compared to the host CPU.
          Me too. More than 10 years ago, I was promised APUs would be the future of compute, because the GPU part would do the heavy lifting. There was painfully little software that supported OpenCL or whatever else AMD had to offer. I have a glimmer of hope that one good app converting over from CUDA gives ROCm the boost it needs for others to follow, together with the next-gen Ryzen coming with a GPU by default.

          BTW: Is there a benchmark comparing Blender 2.9 OpenCL with Blender 3.2 HIP somewhere? I know it is not apples to apples, but IMO it would still be interesting.
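Not that I know of, but anyone with both versions installed could get a rough number themselves from the command line. A sketch — the `scene.blend` file and frame number are placeholders, and `--cycles-device` is the Cycles device selector in Blender 3.x:

```shell
# Render one frame with Cycles on HIP in Blender 3.2 and time it.
time blender -b scene.blend -E CYCLES -f 1 -- --cycles-device HIP

# Render the same frame on the CPU as a baseline.
time blender -b scene.blend -E CYCLES -f 1 -- --cycles-device CPU
```

A Blender 2.9x build would use `--cycles-device OPENCL` for the old back-end, which is what makes the comparison not quite apples to apples: different Blender versions, different kernels.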



          • #6
            Originally posted by Mathias View Post
            Me too. More than 10 years ago, I was promised APUs would be the future of compute, because the GPU part would do the heavy lifting. There was painfully little software that supported OpenCL or whatever else AMD had to offer. I have a glimmer of hope that one good app converting over from CUDA gives ROCm the boost it needs for others to follow, together with the next-gen Ryzen coming with a GPU by default.

            Yes, my hope was in OpenCL... but then CUDA took the whole market and destroyed the hope of an open-source solution.



            • #7
              Originally posted by JacekJagosz View Post
              AMD first and foremost needs to make the ROCm stack work on as many GPUs as possible, like CUDA works on any Nvidia GPU. Performance can come a bit later.
              You're not wrong but I don't totally agree either. If the performance is too underwhelming, nobody is going to care about availability. However... people on older GPUs would certainly appreciate the availability of the features more than people who are buying fresh and new. This is because they're not buying a GPU (let alone for the sake of compute performance) so any additional features or more performance out of their existing product will be useful and enticing to them. Anyone looking to buy a new GPU and seeing underwhelming compute performance (while needing it) might just buy a competing product instead. That will affect AMD's sales numbers.
              If HIP were much closer in performance to CUDA then I would say supporting older GPUs is the smartest objective, but as of right now, I think they need to focus more on optimization. This is coming from someone who has a GPU from 2014.
              Right now the problem for CUDA's competitors is not performance; it's that few programs support them, and even when we get software support, the hardware support is awful.
              Haha again, not wrong, but I don't totally agree here either. For a lot of people, it's more that alternatives to CUDA are simply harder to use. Nvidia invested a lot in making it as easy as possible to implement CUDA wherever you want to use it (except with a GeForce card in a VM, which was a conscious decision on Nvidia's part). Nvidia made great libraries for a variety of languages and easy-to-follow documentation. For hobbyists, CUDA really is the obvious choice, which is a real shame due to its closed nature. It's annoying how programs like Blender and Meshroom perform best with Nvidia, but if I were a volunteer developer, I would make the same choice.



              • #8
                schmidtbag Right now the ROCm stack is supported in very few programs; where it is, the hardware support is super limited, and the performance is worse than Nvidia's.
                What I pointed out is that most parts of the stack (which is quite fragmented) have bad hardware support, so developers, even if they wanted to, can't test it on their own hardware, or the potential group of users is small. Then how can you expect significant development to happen?
                The problem with ROCm is that they dropped support for older architectures really quickly, for a long time didn't support new architectures (RDNA), and even different parts of the stack have different GPU support, like OpenCL vs HIP.
                I just think that when it actually works, the performance is acceptable and much better than not having it at all, but very few people can use it.



                • #9
                  Originally posted by JacekJagosz View Post
                  schmidtbag
                  ... most parts of the stack have bad hardware support, so developers, even if they wanted to, can't test it on their own hardware, or the potential group of users is small. Then how can you expect significant development to happen?
                  You have to look at it like this: ROCm was made for GPU compute customers, so it was initially limited to products meant for that market. Luckily those shared an architecture, or at least a similar one: GFX9 (GCN), i.e. Vega, appeared in consumer cards (Radeon Vega 56/64) as well as in compute cards (MI50/MI60).
                  In the next generation they split the architecture into a compute line (CDNA) and a graphics line (RDNA). Although they share some similarities, they are different chips with different architectures, memory layouts, and so on.

                  ROCm then only supported CDNA (MI100/MI200 "Aldebaran" and so on), which are $10k+ cards without graphics output.

                  If you look closely, ROCm contains hand-written machine-code kernels for a lot of the compute functions. As the architectures differ, they did not write these for the consumer cards / RDNA, since there was no software using them — plus they are a lot slower than CDNA for most compute tasks, so "compute" customers are not interested in them.
                  Hardware support for older generations is dropped pretty quickly, as the large datacenters always choose the cheaper option between paying for power and buying new, more efficient hardware. As soon as the older gen is no longer attractive by this metric, there is next to no customer left for AMD for this functionality.

                  But I agree, they should add first-class support for all consumer architectures on launch day. This might not pay off directly, but platform adoption will surely increase sales of their CDNA offerings. In the end you need someone who writes software for their GPUs / accelerators; if a student can't get a foot in the door with the card he already has, he will probably wander off to the competition and later on, in his job, will choose the platform he already knows.



                  • #10
                    ROCm has a trinity of problems, all of which feed into each other - hardware support, software support, and performance.

                    Performance will improve as people become more familiar with the coding quirks and the development stack matures. CUDA wasn't exactly earth-shattering back in the early days either; it's just that the only competition was from CPUs, which in some very specific (small, highly parallelisable) scenarios couldn't compete, so CUDA looked amazing (and it was).

                    "Heterogeneous computing" was the phrase thrown around a lot by AMD with the early APUs (I even have a book on OpenCL development co-authored by an AMD guy) but it really felt like an absolute joke and seemingly quite quickly got shuffled off into the corner wearing the Dunce Cap.

                    What kills ROCm for me is hardware support. I really, really, really want to get away from my absolute dependence on CUDA, but I cannot in any way, shape or form justify buying a system which officially supports ROCm without some evidence that it will work well for us. That means working, (fairly) portable proof-of-concept code which doesn't require ritual sacrifice and the alignment of the stars to get the hardware and software to do what I can do with CUDA by slapping an Nvidia GPU in a PCI-E slot, typing apt install cuda and rebooting.
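For comparison, the closest ROCm equivalent to that CUDA one-liner on Ubuntu is a bit more involved. A sketch, assuming AMD's `amdgpu-install` helper package has already been fetched for the current release (the usecase name and group requirements follow AMD's documented install flow; the gfx override at the end is an unofficial, widely reported workaround, not a supported configuration):

```shell
# Install the ROCm userspace stack and kernel driver pieces.
sudo amdgpu-install --usecase=rocm

# The runtime checks group membership for GPU device access.
sudo usermod -aG render,video "$USER"

# On some officially unsupported consumer GPUs, spoofing a supported
# gfx target is a common (unsupported!) trick to get HIP apps running,
# e.g. pretending to be plain gfx900 on a Vega derivative:
HSA_OVERRIDE_GFX_VERSION=9.0.0 blender
```

Still only three commands on a good day, but which GPUs count as a "good day" is exactly the problem.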

                    I really hope the recent announcements of all-AMD supercomputers will spur further broadening of the support umbrella for the cards which aren't either a) really old, b) impossible to buy or c) insanely freakin' expensive.

                    Official support for the GPU in the mobile 5000/6000 series, or the 5xx0G chips, or (please?) 6800(XT/M)/6900XT cards would go a long way.

                    But for now I'm resigned to continuing dependence on CUDA.

