Blender 3.2 Performance With AMD Radeon HIP vs. NVIDIA GeForce On Linux


  • #21
    Yeah, I saw this coming. I wonder if the upcoming Vulkan work will be any different; in any case, it's not bad for initial support.

    Originally posted by scottishduck View Post
    Well I guess it’s lucky that no one using their computer for work uses AMD GPUs anyway
    Which is a shame, because AMD cards are generally pretty decent at compute; there's just a massive lack of ecosystem around them that HIP is supposed to address, but I feel it will make things worse.

    Comment


    • #22
      I'm an AMD engineer with a personal interest in Blender rendering. All opinions are my own and do not necessarily represent the views of my employer.

      I would have liked to see the CUDA backend in the benchmark results. HIP is meant to match CUDA, but OptiX blows CUDA out of the water. All this data tells me is "AMD doesn't have an equivalent to the OptiX library," which is something I already knew. I'd be much more interested in how AMD and NVIDIA GPUs compare when running more or less the same code.

      And, frankly, including CUDA in the benchmark would be useful anyway. Last year, when I was using Blender heavily, I could only use CUDA rendering and CPU rendering despite having both a Radeon VII and an RTX 2080 available. I found that for volumetric rendering, both OpenCL and OptiX rendering produced noticeably different frames from the CPU-rendered frames; only CUDA-rendered frames were interchangeable with CPU-rendered frames. However, I'm not sure whether that's still true with newer versions of Blender.
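
      For reference, here is a minimal sketch of how one could run that comparison from Blender's Python API, assuming Blender 3.x with Cycles; the helper below is my own illustration, not the benchmark's actual harness:

          import bpy

          def set_cycles_backend(backend):
              # backend is one of 'CUDA', 'OPTIX' or 'HIP'
              prefs = bpy.context.preferences.addons['cycles'].preferences
              prefs.compute_device_type = backend
              prefs.get_devices()  # refresh the device list for this backend
              for dev in prefs.devices:
                  dev.use = (dev.type == backend)  # enable only matching GPUs
              bpy.context.scene.cycles.device = 'GPU'

          set_cycles_backend('CUDA')  # or 'OPTIX' / 'HIP'
          bpy.ops.render.render(write_still=True)

      Run it headless with something like "blender -b scene.blend --python compare.py" once per backend to get CUDA, OptiX and HIP numbers on the same scene, plus frames that can be diffed against a CPU render.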

      Comment


      • #23
        Originally posted by skeevy420 View Post
        Good Lord, out of Michael's cards the crappiest NVIDIA is better than the best AMD.
        I knew before going in that NVIDIA would be better, but I wasn't expecting it to be by that much.
        I love AMD, but based on those results I have to say that Linux professionals should buy NVIDIA for the time being. Linux enthusiasts only playing games and running desktops can stick to AMD.
        Maybe because OptiX uses both Tensor and RT cores, and AMD sadly doesn't have any hardware similar to NVIDIA's Tensor cores in its desktop GPUs?

        Comment


        • #24
          Originally posted by skeevy420 View Post
          Good Lord, out of Michael's cards the crappiest NVIDIA is better than the best AMD. I knew before going in that NVIDIA would be better, but I wasn't expecting it to be by that much. I love AMD, but based on those results I have to say that Linux professionals should buy NVIDIA for the time being. Linux enthusiasts only playing games and running desktops can stick to AMD.
          I am hoping the code gets optimized as time passes; then AMD performance will go head-to-head with NVIDIA's.

          The only problem with NVIDIA is that they don't support KMS capture! Instead they make you use their own API, NvFBC (and even worse, this API is only officially available for Quadro/professional cards!). I am hoping Intel finally releases a good card that doesn't cater to "running Android games in the cloud", because Intel is the only other company that provides graphics chips with 4:4:4 encoding (AMD does not).
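
          For what it's worth, on drivers that do expose KMS capture, grabbing the screen can be as simple as pointing ffmpeg at the DRM device. A minimal sketch, assuming an ffmpeg build with kmsgrab and VAAPI support (the device path, filter chain and encoder are just examples):

              import subprocess

              # kmsgrab typically needs CAP_SYS_ADMIN (or root).
              subprocess.run([
                  "ffmpeg",
                  "-device", "/dev/dri/card0",   # DRM device to capture
                  "-f", "kmsgrab", "-i", "-",    # grab KMS framebuffers
                  "-vf", "hwmap=derive_device=vaapi,scale_vaapi=format=nv12",
                  "-c:v", "h264_vaapi",          # encode on the GPU
                  "capture.mp4",
              ], check=True)

          On NVIDIA's proprietary driver that route isn't exposed, which is exactly why you end up stuck with NvFBC.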

          Comment


          • #25
            Originally posted by scottishduck View Post
            Well I guess it’s lucky that no one using their computer for work uses AMD GPUs anyway
            You really need to define "work" better. A lot of people use AMD GPUs for work, just not for rendering.

            Comment


            • #26
              Originally posted by Quackdoc View Post
              Yeah, I saw this coming. I wonder if the upcoming Vulkan work will be any different; in any case, it's not bad for initial support.

              Which is a shame, because AMD cards are generally pretty decent at compute; there's just a massive lack of ecosystem around them that HIP is supposed to address, but I feel it will make things worse.
              I don't think there can ever be any significant "ecosystem" if they only ever support the latest generation. Vega cards were praised as good at compute when they were first released. The Radeon VII was first sold in 2019, only 3 years ago; now it may never get official ROCm support, or even a promise of support. There is no assurance that 3 years from now the new and shiny RDNA2 won't fall into the same trashy situation as Vega.

              Comment


              • #27
                Originally posted by Quackdoc View Post
                Yeah, I saw this coming. I wonder if the upcoming Vulkan work will be any different; in any case, it's not bad for initial support.

                Which is a shame, because AMD cards are generally pretty decent at compute; there's just a massive lack of ecosystem around them that HIP is supposed to address, but I feel it will make things worse.
                Vulkan compute won't make much difference. Theoretically it allows even lower CPU overhead than solutions like OpenCL (and possibly CUDA/HIP too), but you are still using the same compute hardware. Vulkan is good, but RDNA GPUs are bad at compute, and Vega, which actually was good at compute, has pretty much no support. So essentially, yes, NVIDIA is the only way for Blender (especially if you use OptiX).

                Comment


                • #28
                  Comparing lots of lower-end AMD GPUs against mainly high-end NVIDIA GPUs is not very fair. Where is the AMD 6900 series if you bench the NVIDIA 3090?

                  But let’s be honest, AMD still has a long way to go.

                  Comment


                  • #29
                    Originally posted by tildearrow View Post

                    I am hoping the code gets optimized as time passes; then AMD performance will go head-to-head with NVIDIA's.

                    The only problem with NVIDIA is that they don't support KMS capture! Instead they make you use their own API, NvFBC (and even worse, this API is only officially available for Quadro/professional cards!). I am hoping Intel finally releases a good card that doesn't cater to "running Android games in the cloud", because Intel is the only other company that provides graphics chips with 4:4:4 encoding (AMD does not).
                    This is OptiX rendering. Even CUDA rendering on NVIDIA still does significantly better than RDNA2, so until AMD wires up ray tracing support into HIP or something similar, they won't be able to come close to NVIDIA.

                    Comment


                    • #30
                      Originally posted by billyswong View Post

                      I don't think there can ever be any significant "ecosystem" if they only ever support the latest generation. Vega cards were praised as good at compute when they were first released. The Radeon VII was first sold in 2019, only 3 years ago; now it may never get official ROCm support, or even a promise of support. There is no assurance that 3 years from now the new and shiny RDNA2 won't fall into the same trashy situation as Vega.
                      Yup, which is why I'm not a fan of buying AMD. NVIDIA might be annoying to work with, but at least they support their cards. Buying AMD is like a kick in the nuts after being told it's "so great" on Linux. The only "good" platform on Linux lately is Intel, so unless you have an Intel iGPU plus AMD and NVIDIA dGPUs, you're always going to be lacking something...

                      Comment
