Blender 3.2 Performance With AMD Radeon HIP vs. NVIDIA GeForce On Linux

  • #31
    Originally posted by piotrj3 View Post

    Vulkan compute won't make much difference; theoretically it allows even lower CPU overhead than solutions like OpenCL (and possibly CUDA/HIP too), but you are still using the same compute hardware. Vulkan is good, but RDNA GPUs are bad at compute, and Vega, which actually was good at compute, has pretty much no support. So essentially, yes, Nvidia is the only way for Blender (especially if you use OptiX).
    Software matters too. No matter how good the hardware is, if the software isn't good, the result won't be good. To me this looks like a software issue.

    Comment


    • #32
      This looks like a software issue

      Using the AMD ProRender 3.4.0 plugin on Blender 3.2, an RX 480 8GB (Polaris), ROCm 5.1.3 (compiled from source), and the open-source amdgpu kernel module (Arch Linux 5.18), rendering the bmw27 scene I get:

      Time: 00:20.54 (about 21 seconds)

      To reproduce:
      1. Download the blender file: https://download.blender.org/demo/te...27_2.blend.zip
      2. Unzip and open the file: bmw27_gpu.blend
      3. Select AMD ProRender and press F12 to render

      AMD ProRender plugin download and install are required for the last step. The plugin adds support for Blender 3.1 and 3.2, further options for creating fog and smoke atmosphere effects, options for changing shadow properties, hair support in RPR Interactive, and much more!
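The steps above can be sketched as a headless shell run. This is only a sketch, not from the post: it assumes blender 3.2 is on PATH with the ProRender add-on already installed and enabled, and that the scene zip from the truncated URL above has already been downloaded.

```shell
# Headless reproduction sketch for the bmw27 ProRender benchmark.
# Assumes: blender 3.2 on PATH, ProRender add-on installed/enabled,
# and bmw27_2.blend.zip already fetched from the URL in the post.
set -e
zip=bmw27_2.blend.zip
scene=bmw27_gpu.blend
if [ -f "$zip" ]; then
  unzip -o "$zip"              # step 2: extract bmw27_gpu.blend
  blender -b "$scene" -f 1     # step 3: render frame 1 without the GUI
else
  echo "fetch $zip first (URL in the post above)"
fi
```

`-b` runs Blender without the GUI and `-f 1` renders a single frame, which is handy for timing runs from a script.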



      Comment


      • #33
        Originally posted by ddriver View Post
        Shocking! It's gimped at the design level. That's trinkets well spent on blender devs by nvidia.
        Yes, I'm sure the Blender devs intentionally wrote poor code because Nvidia paid them off. After all, it's not like AMD's compute software ecosystem has been a dumpster fire for years.

        Comment


        • #34
          Originally posted by Lbibass View Post
          I'm not surprised in the slightest. Nvidia can spend millions on software support for their products, enhancing their stranglehold on the market. I hope that over time AMD's software support can get better.
          AMD hasn't been broke for a while now. They can spend millions on software support too. From an outsider looking in, it's hard not to think that they have underinvested here for the last few years. It's just taking too long.

          Comment


          • #35
            Originally posted by rmfx View Post
            Where is the AMD 6900 series if you bench the Nvidia 3090?
            Nvidia provided the 3090 to Michael for review; AMD didn't provide a 6900. Just read the article...

            Comment


            • #36
              Originally posted by Grinness View Post
              This looks like a software issue

              Using the AMD ProRender 3.4.0 plugin on Blender 3.2, an RX 480 8GB (Polaris), ROCm 5.1.3 (compiled from source), and the open-source amdgpu kernel module (Arch Linux 5.18), rendering the bmw27 scene I get:

              Time: 00:20.54 (about 21 seconds)

              To reproduce:
              1. Download the blender file: https://download.blender.org/demo/te...27_2.blend.zip
              2. Unzip and open the file: bmw27_gpu.blend
              3. Select AMD ProRender and press F12 to render

              AMD ProRender plugin download and install are required for the last step:
              https://github.com/GPUOpen-Libraries...ses/tag/v3.4.0

              Some extra numbers, increasing the number of samples:

              Render Device                 Min Samples  Max Samples  Tile Rendering  Time (min:sec)
              AMD ProRender GPU (RX 480)    64           128          No              00:20.54
              AMD ProRender GPU (RX 480)    64           256          No              00:24.16
              AMD ProRender GPU (RX 480)    64           512          No              00:25.83
              AMD ProRender GPU (RX 480)    128          128          No              00:27.44
              AMD ProRender GPU (RX 480)    256          256          No              00:53.56
              AMD ProRender GPU (RX 480)    512          512          No              01:45.22
              AMD ProRender CPU (R9 5900X)  512          512          No              05:29.18
              AMD ProRender GPU (RX 480)    1024         1024         No              03:29.05
              AMD ProRender GPU+CPU         1024         1024         No              02:50.02
              Using Blender's native HIP, a 6400 takes 184 seconds.
              An RX 480 gets 209 seconds with min/max samples at 1024 (a card from 2016, using ROCm 5.1.3).

              Again, I suspect issues at the software level...

              Comment


              • #37
                Originally posted by pWe00Iri3e7Z9lHOX2Qx View Post

                AMD hasn't been broke for a while now. They can spend millions on software support too. From an outsider looking in, it's hard not to think that they have underinvested here for the last few years. It's just taking too long.
                Nvidia is a few years ahead of AMD. People make fun of Nvidia's Fermi, but it revolutionized GPGPU. Reviews say AMD's 7000 series with GCN was AMD's entrance into GPGPU, but I still consider Vega their entrance.

                Comment


                • #38
                  AMD =/= Software

                  Comment


                  • #39
                    Originally posted by ddriver View Post
                    Shocking! It's gimped at the design level. That's trinkets well spent on blender devs by nvidia.
                    ROCm/HIP is a CUDA clone at the source-code level; they did it that way because supporting CUDA at the binary level would most likely be against the law...

                    Well, this source-level CUDA clone is still better than using the """CPU""", so it still has its use case.

                    But what people are really waiting for is the Vulkan backend for Blender, because it will be faster.

                    Vulkan can easily be faster than even CUDA itself.

                    But never hope for ROCm/HIP to outperform CUDA. AMD itself said something like this: in the best-case scenario you lose 2% or more performance compared to native CUDA...
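The "source-level clone" point can be seen in how ROCm's hipify tools port CUDA code: for the runtime API it is largely a mechanical rename, and the `<<<...>>>` kernel-launch syntax carries over unchanged. The toy sed one-liner below mimics that idea on a single line (real hipify-perl of course handles far more than a bare rename):

```shell
# Toy illustration: HIP mirrors the CUDA runtime API name-for-name,
# so a crude rename already yields valid-looking HIP code.
echo 'cudaMalloc(&buf, n); kernel<<<grid, block>>>(buf);' \
  | sed 's/cuda/hip/g'
# prints: hipMalloc(&buf, n); kernel<<<grid, block>>>(buf);
```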

                    Comment


                    • #40
                      Originally posted by cgmb View Post
                      And, frankly, including CUDA in the benchmark would be useful anyway. Last year, when I was using Blender heavily,
                Michael, I'd find this interesting as well. Is there any chance you could run another round with CUDA-only benchmarks?

                      Comment
