Blender 3.2 Performance With AMD Radeon HIP vs. NVIDIA GeForce On Linux

    Phoronix: Blender 3.2 Performance With AMD Radeon HIP vs. NVIDIA GeForce On Linux

    This week's release of Blender 3.2 brings AMD GPU rendering support on Linux via AMD's HIP interface in conjunction with their ROCm compute stack. Eager to see the AMD GPU support on Linux finally arrive, I quickly began trying out this new Blender open-source 3D modeling software release while seeing how the AMD RDNA2 HIP performance compares to that of NVIDIA GeForce RTX 30 GPUs that have long enjoyed top-notch support under Blender.


  • #2
    Was absolutely expecting that: slow, minimal hardware support, and a SIGSEGV festival on most hardware.


    • #3
      Shocking! It's gimped at the design level. That's trinkets well spent by Nvidia on the Blender devs.


      • #4
        My 6700 XT does not work with Blender 3.2.


        • #5
          The ~10x cost difference between these GPUs seems relevant when performance testing.

          Edit: the cost ratio is 10.6x and the render-time ratio is 22.2x, resulting in a 4.4x dollar-second advantage for the Nvidia GeForce RTX 3080 (lower dollar-seconds is better).

          Manufacturer  Model                Seconds   USD       Dollar-seconds
          Nvidia        GeForce RTX 3080       21.63    799.99      17303.78
          Nvidia        GeForce RTX 3060       40.61    429.99      17461.89
          Nvidia        GeForce RTX 3060 Ti    31.27    569.99      17823.59
          Nvidia        GeForce RTX 3070 Ti    26.43    699.99      18500.74
          Nvidia        GeForce RTX 3070       28.68    699.99      20075.71
          Nvidia        GeForce RTX 3080 Ti    18.83   1199.99      22595.81
          Nvidia        GeForce RTX 3090       18.35   1699.99      31194.82
          AMD           Radeon RX 6500 XT     287.46    189.99      54614.53
          AMD           Radeon RX 6750 XT     114.32    549.99      62874.86
          AMD           Radeon RX 6600        181.07    349.99      63372.69
          AMD           Radeon RX 6800 XT      79.78    799.99      63823.20
          AMD           Radeon RX 6400        407.73    159.99      65232.72
          AMD           Radeon RX 6600 XT     164.15    399.99      65658.36
          AMD           Radeon RX 6800         94.84    798.99      75776.21
          AMD           Radeon RX 6700 XT     117.08    649.99      76100.83

          min                                  18.35    159.99      17303.78
          max                                 407.73   1699.99      76100.83
          ratio                                22.22     10.63          4.40
          Last edited by elatllat; 10 June 2022, 10:38 AM.
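
          The dollar-seconds metric above is just render time multiplied by launch price. A minimal sketch of the arithmetic, using the best and worst entries from the table (figures assumed from the post, rounded to two decimals):

          ```python
          # Dollar-seconds: Blender render time (s) times GPU launch price (USD).
          # Lower is better -- a cheap card that renders fast wins.
          cards = [
              ("Nvidia GeForce RTX 3080", 21.63, 799.99),
              ("AMD Radeon RX 6700 XT", 117.08, 649.99),
          ]

          for name, seconds, usd in cards:
              print(f"{name}: {seconds * usd:.2f} dollar-seconds")

          # Ratio between worst and best, as in the table's "ratio" row.
          worst = max(s * p for _, s, p in cards)
          best = min(s * p for _, s, p in cards)
          print(f"ratio: {worst / best:.2f}x")
          ```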


          • #6
            Nvidia is superior in any case.


            • #7
              What is worse? RDNA2 RT or AMD's software support?


              • #8
                I'm not surprised in the slightest. Nvidia can spend millions on software support for their products, enhancing their stranglehold on the market. I hope that over time AMD's software support can get better.


                • #9
                  I somewhat miss Nvidia's CUDA performance in those graphs. I still expect AMD to be behind CUDA, but given how problematic the AMD setup is, I wouldn't consider it production-ready.


                  • #10
                    If I may express a wish, I'd like to see a final performance-per-watt graph.