Blender 3.3 AMD Radeon HIP vs. NVIDIA CUDA/OptiX Performance
Earlier this month Blender 3.3 was released, and in addition to introducing an Intel oneAPI back-end, it's notable for bringing improvements to the AMD HIP back-end for Radeon GPUs. Significant on the AMD side is extending GPU support back to GFX9/Vega. Thus it's a good time for a fresh round of benchmarking to show how the AMD Radeon HIP performance compares against that of NVIDIA's existing CUDA and OptiX back-ends.
For your viewing pleasure today is a set of benchmarks for Blender 3.3 on Ubuntu Linux looking at the performance of an assortment of AMD Radeon and NVIDIA GeForce graphics cards with their respective accelerated back-ends for this open-source 3D modelling software. Just NVIDIA vs. AMD performance is being looked at here, while Intel oneAPI performance with Arc Graphics will be examined separately. The graphics cards I had available for this round of Blender 3.3 benchmarking included:
- RTX 2060
- RTX 2060 SUPER
- RTX 2080 SUPER
- RTX 3060
- RTX 3060 Ti
- RTX 3070
- RTX 3070 Ti
- RTX 3080
- RTX 3080 Ti
- RTX 3090
- RX Vega 56
- Radeon VII
- RX 5700 XT
- RX 6400
- RX 6500 XT
- RX 6600
- RX 6600 XT
- RX 6700 XT
- RX 6750 XT
- RX 6800
- RX 6800 XT
All of these benchmarks were carried out on an Ubuntu 20.04.5 LTS system powered by an AMD Ryzen 9 5950X. On the AMD side was their latest Radeon Software 22.20 ROCm driver stack, while the GeForce GPUs used the NVIDIA 515.65.01 driver.
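For those wanting to confirm a comparable driver setup before benchmarking, here is a minimal Python sketch for reporting the installed driver versions. It assumes nvidia-smi is available for the NVIDIA side and that ROCm lives in its default /opt/rocm location; the version-file path is an assumption about a standard ROCm package install, not something specified in this article.

```python
#!/usr/bin/env python3
"""Minimal sketch: report GPU driver versions on the benchmark system.

Assumes `nvidia-smi` is installed for NVIDIA and that ROCm is installed
under /opt/rocm (the version file path is an assumption about a typical
ROCm package install).
"""
import pathlib
import subprocess

# NVIDIA driver version via nvidia-smi's CSV query output.
nv = subprocess.run(
    ["nvidia-smi", "--query-gpu=driver_version", "--format=csv,noheader"],
    capture_output=True, text=True,
)
print("NVIDIA driver:", nv.stdout.strip() or "not found")

# ROCm release, read from the version file a standard install places in /opt/rocm.
rocm_version = pathlib.Path("/opt/rocm/.info/version")
print("ROCm:", rocm_version.read_text().strip() if rocm_version.exists() else "not found")
```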
As written about earlier this week, AMD is looking at having HIP ray-tracing support ready for Blender 3.5 next year, while for now it is just their HIP back-end going up against NVIDIA's mature CUDA back-end and the newer OptiX back-end that makes use of the RT cores on modern GeForce RTX graphics cards.
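For readers wanting to run a similar HIP vs. CUDA vs. OptiX comparison on their own hardware, below is a minimal Python sketch that times a single-frame Cycles render while overriding the compute device from the command line. The scene path is a placeholder rather than one of the scenes used for this article, and it assumes a Blender 3.3 binary is on the PATH.

```python
#!/usr/bin/env python3
"""Minimal sketch: time a single-frame Blender Cycles render per GPU back-end.

`scene.blend` is a hypothetical test scene and `blender` is assumed to be a
Blender 3.3 binary on the PATH.
"""
import subprocess
import time

SCENE = "scene.blend"                # placeholder test scene
BACKENDS = ["CUDA", "OPTIX", "HIP"]  # Cycles device types compared in this article

for device in BACKENDS:
    start = time.perf_counter()
    # Arguments after `--` are passed through to Cycles; `--cycles-device`
    # overrides the compute device used for this headless render.
    subprocess.run(
        ["blender", "-b", SCENE, "-f", "1", "--", "--cycles-device", device],
        check=True,
        stdout=subprocess.DEVNULL,
    )
    print(f"{device}: {time.perf_counter() - start:.1f}s")
```

On an AMD card only the HIP run will succeed, and likewise OptiX/CUDA only apply to the GeForce cards, so in practice the back-end list would be trimmed to match the installed GPU.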