NVIDIA vs. AMD GPU Workstation Performance For Blender 4.3

  • tenchrio
    Senior Member
    • Sep 2022
    • 173

    #21
    Originally posted by sbivol View Post

    These benchmarks are not only valid, but a faithful representation of the state of AMD's software stack.
    The fixed function bits on the GPU are useless if the software that can use them doesn't exist or doesn't work.
    It's like those "Raspberry Pi killers" from AliExpress which have GPUs 3 times more powerful on paper but no working drivers for them.
    It really isn't, and the article is unclear about what it means by HIP-RT not working. I've been testing with it enabled throughout the Blender 4.3 beta on an RX 7900 XTX with ROCm 6.1, and I easily ran it again after upgrading to ROCm 6.2 for the full 4.3 release to compare the results against an RTX 3090 on the latest CUDA/OptiX.
    And what error did Michael even get? I can find no issue on the tracker about HIP-RT not working on Linux; there are issues related to rendering volumetric objects, but none of the benchmarks in the article use volumetrics. There is no way around calling it what it is: this is a rushed article, and one that obviously favors Nvidia.
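
    For anyone who wants to reproduce that kind of A/B run headlessly, the device selection can be scripted. Below is a minimal sketch against the 4.3 bpy API; the use_hiprt attribute name is an assumption on my part, so double-check it in the API docs, and the file names are placeholders:

    Code:
    import bpy

    # Minimal sketch: pick the Cycles compute backend for a headless render.
    # Run with e.g.:  blender -b scene.blend -P select_device.py -f 1
    prefs = bpy.context.preferences.addons["cycles"].preferences
    prefs.compute_device_type = "HIP"            # "OPTIX" for the RTX 3090 run
    prefs.get_devices()                          # refresh the enumerated devices
    for dev in prefs.devices:
        dev.use = dev.type in {"HIP", "OPTIX"}   # enable only the dGPU entries

    # HIP-RT toggle; the attribute name is my assumption, hence the guard.
    if hasattr(prefs, "use_hiprt"):
        prefs.use_hiprt = True

    bpy.context.scene.render.engine = "CYCLES"
    bpy.context.scene.cycles.device = "GPU"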

    There is also the matter of Fishy Cat being included with no mention of whether GPU compositing was used. I detailed in a comment on the Blender 4.2 release how GPU compositing cut the time to about two thirds of what it originally was, using Fishy Cat as the example since it has quite an elaborate compositing setup in which every node is compatible with GPU compositing. What I don't know is whether this differs much between AMD and Nvidia. GPU compositing is done through Vulkan, so OptiX/HIP-RT won't matter, but the card's Vulkan capabilities will, and we've seen cases where AMD manages to win because they put more work in there (I still doubt it will here, but it might have some effect on the gap).
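
    If someone wants to check that difference on their own card, the compositor device can be flipped from a script as well. A rough sketch follows; I believe the property is compositor_device as of 4.2, but treat that name as an assumption:

    Code:
    import time
    import bpy

    # Rough sketch: render the same frame with CPU and GPU compositing and compare.
    # "compositor_device" is my assumption for the 4.2+ "Compositor Device" option.
    scene = bpy.context.scene
    scene.render.use_compositing = True

    for device in ("CPU", "GPU"):
        scene.render.compositor_device = device
        start = time.time()
        bpy.ops.render.render(write_still=False)
        print(f"{device} compositing: {time.time() - start:.1f} s")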

    Speaking of Vulkan, where are the Eevee(-RT) benchmarks? How many more movies made with Eevee, like Gints Zilbalodis's recent Flow, have to come out before tech journalists include Eevee in Blender benchmarking? Hollywood is already abandoning offline renderers like Cycles for real-time engines like Lumen on a multitude of shows, with Eevee Next being the Blender Foundation's real-time attempt to keep rendering in Blender.

    We have artists like Worthikids and Dedouze sticking with Eevee, blending traditional animation with 3D environments to create very distinct styles and gaining quite a bit of popularity. Yet despite that, every article on Blender performance sticks to Cycles and only Cycles and then, to really piss me off, calls it "Blender performance", even though a lot of the GPU-accelerated parts of Blender do not rely on HIP or CUDA.

    In the upper echelons of the industry they don't use Cycles at all; they generally use some kind of Universal Scene Description (USD) workflow, which tends to be executed with render engines that are almost entirely CPU-based (e.g. Pixar's RenderMan, DreamWorks' MoonRay). What matters in a program like Blender there is viewport performance (so Workbench) and basic output with something like Hydra Storm, the fourth Blender-native render engine; nothing in that process relies on CUDA or HIP. Even on the independent-creator side this can ring true: neither Worthikids nor Dedouze uses Cycles in their workflow. So why is Cycles constantly pushed forward as if it were the be-all and end-all of Blender performance? Honestly, the answer seems to be that it heavily favors Nvidia, and it skews the perspective so it looks like only Nvidia is viable for Blender workflows, while in reality those workflows vary and in some of them CUDA and HIP hold zero relevance.
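
    None of this is hard to benchmark either; timing the engines that never touch CUDA or HIP is a few lines of bpy. A sketch, with engine identifiers that are my best guess for 4.3 (verify them before trusting any numbers):

    Code:
    import time
    import bpy

    # Sketch: time one frame in each GPU-accelerated engine that bypasses CUDA/HIP.
    # The identifiers below are assumptions for Blender 4.3; check bpy's enum values.
    scene = bpy.context.scene
    for engine in ("BLENDER_WORKBENCH", "BLENDER_EEVEE_NEXT", "HYDRA_STORM"):
        scene.render.engine = engine
        start = time.time()
        bpy.ops.render.render(write_still=False)
        print(f"{engine}: {time.time() - start:.1f} s")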


    • Svyatko
      Senior Member
      • Dec 2020
      • 211

      #22
      Originally posted by NeoMorpheus View Post
      I don't think it's entirely apples to apples here, since those AMD GPUs are based on RDNA, which we know isn't great at compute loads.

      But I don't know which AMD GPU uses CDNA, besides the Radeon VII.
      GCN is dropped, CDNA is not supported:

      This commit removes support for Vega GPUs from the AMD HIP backend of Cycles. This is being done as:
      - AMD no longer provides official support for Vega GPUs in their ROCm software.
      - Vega GPUs have rendering artifacts on all supported platforms, and as a result of the reduction of support from A...


      • Svyatko
        Senior Member
        • Dec 2020
        • 211

        #23
        Originally posted by tenchrio View Post

        It really isn't, and the article is unclear about what it means by HIP-RT not working. I've been testing with it enabled throughout the Blender 4.3 beta on an RX 7900 XTX with ROCm 6.1, and I easily ran it again after upgrading to ROCm 6.2 for the full 4.3 release to compare the results against an RTX 3090 on the latest CUDA/OptiX.
        And what error did Michael even get? I can find no issue on the tracker about HIP-RT not working on Linux; there are issues related to rendering volumetric objects, but none of the benchmarks in the article use volumetrics. There is no way around calling it what it is: this is a rushed article, and one that obviously favors Nvidia.

        There is also the matter of Fishy Cat being included with no mention of whether GPU compositing was used. I detailed in a comment on the Blender 4.2 release how GPU compositing cut the time to about two thirds of what it originally was, using Fishy Cat as the example since it has quite an elaborate compositing setup in which every node is compatible with GPU compositing. What I don't know is whether this differs much between AMD and Nvidia. GPU compositing is done through Vulkan, so OptiX/HIP-RT won't matter, but the card's Vulkan capabilities will, and we've seen cases where AMD manages to win because they put more work in there (I still doubt it will here, but it might have some effect on the gap).
        AMD HIP is not compatible with AMD iGPUs. Michael, please test AMD HIP (HIP-RT) with the iGPU disabled on AMD Ryzen AM5 CPUs (which are all APUs, except the F-suffix parts - 7500F/8400F/8700F).

        Edit: On Linux, AMD ROCm glitches when AMD's iGPU is available. On Windows, AMD HIP might work with AMD's iGPU.
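
        A quick way to try that without touching the BIOS is to hide the iGPU from the HIP/ROCm runtime when launching Blender. A minimal sketch; the device index and the scene path are placeholders, not tested values:

        Code:
        import os
        import subprocess

        # Minimal sketch: expose only the discrete GPU to the ROCm/HIP runtime
        # before launching Blender. Device index "0" is an assumption; check
        # rocminfo or the Cycles device list for the right index on your system.
        env = dict(os.environ, HIP_VISIBLE_DEVICES="0", ROCR_VISIBLE_DEVICES="0")
        subprocess.run(["blender", "-b", "scene.blend", "-f", "1"], env=env, check=True)
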
        Last edited by Svyatko; 09 December 2024, 02:05 AM.


        • tenchrio
          Senior Member
          • Sep 2022
          • 173

          #24
          Originally posted by Svyatko View Post

          AMD HIP is not compatible with AMD iGPUs. Michael, please test AMD HIP (HIP-RT) with the iGPU disabled on AMD Ryzen AM5 CPUs (which are all APUs, except the F-suffix parts - 7500F/8400F/8700F).
          This could actually be related; my setup has a 5900X (so no iGPU).
          I can't find an issue in the tracker specifically for HIP-RT and iGPUs (it actually looks like iGPUs are supported to some extent), but it wouldn't be the first time an iGPU caused problems during dGPU rendering (OIDN in 4.1, for example). It can't hurt to disable it and test.
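
          Before going into the BIOS it's also easy to check whether Cycles even enumerates the iGPU. A small sketch using standard bpy calls (run it from Blender's Python console or with -P):

          Code:
          import bpy

          # Small sketch: list every device Cycles sees for the HIP backend, to check
          # whether the iGPU shows up next to the dGPU before and after disabling it.
          prefs = bpy.context.preferences.addons["cycles"].preferences
          prefs.compute_device_type = "HIP"   # assumes the HIP backend is available
          prefs.get_devices()
          for dev in prefs.devices:
              print(dev.type, dev.name, "enabled" if dev.use else "disabled")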


          • Jabberwocky
            Senior Member
            • Aug 2011
            • 1211

            #25
            Originally posted by cb88 View Post

            ZLUDA for graphics is a dead project. The current complete rewrite is focused mainly on AI, which is generally pure compute (without the image stuff; it could be added, but it apparently tends to be buggier and need more workarounds).
            Isn't Blender's use case pure compute as well? What kind of image operations would be required?

            I've been using lshqqytiger's fork and haven't experienced any bugs with compute. I've seen many Blender benchmarks with really good results, so I'm always curious about adding that comparison into the mix when benchmarking. It doesn't have to be a bug-free, certified professional driver; the project exists even if it's not actively developed.

            It's a nice placeholder until AMD gets around to optimizing its software stack.


            • Svyatko
              Senior Member
              • Dec 2020
              • 211

              #26
              Originally posted by tenchrio View Post

              This could actually be related; my setup has a 5900X (so no iGPU).
              I can't find an issue in the tracker specifically for HIP-RT and iGPUs (it actually looks like iGPUs are supported to some extent), but it wouldn't be the first time an iGPU caused problems during dGPU rendering (OIDN in 4.1, for example). It can't hurt to disable it and test.
              Users can run Blender on an iGPU on Windows.
              AMD ROCm - which is Linux-only - has no iGPU support, and it glitches when using AMD's dGPU if AMD's iGPU is active.
              In practice iGPUs are too slow for Blender and only useful for basic tasks - the DE, browsers, playing video.
              A good option is to use the iGPU for those simple jobs and dedicate the dGPU to calculations.
              But on Linux that scenario is not available with an AMD iGPU + an AMD dGPU, while it works fine with an AMD iGPU + a non-AMD dGPU, or an Intel iGPU + any dGPU.


              • bridgman
                AMD Linux
                • Oct 2007
                • 13188

                #27
                Originally posted by Jabberwocky View Post
                Isn't Blender's use-case pure compute as well, what kind of image things would be required?
                It seems to be compute plus a tiny bit of graphics, enough to fail when running on parts that don't have texture filtering. I thought we had software logic in place to handle the common use cases (which seem to involve performing image operations that call for a filter but don't actually use filtering), not sure what is happening there.


                • Panix
                  Senior Member
                  • Sep 2007
                  • 1563

                  #28
                  Originally posted by schmidtbag View Post
                  That, and performance-per-watt.
                  It appears Nvidia is wiping the floor with AMD, but that may not hold once you factor in cost and efficiency.
                  Nvidia is probably leading there too. If you factor in performance, you can see in the benchmarks that even the most power-hungry Nvidia GPUs aren't that much more than the AMD W7900 card, which is the only AMD GPU that is even somewhat competitive in performance.


                  • Panix
                    Senior Member
                    • Sep 2007
                    • 1563

                    #29
                    Originally posted by Svyatko View Post

                    AMD HIP is not compatible with AMD iGPUs. Michael, please test AMD HIP (HIP-RT) with the iGPU disabled on AMD Ryzen AM5 CPUs (which are all APUs, except the F-suffix parts - 7500F/8400F/8700F).

                    Edit: On Linux, AMD ROCm glitches when AMD's iGPU is available. On Windows, AMD HIP might work with AMD's iGPU.
                    Right, it should be tested with it disabled, and also Windows vs. Linux, to see whether there's a major performance difference or a problem specific to one OS.

                    I don't think HIP-RT will ever work or be stable. There's been no progress; you can check the Blender user/dev forums, where they stopped discussing it, and the last comments were about it not working or not being stable, including on Linux.

