NVIDIA vs. AMD GPU Workstation Performance For Blender 4.3

  • scottishduck
    Senior Member
    • Jun 2011
    • 499

    #11
    Embarrassing from AMD. With the rumours around RTX 5000 performance, AMD is going to have a really rough time. Maybe it will even feel the squeeze from Intel Battlemage.

    Comment

    • sophisticles
      Senior Member
      • Dec 2015
      • 2617

      #12
      Michael

      If you want a really interesting article, do a similar test: run the Blender tests on the iGPU of one of the newer AMD APUs, then on the iGPU of one of the newer Intel processors, and compare that against the fastest Threadripper or EPYC system you have on hand running the Blender tests in software, i.e. on the CPU.

      It would be interesting to see which is faster for rendering in Blender: an iGPU or a monster Threadripper.
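
      A comparison like that can be scripted against Blender's headless CLI. The sketch below is a minimal harness, assuming `blender` is on the PATH and a scene file such as `classroom.blend` exists; the `--cycles-device` values (`CPU`, `HIP`, `OPTIX`) follow Blender's standard command-line flags, but check them against your build:

      ```python
      import subprocess
      import time

      def blender_cmd(blend_file, device, blender="blender"):
          """Build a headless Blender render command for a given Cycles device.

          `device` is passed to Cycles after the `--` separator; values such as
          CPU, CUDA, OPTIX, and HIP are accepted by recent Blender releases.
          """
          return [
              blender, "-b", blend_file,
              "-E", "CYCLES",        # force the Cycles render engine
              "-f", "1",             # render frame 1 only
              "--", "--cycles-device", device,
          ]

      def time_render(blend_file, device):
          """Run one render and return the wall-clock time in seconds."""
          start = time.monotonic()
          subprocess.run(blender_cmd(blend_file, device), check=True)
          return time.monotonic() - start

      # Example (assuming classroom.blend exists and blender is installed):
      #   for dev in ("CPU", "HIP"):
      #       print(dev, time_render("classroom.blend", dev))
      ```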

      Comment

      • DiamondAngle
        Junior Member
        • Oct 2017
        • 46

        #13
        I mean, using OptiX but not HIP RT makes these benchmarks kinda invalid; HIP RT makes a huge difference here.

        Just eyeballing it, and assuming HIP RT on RDNA 3 scales the same as on my RDNA 2 card, Nvidia would still win, but the gap would shrink quite a bit, maybe making the RDNA cards cost-efficient.
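
        That kind of eyeballed comparison can be made concrete by pulling the render times out of Blender's console output and dividing. A small sketch, assuming log lines in Blender's usual `Time: MM:SS.ss` stamp format (with an optional hours field):

        ```python
        import re

        # Matches Blender's render-time stamps, e.g. "Time: 04:31.56"
        # or "Time: 1:04:31.56" (optional hours field).
        _TIME_RE = re.compile(r"Time:\s*(?:(\d+):)?(\d+):(\d+\.\d+)")

        def parse_render_seconds(log_line):
            """Extract a render time in seconds from one Blender log line."""
            m = _TIME_RE.search(log_line)
            if not m:
                return None
            hours = int(m.group(1) or 0)
            return hours * 3600 + int(m.group(2)) * 60 + float(m.group(3))

        def speedup(baseline_s, contender_s):
            """How many times faster the contender is than the baseline."""
            return baseline_s / contender_s
        ```

        With hypothetical numbers: if plain HIP needs 90 s on a frame and HIP RT needs 60 s, `speedup(90, 60)` gives 1.5x.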

        Comment

        • NeoMorpheus
          Senior Member
          • Aug 2022
          • 605

          #14
          I don't think it's entirely apples to apples here, since those AMD GPUs are based on RDNA, which we know is not great at compute loads.

          But I don't know which AMD GPU uses CDNA, besides the Radeon VII.

          Comment

          • aviallon
            Senior Member
            • Dec 2022
            • 294

            #15
            Originally posted by pinguinpc View Post

            Sadly, however, I don't know if AMD has any response to OptiX

            It's called HIP RT.

            Comment

            • DiamondAngle
              Junior Member
              • Oct 2017
              • 46

              #16
              Originally posted by NeoMorpheus View Post
              I don't think it's entirely apples to apples here, since those AMD GPUs are based on RDNA, which we know is not great at compute loads.

              But I don't know which AMD GPU uses CDNA, besides the Radeon VII.
              The Radeon VII is NOT a CDNA card; it's GCN, although CDNA is just a continuation of GCN with a few more instructions.

              CDNA cards are bad at HIP RT: CDNA lacks the ray-intersection hardware that RDNA has. My MI100 is much slower than my RDNA 2 GPU in HIP RT for this reason, even though it's a much bigger GPU.

              The benchmarks in this article are invalid precisely because OptiX was used, which uses the fixed-function RT hardware found in Nvidia GPUs, while HIP RT was not enabled, leaving the fixed-function RT hardware in RDNA cards unused.
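
              For reference, enabling HIP RT is just a Cycles preferences toggle. The helper below is a hedged sketch: the attribute names (`compute_device_type`, `use_hiprt`) follow recent Blender 4.x releases but should be verified against your build, and inside Blender the `prefs` argument would be `bpy.context.preferences.addons["cycles"].preferences`:

              ```python
              def configure_cycles(prefs, device_type="HIP", use_hiprt=True):
                  """Point Cycles at a GPU backend and toggle hardware ray tracing.

                  `prefs` is the Cycles add-on preferences object; attribute names
                  are taken from recent Blender 4.x and may differ in other builds.
                  """
                  prefs.compute_device_type = device_type   # e.g. "HIP", "OPTIX", "CUDA"
                  if hasattr(prefs, "use_hiprt"):
                      prefs.use_hiprt = use_hiprt           # HIP RT toggle (AMD cards)
                  return prefs
              ```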

              Comment

              • NeoMorpheus
                Senior Member
                • Aug 2022
                • 605

                #17
                Originally posted by DiamondAngle View Post

                The Radeon VII is NOT a CDNA card; it's GCN, although CDNA is just a continuation of GCN with a few more instructions.

                CDNA cards are bad at HIP RT: CDNA lacks the ray-intersection hardware that RDNA has. My MI100 is much slower than my RDNA 2 GPU in HIP RT for this reason, even though it's a much bigger GPU.

                The benchmarks in this article are invalid precisely because OptiX was used, which uses the fixed-function RT hardware found in Nvidia GPUs, while HIP RT was not enabled, leaving the fixed-function RT hardware in RDNA cards unused.
                Thanks for the clarification.

                Comment

                • HEL88
                  Senior Member
                  • Oct 2020
                  • 412

                  #18
                  Originally posted by varikonniemi View Post
                  Why no performance per watt graphs? Looks like AMD is holding steady in that aspect.
                  Holding steady at 2-3 times worse than Nvidia.
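
                  Claims like that are easy to sanity-check with simple arithmetic on the benchmark scores and the measured wattage. A sketch, using the samples-per-minute metric the Blender benchmark reports; all numbers in the example are hypothetical, not taken from the article:

                  ```python
                  def samples_per_joule(samples_per_minute, watts):
                      """Benchmark score divided by energy drawn: higher is more efficient."""
                      joules_per_minute = watts * 60.0
                      return samples_per_minute / joules_per_minute

                  def efficiency_ratio(score_a, watts_a, score_b, watts_b):
                      """How many times more work per joule card A does than card B."""
                      return samples_per_joule(score_a, watts_a) / samples_per_joule(score_b, watts_b)

                  # Hypothetical example: 6000 samples/min at 300 W vs.
                  # 2000 samples/min at 250 W gives a 2.5x efficiency gap.
                  ```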

                  Comment

                  • sbivol
                    Junior Member
                    • Apr 2016
                    • 26

                    #19
                    Originally posted by DiamondAngle View Post
                    The benchmarks in this article are invalid
                    These benchmarks are not only valid, but a faithful representation of the state of AMD's software stack.
                    The fixed function bits on the GPU are useless if the software that can use them doesn't exist or doesn't work.
                    It's like those "Raspberry Pi killers" from AliExpress which have GPUs 3 times more powerful on paper but no working drivers for them.

                    Comment

                    • cb88
                      Senior Member
                      • Jan 2009
                      • 1352

                      #20
                      Originally posted by Jabberwocky View Post
                      Where zluda?
                      ZLUDA for graphics is a dead project. The current complete rewrite is focused mainly on AI, which is generally pure compute (no image handling, though it could be added; apparently it tends to be buggier and need more workarounds).

                      Comment
