NVIDIA GeForce RTX 4090/4080 Linux Compute CUDA & OpenCL Benchmarks, Blender Performance


  • #11
    Originally posted by Volta View Post
    X Server? This nvidia trash doesn't run on Wayland, yet?
    Does anything serious care about Wayland yet?

    Comment


    • #12
      Originally posted by Volta View Post
      X Server? This nvidia trash doesn't run on Wayland, yet?
      TBH for compute, you want a cheap AMD card running the primary display, and an Nvidia card dedicated to doing all the actual work.
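      A minimal sketch of that split, assuming a PyTorch/CUDA setup (the device index 0 here is just an assumption; check nvidia-smi for the real ordering on your box):
      Code:
# Hypothetical dual-GPU layout: the AMD card drives the display, the Nvidia card does compute.
# Restrict CUDA to the Nvidia card before anything initializes the driver.
import os
os.environ.setdefault("CUDA_VISIBLE_DEVICES", "0")  # index 0 assumed; verify with nvidia-smi

import torch

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
if device.type == "cuda":
    print("Compute device:", torch.cuda.get_device_name(device))
else:
    print("No CUDA device visible, falling back to CPU")

# Heavy work lands on the dedicated card and leaves the display GPU alone.
x = torch.randn(4096, 4096, device=device)
y = x @ x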
      Last edited by brucethemoose; 21 February 2023, 08:15 PM.

      Comment


      • #13
        Originally posted by mbriar View Post

        Like which ones that are actually relevant?
        Some ML workloads (like ESRGAN and Stable Diffusion) have been ported to Vulkan frameworks like Tencent's NCNN and Nod-AI's SHARK... but it does not paint AMD in a favorable light, even compared to Intel Arc running OpenVINO.


        You would have to be kinda crazy to buy AMD purely for Linux compute these days. I'm trying to think of a good niche for the 7900, and I'm coming up blank.
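        For reference, the Vulkan path those ports rely on is vendor-agnostic; here's a minimal sketch assuming the ncnn Python bindings are installed (pip install ncnn). It only enumerates Vulkan-capable GPUs; the vendor gap shows up once you actually run a model:
        Code:
# Sketch using ncnn's Python bindings (assumed installed via `pip install ncnn`).
# The Vulkan backend sees AMD, Nvidia and Intel GPUs alike; the speed difference
# between vendors only appears at inference time, not at enumeration.
import ncnn

gpu_count = ncnn.get_gpu_count()
print("Vulkan-capable GPUs visible to ncnn:", gpu_count)

net = ncnn.Net()
net.opt.use_vulkan_compute = gpu_count > 0  # fall back to CPU if no Vulkan device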
        Last edited by brucethemoose; 21 February 2023, 08:13 PM.

        Comment


        • #14
          The 4090 was impressive on release day, but I swear it has aged pretty well over the past few months. The generational leap in performance is huge.

          Comment


          • #15
            Originally posted by schmidtbag View Post
            The 4090 was impressive on release day, but I swear it has aged pretty well over the past few months. The generational leap in performance is huge.
            Yeah, it wasn't really supported well by older versions of CUDA, drivers, and other libraries. The 3090 was faster in Stable Diffusion for a long time.
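            A quick way to check whether a given PyTorch build actually shipped Ada (sm_89) kernels, which is roughly what "not ready" came down to (a sketch assuming a CUDA-enabled PyTorch install):
            Code:
# Sketch assuming a CUDA-enabled PyTorch install: compare the GPU's compute
# capability against the architectures the wheel was compiled for.
import torch

print("PyTorch", torch.__version__, "| CUDA", torch.version.cuda)
if torch.cuda.is_available():
    major, minor = torch.cuda.get_device_capability(0)
    arch = f"sm_{major}{minor}"             # a 4090/4080 reports sm_89
    built_for = torch.cuda.get_arch_list()  # e.g. ['sm_70', ..., 'sm_89']
    print("GPU arch:", arch, "| wheel built for:", built_for)
    if arch not in built_for:
        print("No native kernels for this GPU; expect PTX JIT overhead or missing ops.")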

            Comment


            • #16
              Originally posted by tildearrow View Post

              ...because Blender has first-class NVIDIA support, whereas the AMD support is still new.
              Not only that, but Nvidia uses TSMC's 4nm node on a monolithic die, whereas AMD's current flagship is an MCM design, which can never be as power-efficient as a monolithic chip.

              Similar story to AMD's multi-CCX/CCD CPUs, really.
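              For what it's worth, the backend split is visible right in Blender's Python API; a sketch for a headless render (run inside Blender, e.g. blender -b scene.blend -P pick_gpu.py, where both file names are placeholders):
              Code:
# Sketch of selecting the Cycles compute backend via Blender's bpy API.
# OPTIX/CUDA are the long-established paths; HIP (for AMD) is comparatively new.
import bpy

prefs = bpy.context.preferences.addons["cycles"].preferences
prefs.compute_device_type = "OPTIX"   # or "CUDA"; use "HIP" on AMD
prefs.get_devices()                   # refresh the detected device list
for dev in prefs.devices:
    dev.use = (dev.type != "CPU")     # enable only the GPU devices

bpy.context.scene.cycles.device = "GPU"
bpy.ops.render.render(write_still=True)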

              BTW, how's Mesa 19.0 doing on your AMDGPU these days?

              Sent from my AMD R9 380 running Mesa 22.3.5...

              Comment


              • #17
                Really sad observing how everyone is happy that AMD failed while everyone's darling Nvidia continues to reduce the open-standards options with their proprietary crap.

                That said, I also wonder why the philosophy difference between the two is never mentioned.

                AMD has been clear that CDNA is the GPU line for compute tasks and RDNA is for rendering/games, so a more proper test would use those GPUs.

                AMD really screwed up given the failure Michael experienced, though; at the very least it should have worked, even if slower, rather than failing outright as it did.

                Comment


                • #18
                  Originally posted by zexelon View Post
                  This is what I have been waiting for and literally why I pay money to Phoronix to support what Michael does!

                  Very well done, thank you!

                  Unfortunately AMD could not even show up, which is too bad. ROCm has seen a LOT of effort in the last 8 months or so... but it's still years behind CUDA. AMD's strategy team seriously dropped the ball on this front and does not appear able to pick it back up.

                  Nvidia clearly invested the majority of their Ada dev budget into compute, and it is plainly evident here. Kudos to team green for utterly dominating! Guess my future servers are going to have to be liquid-cooled after all to keep thermals in line...
                  ROCm is a joke.

                  Comment


                  • #19
                    Originally posted by schmidtbag View Post
                    The 4090 was impressive on release day, but I swear it has aged pretty well over the past few months. The generational leap in performance is huge.
                    It is. It's the pricing relative to lower-tier cards that is just insulting. Nvidia might as well call this gen "3000 redux" since it's the same price/performance as two years ago.

                    Comment


                    • #20
                      Originally posted by NeoMorpheus View Post
                      Really sad observing how everyone is happy that AMD failed while everyone's darling Nvidia continues to reduce the open-standards options with their proprietary crap.

                      That said, I also wonder why the philosophy difference between the two is never mentioned.

                      AMD has been clear that CDNA is the GPU line for compute tasks and RDNA is for rendering/games, so a more proper test would use those GPUs.

                      AMD really screwed up given the failure Michael experienced, though; at the very least it should have worked, even if slower, rather than failing outright as it did.
                      Many reviewers and gamers laughed at Nvidia for the release of Fermi, their first full compute architecture. In the end that bet paid off. People used to say Tahiti was AMD's Fermi, but I disagree; Vega was AMD's Fermi. Now we're looking at a seven-year gap between the two companies.

                      They lost Raja Koduri, who with his team at Intel has done an incredible job with the open-source oneAPI. Intel's first desktop GPUs have feature parity with Turing, which AMD didn't achieve until RDNA3.

                      CDNA isn't offered for workstations; that's why AMD released the Radeon Pro VII to fill that gap. I'm certain we'll see slides from AMD about compute workloads accelerated by their AI accelerators when their Radeon Pro RDNA3 card drops.

                      GeForce cards are sold to these markets: gamers, content creators, AI, and data science. Nvidia’s Turing revolutionized the latter markets with RT cores and Tensor cores.

                      Comment
