NVIDIA GeForce RTX 4090/4080 Linux Compute CUDA & OpenCL Benchmarks, Blender Performance
Originally posted by Volta:
X Server? This nvidia trash doesn't run on Wayland yet?
Originally posted by mbriar:
Like which ones that are actually relevant?

You would have to be kinda crazy to buy AMD purely for Linux compute these days. I'm trying to think of a good niche for the 7900, and I'm coming up blank.
Originally posted by schmidtbag:
The 4090 was impressive on release date, but I swear it has aged pretty well in the past few months. The generational leap in performance is huge.
Originally posted by tildearrow:
...because Blender has first-class NVIDIA support, whereas the AMD support is still new.
Similar story to AMD's multi-CCX/CCD CPUs, really.
BTW, how's Mesa 19.0 doing on your AMDGPU these days?
Sent from my AMD R9 380 running Mesa 22.3.5...
Really sad observing how everyone is happy that AMD failed while everyone's darling NVIDIA continues to reduce the open-standards options with their proprietary crap.
That said, I also wonder why the philosophy difference between the two is never mentioned?
AMD has been clear that CDNA is the GPU to use for compute tasks and RDNA for rendering/games, so a more proper test would be with such GPUs.
AMD really screwed up given the failure experienced by Michael, though, since at the very least it should have worked, even if slower, rather than failing as it did.
Originally posted by zexelon:
This is what I have been waiting for, and literally why I pay money to Phoronix to support what Michael does!
Very well done, thank you!
Unfortunately AMD could not even show up, which is too bad. ROCm has seen a LOT of effort in the last 8 months or so... but it's still years behind CUDA. AMD's strategy seriously dropped the ball on this front, and they don't appear to be able to pick it back up.
Nvidia clearly invested the majority of their Ada dev budget into compute, and it is awesomely evident here. Kudos to team green for utterly dominating! Guess my future servers are going to have to be liquid cooled after all to keep thermals in line...
Originally posted by NeoMorpheus:
Really sad observing how everyone is happy that AMD failed while everyone's darling NVIDIA continues to reduce the open-standards options with their proprietary crap.
That said, I also wonder why the philosophy difference between the two is never mentioned?
AMD has been clear that CDNA is the GPU to use for compute tasks and RDNA for rendering/games, so a more proper test would be with such GPUs.
AMD really screwed up given the failure experienced by Michael, though, since at the very least it should have worked, even if slower, rather than failing as it did.
They lost Raja Koduri, whose team at Intel has done an incredible job with the open-source oneAPI. Intel's first desktop GPUs have feature parity with Turing, which AMD didn't achieve until RDNA3.
CDNA isn't offered for workstations; that's why AMD released the Radeon Pro VII to fill that gap. I'm certain we'll see slides from AMD regarding compute workloads accelerated by their AI accelerators when their Radeon Pro RDNA3 card drops.
GeForce cards are sold to these markets: gamers, content creators, AI, and data science. Nvidia's Turing revolutionized the latter markets with RT cores/Tensor cores.