320 Watt is 70 Watts above my 1080 TI-like ITX power budget... Sorry Jensen, let's see what AMD brings.
NVIDIA GeForce RTX 3080 Offers Up Incredible Linux GPU Compute Performance
Originally posted by reavertm:
320 Watt is 70 Watts above my 1080 TI-like ITX power budget... Sorry Jensen, let's see what AMD brings.
I am so waiting for newer AMD cards that take minutes to compile shaders in Blender to start rendering or displaying a preview.
Originally posted by Imout0:
Cards that can't run software in 90% of use cases and are steamrolled by their Nvidia counterparts in 9%?
I am so waiting for newer AMD cards that take minutes to compile shaders in Blender to start rendering or displaying a preview.
They rock!
.. but also 300+ W for a consumer card?!
Prepare the thermonuclear reactor and burn the Amazon forest!
Power consumption has gone completely out of hand here, and I am not sure AMD will do much better (I hope they do, though).
(Note that the steamrolling / not-running-software part is a bit of nonsense, tbh.)
It would have been great to see performance per watt, and maybe per dollar/euro/pound.
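Performance per watt and per currency unit is easy to derive from any benchmark table. A minimal sketch of what such a comparison would look like; the scores, TDPs, and prices below are illustrative placeholders, not measured results:

```python
# Hypothetical compute scores (higher is better), board power limits,
# and launch prices. Placeholder numbers, not benchmark results.
cards = {
    "RTX 3080":    {"score": 100.0, "tdp_w": 320, "price_usd": 699},
    "RTX 2080 Ti": {"score": 70.0,  "tdp_w": 250, "price_usd": 999},
}

def efficiency(card):
    """Return (score per watt, score per dollar) for one table entry."""
    return card["score"] / card["tdp_w"], card["score"] / card["price_usd"]

for name, card in cards.items():
    per_watt, per_dollar = efficiency(card)
    print(f"{name}: {per_watt:.3f} score/W, {per_dollar:.4f} score/$")
```

With these placeholder numbers the raw ~43% lead shrinks to a much smaller perf-per-watt lead, which is exactly the point being made above.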
Originally posted by caligula:
At least on Windows it's possible to reduce the voltages a bit to conserve power. https://wccftech.com/undervolting-am...ncy-potential/
I think 275 W would still fit. It seems these cards are factory overvolted, like AMD ones, just to be on the stability side. Windows-only solutions won't suit me, however; maybe when alternative BIOSes are released.
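For what it's worth, on Linux you can at least cap the board power limit with nvidia-smi rather than undervolting per clock step. A sketch; the 275 W figure is just the budget mentioned above, and the driver only accepts values inside the board's enforceable range:

```shell
# Inspect the current, default, and min/max enforceable power limits.
nvidia-smi -q -d POWER

# Enable persistence mode so the setting is kept while no client
# holds the driver open (requires root).
sudo nvidia-smi -pm 1

# Cap the board power limit to 275 W (accepted only if it lies within
# the min/max range reported above; resets on reboot).
sudo nvidia-smi -pl 275
```

Note this clamps power (the card downclocks to stay under the cap) rather than lowering voltage at a given clock, so it trades some peak performance for the reduced draw.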
Originally posted by reavertm:
I think 275 W would still fit. It seems these cards are factory overvolted, like AMD ones, just to be on the stability side. Windows-only solutions won't suit me, however; maybe when alternative BIOSes are released.
Interesting results indeed. Just mild undervolting with negligible performance loss for a considerable decrease in power consumption.
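The headroom undervolting exploits comes from dynamic power scaling roughly with frequency times voltage squared (the first-order CMOS model P ≈ C·f·V²). A back-of-the-envelope sketch, assuming the clock is held constant and ignoring static leakage, so treat the result as an optimistic estimate:

```python
def dynamic_power(base_power_w, v_new, v_old, f_new=1.0, f_old=1.0):
    """Scale dynamic power by (f_new/f_old) * (v_new/v_old)**2.

    First-order CMOS model P ~ C*f*V^2; real boards also draw static
    leakage power, so real savings will be somewhat smaller.
    """
    return base_power_w * (f_new / f_old) * (v_new / v_old) ** 2

# Dropping core voltage from 1.00 V to 0.93 V at the same clock
# takes a 320 W board to roughly 277 W -- close to the 275 W budget.
print(f"{dynamic_power(320, 0.93, 1.00):.0f} W")
```

A ~7% voltage drop yielding a ~13% power drop is why mild undervolting costs so little performance relative to the power saved.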
Originally posted by Teggs:
I think AdoredTV has a good synopsis on this over on Youtube. First, as many can see, Ampere is a 'great deal' on price only compared to the horribly inflated 2080 Ti and Titan RTX of Turing. Nvidia continues to boil the frog slowly there. Second, Ampere's gains in performance are nowhere near as impressive as they seem, because the cards are using ~40% more power to achieve those numbers. The 3080 is better and cheaper than its predecessors, but the graphs are misleading unless the viewer keeps all the variables in mind. These are things anyone can see, but Nvidia has been successful in getting people not to think about.
Why they chose to drive the power so hard is fun speculation territory. My guesses are:
1. They want to drive RTX performance to a place that convinces a certain number of people it's worthwhile.
2. AMD is gaining over time in absolute terms on performance, and Nvidia can't stand that. Like with Intel, pushing power is at least a temporary solution.
3. They can. Ampere will scale to non-ridiculous degrees at those power levels, and whatever his other failings, Jensen Huang has demonstrated a love for creating and selling hardware that enables people to view better and better graphics over time, and always wants to produce 'the best' hardware on the planet.
So the only way that NVidia could get such performance in their cards was by feeding them more power; it's the same reason the cards are so hard to overclock. NVidia is basically doing the same thing as Intel, the only difference being that NVidia, unlike Intel, doesn't have its own fab.
Originally posted by mdedetrich:
Much more likely is that NVidia was trying to get TSMC to manufacture the 7 nm die for their 3000-series GPUs, but because they were so bullish/aggressive they didn't get any capacity from TSMC (note also that NVidia doesn't have the best history with TSMC; they used them in the past and didn't get the best results). So instead they had to settle for the inferior 8 nm node from Samsung (which, tbh, is probably closer to a 10 nm node).
So the only way that NVidia could get such performance in their cards was by feeding them more power; it's the same reason the cards are so hard to overclock. NVidia is basically doing the same thing as Intel, the only difference being that NVidia, unlike Intel, doesn't have its own fab.
Originally posted by Imout0:
Cards that can't run software in 90% of use cases and are steamrolled by their Nvidia counterparts in 9%?
I am so waiting for newer AMD cards that take minutes to compile shaders in Blender to start rendering or displaying a preview.
I, however, am waiting for my RTX 3080 to replace my old dual GTX 970s. I mainly use my PC for a small amount of gaming and for LuxCore with Blender, which supports OptiX and CUDA as of 2.5, so these benchmarks are exactly what I want to see! And that despite LuxCore initially being an OpenCL supporter. I do have an RX 570 as a placeholder GPU since my old build died; not going to miss it.