NVIDIA GeForce RTX 3080 Offers Up Incredible Linux GPU Compute Performance


  • #21
    320 Watts is 70 Watts above my 1080 Ti-like ITX power budget... Sorry Jensen, let's see what AMD brings.

    Comment


    • #22
      Originally posted by reavertm View Post
      320 Watts is 70 Watts above my 1080 Ti-like ITX power budget... Sorry Jensen, let's see what AMD brings.
      Cards that can't run software in 90% of use cases and are steamrolled by their Nvidia counterparts in 9%?

      I am so waiting for newer AMD cards that take minutes to compile shaders in Blender before they start rendering or display a preview.

      Comment


      • #23
        Originally posted by reavertm View Post
        320 Watts is 70 Watts above my 1080 Ti-like ITX power budget... Sorry Jensen, let's see what AMD brings.
        At least on Windows it's possible to reduce the voltages a bit to conserve power. https://wccftech.com/undervolting-am...ncy-potential/

        Comment


        • #24
          Originally posted by Imout0 View Post

          Cards that can't run software in 90% of use cases and are steamrolled by their Nvidia counterparts in 9%?

          I am so waiting for newer AMD cards that take minutes to compile shaders in Blender before they start rendering or display a preview.
          Absolutely!
          They rock!

          ... but also 300+ Watts for a consumer card?!
          Prepare the thermonuclear reactor and burn the Amazon forest!

          The power consumption has gotten completely out of hand here, and I am not sure AMD will do much better (I hope they do, though).
          (Note that the steamrolling and not-running-software part is a bit of nonsense, tbh.)

          It would have been great to see performance per watt, and maybe per dollar/euro/pound/...

          Comment


          • #25
            Originally posted by caligula View Post

            At least on Windows it's possible to reduce the voltages a bit to conserve power. https://wccftech.com/undervolting-am...ncy-potential/
            Interesting results indeed. Just mild undervolting with negligible performance loss gives a considerable decrease in power consumption.
            I think 275 Watts would still fit. It seems these cards are factory overvolted, like the AMD ones, just to be on the safe side for stability. Windows-only solutions, however, won't suit me; maybe when alternative BIOSes are released...
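            For what it's worth, on Linux the proprietary driver at least lets you cap the board power with nvidia-smi, even though true undervolting isn't exposed there. A rough sketch of that idea (assuming root and that 275 W sits inside the vBIOS power-limit range; it's a power cap, not an undervolt):

            # Sketch: cap an RTX 3080's board power to 275 W on Linux via nvidia-smi.
            # Assumes root and that 275 W is within the vBIOS min/max power-limit range.
            import subprocess

            subprocess.run(["nvidia-smi", "-pm", "1"], check=True)    # persistence mode so the setting sticks
            subprocess.run(["nvidia-smi", "-pl", "275"], check=True)  # set the board power limit to 275 W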

            Comment


            • #26
              Originally posted by reavertm View Post

              Interesting results indeed. Just mild undervolting with negligible performance loss gives a considerable decrease in power consumption.
              I think 275 Watts would still fit. It seems these cards are factory overvolted, like the AMD ones, just to be on the safe side for stability. Windows-only solutions, however, won't suit me; maybe when alternative BIOSes are released...
              I've also considered undervolting, but I'm not really sure if it can be done with the AMD Linux drivers. Possibly with AMDGPU? There are these rather recent 5300 / 5500 cards.
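              From what I understand, amdgpu exposes an OverDrive table through sysfs that covers those Navi 5300 / 5500 cards, so undervolting should be possible without Windows tools. A rough sketch of the idea, assuming card0, root, and booting with amdgpu.ppfeaturemask=0xffffffff (the clock/voltage numbers below are placeholders; check the OD_RANGE section first):

              # Sketch: lower a voltage-curve point on a Navi card via the amdgpu OverDrive sysfs node.
              # Assumes card0, root, and amdgpu.ppfeaturemask=0xffffffff on the kernel command line;
              # the clock/voltage values are illustrative placeholders, not tested settings.
              from pathlib import Path

              node = Path("/sys/class/drm/card0/device/pp_od_clk_voltage")

              print(node.read_text())              # inspect OD_SCLK / OD_VDDC_CURVE / OD_RANGE first
              node.write_text("vc 2 1750 1050\n")  # curve point 2: 1750 MHz at 1050 mV (example values)
              node.write_text("c\n")               # commit the new table ("r" resets to defaults)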

              Comment


              • #27
                Originally posted by Teggs View Post
                I think AdoredTV has a good synopsis on this over on YouTube. First, as many can see, Ampere is a 'great deal' on price only compared to Turing's horribly inflated 2080 Ti and Titan RTX; Nvidia continues to boil the frog slowly there. Second, Ampere's gains in performance are nowhere near as impressive as they seem, because the cards are using ~40% more power to achieve those numbers. The 3080 is better and cheaper than its predecessors, but the graphs are misleading unless the viewer keeps all the variables in mind. These are things anyone can see, but Nvidia has been successful in getting people not to think about them.

                Why they chose to drive the power so hard is fun speculation territory. My guesses are:

                1. They want to drive RTX performance to a place that convinces a certain number of people it's worthwhile.

                2. AMD is gaining over time in absolute terms on performance, and Nvidia can't stand that. Like with Intel, pushing power is at least a temporary solution.

                3. They can. Ampere will scale to non-ridiculous degrees at those power levels, and, whatever his other failings, Jensen Huang has demonstrated a love for creating and selling hardware that enables people to view better and better graphics over time, and he always wants to produce 'the best' hardware on the planet.
                Much more likely is that NVidia was trying to get TSMC to manufacture the 7nm die for their 3000 series GPUs, but because they were so bullish/aggressive they didn't get any capacity from TSMC (note also that NVidia doesn't have the best history with TSMC; they used them in the past and didn't get the best results). So instead they had to settle for Samsung's inferior 8nm node (which, tbh, is probably closer to a 10nm node).

                So the only way NVidia could get such performance out of their cards was by feeding them more power; it's the same reason the cards are so hard to overclock. NVidia is basically doing the same thing as Intel, the only difference being that NVidia, unlike Intel, doesn't have its own fab.

                Comment


                • #28
                  Originally posted by mdedetrich View Post

                  Much more likely is that NVidia was trying to get TSMC to manufacture the 7nm die for their 3000 series GPUs, but because they were so bullish/aggressive they didn't get any capacity from TSMC (note also that NVidia doesn't have the best history with TSMC; they used them in the past and didn't get the best results). So instead they had to settle for Samsung's inferior 8nm node (which, tbh, is probably closer to a 10nm node).

                  So the only way NVidia could get such performance out of their cards was by feeding them more power; it's the same reason the cards are so hard to overclock. NVidia is basically doing the same thing as Intel, the only difference being that NVidia, unlike Intel, doesn't have its own fab.
                  They didn't get capacity because Apple and AMD were given first priority, due to both having long-standing status with TSMC and the fact that Apple uses AMD's cards in its systems. It made more sense for TSMC not to lose the Apple contract, which is the largest in the industry and will balloon even more with Apple Silicon fab orders.

                  Comment


                  • #29
                    Originally posted by Imout0 View Post

                    Cards that can't run software in 90% of use cases and are steamrolled by their Nvidia counterparts in 9%?

                    I am so waiting for newer AMD cards that take minutes to compile shaders in Blender before they start rendering or display a preview.
                    To be fair, in LuxCore 2.5 both nVidia (using CUDA) and AMD (OpenCL) GPUs compile the kernel once upon first run. Often the CUDA kernels are pre-compiled and shipped with the software, which is why Blender needs to be compiled to use the new GPUs.

                    However, I am waiting for my RTX 3080 to replace my old dual GTX 970s. I mainly use my PC for a small amount of gaming and for LuxCore with Blender, which supports OptiX and CUDA as of 2.5, so these benchmarks are what I want to see! And that's despite LuxCore initially being an OpenCL supporter. I do have an RX 570 as a placeholder GPU since my old build died; I'm not going to miss it.

                    Comment


                    • #30
                      I'm not buying any more Nvidia hardware in the future. Very disappointed since they blocked Nouveau development with signed firmware.

                      Comment
