NVIDIA GeForce RTX 3080 Offers Up Incredible Linux GPU Compute Performance

  • #11
    Originally posted by Shnatsel View Post
    It would be interesting to see a comparison of x86_64 vs POWER9 performance for GPU compute, Nvidia especially. POWER9 has cache coherency features that might push performance even further; or it might have some driver deficiencies that destroy performance.
    I would be very interested in seeing a POWER9 vs x86_64 comparison on GPU compute workloads. To be honest, I haven't even been able to find concise proof that it's possible.



    • #12
      I really hope that some of the big Navi GPUs will come with HBM2(E) to kick Nvidia's butt in performance per watt.
      Or at least restart Radeon VII production.



      • #13
        They made the 3080 a 320-watt card instead of ~300 watts.
        I guess they have their reasons.



        • #14
          FAH recently enabled CUDA for Nvidia GPUs, providing a huge performance boost, at least on Windows; not sure about Linux. They currently own AMD performance-wise.
          Those who would give up Essential Liberty to purchase a little Temporary Safety, deserve neither Liberty nor Safety.
          Ben Franklin, 1755



          • #15
            Originally posted by DarkFoss View Post
            FAH recently enabled CUDA for Nvidia GPUs, providing a huge performance boost, at least on Windows; not sure about Linux. They currently own AMD performance-wise.
            No FAHbench updates for the CUDA support, though.
            Michael Larabel
            http://www.michaellarabel.com/



            • #16
              What is interesting is Nvidia running OpenCL without needing PCIe atomic operations, and still rocking.
              With AMD you are out of the game even with OpenCL 2.0 if you don't have PCIe atomics.



              • #17
                ~52% more transistors for a ~30% increase in performance on a smaller node that uses more power? That's not bad, but it's not good either in the grand scheme of things. The only good thing about it is that performance is decent, but I would have expected more from both a smaller process and a 50% increase in transistor count over the 2080 Ti.
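                The ratios above are quick to sanity-check with some arithmetic (the figures are the poster's estimates, not measured data):

                ```python
                # Rough efficiency check using the poster's estimates (not measured
                # figures): the 3080 is claimed to have ~52% more transistors and
                # ~30% more performance than the 2080 Ti.
                transistor_ratio = 1.52   # 3080 transistor count relative to 2080 Ti
                performance_ratio = 1.30  # 3080 performance relative to 2080 Ti

                # Performance delivered per transistor, relative to the 2080 Ti.
                perf_per_transistor = performance_ratio / transistor_ratio
                print(f"perf per transistor vs. 2080 Ti: {perf_per_transistor:.2f}x")
                # -> roughly 0.86x, i.e. ~14% less performance extracted per transistor
                ```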



                • #18
                  I think AdoredTV has a good synopsis on this over on YouTube. First, as many can see, Ampere is a 'great deal' on price only compared to the horribly inflated 2080 Ti and Titan RTX of Turing; Nvidia continues to boil the frog slowly there. Second, Ampere's performance gains are nowhere near as impressive as they seem, because the cards are using ~40% more power to achieve those numbers. The 3080 is better and cheaper than its predecessors, but the graphs are misleading unless the viewer keeps all the variables in mind. These are things anyone can see, but Nvidia has been successful in getting people not to think about them.

                  Why they chose to drive the power so hard is fun speculation territory. My guesses are:

                  1. They want to drive RTX performance to a place that convinces a certain number of people it's worthwhile.

                  2. AMD is gaining over time in absolute terms on performance, and Nvidia can't stand that. Like with Intel, pushing power is at least a temporary solution.

                  3. They can. Ampere will scale to non-ridiculous degrees at those power levels, and, whatever his other failings, Jensen Huang has demonstrated a love for creating and selling hardware that enables people to view better and better graphics over time, and always wants to produce 'the best' hardware on the planet.



                  • #19
                    Originally posted by jrdoane View Post
                    ~52% more transistors for a ~30% increase in performance on a smaller node that uses more power? That's not bad, but it's not good either in the grand scheme of things. The only good thing about it is that performance is decent, but I would have expected more from both a smaller process and a 50% increase in transistor count over the 2080 Ti.
                    You do realize this is just a consumer GPU; Nvidia's compute-oriented cards are the Teslas (or whatever they're called these days).
                    Whatever optimizations the 3080 may have received, OpenCL/CUDA performance was certainly not the main concern.



                    • #20
                      Originally posted by tuxd3v View Post
                      With AMD you are out of game even with OpenCL2.0 if you don't have pcie atomics..
                      PCIe atomics are a ROCm requirement; the proprietary OpenCL driver doesn't need them. And even ROCm doesn't require them for Vega or newer.
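                      As an aside, on Linux you can see whether a device advertises PCIe atomic-op support in the `DevCap2` line of `lspci -vvv` output (`AtomicOpsCap` flags). A minimal sketch of parsing that line; the sample string below is illustrative only, not from real hardware:

                      ```python
                      import re

                      def atomics_support(lspci_output: str) -> dict:
                          """Parse AtomicOpsCap flags from `lspci -vvv` DevCap2 output.

                          A '+' after a width means the device advertises that atomic
                          width; a '-' means it does not. Assumes the AtomicOpsCap
                          flags run to the end of the line.
                          """
                          match = re.search(r"AtomicOpsCap:\s*(.*)", lspci_output)
                          if not match:
                              return {}
                          caps = {}
                          for flag in match.group(1).split():
                              name, state = flag[:-1], flag[-1]
                              caps[name] = (state == "+")
                          return caps

                      # Illustrative sample -- real output comes from `lspci -vvv` (as root).
                      sample = "DevCap2: AtomicOpsCap: 32bit+ 64bit+ 128bitCAS-"
                      print(atomics_support(sample))
                      # {'32bit': True, '64bit': True, '128bitCAS': False}
                      ```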
