NVIDIA GeForce GTX 1080 On Linux: OpenGL, OpenCL, Vulkan Performance

  • #11
    Hi Michael. One loyal subscriber here. Thanks for the detailed review. The card does look like a beast!

    I think there is another deficiency compared to Maxwell cards: the lack of overclocking headroom. I have my GTX 980 Ti OC'd at 18%/20% GPU/memory, giving me a performance increase of 15% in Unigine Heaven at 3440x1440. Nothing extreme here; this is all on air using the stock nvidia-settings tool and coolbits. At that point, my guess is the two cards are pretty close in performance. Without OC potential, the 1080 isn't much of an upgrade.
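    For anyone curious, that coolbits workflow looks roughly like the sketch below on the proprietary driver; the offset values and the performance-level index [3] are illustrative and vary by card and driver version.

```shell
# Enable manual overclocking support in xorg.conf (takes effect after restarting X):
sudo nvidia-xconfig --cool-bits=8

# Offsets are applied per performance level; [3] is typically the highest
# level on these cards, but check yours. The values below are only examples.
nvidia-settings -a "[gpu:0]/GPUGraphicsClockOffset[3]=180"
nvidia-settings -a "[gpu:0]/GPUMemoryTransferRateOffset[3]=800"
```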

    I'm grateful to hear that AMD pulled Vega forward, as that means we should see the GP102 "mother of all graphics cards" (aka the GTX 1080 Ti) in September or October. That should offer a 50-75% increase over the GM200, and it's my next GPU if it stays below $1k.

    I noticed today that Vega still has only 64 ROPs vs. 96 on the GM200, and the stock frequency is pegged at 1.2GHz. Extrapolating from the Polaris numbers, it appears this chip could have a hard time beating the GTX 1080, and it will almost certainly be outclassed by the 1080 Ti.

    • #12
      Looks like there's still some work to do in VDPAU to support HEVC.

      • #13
        I'll be getting one of these for my Windows box I think when they start selling non founders editions. For my Linux box though, that 480 definitely looks like the card I've been waiting for.

        • #14
          Originally posted by deppman View Post
          Hi Michael. One loyal subscriber here. Thanks for the detailed review. The card does look like a beast!

          I think there is another deficiency compared to Maxwell cards: the lack of overclocking headroom. I have my GTX 980 Ti OC'd at 18%/20% GPU/memory, giving me a performance increase of 15% in Unigine Heaven at 3440x1440. Nothing extreme here; this is all on air using the stock nvidia-settings tool and coolbits. At that point, my guess is the two cards are pretty close in performance. Without OC potential, the 1080 isn't much of an upgrade.
          This will get worse over time. With increased efficiency, the range you can clock up at a stable voltage decreases, because the driver/hardware handles the situation much better. With Kepler the situation was already bad enough that 50mV of undervolting could crash your GPU pretty often. I expect this to be much worse on Pascal. Also, the voltage regulator may have gotten a smaller step size (12.5mV GPIO on Kepler and older, 6.25mV PWM on Kepler/Maxwell, 3.125mV PWM on Pascal).

          And that basically means in the end you can't do much about the voltage/clock ratio, and heat becomes a _real_ issue with overclocking.
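          To put those step sizes in perspective, a quick back-of-the-envelope count of how many regulator steps fit into a 50mV adjustment under each scheme (using the step sizes quoted above):

```shell
# Finer regulator steps mean more discrete set points inside the same 50 mV window.
for step in 12.5 6.25 3.125; do
  awk -v s="$step" 'BEGIN { printf "%6.3f mV step -> %2d steps per 50 mV\n", s, 50 / s }'
done
```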

          • #15
            but anyhow given the current state of the RadeonSI Gallium3D driver even with say a Fury X it wouldn't have added too much value to this comparison.
            Unfortunate that you didn't test against the pre-release AMDGPU-based proprietary drivers.
            Apples to apples: with an open-source driver, the GTX 1080 would score 0 points.

            • #16
              The GTX 1000 series doesn't accelerate async compute, as the Ashes benchmark again shows. Thanks, I will pass. It's far more interesting (even for Windows developers) to invest in the free FX from AMD.

              • #17
                Michael, thanks for the review.

                But one question: did you do the benchmarks in an open or closed rig?
                I'm aware that the GTX 1080 runs into throttling quicker than its predecessors, and usually falls down to its base frequency within 2-4 minutes of load. So to get reproducible results, the card should get a "warm-up" before the benchmark itself, or the benchmark should be run multiple times.
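                The warm-up idea can be sketched as a tiny harness; the echo command at the bottom is only a stand-in for the real benchmark invocation.

```shell
# Run one discarded warm-up pass so the boost-clock decay of the first few
# minutes doesn't inflate the first measured result, then take measured runs.
run_with_warmup() {
  "$@" > /dev/null 2>&1            # cold run: card still at peak boost, discard it
  for i in 1 2 3; do "$@"; done    # measured runs on a thermally settled card
}

run_with_warmup echo "measured run"   # substitute the actual benchmark command
```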

                • #18
                  So this was supposed to be cheaper than a 980? Was that only marketing?

                  • #19
                    No double-precision-specific benchmarks? In scientific computing, at least, double precision is the norm, as single precision is generally considered too imprecise.
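                    For rough context (my numbers, not from the article): assuming the GTX 1080's 2560 CUDA cores, a ~1733MHz boost clock, and the 1/32 FP64 rate of consumer Pascal, the theoretical throughput works out as follows.

```shell
# 2 FLOPs per core per cycle (FMA); consumer Pascal runs FP64 at 1/32 rate.
awk -v cores=2560 -v mhz=1733 'BEGIN {
  fp32 = cores * mhz * 2 / 1e6    # peak single precision, TFLOPS
  printf "FP32: %.2f TFLOPS, FP64 (1/32 rate): %.0f GFLOPS\n", fp32, fp32 * 1000 / 32
}'
```

                    At roughly a quarter of a TFLOPS of FP64, double-precision workloads would tend to target the dedicated compute parts instead.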

                    • #20
                      Just curious, why the open drivers rather than the hybrid stack? Using the hybrid stack would have given OpenCL and Vulkan support, and a more apples-to-apples comparison.

                      You don't have to follow the review guide when you buy your own card, do you?

                      LOL @ Linuxhippy's post
                      Last edited by bridgman; 04 June 2016, 12:08 PM.
