NVIDIA GeForce GTX 1070 On Linux: Testing With OpenGL, OpenCL, CUDA & Vulkan

  • #21
    Originally posted by bridgman
    I do have to gently protest that AMD hardware seems to have been excluded from most of the tests where it would have performed well, e.g. LuxMark scores are not displayed
    290X @ 1.1 GHz with 8 GiB, on Ubuntu 16.04, kernel 4.7 RC3, AMDGPU-PRO (whichever version; I guess the newest)

    LuxMark 3.1, OpenCL GPU-only test:
    • Hotel: 2310 points (slightly below 980 Ti level)
    • Microphone: 8038 points
    • LuxBall: 13482 points

    Edit: You wouldn't even want to know the perf/watt for my card, because this monster consumes over 400 W in FurMark, which is no wonder given the heat. But I have created another BIOS as a silent mode at 900 MHz that consumes about 200 W in FurMark and about 130 to 140 W in games, with a performance decrease of less than 20% - which is really amazing. I normally use it for gaming because the silence and the cool air in my room are really enjoyable.
    BTW: I expected the RX 480 to have a lower TDP, given the amazing perf/power scaling you can squeeze out of a 290X/390X with optimal settings... I mean, the 290X die is much bigger than the Ellesmere die. I think these essentially amazing AMD GPUs are generally clocked a bit past the sweet spot by default, although it looks like that is getting much better with the RX 480.
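
    To put the silent-BIOS numbers above in perspective, here is a back-of-the-envelope perf/watt comparison. This is only a sketch using the rough figures from this post (400 W vs. 200 W in FurMark, <20% performance loss), not a measurement:

    ```python
    # Back-of-the-envelope perf/watt check for the two 290X BIOS profiles
    # described above; all figures are the rough numbers from this post.
    profiles = {
        "stock (1.1 GHz)":  {"watts": 400, "rel_perf": 1.00},
        "silent (900 MHz)": {"watts": 200, "rel_perf": 0.80},  # "<20% decrease"
    }

    base = profiles["stock (1.1 GHz)"]["rel_perf"] / profiles["stock (1.1 GHz)"]["watts"]
    for name, p in profiles.items():
        ppw = p["rel_perf"] / p["watts"]
        print(f"{name}: {p['watts']} W, perf/watt {ppw / base:.1f}x stock")
    # -> the silent profile delivers roughly 1.6x the perf/watt of the stock BIOS
    ```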

    Originally posted by Michael
    Luxmark on AMDGPU-PRO crashed for the scenes used in my main article, but when doing the performance-per-Watt run in a separate result file, it happened to be with a LuxMark scene that worked on AMDGPU-PRO.
    Are you serious?
    As for Dota 2: I have come to like this site more and more, but it looks a bit strange to post the same questionable FPS counts for Dota 2 when we had a discussion about them a few days ago, and you already have the correct results. I really hope you will be more careful about the accuracy of the results in future articles.

    I can't test it in UHD, but currently I get about 100 FPS in Dota 2 at WQHD with everything maxed, which is not a bad result in my opinion and a far cry from something like 12 or 20 FPS, even allowing for the lower resolution...
    Here are some screenshots:
    Settings
    Ingame 1
    Ingame 2

    Dota 2 currently crashes too frequently with Vulkan, though - perhaps because my driver versions are too recent.
    And setting a refresh rate above 60 Hz with xrandr -r causes extreme flickering artifacts with the AMDGPU driver, so I can't make use of the 144 Hz my screen is capable of.
    Last edited by oooverclocker; 14 June 2016, 11:18 PM.



    • #22
      Originally posted by bridgman

      Interesting... so it sounds like it crashes on Hotel but runs well on Microphone and LuxBall HDR?

      Just curious: why were different scenes used for performance and performance-per-Watt?
      Nothing intentional; it was entirely coincidental in this case. When doing different benchmark runs I don't particularly care, so in cases like LuxMark where there are different scenes, if I don't want to test them all, I usually just test a subset, more or less at random.
      Michael Larabel
      https://www.michaellarabel.com/



      • #23
        Originally posted by Qaridarium
        What we see here, in my point of view, is the difference between GDDR5 (1070) and GDDR5X (1080).
        All the other parameters just follow from that: the 1070 does not need more shader cores because GDDR5 is the bottleneck.

        And if the 256-bit GDDR5 interface is the limit, the Radeon RX 480 will bring nearly the same performance as the 1070. Good for saving ~150€ and supporting open drivers.
        lol... you're hilarious.



        • #24
          Interesting idea, but most likely only in CPU-limited benchmarks; maybe you'll find some 1080p ones. Linux is generally more CPU-limited, so especially with AMD CPUs you might be right that a generally faster GPU is not faster in some games. AMDGPU-PRO is definitely not fully optimized and even lacks the FireGL mode that fglrx offers for professional cards. Mesa is certainly improving, but the question is always whether the Linux speed/money ratio is good enough for you. More mainstream cards like the GTX 1060 will come for sure, maybe 40-50% more expensive than the 4 GB RX 480 but with 6 GB. I think those will be a better choice - and if you need to save money, there will most likely be many used GTX 900 cards available.



          • #25
            Originally posted by rabcor
            Why are the cards so physically big, though? I mean, in the early days of boasting about Pascal's feats, one of the key claims their CEO made was that the cards were only about the size of two credit cards... this is clearly a lot bigger than that on every axis. More like 10 credit cards, or one Maxwell GPU.
            Yeah, they're huge, right? You would think they would have shrunk by now, but it seems they've gotten bigger. I mean, computers went from the size of a large room to the size of a fingertip =p



            • #26
              Originally posted by schmidtbag
              It really makes me wonder why anyone would buy a GTX 1080. This GPU is roughly 40% cheaper and, at worst, roughly 25% slower, while being as much as about 20% more power efficient. If you do 2K gaming or lower, the 1070 is more than enough. The only reason to pick a 1080 over a 1070 is 4K gaming, and even then it's not quite good enough for all titles.

              Sure, you could argue the 1080 is a bargain compared to the Titan X, but the Titan series are easily the worst-valued products Nvidia has ever made.
              The 1080 is aimed at enthusiasts, like top-end i7s are. They just want the best available; 10% more performance for 50% more money is not a problem for them.

              The Titan series is NOT aimed at gaming or benchmarking; Titans are mainly used by professionals, and even a 1080 cannot beat them in complex scene calculations.
              A classical benchmark running at 60 FPS may be misleading: when you compute a really complex scene with a mass of effects at 4K and get one frame every 5 minutes, the Titans work harder.

              Back to the main subject: that's f**** impressive - what a powerful beast for $399! Definitely the best power/$ ratio for several months.
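
              Taking the figures from the quote above at face value (roughly 40% cheaper, at worst roughly 25% slower - schmidtbag's numbers, not measurements), the value argument is simple arithmetic. A quick sketch:

              ```python
              # Relative perf-per-dollar of the 1070 vs. the 1080, using the
              # quoted figures: ~40% cheaper, at worst ~25% slower.
              price_1070, price_1080 = 1.00 - 0.40, 1.00   # normalized prices
              perf_1070,  perf_1080  = 1.00 - 0.25, 1.00   # normalized performance

              ratio = (perf_1070 / price_1070) / (perf_1080 / price_1080)
              print(f"1070 perf/$ vs. 1080: {ratio:.2f}x")  # -> 1.25x
              ```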

              Does anyone know if a "tiny" 1060 will exist?



              • #27
                No chance of at least one FP64 compute test?

                Thanks, though, for this really nice set of benchmarks... the 1070 might be the replacement for my Titan Black...


                By the way, is there any hint of OpenCL 2.0 support for the Pascal series?
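
                For anyone who wants to check FP64 exposure themselves, here is a minimal sketch. It assumes the pyopencl package and a working OpenCL ICD for the card are installed, and it only reports whether FP64 is exposed, not how fast it is (consumer Pascal runs FP64 at a small fraction of FP32 throughput):

                ```python
                # List OpenCL devices, their reported OpenCL version, and FP64 support.
                import pyopencl as cl

                for platform in cl.get_platforms():
                    for dev in platform.get_devices():
                        has_fp64 = "cl_khr_fp64" in dev.extensions
                        print(dev.name)
                        print(f"  platform: {platform.name}")
                        print(f"  version : {dev.version}")  # e.g. "OpenCL 1.2 CUDA"
                        print(f"  FP64    : {'yes' if has_fp64 else 'no'} (cl_khr_fp64)")
                ```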
                Last edited by adakite; 15 June 2016, 05:45 AM.



                • #28
                  Why do you assume that VRAM bandwidth scales linearly? It affects the minimum FPS more than the average FPS, and it matters more at higher VRAM usage. So basically, faster VRAM gives you a smoother gaming experience, and with faster RAM you also get more FPS in CPU-bottlenecked situations.

                  Originally posted by Passso
                  The 1080 is aimed at enthusiasts, like top-end i7s are.
                  This is a 300 mm², totally midrange mainstream chip, a bit bigger than the 230 mm² RX 480. Enthusiast cards like the Fury X or Titan X have 500-600 mm² dice. After the release of Vega or GP102, the price can only drop like a stone. Though there might be some GP100 scraps that perhaps get turned into GP102 in a cut-down form, like the 980 Ti versus the Titan X.

                  Originally posted by Passso
                  The Titan series is NOT aimed at gaming or benchmarking
                  The 980 Ti is just a GM200 with about 7% of the compute units deactivated (perhaps defective). The only situation where the Titan X will perform significantly better is when the VRAM is fully in use - and of course, in the same situation, the 1080 with 8 GiB would perform worse as well. For professional computing, the stability requirements differ completely from the consumer market, where it is not uncommon to get cards with slightly unstable clocks - so you would usually still prefer a professional FirePro or Quadro card over the Titan X, unless you set the clocks yourself so you can make 100% sure that you don't compute garbage.

                  Originally posted by Passso
                  Back to the main subject: that's f**** impressive - what a powerful beast for $399
                  It's roughly just what happens when you shrink a 980 Ti to half its size, overclock it considerably thanks to the power savings, and cut away some parts - which is exactly what you could have expected. Nothing impressive at all for 16 nm. But selling this budget part for the price of a high-class card is just cheeky.
                  And if you want it for smooth and serious gaming, you can add $100 for a G-Sync display, plus the extra power the monitor's G-Sync module consumes, which makes it twice as expensive as an RX 480, where you can have the same sync experience for free (and I really hope we will see kernel support in 4.8 or soon after). If you want SLI, you can add the Nvidia tax for the mainboard's SLI licence on top, and you mostly have no access to API resources for building your own programs, as you do with AMD cards. And so on.



                  • #29
                    Originally posted by Qaridarium

                    Thanks. Most people do not know that something only runs as fast as its bottleneck allows.
                    Most people are misguided on this point.
                    If someone thinks GDDR5 @ 256-bit = 256 GB/s is fast and not a bottleneck, then why is AMD spending money on inventing HBM at 512 GB/s, or even HBM2 at 1024 GB/s?

                    And most people draw illogical conclusions like: the Nvidia 1070 with 256 GB/s is faster than the AMD Fury with 512 GB/s, so it cannot be the "real" bottleneck. But they do not think about how fast the Nvidia 1070 would be if the card had 512 GB/s... sure, we will never know that.
                    Misc fact of the day: AMD did not invent HBM. But don't let that faze you.
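
                    For what it's worth, the raw bandwidth figures being thrown around do follow from simple arithmetic: peak theoretical bandwidth is bus width (bits) / 8 × effective data rate. A quick sketch with the commonly cited numbers for these cards:

                    ```python
                    # Peak theoretical memory bandwidth = bus_bits / 8 * data rate (GT/s).
                    cards = {
                        "GTX 1070 (GDDR5)":  {"bus_bits": 256,  "gt_per_s": 8.0},
                        "GTX 1080 (GDDR5X)": {"bus_bits": 256,  "gt_per_s": 10.0},
                        "Fury X (HBM)":      {"bus_bits": 4096, "gt_per_s": 1.0},
                    }

                    for name, c in cards.items():
                        print(f'{name}: {c["bus_bits"] / 8 * c["gt_per_s"]:.0f} GB/s')
                    # -> 256 GB/s, 320 GB/s, 512 GB/s
                    ```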



                    • #30
                      Originally posted by Qaridarium
                      If someone thinks GDDR5 @ 256-bit = 256 GB/s is fast and not a bottleneck, then why is AMD spending money on inventing HBM at 512 GB/s, or even HBM2 at 1024 GB/s?
                      Are all your 5672 posts as fun as this one?

