NVIDIA GeForce GTX 1080 On Linux: OpenGL, OpenCL, Vulkan Performance

  • #21
    Originally posted by artivision View Post
    The GTX 1000 series doesn't accelerate async_compute, as the Ashes benchmark shows yet again. Thanks, I will pass. It's far more interesting (even for windowz developers) to invest in free FX from AMD.
    That's not true; AotS has async shaders disabled for Nvidia hardware. You should know that (crappy) game was made for AMD hardware and then ported to Direct3D 12. The only reason people care about it is that it's the "first" Direct3D 12 game.

    Comment


    • #22
      Originally posted by efikkan View Post
      That's not true; AotS has async shaders disabled for Nvidia hardware. You should know that (crappy) game was made for AMD hardware and then ported to Direct3D 12. The only reason people care about it is that it's the "first" Direct3D 12 game.
      Yes, that is true: Nvidia has to emulate async compute in the drivers, since the hardware doesn't support it.
      Cry all you want about AotS, but it does show what is possible with async compute.

      The real problem here is that the 1080 is way overpriced: it doesn't bring enough of a performance gain over the 980 Ti to justify the higher price.

      Comment


      • #23
        The Graphics Card Information is a bit lacking. It does not show the name of the card, only:
        Graphics Processor: GeForce GTX 1080

        Comment


        • #24
          Originally posted by bridgman View Post
          Just curious, why the open drivers rather than the hybrid stack?
          Out of the 5 tested AMD cards, 2 are GCN 1.0, which is not supported by the closed stack. He could have just not included AMD cards at all...

          Comment


          • #25
            Originally posted by vortex View Post
            Nvidia has to emulate async compute in the drivers, since the hardware doesn't support it.
            No, the hardware supports it. The problem is a game written specifically for AMD hardware.

            Originally posted by vortex View Post
            Cry all you want about AotS, but it does show what is possible with async compute.
            The reason AMD gets a gain from doing compute async is that their scheduler is unable to utilize their GPU efficiently. The sole purpose of async shaders is to utilize different hardware resources simultaneously, while AMD uses them to compensate for their inefficient architecture. Nvidia's scheduler is already at near 100% efficiency, so nobody can blame them for not gaining performance on top of that.
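
            To be concrete about what "async shaders" means at the API level: the application just records graphics and compute work on separate queues with no dependency between them, and any overlap is left entirely up to the driver and hardware. Here is a minimal C++/Vulkan sketch of that submission pattern; the function and handle names are placeholders, and the device, queues, and command buffers are assumed to be created elsewhere:

            ```cpp
            // Minimal async-compute submission sketch (assumes the device, queues,
            // and command buffers were created elsewhere; names are placeholders).
            #include <vulkan/vulkan.h>

            void submitGraphicsAndComputeAsync(VkQueue graphicsQueue, VkQueue computeQueue,
                                               VkCommandBuffer gfxCmd, VkCommandBuffer compCmd)
            {
                // Graphics work goes to the graphics queue as usual.
                VkSubmitInfo gfxSubmit{};
                gfxSubmit.sType = VK_STRUCTURE_TYPE_SUBMIT_INFO;
                gfxSubmit.commandBufferCount = 1;
                gfxSubmit.pCommandBuffers = &gfxCmd;
                vkQueueSubmit(graphicsQueue, 1, &gfxSubmit, VK_NULL_HANDLE);

                // Compute work goes to its own queue. No semaphores link the two
                // submissions, so the implementation is free to overlap them, or to
                // serialize them if it cannot run graphics and compute concurrently.
                VkSubmitInfo compSubmit{};
                compSubmit.sType = VK_STRUCTURE_TYPE_SUBMIT_INFO;
                compSubmit.commandBufferCount = 1;
                compSubmit.pCommandBuffers = &compCmd;
                vkQueueSubmit(computeQueue, 1, &compSubmit, VK_NULL_HANDLE);
            }
            ```

            Whether that second submission actually buys you anything depends entirely on how much idle capacity the scheduler leaves behind, which is the whole point here.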

            Comment


            • #26
              Originally posted by artivision View Post
              The GTX 1000 series doesn't accelerate async_compute, as the Ashes benchmark shows yet again. Thanks, I will pass. It's far more interesting (even for windowz developers) to invest in free FX from AMD.
              Any compute device that has preemptive multithreading has asynchronous capabilities by definition. In the Pascal generation, NVidia has introduced much finer-grained preemption, so its ability to handle asynchronous workloads has improved substantially. Is the hardware support as effective as AMD's implementation with their ACEs? Well, that's another argument for another day, and I don't know the answer. My guess is that in some cases it will be better, and in other cases, like an-AMD-funded-and-biased-benchmark-posing-as-a-real-game (aka AotS), it will not. So it's fair to say that AMD have used a different means to support asynchronous compute, and even to argue that theirs is better. But to say NV doesn't "have" it is incorrect.
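
              For what it's worth, you can check what a driver actually exposes by enumerating the Vulkan queue families: a family with the compute bit but not the graphics bit is the dedicated compute queue that async-compute code paths typically target. A short C++ sketch, assuming an already-acquired VkPhysicalDevice (the function name is just illustrative):

              ```cpp
              // Sketch: find a dedicated compute queue family (compute-capable but
              // not graphics-capable), the kind async-compute code paths look for.
              #include <vulkan/vulkan.h>

              #include <cstdint>
              #include <vector>

              int32_t findDedicatedComputeFamily(VkPhysicalDevice gpu)
              {
                  uint32_t count = 0;
                  vkGetPhysicalDeviceQueueFamilyProperties(gpu, &count, nullptr);
                  std::vector<VkQueueFamilyProperties> families(count);
                  vkGetPhysicalDeviceQueueFamilyProperties(gpu, &count, families.data());

                  for (uint32_t i = 0; i < count; ++i) {
                      const VkQueueFlags flags = families[i].queueFlags;
                      if ((flags & VK_QUEUE_COMPUTE_BIT) && !(flags & VK_QUEUE_GRAPHICS_BIT))
                          return static_cast<int32_t>(i); // dedicated compute family
                  }
                  return -1; // none exposed; compute shares the graphics family
              }
              ```

              Note that this only tells you what the driver exposes, not how effectively the hardware overlaps the work once you actually use it.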

              Also, remember that AMD may have paid a huge price for those ACE engines. Notice how the latest NV hardware gets 1.8 GHz on air, while Vega is pegged at 1.2 GHz. It's like CISC vs. RISC: AMD may have a better hardware implementation of complex logic, but all that overhead may require sacrifices in clock speed, ROPs[1], etc. that in the end result in poorer performance for most use cases. Look at the apps you want to run on your OS and the performance of your card in those apps, *then* buy your card. If I were to use simple-minded metrics like "it ain't got no async_compute like AMD", I would have a Fury X, which provides less than half the performance of my GTX 980 Ti at the same price and many times the number of headaches. Yeah, no thanks.

              EDIT:
              [1] It appears Vega has only 64 ROPs, same as the Fury X. The GM200 products have 96 ROPs; the GP102 products (1080Ti, Titan P?) will probably have a similar number.
              Last edited by deppman; 05 June 2016, 09:06 AM.

              Comment


              • #27
                Originally posted by eydee View Post
                Out of the 5 tested AMD cards, 2 are GCN 1.0, which is not supported by the closed stack. He could have just not included AMD cards at all...
                *bridgman scratches head, tries to understand why a comparison with a new high-end card requires the inclusion of older AMD cards to be relevant, or why every card has to run the same driver. Not succeeding yet.*

                Originally posted by efikkan View Post
                The reason AMD gets a gain from doing compute async is that their scheduler is unable to utilize their GPU efficiently. The sole purpose of async shaders is to utilize different hardware resources simultaneously, while AMD uses them to compensate for their inefficient architecture. Nvidia's scheduler is already at near 100% efficiency, so nobody can blame them for not gaining performance on top of that.
                Nothing to do with schedulers AFAIK; more to do with the balance between ALU count and gfx pipeline resources (e.g. RBs). Recent generations of GCN have included extra ALUs relative to the gfx pipeline *because* async compute will allow them to be used in parallel with graphics.

                You might be thinking of the HW scheduler used for HSA compute, but that's a different (and much coarser-grained) scheduler than the ones used for determining which shader thread runs next on a SIMD or which pipeline queues new waves to the shader core.
                Last edited by bridgman; 04 June 2016, 01:52 PM.

                Comment


                • #28
                  Hey Mike,

                  Great review. I just sent in a tip, too, since I know this has been expensive and a lot of work crammed into a short time period.
                  It's pretty exciting to see the performance improvements of the GTX 1080, and the power efficiency is quite the boost.

                  Comment


                  • #29
                    Originally posted by bridgman View Post
                    Just curious, why the open drivers rather than the hybrid stack? Using the hybrid stack would have given OpenCL and Vulkan support, and a more apples-to-apples comparison.

                    You don't have to follow the review guide when you buy your own card, do you?

                    LOL @ Linuxhippy's post
                    No real reason besides that when I did the recent beta 2 vs. Linux 4.6 / Mesa 11.3-dev tests, the OpenGL results were rather close... And in some past comparisons where I tested Catalyst vs. NVIDIA, people complained that I should have used the open-source driver instead. So this time I decided to use open-source. But I guess people will complain either way. If there's enough interest, I'm happy to run a GTX 1080 comparison with AMDGPU-PRO, and I'll certainly use both drivers when it comes to RX 480 testing.
                    Michael Larabel
                    http://www.michaellarabel.com/

                    Comment


                    • #30
                      Originally posted by Qaridarium
                      I installed closed-source software on AMD hardware for years and it was painful. So why not just drop it?
                      This is exactly what AMD is trying to improve right now: get everything that does not contain licensed code (e.g. the closed-source OpenGL library) out into mainline as open source, and make the remaining binary-only blobs compatible with those open-source interfaces. So the old interface issues with the kernel and Xorg should soon be a thing of the past.

                      If AMD wants to improve their performance, why not just focus on improving the open-source driver?
                      They do; AMD employs several developers working on the OSS drivers, and if you compare the performance of the OSS driver over the last 2-3 years, you'll see incredible improvements in both performance and supported standards. Intel does the same. NVidia does not (except for their embedded chips, where industry basically forces them to do so).

                      Br

                      Comment
