NVIDIA GeForce GTX 1060 Offers Great Performance On Linux


  • #81
    Nouveau is already used by Nvidia for Android (look at the Pixel C) with a binary userspace. It would be good if Nvidia could add the missing parts for reclocking, and maybe vsync (not sure if that works yet), for desktop cards. Maybe the demand is still not big enough; most people who buy Nvidia cards use them for gaming and are happy with the binary drivers. For HTPC use the card choices could be much more interesting: there, 3D performance is not important, but video decoding is. But I have not yet seen a single article showing the Polaris Mesa vdpauinfo output - and, if the output is interesting, basic tests with Kodi, mpv and vlc.
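
    A minimal sketch for anyone wanting to run that check themselves. It assumes the vdpauinfo tool is installed and that its output uses the usual "Decoder capabilities" section header, which may vary between driver versions:

    #!/usr/bin/env python3
    # Print the decoder table the VDPAU driver exposes, as a quick sanity
    # check before testing actual playback in Kodi, mpv or vlc.
    # Assumption: vdpauinfo is installed and on PATH.
    import subprocess

    def print_vdpau_decoders():
        out = subprocess.run(["vdpauinfo"], capture_output=True,
                             text=True, check=True).stdout
        in_table = False
        for line in out.splitlines():
            if line.startswith("Decoder capabilities"):
                in_table = True
                continue
            if in_table:
                if line.endswith(":"):   # next section, e.g. "Output surface:"
                    break
                if line.strip() and not line.startswith("-"):
                    print(line)          # header row plus one row per codec

    if __name__ == "__main__":
        print_vdpau_decoders()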



    • #82
      Guess which card is selling the best? link.



      • #83
        The one that's in stock this week?



        • #84
          All this "Nvidia is better" vs "AMD is better" is getting a bit old. I think, the situation is quite clear. AMD's RX480 seems to offer more raw performance than the GTX1060 at a somewhat higher power draw. Whenever a game or compute workload manages to keep all the execution units busy (such as Doom or Ashes) it is quite a bit faster. Unfortunately, AMD appears to have more limited resources when it comes to the software side of things. Many games are based on Nvidia GameWorks-Stuff and Nvidia offers a far superior binary driver. One reason Nvidia doesn't show that much of a performance jump when switching to DX12/Vulkan is that their driver is already pretty good at keeping the GPU busy - at times at the expense of standard conformity (remember the snow in ashes for instance?). They also have a lot of application specific driver profiles. This makes the GTX1060 perform pretty well in comparison. Fortunately, at least my limited understanding indicates that in the future the low level APIs will allow the developers to basically do the application specific optimization in their app instead of the driver, which seems pretty reasonable. It'll be interesting to see, how well both card perform further down the road.
          One point I've been missing in the discussion, however, is the process node. For the first time in ages, both company's GPUs are not build by the same manufacturers anymore. I wonder, if Nvidia is able to clock their GPUs so high because of the TSMC 16nm process. At least in the iPhone (which dual-sources its SoC), the power draw was quite different under load. For a device that's mostly idle like a phone this doesn't matter much. In fully loaded scenarios (like a GPU when playing a game) at least the SoC power draw figures were quite heavily favouring the TSMC process over GloFo's.



          • #85
            Originally posted by GruenSein View Post
            All this "Nvidia is better" vs "AMD is better" is getting a bit old. I think, the situation is quite clear. AMD's RX480 seems to offer more raw performance than the GTX1060 at a somewhat higher power draw. Whenever a game or compute workload manages to keep all the execution units busy (such as Doom or Ashes) it is quite a bit faster. Unfortunately, AMD appears to have more limited resources when it comes to the software side of things. Many games are based on Nvidia GameWorks-Stuff and Nvidia offers a far superior binary driver. One reason Nvidia doesn't show that much of a performance jump when switching to DX12/Vulkan is that their driver is already pretty good at keeping the GPU busy - at times at the expense of standard conformity (remember the snow in ashes for instance?). They also have a lot of application specific driver profiles. This makes the GTX1060 perform pretty well in comparison. Fortunately, at least my limited understanding indicates that in the future the low level APIs will allow the developers to basically do the application specific optimization in their app instead of the driver, which seems pretty reasonable. It'll be interesting to see, how well both card perform further down the road.
            One point I've been missing in the discussion, however, is the process node. For the first time in ages, both company's GPUs are not build by the same manufacturers anymore. I wonder, if Nvidia is able to clock their GPUs so high because of the TSMC 16nm process. At least in the iPhone (which dual-sources its SoC), the power draw was quite different under load. For a device that's mostly idle like a phone this doesn't matter much. In fully loaded scenarios (like a GPU when playing a game) at least the SoC power draw figures were quite heavily favouring the TSMC process over GloFo's.
            Whatever happened to UMC?



            • #86
              Originally posted by efikkan View Post
              Guess which card is selling the best? link.


              At overclocker.co.uk the Fury was the bestseller. It sold at the same price as the 1060 and has more punch at 4K.
              I just got my first 4K monitor and can confirm that my Fury X is indeed rocking with the OSS drivers on openSUSE Tumbleweed.
              A constant 30 fps in The Witcher 2 at 4K (no AA needed), everything else maxed.

              Why only 30 fps? Because the box was missing a DisplayPort 1.2 cable... :/
              Still, this is a huge step forward from my HD 7950, which managed 12 fps at 2K with ubersampling.
              Finally I can play this game on Linux.

              AMD, you rock OSS land
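
              For reference, the 30 Hz cap is plain bandwidth arithmetic. A rough sketch, ignoring blanking overhead (so real requirements are a bit higher); the link rates are the usable data rates after 8b/10b coding:

              #!/usr/bin/env python3
              # Why 4K tops out at 30 Hz without a DisplayPort 1.2 link:
              # raw pixel data rate vs. usable link data rate.

              def pixel_rate_gbps(width, height, hz, bpp=24):
                  return width * height * hz * bpp / 1e9

              LINKS = {                      # usable Gbit/s after 8b/10b coding
                  "HDMI 1.4": 8.16,
                  "DP 1.2 (HBR2 x4)": 17.28,
              }

              for hz in (30, 60):
                  need = pixel_rate_gbps(3840, 2160, hz)
                  ok = [name for name, cap in LINKS.items() if cap >= need]
                  print(f"3840x2160@{hz} Hz needs ~{need:.1f} Gbit/s -> fits on: {ok}")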



              • #87
                Originally posted by GruenSein View Post
                All this "Nvidia is better" vs "AMD is better" is getting a bit old. I think, the situation is quite clear. AMD's RX480 seems to offer more raw performance than the GTX1060 at a somewhat higher power draw. Whenever a game or compute workload manages to keep all the execution units busy (such as Doom or Ashes) it is quite a bit faster.
                What anybody thinks is irrelevant, only the facts matter, and the fact is that the GTX 1060 clearly outperforms the RX 480 across the board in real-world applications.

                Granted, the RX 480 has 33% more GFlop/s, is a bigger chip, and has more memory bandwidth, so on paper it appears to be more powerful, yet the GTX 1060 is 8-10% faster. But it's pretty much the same relationship as with previous rivals from both camps: the Fury X has 53% more processing power than the GTX 980 Ti, yet they perform roughly the same. Theoretical numbers might be interesting, but they're not what matters to 99% of buyers. AMD fans desperately insist that AMD cards are the better buy in the long run because they will yield better performance gains over time, despite having no real evidence to support that. The latest big hype is that AMD's architectures are superior in Direct3D 12 and Vulkan, which of course is based on a handful of games, including AotS and Doom, which were written for AMD hardware exclusively. Currently the only unbiased benchmark is the new 3DMark Time Spy, which of course all AMD fans hate and discredit because it doesn't show the "expected" AMD advantage.
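
                For the record, those percentages fall straight out of the usual paper math: peak single-precision throughput = shader count x clock x 2 (one FMA counts as two flops). A quick sketch; the clocks below are the advertised boost clocks for the new cards and the reference base/engine clocks for the older pair, and sustained clocks differ in practice:

                #!/usr/bin/env python3
                # Theoretical FP32 throughput: shaders x clock(MHz) x 2 flops per FMA.

                def tflops(shaders, mhz):
                    return shaders * mhz * 2 / 1e6

                cards = {
                    "RX 480":     tflops(2304, 1266),   # boost clock
                    "GTX 1060":   tflops(1280, 1708),   # boost clock
                    "Fury X":     tflops(4096, 1050),   # engine clock
                    "GTX 980 Ti": tflops(2816, 1000),   # reference base clock
                }
                for name, t in cards.items():
                    print(f"{name:11s} {t:5.2f} TFLOP/s")

                print(f"RX 480 over GTX 1060:   +{cards['RX 480'] / cards['GTX 1060'] - 1:.0%}")
                print(f"Fury X over GTX 980 Ti: +{cards['Fury X'] / cards['GTX 980 Ti'] - 1:.0%}")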

                Originally posted by GruenSein View Post
                Unfortunately, AMD appears to have more limited resources when it comes to the software side of things.
                Actually no: the next batch of games is heavily co-developed with AMD, partly due to console integration.

                Originally posted by GruenSein View Post
                Many games are built on Nvidia GameWorks components, and Nvidia offers a far superior binary driver. One reason Nvidia doesn't show much of a performance jump when switching to DX12/Vulkan is that their driver is already pretty good at keeping the GPU busy - at times at the expense of standards conformance (remember the snow in Ashes, for instance?). They also have a lot of application-specific driver profiles. This makes the GTX 1060 perform pretty well in comparison.
                And so does AMD, just as much as Nvidia.

                Originally posted by GruenSein View Post
                Fortunately, at least my limited understanding indicates that in the future the low-level APIs will let developers do the application-specific optimization in their app instead of the driver, which seems pretty reasonable. It'll be interesting to see how well both cards perform further down the road.
                Emphasis on limited understanding. Why make assumptions about technical matters that require thorough knowledge and years of experience to even grasp? (Rhetorical question.) This is how misconceptions spread like wildfire.
                What do you mean by application-specific optimization in the application?

                Originally posted by GruenSein View Post
                One point I've been missing in the discussion, however, is the process node. For the first time in ages, the two companies' GPUs are not built by the same manufacturer anymore. I wonder if Nvidia is able to clock its GPUs so high because of the TSMC 16nm process. At least in the iPhone (which dual-sources its SoC), the power draw was quite different under load. For a device that's mostly idle, like a phone, this doesn't matter much. In fully loaded scenarios (like a GPU when playing a game), at least the SoC power draw figures quite heavily favoured the TSMC process over GloFo's.
                I would never base assumptions about high-power nodes on comparisons of low-power nodes.
                I know the yields of TSMC 16 nm HP are good, but that's not what makes a GTX 1060 outperform an RX 480.



                • #88
                  Originally posted by efikkan View Post
                  What anybody thinks is irrelevant, only the facts matter, and the fact is that the GTX 1060 clearly outperforms the RX 480 across the board in real-world applications.
                  No, the 480 does indeed win in a couple. Very rarely, but it's not quite a clean sweep.

                  I would never base assumptions about high-power nodes on comparisons of low-power nodes.
                  Yeah, at this point we really don't know.

                  I know the yields of TSMC 16 nm HP are good, but that's not what makes a GTX 1060 outperform an RX 480.
                  There's no possible way you can know that, though, unless you are actually an engineer working for AMD or Nvidia (or TSMC/GloFo).
                  The 480 is power-limited, so if it were just 15% more power-efficient, that would likely allow AMD to make it equally fast. Now, for all we know, the GloFo process is actually better than what TSMC provides, so it could easily go the other way as well. But as an outsider there's no way to know for sure.
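
                  That argument is one line of arithmetic: at a fixed board-power cap, performance = (perf/W) x watts, so 15% more perf/W at the same cap is 15% more performance. A sketch with the numbers from this thread (the 150 W board power target and the linear scaling are assumptions):

                  #!/usr/bin/env python3
                  # Fixed power cap: perf = efficiency * watts, so efficiency gains at
                  # the cap translate one-to-one into performance gains (assuming
                  # linearity, which real silicon only approximates).

                  power_cap_w = 150      # RX 480 board power target (assumed)
                  deficit     = 0.10     # GTX 1060 lead in the benchmarks above (8-10%)
                  eff_gain    = 0.15     # hypothetical perf/W improvement

                  print(f"perf at {power_cap_w} W with +{eff_gain:.0%} perf/W: "
                        f"{1 + eff_gain:.2f}x (needs {1 + deficit:.2f}x to match the 1060)")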



                  • #89
                    Originally posted by efikkan View Post
                    What anybody thinks is irrelevant, only the facts matter, and the fact is that the GTX 1060 clearly outperforms the RX 480 across the board in real-world applications. [...]
                    Fortunately, we have well-informed and unbiased people like you on the forums who can set things straight. I especially like how you lead with "what anybody thinks is irrelevant" and then go nitpicking for half a page. Try to keep calm. It's better for your health and for everyone else's.

                    You might not like the points I raised, but the basics can't be disputed: the raw performance is there in AMD hardware, and there are applications that manage to extract it. The question is why. Since the software that manages to extract it uses low-level APIs, it is only reasonable to assume that the OpenGL drivers, which did a lot of this low-level work internally, aren't optimised for a specific workload as well as that workload's own developer can manage.

                    As to the process node: what I said was that I wondered whether it had an impact, and that this point had been missing from the evaluation of each GPU's architecture. Then I gave an example of two superficially comparable processes with very different power draw characteristics. At no point did I state that the same absolutely has to apply here.

                    You can still feel free to prefer Nvidia and throw your money at them. I don't have a problem with that. Just stop flipping out when people ask whether everything team green does is actually that much better. Looking at the current state of things, you might actually be better off with a 1060, I'll give you that, so chill.



                    • #90
                      It is a bit weird that AMD's PR kept stressing how power-efficient Polaris is, and then the card failed to stay within the official power limits of PCIe and a 6-pin connector. Nvidia managed to do that and looks much better at idle too. I guess it doesn't really interest gamers much; for them performance/price matters more.

                      But GloFo certainly did not develop the 14 nm FinFET process alone; it was licensed from Samsung. TSMC uses 16 nm FinFET for the new Nvidia chips, and judging from the power consumption it does not look bad at all. If you could assume the same yield, then GloFo could produce 30% cheaper for the same die size. But this figure is most likely top secret. It also matters how many chips are available; the market looks like it has been waiting for this new generation, and maybe AMD is in a good position this time.
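
                      Since real wafer prices are secret, here is only the shape of that cost arithmetic, with placeholder numbers: cost per good die = wafer price / (dies per wafer x yield), so a 30% cheaper wafer at equal yield means a 30% cheaper die. The 232 mm² die size is Polaris 10's published figure; the prices and the yield below are made up:

                      #!/usr/bin/env python3
                      # Cost-per-die model with placeholder wafer prices and yield.
                      import math

                      def dies_per_wafer(die_mm2, wafer_mm=300):
                          # Standard approximation: gross dies minus edge loss.
                          r = wafer_mm / 2
                          return int(math.pi * r**2 / die_mm2
                                     - math.pi * wafer_mm / math.sqrt(2 * die_mm2))

                      def cost_per_good_die(wafer_price, die_mm2, yield_frac):
                          return wafer_price / (dies_per_wafer(die_mm2) * yield_frac)

                      DIE_MM2 = 232   # Polaris 10 (RX 480)
                      YIELD   = 0.80  # assumed, same for both fabs
                      print(f"dies per 300 mm wafer: {dies_per_wafer(DIE_MM2)}")
                      for fab, price in [("fab A", 6000.0), ("fab B", 4200.0)]:  # 30% apart
                          print(f"{fab}: ${cost_per_good_die(price, DIE_MM2, YIELD):.2f} per good die")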



