A Closer Look At The GeForce GTX 1060 vs. Radeon RX 580 In Thrones of Britannia


    Phoronix: A Closer Look At The GeForce GTX 1060 vs. Radeon RX 580 In Thrones of Britannia

    As it stands right now, the most competitive graphics card battle on the Linux gaming front is the Radeon RX 580 against the GeForce GTX 1060. NVIDIA continues to deliver first-rate driver performance, while the Polaris hardware on the open-source RADV/RadeonSI drivers is now mature enough that it competes with the GTX 1060 as it should, and in some cases even performs much better than the NVIDIA Pascal part. With this week's release of the Vulkan-powered Thrones of Britannia, here is an extensive look at the two competing GPUs and their performance...

    http://www.phoronix.com/scan.php?pag...X-580-GTX-1060

  • #2
    It seems that some feature in the 'Extreme' settings is causing a major speed drop on AMD. I wonder whether, if that offending feature were switched off, the two would be on par in 'Extreme' mode? It would be interesting to know what that feature is.

    Disclaimer: Nvidia owner.



    • #3
      Michael

      Good test for the CPU.

      Both cards run similarly.

      The only bad thing about the RX 580 is that it consumes so much power (185 W) compared to the GTX 1060 (120 W).
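      A rough sketch of what those TDP figures imply, assuming the roughly equal frame rates the benchmarks show (the wattages are the board TDP numbers quoted above; actual draw under load varies by game and settings):

      ```python
      # Board-power (TDP) figures quoted above, in watts.
      tdp = {"RX 580": 185.0, "GTX 1060": 120.0}

      # If both cards deliver similar frame rates, performance-per-watt
      # scales inversely with power draw.
      efficiency_advantage = tdp["RX 580"] / tdp["GTX 1060"]
      print(f"GTX 1060 delivers ~{efficiency_advantage:.2f}x the perf-per-watt")
      # → GTX 1060 delivers ~1.54x the perf-per-watt
      ```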



      • #4
        Michael

        Very interesting head-to-head benchmark.
        Though I always wonder whether you find the time to enjoy those games yourself.

        Cheers



        • #5
          Originally posted by Ignatiamus View Post
          Michael

          Very interesting head-to-head benchmark.
          Though I always wonder whether you find the time to enjoy those games yourself.

          Cheers
          I haven't 'played' any games regularly now in like a decade.
          Michael Larabel
          http://www.michaellarabel.com/



          • #6
            Oh my god, the performance vs. power consumption is SO MUCH in favor of Nvidia, it hurts. I had to stop buying Nvidia cards due to their drivers (i.e., no proper OSS driver, bad desktop performance, several bugs under KDE). Even my shitty current RX 560 makes much more noise than my former 1060 did...



            • #7
              What bothers me personally (and this is also the case on Windows) is that, spec-wise, the 580 should be more comparable to a GTX 1070, not a 1060.



              • #8
                Yeah. It consumes more power because it's theoretically way stronger. But being on a par with a GTX 1060 is not bad when you consider that this Vulkan driver is a community driver running on a less-optimized LLVM and is just two years old. Luckily the RX 580 doesn't really differ in price.

                It was a tough fight for the RX 580 to get to this level with RADV, and Vega should see several improvements that will leave people satisfied with the result compared to RadeonSI. I didn't expect the RX 580 to be significantly ahead of the GTX 1060 anyway - the driver still gets too many regular performance improvements to be considered anywhere near mature. And currently we have to live with a not-so-optimal LLVM branch.



                • #9
                  Originally posted by msotirov View Post
                  What bothers me personally (which is also the case on Windows) is that specs wise the 580 should be more comparable to a GTX 1070, not 1060.
                  I don't think so. If you look at only one aspect (single-precision compute), that is true (~6 TF vs. ~4 TF), but there are other cases where the 1060 is ahead of the 580, e.g. pixel fill rate (~70 GP/s for the 1060 vs. ~40 GP/s for the 580). The 580's die size (232 mm^2) is also much closer to the 1060's (200 mm^2) than the 1070's (314 mm^2) on similar processes.
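                  A quick back-of-the-envelope look at those ratios (the figures are the approximate spec-sheet numbers quoted above, not benchmark results):

                  ```python
                  # Approximate spec-sheet figures quoted above (not measured performance).
                  rx580   = {"sp_tflops": 6.0, "pixel_fill_gps": 40.0, "die_mm2": 232.0}
                  gtx1060 = {"sp_tflops": 4.0, "pixel_fill_gps": 70.0, "die_mm2": 200.0}

                  # Ratio > 1 means the RX 580 leads on that metric; < 1 means the 1060 leads.
                  for metric in rx580:
                      ratio = rx580[metric] / gtx1060[metric]
                      print(f"{metric}: RX 580 / GTX 1060 = {ratio:.2f}")
                  # sp_tflops: 1.50, pixel_fill_gps: 0.57, die_mm2: 1.16
                  ```

                  Only on raw compute does the 580 look 1070-class; on fill rate and die area it sits squarely in 1060 territory.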

                  There are a dozen or so different factors that contribute to overall performance, and one of the important design decisions for every new part is where to invest your silicon area, i.e. which of those factors gets the most emphasis each year and in each product positioning. We generally provision our GPUs to be a bit more forward-looking (e.g. relatively more compute and relatively less pixel-pushing), but each vendor has its own view of how quickly and how significantly compute shaders will displace fixed-function graphics.

                  Sometimes that difference works well for us (when the gaming industry follows closer to our forecasts than to NVidia's) and sometimes it doesn't. Unfortunately you have to make your best guess as to what games are going to look like 3-4 years from now and use that estimate to guide how you design the parts for 2 or 3 generations from today.

                  One place we have differed significantly from NVidia over the last several years is the amount of money we were able to spend influencing game developers to follow one "vision" instead of the other, but AFAICS the need to spend cubic megadollars on that is gradually going away as programming models and hardware models continue to converge.
                  Last edited by bridgman; 06-09-2018, 03:57 PM.



                  • #10
                    Originally posted by bridgman View Post
                    One place we have differed significantly from NVidia over the last several years is the amount of money we were able to spend influencing game developers to follow one "vision" instead of the other, but AFAICS the need to spend cubic megadollars on that is gradually going away as programming models and hardware models continue to converge.
                    Can you elaborate on this?

