A Closer Look At The GeForce GTX 1060 vs. Radeon RX 580 In Thrones of Britannia


  • bridgman
    replied
    Originally posted by Panix View Post
    Prove it. I went by the benchmarks.
    Which benchmarks? All the benchmarks I looked at showed 580 and 1060 performance on Windows to be pretty close. Here are a couple of examples - the first two links below cover a variety of games, while the third is specifically about Thrones of Britannia, all on Windows.

    Many of you looking to spend $230-$280 on a new graphics card have asked whether that money would be better off going toward a Radeon RX 580...

    Total War Saga: Thrones of Britannia is not your average Total War game, focusing on a specific moment in history where things can change rapidly. This game is not Medieval III: Total War, but a tightly focused experience based in the time when a United Anglo Saxon Kingdom was first established, and England emerged as a nation. With […]


    The relative performance on Windows and Linux seems pretty comparable from these tests.



  • Panix
    replied
    Originally posted by bridgman View Post

    That doesn't sound even faintly correct. On average the 1060 and 580 perform pretty similarly on Windows; if anything the 1060 is a bit faster but also a bit more expensive. The 1070 and 1080 are getting up into the Vega price range.
    Prove it. I went by the benchmarks.



  • bridgman
    replied
    Originally posted by Panix View Post
    GTX 1060 looked way better and indicates more efficient hardware. The RX 580's main competitor would be the GTX 1070 or 1080, btw, in Windows - at least? Not good.
    That doesn't sound even faintly correct. On average the 1060 and 580 perform pretty similarly on Windows; if anything the 1060 is a bit faster but also a bit more expensive. The 1070 and 1080 are getting up into the Vega price range.
    Last edited by bridgman; 29 July 2018, 04:27 PM.



  • Panix
    replied
    GTX 1060 looked way better and indicates more efficient hardware. The RX 580's main competitor would be the GTX 1070 or 1080, btw, in Windows - at least? Not good.



  • bridgman
    replied
    Originally posted by bridgman View Post
    but AFAICS the need to spend cubic megadollars on that is gradually going away as programming models and hardware models continue to converge.
    Originally posted by humbug View Post
    Can you elaborate on this?
    I guess the two main aspects are:

    1. GPU hardware architectures have converged on "scalar SIMD" as a consequence of needing to support compute as well as graphics. For graphics-only work the VLIW SIMD model was arguably more efficient, since essentially all of the work involved short vectors (typically 3- or 4-element) plus a scalar or two, but the vector size for compute varied widely and was usually very large, so a scalar instruction set ended up being more versatile, although it did require relatively more control logic (program counters etc.) for a given number of ALUs. The sketch after this list illustrates the difference.

    2. The biggest one IMO is the move away from older graphics APIs to newer ones like Vulkan and DX12 (probably Metal too, although I haven't looked at it much). OpenGL had become both sufficiently large and sufficiently old that there were just too many different ways to use the API, particularly with NVidia encouraging application developers to use compatibility profiles where the lack of standards more or less ensured a degree of vendor lock-in.
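    To make point 1 concrete, here is a minimal CPU-side sketch of the two execution models. The names and structure are illustrative only, not any vendor's real ISA: a VLIW slot issues one bundle across a fixed short vector, while a scalar-SIMD lane runs the same scalar instruction on its own element, so arbitrary compute widths map onto it cleanly.

    #include <stdio.h>

    /* Illustrative only: VLIW SIMD packs a fixed short vector into one
     * instruction bundle, which suits vec3/vec4 graphics math. */
    typedef struct { float x, y, z, w; } vec4;

    static vec4 vliw_mad(vec4 a, vec4 b, vec4 c) {
        /* Conceptually one bundle: four ALU slots retire together. */
        return (vec4){ a.x * b.x + c.x, a.y * b.y + c.y,
                       a.z * b.z + c.z, a.w * b.w + c.w };
    }

    /* Scalar SIMD: each lane executes the same scalar instruction on its
     * own element, so any problem width maps cleanly; the cost is extra
     * control logic (program counters etc.) per group of lanes. */
    static void scalar_mad(const float *a, const float *b, const float *c,
                           float *out, int lanes) {
        for (int i = 0; i < lanes; i++)      /* lanes advance in lock-step */
            out[i] = a[i] * b[i] + c[i];     /* one scalar MAD per lane */
    }

    int main(void) {
        vec4 v = vliw_mad((vec4){1, 2, 3, 4}, (vec4){2, 2, 2, 2},
                          (vec4){1, 1, 1, 1});
        printf("vliw:   %.0f %.0f %.0f %.0f\n", v.x, v.y, v.z, v.w);

        /* 7 elements: awkward to pack into 4-wide bundles, trivial here. */
        float a[7] = {1, 2, 3, 4, 5, 6, 7}, b[7] = {2, 2, 2, 2, 2, 2, 2};
        float c[7] = {0}, out[7];
        scalar_mad(a, b, c, out, 7);
        printf("scalar: %.0f ... %.0f\n", out[0], out[6]);
        return 0;
    }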



  • humbug
    replied
    Originally posted by bridgman View Post
    One place we have differed significantly from NVidia over the last several years is the amount of money we were able to spend influencing game developers to follow one "vision" instead of the other, but AFAICS the need to spend cubic megadollars on that is gradually going away as programming models and hardware models continue to converge.
    Can you elaborate on this?



  • bridgman
    replied
    Originally posted by msotirov View Post
    What bothers me personally (which is also the case on Windows) is that spec-wise the 580 should be more comparable to a GTX 1070, not a 1060.
    I don't think so. If you only look at one aspect (single precision compute) that is true (~6TF vs ~4TF) but there are other cases where the 1060 is ahead of the 580, eg pixel fill rate (~70 GP/s for 1060 vs ~40 GP/s for the 580). The 580 die size (232 mm^2) is also much closer to 1060 (200) than 1070 (314) on similar processes.
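    For anyone who wants to check where those figures come from, here is a quick back-of-envelope sketch. The unit counts and clocks below are approximate reference specs (real boards vary), and the FP32 rate assumes the usual 2 FLOPs per ALU per clock from a fused multiply-add:

    #include <stdio.h>

    /* Rough reference specs; shipping boards run at different clocks. */
    typedef struct {
        const char *name;
        int alus;         /* stream processors / CUDA cores */
        int rops;         /* render output units */
        double clock_ghz; /* clock used for the estimate */
    } gpu;

    int main(void) {
        gpu cards[] = {
            { "RX 580",   2304, 32, 1.340 },   /* boost clock */
            { "GTX 1060", 1280, 48, 1.506 },   /* base clock  */
        };
        for (int i = 0; i < 2; i++) {
            /* GFLOPS = ALUs * 2 FLOPs/clock * GHz; /1000 gives TFLOPS. */
            double tflops = cards[i].alus * 2 * cards[i].clock_ghz / 1000.0;
            /* Pixel fill: one pixel per ROP per clock. */
            double gpix = cards[i].rops * cards[i].clock_ghz;
            printf("%-9s ~%.1f TFLOPS FP32, ~%.0f GP/s pixel fill\n",
                   cards[i].name, tflops, gpix);
        }
        return 0;
    }
    /* Prints ~6.2 TFLOPS / ~43 GP/s for the 580 and ~3.9 TFLOPS /
     * ~72 GP/s for the 1060: the 580 leads on raw compute, the 1060
     * on pixel fill, which is the trade-off described above. */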

    There are a dozen or so different factors that contribute to overall performance, and one of the important design decisions for every new part is where to invest your silicon area, ie which of those factors gets the most emphasis each year and in each product positioning. We generally provision our GPUs to be a bit more forward-looking (eg relatively more compute and relatively less pixel-pushing), but each vendor has its own view of how quickly and how significantly compute shaders will displace fixed-function graphics.

    Sometimes that difference works well for us (when the gaming industry follows closer to our forecasts than to NVidia's) and sometimes it doesn't. Unfortunately you have to make your best guess as to what games are going to look like 3-4 years from now and use that estimate to guide how you design the parts for 2 or 3 generations from today.

    One place we have differed significantly from NVidia over the last several years is the amount of money we were able to spend influencing game developers to follow one "vision" instead of the other, but AFAICS the need to spend cubic megadollars on that is gradually going away as programming models and hardware models continue to converge.
    Last edited by bridgman; 09 June 2018, 03:57 PM.



  • oooverclocker
    replied
    Yeah. It consumes more power because it's theoretically way stronger. But being on a par with a GTX 1060 is not bad when you consider that this Vulkan driver is the community driver, built on a less optimized LLVM, and just two years old. Luckily the RX 580 doesn't really differ in price.

    It was a tough fight for the RX 580 to reach this level with RADV, and Vega should see several improvements before people are satisfied with the result compared to RadeonSI. I didn't expect the RX 580 to be significantly ahead of the GTX 1060 anyway - the driver still gets too many regular performance improvements to be considered anywhere near mature. And currently we have to live with a less than optimal LLVM branch.



  • Guest
    Guest replied
    What bothers me personally (which is also the case on Windows) is that spec-wise the 580 should be more comparable to a GTX 1070, not a 1060.



  • torturedutopian
    replied
    Oh my god, the performance vs power consumption is SO MUCH in favor of Nvidia, it hurts me that I had to stop buying Nvidia cards due to their drivers (i.e. no proper OSS driver, bad desktop performance, several bugs under KDE). Even my shitty current RX 560 makes much more noise than my former 1060...

