A Closer Look At The GeForce GTX 1060 vs. Radeon RX 580 In Thrones of Britannia

  • #11
    Originally posted by bridgman
    but AFAICS the need to spend cubic megadollars on that is gradually going away as programming models and hardware models continue to converge.
    Originally posted by humbug
    Can you elaborate on this?
    I guess the two main aspects are:

    1. GPU hardware architectures have converged on "scalar SIMD" as a consequence of needing to support compute as well as graphics. For graphics-only workloads the VLIW SIMD model was arguably more efficient, since essentially all of the work involved short vectors (typically 3- or 4-element) plus a scalar or two. Compute workloads, on the other hand, use vector sizes that vary widely and are usually very large, so a scalar instruction set ended up being more versatile, even though it requires relatively more control logic (program counters etc.) for a given number of ALUs. A rough sketch of the two workload shapes follows after these two points.

    2. The biggest one IMO is the move away from older graphics APIs to newer ones like Vulkan and DX12 (probably Metal too, although I haven't looked at it much). OpenGL had become both large enough and old enough that there were just too many different ways to use the API, particularly with NVidia encouraging application developers to use compatibility profiles, where the lack of standards more or less ensured a degree of vendor lock-in. The second sketch below contrasts the legacy and modern ways of submitting the same geometry.
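
    As a rough illustration of point 1 (a minimal sketch in plain C, not actual shader or driver code; the vec4 struct and both function names are hypothetical): a per-vertex transform decomposes naturally into fixed 4-wide groups of multiply-adds that a VLIW4 unit can pack into a single instruction word, while a typical compute loop is just a long run of scalar operations whose width has nothing to do with 3 or 4, which is the shape a scalar SIMD machine handles without any packing effort.

    #include <stddef.h>

    /* Hypothetical 4-component vector: the shape of most graphics work.
     * A VLIW4 ALU can evaluate all four multiply-adds of one result row
     * in one instruction word, so the packing is essentially free. */
    typedef struct { float x, y, z, w; } vec4;

    vec4 transform(const float m[16], vec4 v)   /* column-major 4x4 matrix */
    {
        vec4 r;
        r.x = m[0]*v.x + m[4]*v.y + m[8]*v.z  + m[12]*v.w;
        r.y = m[1]*v.x + m[5]*v.y + m[9]*v.z  + m[13]*v.w;
        r.z = m[2]*v.x + m[6]*v.y + m[10]*v.z + m[14]*v.w;
        r.w = m[3]*v.x + m[7]*v.y + m[11]*v.z + m[15]*v.w;
        return r;
    }

    /* Typical compute work: one long loop over n elements, where n is
     * large and unrelated to 3 or 4.  A scalar-SIMD GPU runs one loop
     * iteration per lane; a VLIW compiler would instead have to find
     * several independent operations per cycle to keep the unit busy. */
    float dot_large(const float *a, const float *b, size_t n)
    {
        float sum = 0.0f;
        for (size_t i = 0; i < n; ++i)
            sum += a[i] * b[i];
        return sum;
    }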

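    And a hedged illustration of point 2: the same triangle submitted through the legacy immediate-mode path that compatibility profiles keep alive, versus the buffer-object path that core profile (and, by extension, Vulkan/DX12) pushes applications toward. This is only a sketch; it assumes a current GL context and a loader such as GLEW or glad, and the core-profile path additionally assumes a bound VAO and shader program, none of which are shown here.

    #include <GL/glew.h>   /* assumption: GLEW (or glad) supplies the GL 2.0+ entry points */

    /* Legacy path, only legal in a compatibility profile: the driver has
     * to assemble the vertex data behind the application's back, one
     * call at a time. */
    void draw_triangle_immediate(void)
    {
        glBegin(GL_TRIANGLES);
        glVertex3f(-0.5f, -0.5f, 0.0f);
        glVertex3f( 0.5f, -0.5f, 0.0f);
        glVertex3f( 0.0f,  0.5f, 0.0f);
        glEnd();
    }

    /* Core-profile path: the application owns the vertex buffer and the
     * driver just consumes it, which is much closer to how Vulkan and
     * DX12 work.  Assumes a VAO and shader program are already bound. */
    void draw_triangle_core(void)
    {
        static const float verts[] = {
            -0.5f, -0.5f, 0.0f,
             0.5f, -0.5f, 0.0f,
             0.0f,  0.5f, 0.0f,
        };
        GLuint vbo;
        glGenBuffers(1, &vbo);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);
        glEnableVertexAttribArray(0);
        glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (const void *)0);
        glDrawArrays(GL_TRIANGLES, 0, 3);
        glDeleteBuffers(1, &vbo);
    }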