The-Forge 1.26 Offers Up Vulkan-Powered Ray-Tracing On Windows & Linux

  • #11
    Originally posted by theriddick View Post
    Yes well, the special thing about them is they are capable of doing 120 odd FP16 TFLOPs on top of the FP32 performance of the main core, at least how I understand it.
    Actually, tensor cores tend to be even lower precision than that. TPUs are mostly designed to operate on 8-bit numbers.
    Completely useless for any complex scientific computation (e.g., utterly useless for physics/chemistry simulations), but more than enough for the kind of matrix operations done in deep neural nets.
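    To give a concrete idea of what that hardware exposes, here is a minimal sketch using CUDA's WMMA API (the API is real, but the kernel name, tile size and launch choice are just illustrative, and it assumes a Volta/Turing-class GPU, sm_70+): one warp performs a single 16x16x16 D = A*B + C tile with FP16 inputs and an FP32 accumulator. Nothing graphics-specific is involved; it really is just small-matrix math.

    ```
    #include <cuda_fp16.h>
    #include <mma.h>
    using namespace nvcuda;

    // Minimal sketch of what a tensor core actually does: one 16x16x16
    // matrix multiply-accumulate per warp, FP16 inputs, FP32 accumulator.
    // Launch with (at least) one full warp of 32 threads.
    __global__ void tensor_tile_mma(const half* A, const half* B, float* C)
    {
        wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a;
        wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> b;
        wmma::fragment<wmma::accumulator, 16, 16, 16, float> acc;

        wmma::fill_fragment(acc, 0.0f);          // accumulator starts at zero
        wmma::load_matrix_sync(a, A, 16);        // leading dimension = 16
        wmma::load_matrix_sync(b, B, 16);
        wmma::mma_sync(acc, a, b, acc);          // the actual tensor-core op
        wmma::store_matrix_sync(C, acc, 16, wmma::mem_row_major);
    }
    ```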

    Originally posted by theriddick View Post
    Also the tensor only calculates geometry or something for ray tracing, the actual ray tracing is still done on the GPU...
    To be more precise:

    - The tensor cores (the things that run AI neural nets on enterprise servers) actually do post-processing.
    Modern ray tracing tries to achieve better speeds with a lower number of rays per pixel (which would give you a grainier, lower-quality picture, like the intermediate preview steps that 3D software such as Blender shows while it is still accumulating rays), but one where you can still kind of make out the picture. The "making the real picture out of the grainier preview" part is what the tensor cores handle.
    It's kind of an "artist's impression of what the grainy preview should represent". Neural nets are very good at "artist's impression" and other tasks where a human mind could see the subject of a picture, but it's complicated to write a precise mathematical algorithm to rebuild the picture (I mean a better algorithm than "just blur it together and get a low-res picture").

    - The rest of what Nvidia puts under the "RTX Cores" umbrella is a set of small units that help a bit with the ray-intersection computations.

    - Indeed, the bulk of the rays themselves is still computed on the general-purpose CUDA cores, a.k.a. unified shaders (a toy sketch of this split follows below).
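    Here is that toy sketch (everything is invented for illustration: the scene is one hard-coded sphere, and a dumb 3x3 average stands in for the trained denoiser that the tensor cores would actually run). The first kernel shoots the few rays on ordinary CUDA cores, the second is the post-process that cleans up the grainy result.

    ```
    #include <cuda_runtime.h>
    #include <math.h>

    __device__ bool hitSphere(float3 o, float3 d, float3 c, float r, float* t)
    {
        // Standard ray/sphere quadratic (d must be normalized); this is the
        // kind of intersection math the dedicated RT units accelerate.
        float3 oc = make_float3(o.x - c.x, o.y - c.y, o.z - c.z);
        float b = oc.x * d.x + oc.y * d.y + oc.z * d.z;
        float cc = oc.x * oc.x + oc.y * oc.y + oc.z * oc.z - r * r;
        float disc = b * b - cc;
        if (disc < 0.0f) return false;
        *t = -b - sqrtf(disc);
        return *t > 0.0f;
    }

    __global__ void traceOneSamplePerPixel(float* radiance, int w, int h)
    {
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        if (x >= w || y >= h) return;

        // One primary ray per pixel: cheap, but grainy as soon as random
        // light sampling is layered on top of it.
        float3 org = make_float3(0.f, 0.f, 0.f);
        float3 dir = make_float3((x - w * 0.5f) / h, (y - h * 0.5f) / h, 1.f);
        float len = sqrtf(dir.x * dir.x + dir.y * dir.y + dir.z * dir.z);
        dir = make_float3(dir.x / len, dir.y / len, dir.z / len);

        float t;
        bool hit = hitSphere(org, dir, make_float3(0.f, 0.f, 3.f), 1.f, &t);
        radiance[y * w + x] = hit ? 1.0f : 0.1f;
    }

    __global__ void naiveDenoise(const float* noisy, float* clean, int w, int h)
    {
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        if (x >= w || y >= h) return;

        // Stand-in for the neural "artist's impression": a dumb 3x3 average,
        // i.e. exactly the "just blur it together" baseline mentioned above.
        float sum = 0.f; int n = 0;
        for (int dy = -1; dy <= 1; ++dy)
            for (int dx = -1; dx <= 1; ++dx) {
                int xx = x + dx, yy = y + dy;
                if (xx >= 0 && xx < w && yy >= 0 && yy < h) {
                    sum += noisy[yy * w + xx];
                    ++n;
                }
            }
        clean[y * w + x] = sum / n;
    }
    ```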

    Originally posted by starshipeleven View Post
    That said, do AMD consumer cards have similar hardware that could be used to do the same job?
    From what I've gathered, AMD cards don't rely on separate cores for AI NN computation.
    So "yes, they can also run ray tracing's NN post-filtering", but "no, just like NN jobs in the datacenter, AMD Vega / VII cards don't use separate dedicated cores for it".
    It's basically the "discrete pixel and vertex shaders vs. unified shaders" debate all over again.

    AMD's slightly more multi-purpose cores mean that they can easily be repurposed depending on needs (but at the cost of slightly more complex hardware), without needing to find the perfect mix of the different types of cores to set in stone in advance - see all the debate about the number of pixel vs. vertex shaders before unification.
    (Though AMD specializing in custom hardware means that a large client *could* ask for a large batch of differently built chips. They could make some NN-specific cards that excel at low-precision numbers for neural nets, if suddenly some company needed that for their voice assistant.)

    Nvidia has two different subtypes of units specializing in two different precisions (FP32, plus FP64 that is optional but driver-limited to enterprise, on one side, and 8-bit on the other), but at the cost of not being able to repurpose those units in situations where 8-bit math isn't needed, like the gaming market.
    "Unneeded": that is, unless Nvidia's marketing department suddenly starts making a big fuss about "RTX Cores" and ray-tracing acceleration. Hence the whole drama.
    (Even more so if you keep in mind that Nvidia doesn't do custom hardware as much as AMD does. The RTX 2080 is basically their data-center product, with a couple more monitor connectors soldered on the board, packaged in a cardboard box with a nice semi-naked lady so it can suddenly pass as "gamer gear".)

    Originally posted by starshipeleven View Post
    Would a rendering framework based on Vulkan be able to operate this hardware so the card can do a better job than doing raytracing on shaders?
    Google having chosen AMD as the GPU provider for Stadia tends to show that (at least Google thinks) AMD cards can deliver decent ray tracing for current games (in the real world, as opposed to tech demos).

    Apparently, Google has estimated that AMD will give them decent-enough bang for the buck at the scale they are looking at.
    (And both analysts and rumors point out that Google's announcement looks pretty close to Vega / VII specs, so they are very likely using only slightly customized AMD graphics cards, not something totally custom built from the ground up.)
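    As for the API side of the question: a Vulkan-based framework reaches the dedicated hardware (when present) by enabling a ray-tracing device extension, and falls back to compute-shader ray tracing otherwise. A minimal host-side sketch, assuming the Vulkan headers of this era where the relevant extension is still the vendor-specific VK_NV_ray_tracing (the helper name is invented):

    ```
    #include <vulkan/vulkan.h>
    #include <cstring>
    #include <vector>

    // Returns true if the physical device exposes hardware ray tracing
    // through VK_NV_ray_tracing (vkCmdTraceRaysNV and friends).
    bool hasHardwareRayTracing(VkPhysicalDevice dev)
    {
        uint32_t count = 0;
        vkEnumerateDeviceExtensionProperties(dev, nullptr, &count, nullptr);
        std::vector<VkExtensionProperties> exts(count);
        vkEnumerateDeviceExtensionProperties(dev, nullptr, &count, exts.data());

        for (const auto& e : exts)
            if (std::strcmp(e.extensionName, "VK_NV_ray_tracing") == 0)
                return true;   // use the dedicated RT pipeline path
        return false;          // fall back to a compute-shader ray tracer
    }
    ```
    On a Vega / VII card that check simply fails and the framework keeps running its compute-shader path, which is exactly the "same job, no dedicated cores" situation described above.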
