NVIDIA's Work On Adding Ray-Tracing To Vulkan


  • #11
    Originally posted by VikingGe
    1. There's no Vulkan<->OpenCL interop.
    2. Drivers (especially on AMD) are all over the place.
    3. I think OpenGL<->OpenCL interop is pretty inefficient.
    It would seem they have some sort of interoperability at AMD:



    • #12
      Although it's interesting, my main takeaway from this article is that Vulkan is already positioned to follow DirectX, the way OpenGL did.



      • #13
        Originally posted by tpruzina
        3. Not necessarily in most cases; a typical use case would be a fluid dynamics simulation, where CL would compute "waves" and GL would render them using GPU shared memory (e.g. CL would modify vertex buffers for GL)
        And why would that be preferable over using GL compute shaders, which you'd use in a Vulkan version anyway? I highly doubt that a CL approach that has to go through a different state tracker would be nearly as efficient as just using a compute shader.
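        For reference, the acquire/compute/release pattern being debated looks roughly like this. Below is a minimal sketch using CUDA's OpenGL interop as a stand-in (the CL path via clCreateFromGLBuffer/clEnqueueAcquireGLObjects has the same shape); the wave kernel and all names are hypothetical:

```cuda
// Minimal sketch of the "compute API writes a GL vertex buffer" pattern:
// CUDA's GL interop standing in for CL<->GL interop, which follows the
// same register/acquire/compute/release shape. `vbo` is assumed to be an
// existing GL buffer of float4 vertices. All names are hypothetical.
#include <cuda_runtime.h>
#include <cuda_gl_interop.h>

// Hypothetical "waves" kernel: displace each vertex vertically.
__global__ void waves(float4 *verts, int n, float t) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) verts[i].y = 0.1f * sinf(verts[i].x * 8.0f + t);
}

void update_mesh(GLuint vbo, int n, float t) {
    cudaGraphicsResource *res = nullptr;
    // Registration would normally happen once at startup; shown inline
    // here for brevity.
    cudaGraphicsGLRegisterBuffer(&res, vbo, cudaGraphicsRegisterFlagsNone);

    cudaGraphicsMapResources(1, &res);                 // acquire from GL
    float4 *verts = nullptr;
    size_t bytes = 0;
    cudaGraphicsResourceGetMappedPointer((void **)&verts, &bytes, res);
    waves<<<(n + 255) / 256, 256>>>(verts, n, t);      // compute pass
    cudaGraphicsUnmapResources(1, &res);               // hand back to GL
    cudaGraphicsUnregisterResource(res);
    // GL can now render the displaced vertices straight from `vbo`.
}
```

        Note that every frame pays for the map/unmap handoff between the two APIs; that synchronization is exactly what a single-API compute-shader path avoids.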



        • #14
          Originally posted by GunpowaderGuy
          This renewed interest in ray tracing comes too late; neural networks can already be used for illumination, and soon enough they will be capable of reliably hallucinating high-quality video in real time
          The most that neural networks and DSP/tensor cores can do is run a de-noising filter so that the real shader cores can calculate fewer rays, but that can be done by an ASIC as well.
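          To make that division of labor concrete: the tensor-core contribution would be a post-process pass over the noisy ray-traced image. A minimal sketch of such a pass, with a plain 3x3 box filter standing in for a learned denoiser (the image layout and all names are assumptions):

```cuda
// Minimal sketch of a de-noise post-process pass: a 3x3 box filter
// standing in for a learned denoiser. Assumes a row-major single-channel
// float image of size w x h; all names are hypothetical.
__global__ void denoise(const float *noisy, float *out, int w, int h) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= w || y >= h) return;

    float sum = 0.0f;
    int count = 0;
    for (int dy = -1; dy <= 1; ++dy)
        for (int dx = -1; dx <= 1; ++dx) {
            int nx = x + dx, ny = y + dy;
            if (nx >= 0 && nx < w && ny >= 0 && ny < h) {
                sum += noisy[ny * w + nx];
                ++count;
            }
        }
    out[y * w + x] = sum / count;  // averaging hides per-pixel ray noise
}
```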



          • #15
            That reminds me of the good old days of POV-Ray. I installed it on Xubuntu, but it is unusable. The project seems half-dead and not widely used on Linux.



            • #16
              Originally posted by tpruzina

              Basically "Tensor core" is a glorified FPU, it can't decode instructions, it can't execute instructions.

              Here is a rough translation of GPU vendor marketing terms into (near-)equivalent CPU concepts:

              SM/CU -> CPU core + L1 cache + stuff
              TMU -> Load/Store pipeline
              Cuda Core/Streaming Processor -> 1x Lane of vector/SIMD execution unit (FMA*)
              Tensor core -> Accelerated "Cuda core" that works on a 4x4 matrix

              *FMA - fused multiply add, aka "a*b+c" on matrices (weights[i]*inputs[i]+biases[i] in neural networks)
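              As a concrete illustration of that footnote, the inner loop of a dense neural-network layer is one FMA per weight. A minimal sketch (layer shapes and names are hypothetical):

```cuda
// One thread per output neuron: out[i] = bias[i] + sum_j w[i][j] * in[j].
// Each iteration is a single fused multiply-add -- the "a*b+c" that one
// CUDA-core lane executes per clock. Shapes and names are hypothetical.
__global__ void dense_layer(const float *w, const float *in,
                            const float *bias, float *out,
                            int n_in, int n_out) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n_out) return;

    float acc = bias[i];
    for (int j = 0; j < n_in; ++j)
        acc = fmaf(w[i * n_in + j], in[j], acc);  // weights*inputs + acc
    out[i] = acc;
}
```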

              https://youtu.be/KHa-OSrZPGo (GPU vs CPU hw)




              Reusing known, working code that you got from somewhere is almost always preferable to rewriting it from scratch, especially in the example case I used before.
              The same goes if you want to support GL revisions older than 4.3, which is when compute shaders were added.
              Middleware like that was often written for CL, especially for simulations where real-time performance doesn't matter, because you are not trying to deliver the next frame in under 16 ms (or 6.9 ms).
              Nvidia's tensor cores are 640 units each performing a 4x4x4 FMA (mixed precision: 32b <- 16b*16b + 32b), so it isn't 4x4, it's 4x4x4. The other established name for all these "tensor core" technologies is DSP; that's simply what they are. They also cannot calculate rays or anything similarly complex; they may be used for a de-noise filter, as I wrote above, or we may never see them on gamer GPUs at all. Don't wait for them, it's futile.
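              For what that looks like from software: CUDA exposes the tensor cores through the warp-level WMMA API, which works on 16x16x16 tiles that the hardware decomposes into those 4x4x4 mixed-precision FMAs. A minimal sketch, assuming FP16 inputs, FP32 accumulation, and a single pre-filled tile:

```cuda
#include <cuda_fp16.h>
#include <mma.h>
using namespace nvcuda;

// One warp computes C = A*B + C on a 16x16x16 tile: FP16 operands with
// FP32 accumulation, i.e. the mixed 32b <- 16b*16b + 32b FMA described
// above. Launch with (at least) one warp of 32 threads; Volta or newer.
__global__ void wmma_tile(const half *a, const half *b, float *c) {
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> fa;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> fb;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> fc;

    wmma::load_matrix_sync(fa, a, 16);                  // leading dim 16
    wmma::load_matrix_sync(fb, b, 16);
    wmma::load_matrix_sync(fc, c, 16, wmma::mem_row_major);
    wmma::mma_sync(fc, fa, fb, fc);                     // tensor-core FMA
    wmma::store_matrix_sync(c, fc, 16, wmma::mem_row_major);
}
```

              Per the 640*4x4x4 figure above, each tensor core chews through one 4x4x4 slab of such a tile per clock.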



              • #17
                Originally posted by GunpowaderGuy
                This renewed interest in ray tracing comes too late; neural networks can already be used for illumination, and soon enough they will be capable of reliably hallucinating high-quality video in real time
                This is the kind of outlandish thinking that makes people shell out thousands for a useless "self-driving" upgrade for their Tesla that gets canceled a year later. Repeat after me: neural networks are not magical.



                • #18
                  Originally posted by VikingGe
                  One that is based on OpenCL and therefore not usable for games. Or did I miss something here?

                  I think Nvidia just wants to wire up its RTX stuff, which it is already using as a DXR backend, to Vulkan. Nothing wrong with that, although I have my doubts that anybody is actually going to use it any time soon.
                  Well, it seems that Unity 3D does not agree with you:
                  As developer marketing manager at AMD, my mission is to enable developers to create ground-breaking experiences. That's why I am so passionate about GPUOpen -- AMD's open-source initiative to supply game and professional graphics developers with powerful tools to design better GPU-powered applications. In the blog below, I'll dive into AMD's Radeon Rays integration with Unity and how you can learn more on GPUOpen.

                  Revolutionizing render times and workflows for realistic light effects has been one of the dominant themes at GDC 2018. The announcement of AMD's Radeon Rays integration in Unity's GPU Progressive Lightmapper is particularly exciting to game developers looking to boost the visual fidelity of their games, assisted by an interactive baking workflow.

                  Powering the GPU Progressive Lightmapper is a full integration with AMD's Radeon Rays -- a fully open-source, high-performance, GPU-accelerated ray-tracing engine for low-level engine developers, supporting OpenCL, Vulkan and C++ backends. Radeon Rays can be used as an important building block of a renderer supporting global illumination rendering, sound rendering (through True Audio Next) and AI. Radeon Rays can be used for lightmap baking and light-probe calculation using ray tracing, and is being integrated by a number of developers to improve the lighting effects in their games. For a deeper dive into how Radeon Rays is used in a gaming rendering workflow, check out this presentation from GDC 2017.

                  Previous lightmapping solutions would take hours to compute even moderate-sized scenes. Expansive outdoor environments could take days.


                  I am very much looking forward to the 2018.2 release with a preview of this functionality.



                  • #19
                    Originally posted by vein
                    Well, it seems that Unity 3D does not agree with you:
                    They explicitly mention that they are going to use it to bake light maps, not for real-time rendering, which the current DX12/Vulkan ray-tracing hype is all about.



                    • #20
                      Originally posted by VikingGe
                      They explicitly mention that they are going to use it to bake light maps, not for real-time rendering, which the current DX12/Vulkan ray-tracing hype is all about.
                      They do? Maybe I read it too fast... but I searched the page for "real-time" and only found (quote):
                      "The Real-time Ray Tracing with GPU Progressive Lightmapper is expected to be released later this year."

                      I'll read it again when I have the time (at work now) so that I can be sure.

