Ray-Tracing Is All The Rage At This Year's Game Developers Conference

  • #11
    As much as I'd like this to succeed and work well, I'm sceptical and don't want even worse-optimized games.
    On the other hand, it could be useful for rendering more realistic movies and other content that doesn't have to be real-time.



  • #12
    Originally posted by log0 View Post

    I expect future GPUs to include dedicated ray tracing hardware, like the PowerVR RTU. So the demos should become more impressive over time.

    Not sure what kind of dedicated ray-tracing hardware one might need. Ray tracing is basically a lot of linear algebra, at which GPUs are already pretty damn good. The only reason Nvidia's tensor cores are apparently very suitable for this is that, instead of the simple FMA operations a stream processor would normally execute one at a time, they fuse many of them into a single 4-by-4-matrix operation. In principle, they are simply adding lots of execution units which can only do one thing.
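
    As a rough sketch of the "lots of linear algebra" point: transforming a ray or point by a 4x4 matrix is just 16 fused multiply-adds, the kind of work a tensor core batches into one matrix operation. Plain C for illustration; the function name is made up, not from any real API.

    #include <math.h>

    /* Transform a homogeneous point by a 4x4 matrix: 16 FMAs.
     * A tensor core fuses a whole block of these into one op. */
    void mat4_mul_vec4(const float m[4][4], const float v[4], float out[4])
    {
        for (int row = 0; row < 4; ++row) {
            float acc = 0.0f;
            for (int col = 0; col < 4; ++col)
                acc = fmaf(m[row][col], v[col], acc); /* one FMA per term */
            out[row] = acc;
        }
    }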



  • #13
    Considering how proprietary this is, both in terms of hardware and software, I expect this will flop just like PhysX has. I question how much excitement there actually is.

    The benefits of this have "good enough" substitutes that have existed for years. There are plenty of demos in engines like UE4 with visual effects strikingly similar to what MS and Nvidia have accomplished, all without needing cutting-edge hardware or anything proprietary. IMO, the performance overhead of this outweighs the benefits of the added realism.



  • #14
    I'm not sure that what we're seeing is a pure ray-tracing implementation on the GPU rather than some complex combination of rasterization and ray tracing. From my point of view, it makes little sense to ignore one of the main advantages of a GPU: the ability to rasterize. After all, a prior rasterization pass can generate the initial data for lighting, reflections and other passes. In a sense it would be deferred rendering.
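
    A toy sketch of that hybrid idea (every type and function below is invented for illustration, not any engine's real API): rasterization fills a G-buffer with position, normal and albedo, and ray tracing then handles only the secondary effects, here one reflection ray per pixel.

    #include <math.h>

    typedef struct { float x, y, z; } Vec3;

    static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
    static Vec3 scale(Vec3 a, float s) { Vec3 r = { a.x*s, a.y*s, a.z*s }; return r; }
    static Vec3 add(Vec3 a, Vec3 b) { Vec3 r = { a.x+b.x, a.y+b.y, a.z+b.z }; return r; }
    static Vec3 sub(Vec3 a, Vec3 b) { Vec3 r = { a.x-b.x, a.y-b.y, a.z-b.z }; return r; }

    /* One texel of the rasterized G-buffer: the "initial data". */
    typedef struct { Vec3 pos, normal, albedo; } GBufferTexel;

    /* Stub tracer: a real one would walk an acceleration structure. */
    static Vec3 trace_ray(Vec3 origin, Vec3 dir)
    {
        (void)origin;
        Vec3 sky = { 0.5f, 0.7f, 1.0f };
        return scale(sky, fmaxf(dir.y, 0.0f));
    }

    /* Deferred shading of one pixel: primary visibility came from
     * rasterization for free; only the reflection is ray traced.
     * view_dir is the incident direction, pointing at the surface. */
    static Vec3 shade(const GBufferTexel *g, Vec3 view_dir, Vec3 light_dir)
    {
        Vec3 direct = scale(g->albedo, fmaxf(dot(g->normal, light_dir), 0.0f));
        Vec3 refl_dir = sub(view_dir, scale(g->normal, 2.0f * dot(view_dir, g->normal)));
        return add(direct, scale(trace_ray(g->pos, refl_dir), 0.25f));
    }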



  • #15
    Ray-tracing... Man, the 80s are back with a vengeance.



  • #16
    Instead of just dropping, or progressively replacing, rasterization with ray tracing, they should have gone a little further by using the much more efficient distance fields as the primitive to be rendered.
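
    For reference, rendering a distance field usually means sphere tracing: step along the ray by the distance the field reports, since nothing in the scene can be closer than that. A minimal sketch, with a made-up one-sphere scene and arbitrary constants:

    #include <math.h>

    /* Signed distance to the scene: here, a unit sphere at the origin. */
    static float sdf_scene(float x, float y, float z)
    {
        return sqrtf(x*x + y*y + z*z) - 1.0f;
    }

    /* March a ray from (ox,oy,oz) along unit direction (dx,dy,dz);
     * returns the hit distance, or -1 on a miss. */
    static float sphere_trace(float ox, float oy, float oz,
                              float dx, float dy, float dz)
    {
        float t = 0.0f;
        for (int i = 0; i < 128; ++i) {
            float d = sdf_scene(ox + t*dx, oy + t*dy, oz + t*dz);
            if (d < 1e-4f) return t; /* close enough: count it as a hit */
            t += d;                  /* safe step: nothing is nearer than d */
            if (t > 100.0f) break;   /* ray left the scene */
        }
        return -1.0f;
    }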



  • #17
    But this stuff is nothing new... watch this -> https://www.youtube.com/watch?v=Xcf35d3z890
    The best sentence in there is "it all works on a low-power mobile architecture". Plus, it was already presented at GDC 2017 and uses Vulkan.

    OK, the part that is new is that DirectX 12 Raytracing will do this stuff without a special PowerVR chip.
    Last edited by Naquatis; 20 March 2018, 10:51 AM.



  • #18
    Originally posted by L_A_G View Post

    I hope that I'm not the only one who wasn't all that impressed with the demo video in the article...

    All the shadows and reflections in that scene were pretty damn grainy, with the general grain typical of ray tracing, indicating that we're talking about a low ray count with a lot of interpolation to make up the difference. Not even Remedy's demo of this tech avoids the problem of an overall grainy image and particularly grainy shadows.

    When I read about this the day before yesterday, when they had just talked about it, I was reminded of how people have been doing real-time ray-tracer demos for years in places like the demoscene. Those have all been limited to a low number of rays and thus suffered from pretty grainy images, particularly in parts of the image with shadows and reflections. I personally hoped this would be a drastic improvement over those previous efforts, with better interpolation that actually gets rid of the grainy look of practically everything in the scene, maybe even using less expensive rendering techniques to achieve that grain-free interpolation.

    However, it seems they haven't really solved the fundamental problem of all the real-time ray-tracing implementations seen so far. Don't get me wrong, I do still believe John Carmack is right when he says that ray tracing will eventually win and become the dominant rendering technique. But as long as it still produces visibly worse results than considerably cheaper rendering techniques, I don't see it being used outside of games and demos doing experimental things.

    I could live with grain to get reflections and lighting as realistic as in the Remedy demo. You could just squint and pretend it's "film grain".
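
    A back-of-the-envelope way to see why low ray counts look grainy: a Monte Carlo pixel estimate's noise only falls off as 1/sqrt(N), so quadrupling the rays per pixel merely halves the grain. A toy simulation (the 0.2 hit probability and sample counts are invented, not from any real renderer):

    #include <stdio.h>
    #include <stdlib.h>
    #include <math.h>

    /* Pretend each ray reaches the light with probability 0.2. */
    static double one_ray(void)
    {
        return (rand() / (double)RAND_MAX) < 0.2 ? 1.0 : 0.0;
    }

    int main(void)
    {
        for (int n = 1; n <= 256; n *= 4) {       /* rays per pixel */
            double sq_err = 0.0;
            for (int trial = 0; trial < 10000; ++trial) {
                double sum = 0.0;
                for (int i = 0; i < n; ++i)
                    sum += one_ray();
                double dev = sum / n - 0.2;       /* deviation from truth */
                sq_err += dev * dev;
            }
            /* RMS error comes out near 0.4 / sqrt(n). */
            printf("%3d rays/pixel: rms error %.4f\n", n, sqrt(sq_err / 10000));
        }
        return 0;
    }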



  • #19
    Originally posted by GruenSein View Post

    Not sure what kind of dedicated ray-tracing hardware one might need. Ray tracing is basically a lot of linear algebra, at which GPUs are already pretty damn good. The only reason Nvidia's tensor cores are apparently very suitable for this is that, instead of the simple FMA operations a stream processor would normally execute one at a time, they fuse many of them into a single 4-by-4-matrix operation. In principle, they are simply adding lots of execution units which can only do one thing.

    Dedicated hardware would be fixed-function ray intersection units optimized for the task, just like the dedicated rasterisation units every graphics card has.
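
    For a feel of what such a unit would bake into silicon: the canonical ray/triangle test (Möller-Trumbore) is a handful of cross and dot products plus one division per candidate triangle. A plain C sketch:

    #include <math.h>
    #include <stdbool.h>

    typedef struct { float x, y, z; } Vec3;

    static Vec3 cross(Vec3 a, Vec3 b)
    {
        Vec3 c = { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
        return c;
    }
    static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
    static Vec3 sub(Vec3 a, Vec3 b) { Vec3 c = { a.x-b.x, a.y-b.y, a.z-b.z }; return c; }

    /* Möller-Trumbore: returns true on a hit, writing the ray parameter to *t. */
    static bool ray_triangle(Vec3 orig, Vec3 dir, Vec3 v0, Vec3 v1, Vec3 v2, float *t)
    {
        Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
        Vec3 p = cross(dir, e2);
        float det = dot(e1, p);
        if (fabsf(det) < 1e-8f) return false;   /* ray parallel to triangle */
        float inv = 1.0f / det;
        Vec3 s = sub(orig, v0);
        float u = dot(s, p) * inv;              /* first barycentric coord */
        if (u < 0.0f || u > 1.0f) return false;
        Vec3 q = cross(s, e1);
        float v = dot(dir, q) * inv;            /* second barycentric coord */
        if (v < 0.0f || u + v > 1.0f) return false;
        *t = dot(e2, q) * inv;
        return *t >= 0.0f;                      /* hit must be in front of ray */
    }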



  • #20
    Originally posted by log0 View Post

    Dedicated hardware would be fixed-function ray intersection units optimized for the task, just like the dedicated rasterisation units every graphics card has.

    I'd prefer some additional programmable stage. Fixed function might work for triangles, but would suck for other possible things, e.g. NURBS, parametric objects, CSG, etc.
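
    The simplest example of what a programmable intersection stage would run for a non-triangle primitive: the analytic ray/sphere test, i.e. solving a quadratic, which a triangle-only fixed-function unit cannot express. Plain C stands in here for whatever shading language would actually host it.

    #include <math.h>
    #include <stdbool.h>

    /* Ray: o + t*d (d need not be normalized). Sphere: center c, radius r.
     * Returns true on a hit, writing the nearest positive t to *t. */
    static bool ray_sphere(const float o[3], const float d[3],
                           const float c[3], float r, float *t)
    {
        float oc[3] = { o[0]-c[0], o[1]-c[1], o[2]-c[2] };
        float a = d[0]*d[0] + d[1]*d[1] + d[2]*d[2];
        float b = 2.0f * (oc[0]*d[0] + oc[1]*d[1] + oc[2]*d[2]);
        float k = oc[0]*oc[0] + oc[1]*oc[1] + oc[2]*oc[2] - r*r;
        float disc = b*b - 4.0f*a*k;
        if (disc < 0.0f) return false;            /* ray misses the sphere */
        float t0 = (-b - sqrtf(disc)) / (2.0f*a); /* nearer root first */
        if (t0 < 0.0f) t0 = (-b + sqrtf(disc)) / (2.0f*a);
        if (t0 < 0.0f) return false;              /* sphere is behind the ray */
        *t = t0;
        return true;
    }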

