Quake II RTX Performance For AMD Radeon 6000 Series vs. NVIDIA On Linux


  • #41
    Originally posted by sophisticles
    Your silly rejoinder in no way offended me; what offends me is the lack of thought you employed in crafting your response.

    How Nvidia describes the game:

    Quake II RTX Available Now: Download The Ray-Traced Remaster Of The Classic Quake II For Free (nvidia.com)

    So Nvidia considers it ray-traced.

    Future of Gaming : Rasterization vs Ray Tracing vs Path Tracing | by Junying Wang | Medium


    I can find nothing that says Quake II RTX uses anything other than simple ray tracing, and even if it did, there is nothing "mind-blowing" about it; it just tells me that someone really wanted to use the most inefficient rendering method possible.

    I also don't know what benefit you think going from simple ray-tracing to path-tracing had/has on the "creative process of the film industry".
    Ray tracing is an overloaded term, and if you had cared to learn something before replying instead of searching for the first words that satisfied your confirmation bias, you would have known that. It's true that path tracing is a form of ray tracing, but "ray tracing" on its own generally refers to a technique called Whitted (or classical) ray tracing. Both involve tracing rays, both backwards from the eye/camera to save on resources, but they are quite different in how they do it.
    Path tracing takes a unified approach: it shoots a ray that keeps bouncing until it reaches a light source, and a sample is generated each time it hits a surface, based on that surface's properties. Reflections, shadows, GI: everything is naturally generated by the same algorithm.
    In ray tracing, each effect is handled separately. For shadows, for example, a ray is shot, and when it hits a surface another ray is shot towards each light source to check whether that point is directly illuminated or in shadow.
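The difference between the two loops can be sketched in a toy example. This is only an illustration of the control flow; the scene constants (`ALBEDO`, `P_HIT_LIGHT`, etc.) and function names are made up for the sketch and don't come from any real renderer:

```python
import random

# Toy one-light "scene": every surface has the same reflectivity,
# and a bounced ray has a fixed chance of reaching the light.
ALBEDO = 0.5          # fraction of incoming light a surface reflects
LIGHT_EMISSION = 1.0  # radiance of the single light source
P_HIT_LIGHT = 0.25    # chance a bounced ray reaches the light directly

def whitted_shade():
    """Whitted style: one dedicated shadow ray to the light.
    Only direct illumination is gathered; bounced light is a
    separate pass (or simply ignored)."""
    return ALBEDO * LIGHT_EMISSION

def path_trace(rng, max_bounces=16):
    """Path-tracing style: keep bouncing a single ray until it
    reaches the light, attenuating throughput by the albedo at
    each hit. Indirect light falls out of the same loop for free."""
    throughput = 1.0
    for _ in range(max_bounces):
        throughput *= ALBEDO             # ray hits a surface
        if rng.random() < P_HIT_LIGHT:   # bounce reaches the light
            return throughput * LIGHT_EMISSION
    return 0.0                           # path terminated unlit

rng = random.Random(42)
samples = [path_trace(rng) for _ in range(100_000)]
estimate = sum(samples) / len(samples)   # Monte Carlo average
direct = whitted_shade()
print(f"whitted={direct:.3f}  path-traced estimate={estimate:.3f}")
```

The point of the sketch is structural: the Whitted version needs a separate ray type per effect, while the path tracer is one loop whose random samples are averaged until the noise converges.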

    If you had actually cared enough to read something, I wouldn't have had to explain it.

    Comment


    • #42
      Originally posted by Stefem
      Imagine the impact on VR
      Yeah, with lenses like Magic Leap's, that are mini lightfield displays!

      One problem with conventional VR displays is that you're focusing at a single distance and everything is always sharp. Lightfield displays try to replicate the depth-of-field effect of real life, where your eyes have to focus on objects, based on their depth.

      Comment


      • #43
        Originally posted by coder
        Yeah, with lenses like Magic Leap's, that are mini lightfield displays!

        One problem with conventional VR displays is that you're focusing at a single distance and everything is always sharp. Lightfield displays try to replicate the depth-of-field effect of real life, where your eyes have to focus on objects, based on their depth.
        That would be another step

        Comment


        • #44
          Originally posted by mdedetrich
          Nothing new; it's been well known that AMD's ray tracing in the 6000 series is one generation behind (and such details were leaked well before the cards released). The RTX 3xxx just has a lot more dedicated hardware for ray tracing; you can't really fix this with drivers.
          This just isn't true. They have about the same amount of hardware: AMD has lower intersection performance and higher ray-box performance. Please stick to the facts. Both AMD and Nvidia provide fully accelerated RT functions that are basically equivalent. AMD went for an implementation that is more die-area efficient and keeps more silicon active all the time by dual-purposing the texture unit, which frankly is ingenious... this means Nvidia won't be able to scale their implementation as well next generation, as it will require many more transistors than AMD's. The 6900 XT is often faster than the 3090 for graphics, even though it's a 108 mm² smaller GPU... this is because, even though Nvidia has a huge GPU, it has very poor silicon utilization.

          Comment


          • #45
            Originally posted by cb88
            The 6900 XT is often faster than the 3090 for graphics, even though it's a 108 mm² smaller GPU... this is because, even though Nvidia has a huge GPU, it has very poor silicon utilization.
            It should be noted that AMD's GPU is made on TSMC N7, while Nvidia's uses Samsung's "8 nm" node, which is also less power-efficient than the former. Not to invalidate your points, but it's not a totally apples-to-apples comparison when talking about area efficiency in terms of mm².

            Comment


            • #46
              Originally posted by cb88

              This just isn't true. They have about the same amount of hardware: AMD has lower intersection performance and higher ray-box performance. Please stick to the facts. Both AMD and Nvidia provide fully accelerated RT functions that are basically equivalent. AMD went for an implementation that is more die-area efficient and keeps more silicon active all the time by dual-purposing the texture unit, which frankly is ingenious... this means Nvidia won't be able to scale their implementation as well next generation, as it will require many more transistors than AMD's. The 6900 XT is often faster than the 3090 for graphics, even though it's a 108 mm² smaller GPU... this is because, even though Nvidia has a huge GPU, it has very poor silicon utilization.
              That has nothing to do with current-generation ray tracing performance. Even with the differences you mention, such as AMD having higher ray-box performance, with all things considered it's still less capable than Nvidia's ray tracing for the current generation of GPUs. For example, AMD does not hardware-accelerate BVH tree traversal, whereas Nvidia does (AMD runs the BVH traversal loop in shaders, which slows down the general GPU pipeline). Furthermore, AMD's tree traversal is offloaded to the SIMD stream processors, which also need to be used by other parts of the game for stream processing.
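For readers unfamiliar with what "BVH traversal" means here, a minimal sketch of the loop under discussion follows: inner nodes get ray-box tests, leaves yield candidate primitives for intersection tests. On RDNA 2 a loop like this runs as shader code (with the box/triangle tests hardware-accelerated); on Ampere the whole loop lives inside the RT core. All names below are illustrative, not any vendor's API:

```python
from dataclasses import dataclass

@dataclass
class Node:
    lo: tuple                 # axis-aligned bounding box, min corner
    hi: tuple                 # axis-aligned bounding box, max corner
    children: list = None     # inner node: child nodes
    prims: list = None        # leaf node: primitive ids

def ray_box_hit(origin, inv_dir, lo, hi):
    """Slab test: does the ray (starting at t=0) enter the box?"""
    tmin, tmax = 0.0, float("inf")
    for o, inv, l, h in zip(origin, inv_dir, lo, hi):
        t0, t1 = (l - o) * inv, (h - o) * inv
        if t0 > t1:
            t0, t1 = t1, t0
        tmin, tmax = max(tmin, t0), min(tmax, t1)
    return tmin <= tmax

def traverse(root, origin, direction):
    """Stack-based BVH walk: returns primitive ids whose leaf boxes
    the ray touches (a real tracer would then run exact ray-triangle
    intersection tests on these candidates)."""
    inv = tuple(1.0 / d if d != 0 else float("inf") for d in direction)
    stack, hits = [root], []
    while stack:
        node = stack.pop()
        if not ray_box_hit(origin, inv, node.lo, node.hi):
            continue                     # prune this whole subtree
        if node.prims is not None:
            hits.extend(node.prims)      # leaf: collect candidates
        else:
            stack.extend(node.children)  # inner node: descend
    return hits

# Tiny hand-built two-leaf BVH with made-up geometry:
leaf_a = Node(lo=(0, 0, 0), hi=(1, 1, 1), prims=[0])
leaf_b = Node(lo=(5, 5, 5), hi=(6, 6, 6), prims=[1])
root = Node(lo=(0, 0, 0), hi=(6, 6, 6), children=[leaf_a, leaf_b])

# A ray along +x through leaf_a only:
candidates = traverse(root, origin=(-1.0, 0.5, 0.5), direction=(1.0, 0.0, 0.0))
```

The traversal is an irregular, branchy loop over a pointer-chasing tree, which is exactly why running it on general-purpose shader ALUs competes for the same resources the rest of the frame needs.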

              Going by your logic, you'd argue that CPUs are better at video encoding/decoding than dedicated hardware because it "scales better" since it's part of the CPU (which is hogwash). A dedicated piece of silicon designed to do one thing and do it well will win every time, and that's what Ampere is in the context of ray tracing. When AMD does ray tracing, it uses the same part of the die that is required for rendering the game, which of course takes a bigger toll on the rendering pipeline.

              There are games out there that have dedicated ray tracing implementations for both Ampere and AMD, and Ampere wins every time.

              So please just stop spreading the misleading bullshit that AMD's current generation of ray tracing is as competitive as Ampere's; it simply isn't true, and no one even expected (or argued) it to be, since this is AMD's first-ever implementation.
              Last edited by mdedetrich; 04 June 2021, 11:01 AM.

              Comment
