Radeon Vulkan Driver Adds Option Of Rendering Less For ~30% Greater Performance

  • Radeon Vulkan Driver Adds Option Of Rendering Less For ~30% Greater Performance

    Phoronix: Radeon Vulkan Driver Adds Option Of Rendering Less For ~30% Greater Performance

    If your current Vulkan-based Radeon Linux gaming performance isn't cutting it and a new GPU is out of your budget or you have been unable to find a desired GPU upgrade in stock, the Mesa RADV driver has added an option likely of interest to you... Well, at least moving forward with this feature being limited to RDNA2 GPUs for now...


  • #2
    Originally posted by phoronix View Post
    This RADV addition is inspired by the likes of NVIDIA DLSS for trading rendering quality for better performance but in its current form is a "baby step" before being comparable to DLSS quality and functionality.
    Sorry, Michael, you're way out in left field on this one. I actually had to go and read the blog post you referenced, because I couldn't believe the author had characterized the feature like that.

    VRS is a completely different thing than DLSS. They're only unified by the idea of trading off rendering quality for speed. However, they're entirely independent and use entirely different mechanisms. I could go on, but anyone interested can read the abundance that's been written on each. Suffice it to say that this is not the first step towards implementing DLSS in RADV. It might be that they have intentions of also doing something DLSS-like, but this is not the sort of work you would build on to do that.

    It's not even accurate to say the work was inspired by DLSS, as DLSS is just one of several techniques the author referenced that trade quality for speed. My take is that the author was concerned about push-back from people over sacrificing accuracy, and was trying to place this in the broader context of similar work within the industry, as a sort of justification or preemptive defense.
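    To make the "entirely different mechanisms" point concrete, here is a back-of-the-envelope sketch in C. The resolutions and the 90% interior-coverage figure are my own illustrative assumptions, not numbers from the article or the blog post: a DLSS-style pipeline shades fewer pixels everywhere (lower internal resolution, then reconstruction/upscaling), while 2x2 VRS shades at full resolution but runs one fragment shader invocation per 2x2 block inside triangles, keeping geometry edges exact.

    /* Illustration only: where the fragment-shading savings come from for a
       DLSS-style upscaler versus 2x2 VRS at a 4K output. All numbers are
       assumptions made for the sake of the comparison. */
    #include <stdio.h>

    int main(void)
    {
        const double out_pixels = 3840.0 * 2160.0;          /* 4K output */

        /* Upscaling approach: shade every pixel, but at a lower internal
           resolution, then reconstruct the 4K image from that. */
        const double internal_pixels = 2560.0 * 1440.0;     /* assumed internal res */
        printf("upscaling: %.0f%% of full-rate fragment work\n",
               100.0 * internal_pixels / out_pixels);

        /* 2x2 VRS: full-resolution rasterization, but interior fragments are
           shaded once per 2x2 block; edge pixels stay at full rate. */
        const double interior = 0.90;                       /* assumed coarse-shaded share */
        const double vrs_work = interior / 4.0 + (1.0 - interior);
        printf("2x2 VRS:   %.0f%% of full-rate fragment work\n", 100.0 * vrs_work);
        return 0;
    }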



    • #3
      Originally posted by phoronix View Post
      at least moving forward with this feature being limited to RDNA2 GPUs for now
      I think this feature requires hardware support that was only added recently, so for AMD it will only be available on RDNA2 or later.



      • #4
        So, after using all the nasty tactics such as forcing GA104 8GB to swap at 1440p, pricing a 192-bit GPU at a 256-bit competitor's price, pressuring media not to talk about ray-tracing performance, the lack of dedicated matrix-multiply units, or the lack of ROCm compute support, it still doesn't solve the problem that the 3060's MSRP is nearly 30% less. So now that 30% burden has to come from somewhere. I guess 30% less image quality is a lot harder to detect. Great work, David and Scott. It's a shame that all the 16Gb Samsung GDDR6 16Gbps dies are wasted on this piece of shit lineup.

        If Nvidia would realize that ECC/P2P RDMA is the barrier to professional AI computing, start putting the Samsung 16Gb dies on GA104/GA102 chips, and make those cards unavailable to miners and available to university CUDA compute people, it would do the greatest good for society.
        Last edited by phoronix_is_awesome; 10 April 2021, 02:23 AM.



        • #5
          Originally posted by anth View Post
          I think this feature requires hardware support that was only added recently, so for AMD it will only be available on RDNA2 or later.
          The blog post's author indeed states that they're relying on the hardware for this functionality:

          VRS is a hardware capability that allows us to reduce the number of fragment shader invocations per pixel rendered. So you could, say, configure the hardware to use one fragment shader invocation per 2x2 pixels. The hardware still renders the edges of geometry exactly, but the inner area of each triangle is rendered with a reduced number of fragment shader invocations.

          I hadn't realized AMD still lacked it until RDNA2. It seems both Nvidia and Intel actually got ahead of them on this one.
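          For anyone curious what that hardware knob looks like from the application side, it's exposed through the VK_KHR_fragment_shading_rate extension. Below is a compile-only sketch, not a full program: it assumes a command buffer already recording inside a render pass, the extension enabled at device creation, and the pipelineFragmentShadingRate feature reported via VkPhysicalDeviceFragmentShadingRateFeaturesKHR.

          #include <vulkan/vulkan.h>

          /* Sketch: vulkan.h declares the prototype, but a real application would
             load vkCmdSetFragmentShadingRateKHR through vkGetDeviceProcAddr. */
          static void draw_coarse(VkCommandBuffer cmd)
          {
              /* One fragment shader invocation per 2x2 pixel block for subsequent draws. */
              const VkExtent2D fragment_size = { 2, 2 };

              /* KEEP = use the rate set here; ignore per-primitive and attachment rates. */
              const VkFragmentShadingRateCombinerOpKHR combiners[2] = {
                  VK_FRAGMENT_SHADING_RATE_COMBINER_OP_KEEP_KHR,
                  VK_FRAGMENT_SHADING_RATE_COMBINER_OP_KEEP_KHR,
              };

              vkCmdSetFragmentShadingRateKHR(cmd, &fragment_size, combiners);
              vkCmdDraw(cmd, 3, 1, 0, 0);   /* e.g. a fullscreen triangle */
          }

          My understanding is that the new RADV option simply forces a coarse rate like this from the driver side for every draw (via an environment variable, something like RADV_FORCE_VRS=2x2, if I'm remembering the merge request correctly), so games that never touch the extension can still benefit.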



          • #6
            Originally posted by phoronix_is_awesome View Post
            So now that 30% burden has to come from somewhere. I guess 30% less image quality is a lot harder to detect.
            Your other grievances notwithstanding, they are simply trying to catch up with the rest of the industry on this one, as the original blog post tried so hard (yet apparently without much success) to state.

            I do find it ironic that people are buying 4K monitors, then turning around and enabling a bunch of features that degrade image quality, just to get their framerates back up. But I see the point that these techniques each try to sacrifice quality in ways and places that you're less likely to notice. So you could say it's a similar concept to video compression.



            • #7
              Originally posted by phoronix_is_awesome View Post
              MSRP
              But what do they actually sell for? MSRP right now is pretty much a meaningless number.



              • #8
                Originally posted by phoronix_is_awesome View Post
                It's a shame that all the 16Gb Samsung GDDR6 16Gbps dies are wasted on this piece of shit lineup.
                You're starting to sound like an alt account of birdie's. RDNA2 is perhaps not competitive on all fronts, but these GPUs are not PoS. Not even slightly.



                • #9
                  I'm more excited about VRS just working. WoW: Shadowlands uses it in DX12 mode, if supported, when you enable a targeted frame rate. It helps out a lot in raids to keep the frame rate high.

                  Edit:
                  Anyone know how development of mesh shaders is going in RADV? That's another big performance-boosting feature.
                  Last edited by fafreeman; 10 April 2021, 04:07 AM.
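                  Purely as an illustration of that targeted-frame-rate idea, here is a made-up heuristic (not how Blizzard's implementation actually works; names and thresholds are invented): drop to a coarser shading rate when a frame runs over budget, and step back toward full rate when there is headroom.

                  #include <stdio.h>

                  /* Hypothetical controller: pick the next shading rate from the last
                     frame time and the frame-time budget. Thresholds are made up. */
                  typedef enum { RATE_1X1, RATE_2X1, RATE_2X2 } shading_rate_t;

                  static shading_rate_t pick_rate(double frame_ms, double target_ms,
                                                  shading_rate_t current)
                  {
                      if (frame_ms > target_ms * 1.05 && current < RATE_2X2)
                          return (shading_rate_t)(current + 1);   /* over budget: coarser */
                      if (frame_ms < target_ms * 0.80 && current > RATE_1X1)
                          return (shading_rate_t)(current - 1);   /* headroom: finer */
                      return current;
                  }

                  int main(void)
                  {
                      /* e.g. a 60 fps target (~16.7 ms) during a heavy 25 ms raid frame */
                      shading_rate_t r = pick_rate(25.0, 1000.0 / 60.0, RATE_1X1);
                      printf("picked rate %d (0=1x1, 1=2x1, 2=2x2)\n", (int)r);
                      return 0;
                  }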



                  • #10
                    Originally posted by anth View Post

                    I think this feature requires hardware support that was only added recently, so for AMD it will only be available on RDNA2 or later.
                    Still, the premise sort of invalidates the usefulness of this feature, since nobody but miners owns an RDNA2 card.

