
AMD FidelityFX Super Resolution 2.0 Source Code Published


  • #41
    Originally posted by Davonious View Post
    I suspect it's only 'fan bois' who'd really care.
    Or it's only visually impaired people who don't care.

    Massive ghosting with FSR 2.0 in Tiny Tina Wonderlands:
    [screenshot hosted on Abload.de showing the ghosting]


    There are some reviewers who know about the weaknesses and where to look (even though they might be biased), and there apparently are incompetent ones who don't (the majority?).



    • #42
      Originally posted by WannaBeOCer View Post

      Again, the performance difference between FSR and DLSS is 5-8%. I don't use either of them since they both degrade graphics.
      I would agree with that, but where we seem to differ is that I think 5% is pretty minimal when you are talking about this kind of technology. 5% isn't going to let you stay at 4K resolution instead of dropping down to 1440p on your GPU. 5% isn't going to let you buy a 3060 instead of a 3080. Dropping the render resolution in DLSS (or FSR) is what's going to do that, because it's going to save you 30-40%.
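
      Quick back-of-the-envelope on that 30-40% figure (a sketch with illustrative numbers; the pixel-bound share of a frame is my assumption and varies per game):

      Code:
      # Why dropping the internal resolution dwarfs a ~5% upscaler difference.
      native = (3840, 2160)    # 4K output
      internal = (2560, 1440)  # "Quality"-mode internal render (1.5x per axis)

      pixel_ratio = (internal[0] * internal[1]) / (native[0] * native[1])
      print(f"Shaded pixels vs native: {pixel_ratio:.0%}")  # ~44%

      # If ~60% of the frame time scales with pixel count (the rest is
      # geometry, shadows, post-processing...), the saving is roughly:
      pixel_bound_share = 0.6  # assumption
      print(f"Frame-time saving: {pixel_bound_share * (1 - pixel_ratio):.0%}")  # ~33%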

      Now, I'm not saying each extra fps is pointless or doesn't matter. It's a nice bonus for DLSS that it's slightly faster and lets you game at 76 fps instead of 72 fps. I just don't think it's going to be the deal-breaker when choosing which technique to use, given that there are other factors I think are a lot more important: image quality and, at least for the moment, whether or not a game even supports DLSS or FSR, since FSR is currently mostly unsupported.

      The release of DLSS 1, which didn't utilize Tensor cores, was horrible. Whenever there was motion you could visually see the AI scrambling to restructure the image, causing flickering. It's the reason why everyone bashed DLSS when it was released.
      DLSS 1 did use tensor cores, I believe, but there was a special v1.9 that didn't. Then 2.0 came out and went back to tensor cores. We'll never know exactly how dependent it really is on them since the code isn't open source, but I'm skeptical. It feels like NVidia went searching for a way to use their tensor cores in a gaming situation, rather than really needing them.
      Last edited by smitty3268; 24 June 2022, 12:19 AM.



      • #43
        Originally posted by Davonious View Post
        Uh, no. Youtube's "Hardware Unboxed" channel blows that claim out of the water with their video "FSR 2.0, How Do Old GPUs Perform? 8 GPU Generations Benchmarked". Lots of benchmark data to look at, and it shows quite clearly, across a number of GPUs, that your claim is utterly unjustified. For example, on a 2060 Super (an Nvidia card), the difference between DLSS and FSR 2 on the highest quality setting is a whopping 5 FPS (85 to 90) in nVidia's favor. Hardly something to be tooting the performance horn about. On even older cards, like the 1650 Super and the RX 570, there are gains to be had. Not outstanding, but they are there.

        As far as FSR's relative benefits when compared to DLSS; again Hardware Unboxed does a fair job (but so have other channels) of detailed 300% zoom comparisons between FSR 2 and DLSS. Again, while DLSS certainly has *some* edge, I suspect it's only 'fan bois' who'd really care.

        I'd certainly agree FSR 2 isn't the best thing since sliced bread, but to deny its benefits is the worst kind of platform elitism. Many, many independent reviewers have shown that FSR 2 has obvious benefits for a large number of gamers, and the HU reviewer (final section of the video) says it well:
        "Despite this, even 5 year old gpu's do still run and benefit from FSR 2.0...."
        I was not talking about the RTX 2060. I was talking about old GPUs like the RX 580/590 (and older) and GPUs older than Pascal. All those GPUs seem not to scale as well with FSR 2.0 compared to RDNA1/Turing (or newer).

        For example, FSR 2.0 Quality on the RX 570 only gives 9% more performance compared to native. Meanwhile, the RX 5500 XT gains 24% at the same resolution from FSR 2.0 Quality.

        Overall, in the Hardware Unboxed test, neither the RX 570 nor the 1650 Super benefits enough from FSR 2.0 at 1440p to make good use of it. You are better off running native and dropping a setting a bit lower. Which GPUs do benefit well? The Vega 64 and GTX 1070 Ti. So you need a certain minimum amount of raw power to make good use of it.
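
        You can see the shape of that effect with a toy fixed-cost model (the millisecond inputs below are made up for illustration, not measurements from the HU video): the FSR 2.0 pass costs roughly constant milliseconds on a given card, so the slower the card, the more of the saved shading time the pass eats back.

        Code:
        # Toy model: upscaled frame = shading time not saved + fixed pass cost.
        def fsr_gain(native_ms, pass_ms, pixel_ratio=0.44, pixel_bound=0.5):
            upscaled_ms = native_ms * (1 - pixel_bound * (1 - pixel_ratio)) + pass_ms
            return native_ms / upscaled_ms - 1

        # Slow card with an expensive pass (no fast FP16 path): small net gain
        print(f"RX 570-ish:     {fsr_gain(33.3, 6.5):+.0%}")  # ~+9%
        # Faster card with a cheaper pass: much bigger net gain
        print(f"RX 5500 XT-ish: {fsr_gain(25.0, 2.0):+.0%}")  # ~+25%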



        • #44
          Originally posted by smitty3268 View Post
          I would agree with that, but where we seem to differ is that I think 5% is pretty minimal when you are talking about this kind of technology. 5% isn't going to let you stay at 4K resolution instead of dropping down to 1440p on your GPU. 5% isn't going to let you buy a 3060 instead of a 3080. Dropping the render resolution in DLSS (or FSR) is what's going to do that, because it's going to save you 30-40%.

          Now, I'm not saying each extra fps is pointless or doesn't matter. It's a nice bonus for DLSS that it's slightly faster and lets you game at 76 fps instead of 72 fps. I just don't think it's going to be the deal-breaker when choosing which technique to use, given that there are other factors I think are a lot more important: image quality and, at least for the moment, whether or not a game even supports DLSS or FSR, since FSR is currently mostly unsupported.



          DLSS 1 did use tensor cores, I believe, but there was a special v1.9 that didn't. Then 2.0 came out and went back to tensor cores. We'll never know exactly how dependent it really is on them since the code isn't open source, but I'm skeptical. It feels like NVidia went searching for a way to use their tensor cores in a gaming situation, rather than really needing them.
          1.9 had much worse quality. Control used 1.9 for a while with CUDA, then moved to 2.0 and quality improved a ton. So I suspect that in the 1.9 case it didn't do as much work as 2.0. Anyway, even if DLSS didn't strictly need tensor cores, you should still use them, as they are extremely efficient at their job (the RTX 3090 Ti has 320 TFLOPS for tensor operations and only 40 TFLOPS for classic ones). Thanks to that tremendous tensor-core power you can simply implement more computationally complex algorithms.
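
          To put those throughput numbers in perspective, here's a rough per-pixel budget (the 0.5 ms pass budget is my assumption, purely for illustration):

          Code:
          # Per-pixel math budget implied by those peak figures (illustrative).
          tensor_tflops = 320   # RTX 3090 Ti FP16 tensor peak (figure above)
          shader_tflops = 40    # RTX 3090 Ti FP32 shader peak (figure above)
          pixels_4k = 3840 * 2160
          pass_budget_s = 0.0005  # assume 0.5 ms spent in the upscale pass

          for name, tflops in (("tensor", tensor_tflops), ("shader", shader_tflops)):
              per_pixel = tflops * 1e12 * pass_budget_s / pixels_4k
              print(f"{name} cores: ~{per_pixel:,.0f} FLOPs per 4K pixel")
          # ~19,000 vs ~2,400: 8x more math for the same frame-time cost.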
          Last edited by piotrj3; 24 June 2022, 10:34 AM.



          • #45
            Originally posted by piotrj3 View Post

            ... And an RTX 2060/3050 user can use DLSS Performance and will be really happy with the results most of the time. Can you use a 6600 XT/5600 XT with FSR 2.0 Performance? The answer is: not really.
            I tested DLSS in Cyberpunk 2077 on an RTX 2070S, and the Performance level was so ugly it was unwatchable.

            PS: It's nice that people talk about how NVidia is better because the tensor cores give a few percent more performance. But don't forget the price that is paid: many more transistors, much bigger power consumption. Does it justify the small improvement? (I'm not saying the tensor cores aren't useful for other stuff, for people who need them.)
            Last edited by Ladis; 24 June 2022, 09:41 PM.



            • #46
              Originally posted by Ladis View Post
              But don't forget the price that is paid: many more transistors, much bigger power consumption. Does it justify the small improvement? (I'm not saying the tensor cores aren't useful for other stuff, for people who need them.)
              People always act like TCs require massive die area, but that's nonsense. GA106 is only marginally bigger than Navi 10, with both produced on a similar process (at least with regard to transistor density) and with a 256 bit interface, and the 3060 also has RT, mesh shaders, a better video block, etc.
              Even for just DLSS, those are well-invested transistors. Though AMD's approach with RPM acceleration for TAAU isn't dumb either; hopefully they'll be able to generally improve its quality.
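
              The publicly listed die figures back this up (rounded numbers; treat them as approximate):

              Code:
              # Rounded public die figures; supports the "marginally bigger" point.
              ga106 = {"area_mm2": 276, "transistors_bn": 12.0}   # RTX 3060
              navi10 = {"area_mm2": 251, "transistors_bn": 10.3}  # RX 5700 XT

              delta = ga106["area_mm2"] / navi10["area_mm2"] - 1
              print(f"GA106 vs Navi 10 die area: +{delta:.0%}")  # ~+10%
              # ...and that ~10% also has to cover RT, mesh shaders and the
              # newer video block, not just the tensor cores.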



              • #47
                Originally posted by aufkrawall View Post
                People always act like TCs require massive die area, but that's nonsense. GA106 is only marginally bigger than Navi 10, with both produced on a similar process (at least with regard to transistor density) and with a 256 bit interface, and the 3060 also has RT, mesh shaders, a better video block, etc.
                Even for just DLSS, those are well-invested transistors. Though AMD's approach with RPM acceleration for TAAU isn't dumb either; hopefully they'll be able to generally improve its quality.
                You're comparing a last-gen GPU design (5700 XT), which was the first GPU of an entirely new architecture coming off a long string of completely non-competitive AMD GPU models, vs. a current-gen GPU (3060), which was a refinement of an already successful architecture. I don't think that's a good comparison in quite a few ways. And the 3060 has a 192-bit memory bus, not 256.
                Last edited by smitty3268; 25 June 2022, 01:27 PM.



                • #48
                  Originally posted by smitty3268 View Post

                  You're comparing a last-gen GPU design (5700 XT), which was the first GPU of an entirely new architecture coming off a long string of completely non-competitive AMD GPU models, vs. a current-gen GPU (3060), which was a refinement of an already successful architecture. I don't think that's a good comparison in quite a few ways. And the 3060 has a 192-bit memory bus, not 256.
                  Right, it's 192-bit.
                  But RDNA2 cards are harder to compare due to their Infinity Cache (it's an ingenious hardware feature, I know) and their much higher clocks (again, well optimized vs. RDNA1, I know).
                  The Navi 10 comparison works well enough to show that the TCs aren't die-area hogs. And, by the way, they also don't increase power consumption.

