AMD Posts FidelityFX Super Resolution Source Code


  • #21
    Originally posted by aufkrawall:
    Except that FSR produces an over-sharpened look and is useless apart from 4K displays with the UQ preset, whereas DLSS 2.x is usable with a 1440p display and a 960p rendering resolution (it might even look better than native, depending on the content and implementation).
    People should really inform themselves about the limitations of upscaling (FSR) vs. reconstruction (TAAU/DLSS). I often also sense a lot of "I want to believe" vibes...
    Big friggin' deal... AMD is almost certainly going to make a variation of the technology combined with temporal data for lower resolutions, probably to similar effect, though it may not run on quite as many GPUs.

    DLSS isn't magic... in fact it's a brute-force solution that can definitely be improved upon. Neural networks are almost always a poor use of energy where you don't need a system to be learning as it goes (and you don't want that in an upscaler, as it would learn artifacts as it goes).
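    Since the thread keeps contrasting upscaling with reconstruction, here is a minimal sketch of the two ideas in Python/NumPy. The function names (spatial_upscale, warp, temporal_reconstruct), the nearest-neighbour filter, and the fixed blend factor are illustrative assumptions, not FSR's actual EASU/RCAS passes or DLSS's network; the structural point is only that the spatial path sees a single frame, while the reconstruction path also gets history re-projected with motion vectors.

        import numpy as np

        def spatial_upscale(low_res, scale=2):
            # Pure spatial upscaling (FSR 1.0 style): only the current low-res
            # frame is available, so detail can only be interpolated and
            # sharpened, never recovered. Nearest-neighbour for brevity.
            return np.repeat(np.repeat(low_res, scale, axis=0), scale, axis=1)

        def warp(prev_output, motion):
            # Re-project last frame's output to the current frame using motion
            # vectors (a toy whole-image integer shift instead of per-pixel motion).
            dy, dx = motion
            return np.roll(prev_output, shift=(dy, dx), axis=(0, 1))

        def temporal_reconstruct(low_res, prev_output, motion, scale=2, blend=0.1):
            # Temporal reconstruction (TAAU/DLSS style): blend the upscaled
            # current frame with the warped history, accumulating real extra
            # samples over time instead of inventing them from one frame.
            current = spatial_upscale(low_res, scale)
            history = warp(prev_output, motion)
            return blend * current + (1.0 - blend) * history

        # Toy usage: a stream of 4x4 frames reconstructed to an 8x8 output.
        prev = np.zeros((8, 8))
        for frame in np.random.default_rng(0).random((10, 4, 4)):
            prev = temporal_reconstruct(frame, prev, motion=(0, 0))

    The point being argued over is that the temporal path receives genuinely new samples every frame, which is why reconstruction can hold up at lower rendering resolutions than a purely spatial filter.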



  • #22
    Originally posted by cb88:

    Big friggin' deal... AMD is almost certainly going to make a variation of the technology combined with temporal data for lower resolutions, probably to similar effect, though it may not run on quite as many GPUs.

    DLSS isn't magic... in fact it's a brute-force solution that can definitely be improved upon. Neural networks are almost always a poor use of energy where you don't need a system to be learning as it goes (and you don't want that in an upscaler, as it would learn artifacts as it goes).
    Actually no! NNs have a learning phase, where the learning is mostly done on CPUs and takes days.
    Then there is an inference phase, which just applies what was learned and doesn't learn anything new. That is done on GPUs or tensor processors and takes milliseconds.

    We are a long way off from Terminator 2 style self-learning AI, at least in the consumer space.
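    A toy illustration of that split, in Python/NumPy, for readers who have not seen it laid out in code. The linear "model", the random data, and the learning rate are all invented for this sketch; the point is only that the expensive learning loop runs once, offline, while the shipped inference path applies frozen weights every frame and learns nothing new (so it cannot pick up artifacts at runtime, as the previous post worried).

        import numpy as np

        rng = np.random.default_rng(0)

        # ---- Learning phase (offline, done once, the slow part) ----
        # Toy model: learn a weight vector w that maps blurry feature
        # vectors x to crisp targets y by plain gradient descent.
        x = rng.normal(size=(1000, 8))      # stand-in training inputs
        true_w = rng.normal(size=8)
        y = x @ true_w                      # stand-in training targets

        w = np.zeros(8)
        for step in range(500):             # many passes over the data
            grad = 2 * x.T @ (x @ w - y) / len(x)
            w -= 0.01 * grad                # weights change only here

        # ---- Inference phase (per frame, milliseconds, weights frozen) ----
        def apply_model(frame_features, weights):
            # Applies the already-learned weights; nothing is learned here.
            return frame_features @ weights

        for frame in rng.normal(size=(3, 8)):   # pretend per-frame inputs
            out = apply_model(frame, w)         # the same frozen w every frame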



  • #23
    I am wondering whether they are just dumping code in the name of being "open source" or "open source friendly".



  • #24
    Originally posted by Drago:

    Actually no! NNs have a learning phase, where the learning is mostly done on CPUs and takes days.
    Then there is an inference phase, which just applies what was learned and doesn't learn anything new. That is done on GPUs or tensor processors and takes milliseconds.

    We are a long way off from Terminator 2 style self-learning AI, at least in the consumer space.
    You missed the point entirely... DLSS 2.0 does have an NN trained to sharpen with temporal hints... but it wasn't trained on any game in particular, as DLSS 1.x required. It's trained to sharpen generically. It's probably not even that complicated an AI model... if it were, it would be too slow to run per frame anyway.
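    To make "a generic model plus temporal hints" concrete, here is a hedged sketch of what a per-frame inference call might look like. The function name, the input list, and the stand-in weight vector are assumptions for illustration, not NVIDIA's actual DLSS interface; the relevant point is that the same frozen, generically trained weights are evaluated once per frame, with the temporal hints (motion vectors, re-projected history, sub-pixel jitter) supplied purely as inputs.

        import numpy as np

        rng = np.random.default_rng(0)

        # Stand-in for weights trained once, offline, on generic content and
        # shipped unchanged: the same array is used for every game and frame.
        GENERIC_WEIGHTS = rng.normal(size=(8 * 8 * 4 + 2,))

        def reconstruct_frame(low_res_color, motion_vectors, depth,
                              warped_history, jitter):
            # Hypothetical per-frame call: the temporal hints are just extra
            # inputs, nothing is learned at runtime, so the cost is one fixed
            # evaluation that has to fit in the frame budget.
            features = np.concatenate([
                low_res_color.ravel(),
                motion_vectors.ravel(),
                depth.ravel(),
                warped_history.ravel(),
                np.asarray(jitter, dtype=float),
            ])
            return features @ GENERIC_WEIGHTS   # stand-in for the real network

        # The same frozen weights work on inputs from any title:
        out = reconstruct_frame(
            low_res_color=rng.random((8, 8)),
            motion_vectors=rng.random((8, 8)),
            depth=rng.random((8, 8)),
            warped_history=rng.random((8, 8)),
            jitter=(0.25, -0.25),
        )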
