AMD Posts FidelityFX Super Resolution Source Code


  • #11
    So basically:
    People: Interpolation always makes a blurry, terrible mess.
    nVidia: But we've done this with our Magical Artificial Intelligence using pseudopetaflops of tensor power!
    People: OMG so incredible!
    AMD: What nVidia did looks sorta like normal tweaked sharpened interpolation, which can be a simple pixel shader; let's try that...
    People: OMG so crisp so works everywhere!



    • #12
      Two header files, full explanation at the start of each file. Excellent. Good job AMD folks.



      • #13
        Originally posted by Hi-Angel View Post
        So, I looked at this project to see how hard it would be to make the build system work on Linux (since it's CMake), and here's the thing: it relies on another AMD project called "Cauldron", which also doesn't support Linux. So the question becomes: how hard would it be to add support there?

        Then I looked at the PRs in Cauldron, and found out it has legitimate Windows fixes that have been sitting there since 2019! In other words, this is an unmaintained project. There's no point in adding any features or fixes, including Linux support, unless you also want to fork and maintain it.

        It's really not the first one. Every time AMD's GPUOpen initiative gets brought up, it's always yet another unmaintained project. Like, what is the point.
        The samples use Cauldron... not the base code.



        • #14
          It's not just a "tweaked" Lanczos. It's a parametric Lanczos extended with directionality, where the direction is chosen according to the contrast around the scaled pixel. Edge-guided interpolation is nothing new at all (stuff like NNEDI2); the real value of FSR seems to be the efficiency. That's probably also the reason for the convoluted, strange code: it's optimized to death.
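
          For anyone curious what "parametric Lanczos" means in practice, here is a minimal plain-C sketch. It is not AMD's actual EASU kernel; the 'shape' parameter and the idea of picking it per pixel are illustrative assumptions.

          #include <math.h>

          /* Minimal sketch, not AMD's EASU code: a windowed-sinc (Lanczos-2) weight with
           * a hypothetical 'shape' parameter. shape == 1.0f gives the standard Lanczos-2
           * window; a directional variant would choose 'shape' per pixel from the local
           * contrast, using a narrower window across an edge than along it. */
          static float lanczos2_weight(float x, float shape)
          {
              const float pi = 3.14159265358979f;
              float ax = fabsf(x);
              if (ax >= 2.0f) return 0.0f;      /* outside the 4-tap support */
              if (ax < 1e-6f) return 1.0f;      /* avoid 0/0 at the center   */
              float a = pi * x;                 /* sinc(x) argument          */
              float b = pi * x * 0.5f * shape;  /* scaled window argument    */
              return (sinf(a) / a) * (sinf(b) / b);
          }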



          • #15
            Originally posted by kvuj View Post
            Holy crap, what the hell is that source code haha.

            I have never seen C code that was so weird. I guess it makes sense considering its purpose, but still...

            "AH4 ACpySgnH4(AH4 d,AH4 s){return AH4_AW4(AW4_AH4(d)|(AW4_AH4(s)&AW4_(0x8000u)));}"
            Man I need to go relearn C haha.
            All the macros are there so that the single source file can not only be compiled for the CPU but also be used with OpenGL, Vulkan, and Direct3D.
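
            For readability, here is a rough scalar plain-C paraphrase of what that quoted one-liner does. This is illustrative code, not AMD's header; the real ACpySgnH4 operates on 4-wide 16-bit floats, with the AH4/AW4 macros hiding the type names and bit-casts per backend.

            #include <stdint.h>
            #include <string.h>

            /* Illustrative paraphrase of ACpySgnH4, one lane at a time and in 32-bit
             * float: reinterpret both floats as raw bits, then OR the sign bit of s
             * onto d, leaving d's magnitude untouched. */
            static float copy_sign_bit_onto(float d, float s)
            {
                uint32_t db, sb;
                memcpy(&db, &d, sizeof db);   /* bit-cast float -> uint32 */
                memcpy(&sb, &s, sizeof sb);
                db |= (sb & 0x80000000u);     /* take only s's sign bit   */
                memcpy(&d, &db, sizeof d);
                return d;
            }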



            • #16
              Originally posted by mb_q View Post
              So basically:
              People: Interpolation always makes a blurry, terrible mess.
              nVidia: But we've done this with our Magical Artificial Intelligence using pseudopetaflops of tensor power!
              People: OMG so incredible!
              AMD: What nVidia did looks sorta like normal tweaked sharpened interpolation, which can be a simple pixel shader; let's try that...
              People: OMG so crisp so works everywhere!
              Honestly, that's almost exactly the story behind FSAA.



              • #17
                Good news: it seems AMD developers are not fine with GPUOpen projects looking unmaintained either. To quote a comment from an AMD employee on the GPUOpen/Cauldron Linux porting work:

                Originally posted by rys
                First of all, I need to apologise that PRs for Cauldron have been ignored during the project's life and our dev process means it can look abandoned while we develop it internally, rather than develop it out in the open.

                It's not our intention to release a project as open source code and then not support it properly when the open source community interacts with it, and we want to do better there with Cauldron and all of the GPUOpen open source projects.

                If the Linux port is brought up-to-date with current HEAD (v1.4.1) and looks good, and I can add support for building it with our internal CI, then we will work to try and find a good way to bring it into the project. I can't promise anything at this stage.

                I'll work with you in my personal spare time on this effort when I can. Let's open a new issue against Cauldron and take it from there.



                • #18
                  Originally posted by mb_q View Post
                  So basically:
                  People: Interpolation always makes a blurry, terrible mess.
                  nVidia: But we've done this with our Magical Artificial Intelligence using pseudopetaflops of tensor power!
                  People: OMG so incredible!
                  AMD: What nVidia did looks sorta like normal tweaked sharpened interpolation, which can be a simple pixel shader; let's try that...
                  People: OMG so crisp so works everywhere!
                  Except that FSR produces an over-sharpened look and is only really useful on 4K displays with the Ultra Quality preset, whereas DLSS 2.x is usable with a 1440p display and a 960p rendering resolution (and might even look better than native, depending on the content and implementation).
                  People should really inform themselves about the limitations of upscaling (FSR) vs. reconstruction (TAAU/DLSS). I also often sense a lot of "I want to believe" vibes...



                  • #19
                    Originally posted by Hi-Angel View Post
                    Good news: it seems AMD developers are not fine with GPUOpen projects looking unmaintained either. To quote a comment from an AMD employee on the GPUOpen/Cauldron Linux porting work:
                    The only downside: according to his post, rys has to do the support in his personal spare time.



                    • #20
                      Originally posted by mb_q View Post
                      So basically:
                      People: Interpolation always makes a blurry, terrible mess.
                      nVidia: But we've done this with our Magical Artificial Intelligence using pseudopetaflops of tensor power!
                      People: OMG so incredible!
                      AMD: What nVidia did looks sorta like normal tweaked sharpened interpolation, which can be a simple pixel shader; let's try that...
                      People: OMG so crisp so works everywhere!
                      Yep... turns out DLSS is basically just learning whether the places where edges meet should be pointy or not.

