AMD Introduces FidelityFX Super Resolution, NVIDIA Announces DLSS For Steam Play


  • #41
    Originally posted by coder View Post
    I introduced that idea, actually, based on clearly less knowledge of the issue than you seem to have.

    I do wonder if we really know that Nvidia didn't negotiate a deal with Valve. Also, doesn't Steam just take a cut of the price, like other app stores?

    And what's to stop Nvidia from going around Valve/Steam and cutting a deal directly with the game's publisher to run it outside of Steam, in GeForce Now? My guess is that's probably not currently worth the effort, due to the relatively low volume of GeForce Now players. If GeForce Now would take off, then I think it would represent more of a competitive threat to Steam.
    It's simply their idea of cloud gaming: since you are just accessing your Steam library remotely, Valve takes its fee from any game you purchase, exactly the same way it happens locally on your PC. For Valve, GeForce Now is just like an additional platform.
    NVIDIA's vision for cloud gaming is renting you a virtual PC that runs on a server. They ran into some problems because some publishers didn't like that model and wanted users to buy the game again, or wanted NVIDIA to give them a slice of the cake it earned (by renting the hardware, not the software) by leveraging license terms of dubious validity (at least in the EU). I don't see any reason why they would want to dump Steam; the two services are in symbiosis.

    Comment


    • #42
      Originally posted by artivision View Post
      DLSS is the trashy analytics-based version of Temporal Upsampling. I already have this in my UE4.19 games:

      Engine.ini

      [/Script/Engine.RendererSettings]
      ; enable Temporal AA Upsampling (TAAU) instead of plain TAA
      r.TemporalAA.Upsampling=1
      ; render at 50% of the output resolution and let TAAU upscale the rest
      r.ScreenPercentage=50
      Trashy? Actually, even the TAAU developers themselves think DLSS is way ahead of it.

      Comment


      • #43
        Originally posted by Stefem View Post
        Freesync is owned by AMD and is not the same as the open industry standard Adaptive Sync of VESA DisplayPort (no, it's not Freesync adopted by VESA) or HDMI 2.1 VRR. Anyone who wants well-working VRR on a Freesync-branded DisplayPort monitor has to create software that manages it (that's what NVIDIA calls G-Sync Compatible), while all HDMI Freesync monitors before the HDMI 2.1 revision (practically every one) cannot be used with NVIDIA, because it's a proprietary AMD standard (I think they even patented it, if I remember correctly). When NVIDIA introduced G-Sync to the market (let alone while it was in development), there wasn't any standard available on desktop to sync the refresh rate to the framerate.
        This is some hardcore revisionist history.

        Nvidia's G-Sync is 100% proprietary and implementable only by monitor manufacturers integrating a chip, sold by Nvidia, into their monitors. Those proprietary ICs did not support standardized DisplayPort VRR, and Nvidia intentionally locked their GPU users out of enabling VRR on standard VRR DisplayPort monitors, which includes all FreeSync monitors!

        What AMD did was champion VRR through VESA, to get it into DisplayPort. Then, they created a certification program to establish a set of parameters that VRR displays should meet, in order to be suitable for gaming. That certification is what formed the original FreeSync. And because it was cheap and standards-based, it became very popular with monitor manufacturers and eventually forced Nvidia to give up on their lock-in strategy by enabling their products to support VRR monitors and developing their own certification program (known as "GSync-Compatible"). And, just to illustrate how artificial it was that Nvidia locked out their users, the unlock came in the form of a driver update.

        In contrast to G-Sync, FreeSync monitors were always standard DisplayPort VRR monitors, and would enable VRR functionality with any GPU that merely supported VRR.

        Nvidia is like Apple. They have the best tech, charge the highest prices, and try very hard to lock-in their customers. It's almost a little surprising they don't get along better, but then I guess they're both so power-hungry that each would always play for dominance over the other.

        Originally posted by Stefem View Post
        a monitor with a cheap and slow scaler will impact gaming with a substantial latency penalty (like 60 ms or more),
        It's difficult to see how the latency penalty could possibly be more than a couple of frame periods. At 60 Hz, a frame period is about 16.7 ms, so even two full frames of added buffering comes to roughly 33 ms, well short of 60 ms.

        Originally posted by Stefem View Post
        To solve this, with the launch of Freesync 2 (which introduced HDR support) they first created a proprietary API and later an add-on (now known as FidelityFX LPM) that goes outside the standard HDR pipeline to do the tone mapping for the display on the GPU (initially they claimed the second tone map wasn't necessary, but that was simply preposterous). The problem is that it must be implemented by the game developer, and AMD must keep track of each monitor; leaving that to the monitor is a much more elegant and effective solution in my mind. Monitor makers could just upgrade the scaler if they don't want to use the G-Sync module.
        It really is better to do tone-mapping just once. And the GPU would be the best place to do it, though it does need to know the monitor's gamut. I don't see why gamut information cannot be queried from the display, in the same way as other capabilities.
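        For what it's worth, the monitor's base EDID already carries its chromaticity coordinates, so the primaries are queryable today. Here's a rough sketch of pulling them out (my own illustration; the sysfs path is just an example, and wide-gamut/HDR metadata may live in extension blocks instead):

        def edid_chromaticity(edid: bytes) -> dict:
            """Decode the CIE xy primaries stored at bytes 25-34 of a base EDID.

            Each coordinate is a 10-bit value scaled by 1024: the top 8 bits live
            in bytes 27-34, and the low 2 bits are packed into bytes 25-26.
            """
            lo_rg, lo_bw = edid[25], edid[26]
            hi = edid[27:35]

            def coord(high, low):
                return ((high << 2) | low) / 1024.0

            return {
                "red":   (coord(hi[0], (lo_rg >> 6) & 3), coord(hi[1], (lo_rg >> 4) & 3)),
                "green": (coord(hi[2], (lo_rg >> 2) & 3), coord(hi[3], lo_rg & 3)),
                "blue":  (coord(hi[4], (lo_bw >> 6) & 3), coord(hi[5], (lo_bw >> 4) & 3)),
                "white": (coord(hi[6], (lo_bw >> 2) & 3), coord(hi[7], lo_bw & 3)),
            }

        # Example: read the EDID exposed by the kernel (connector name varies per setup)
        with open("/sys/class/drm/card0-DP-1/edid", "rb") as f:
            print(edid_chromaticity(f.read()))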

        Comment


        • #44
          Originally posted by gardotd426 View Post
          Huh? Yeah, I really think you don't know anything about GeForce Now, how it works, or what it even is. Which is fine, but you should probably try to actually research a topic before making wild claims about it.
          Well, I just mentioned it as an off-hand remark that wasn't even the point of my original post. Anyway, thanks for the details.

          Comment


          • #45
            Originally posted by Stefem View Post

            Trashy? Actually, even the TAAU developers themselves think DLSS is way ahead of it.
            What are you talking about? It's the same thing: compensation from previous frames, using motion vector data to find out what changed and what remained. Also, it didn't start like that; the original AI thing failed because of too many possible outcomes and the astronomical processing power needed. This one uses only one of the few TAAU algorithms that is analytics-based, for simple cores that cannot do any other job. Let's see what happens when you strafe fast and frames don't match, or when particles don't have motion vector data. Fake AI.

            Comment


            • #46
              Just use TAAU or CAS, a true image upgrade. I use vkBasalt with CAS at 0.5 via a config file for all DXVK / VKD3D / Vulkan games by default. When I press the default toggle key "Home" to switch it off and on, I see the difference, and it's huge, especially for older games like ACIII. DLSS is inferior trash; Linux could be contaminated.
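              For reference, a minimal vkBasalt.conf along those lines might look roughly like this (key names taken from vkBasalt's sample config; treat the details as a sketch, and launch the game with ENABLE_VKBASALT=1):

              # ~/.config/vkBasalt/vkBasalt.conf
              # Contrast Adaptive Sharpening only
              effects = cas
              # sharpening strength, 0.0 - 1.0 (the 0.5 mentioned above)
              casSharpness = 0.5
              # key that toggles the effects off and on in-game
              toggleKey = Home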

              Comment


              • #47
                Originally posted by artivision View Post
                What are you talking about? It's the same thing: compensation from previous frames, using motion vector data to find out what changed and what remained.
                It's not, though. They use inferencing to effectively postprocess TAA, so its artifacts can be mitigated. Deep learning basically figures out when TAA is unreliable and down-weights its contribution.
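                To make "down-weights its contribution" concrete, here's a deliberately crude sketch of confidence-weighted history accumulation (my own illustration, not Nvidia's algorithm), with a hand-tuned neighbourhood-clamp heuristic standing in for whatever the network actually learns:

                import numpy as np

                def accumulate(current_upsampled, reprojected_history, nbr_min, nbr_max):
                    """Toy confidence-weighted temporal accumulation.

                    Classic TAA clamps or rejects history that falls outside the current
                    frame's local colour neighbourhood; the idea above is that a network
                    instead predicts, per pixel, how much the history can be trusted.
                    """
                    # How far the history strays outside the neighbourhood [nbr_min, nbr_max]
                    deviation = reprojected_history - np.clip(reprojected_history, nbr_min, nbr_max)
                    confidence = np.exp(-8.0 * np.abs(deviation))   # 1.0 = fully trusted history
                    return confidence * reprojected_history + (1.0 - confidence) * current_upsampled

                # Smoke test on random "images"
                rng = np.random.default_rng(1)
                cur, hist = rng.random((2, 4, 4))
                blended = accumulate(cur, hist, cur - 0.1, cur + 0.1)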

                Originally posted by artivision View Post
                Also, it didn't start like that; the original AI thing failed because of too many possible outcomes and the astronomical processing power needed.
                It didn't use TAA. It was simply intra-frame, and that turned out to be insufficient. The reported blurring also could've been caused by a poorly designed loss function. We don't really know everything they changed between 1.0 and 2.0, but that's one of my hunches.

                Originally posted by artivision View Post
                This one uses only one of the few TAAU algorithms that is analytics-based, for simple cores that cannot do any other job.
                Tensor cores aren't cores, in the sense that you're probably thinking. They're like specialized matrix-multiply pipelines that are still driven by normal SM warps. So, using them does eat into general-purpose shader capacity. In that sense, it's not a freebie.

                Originally posted by artivision View Post
                Let's see what happens when you strafe fast and frames don't match, or when particles don't have motion vector data.
                If you screen-grab frames of fast motion and look at parts where motion-vectors don't exist, I expect you'll simply see more blurring. Just a hunch, but the acuity of human vision also degrades in the case of fast motion, so users are less likely to notice.

                Originally posted by artivision View Post
                Fake AI.
                Yeah, more or less. It's trying to construct a higher-resolution image than the input, and only so much can be gleaned from temporal data. So, there's some amount of extrapolation and filtering to make it look good that won't exactly match what native rendering would do.

                But, guess what? The entire field of computer graphics is basically trying to come up with computationally "cheap" techniques to approximate visual imagery. In a way, it's all "fake". And, as long as it's fast and looks good, why does it matter whether it uses deep learning or any other technique? Isn't the end result what counts?

                BTW, I think you seem to be getting hung up on the "AI" part, but just think of it as a specially-optimized chain of numerical filters. Traditionally, someone would try to create these sorts of filters by hand and test them on some inputs to see what they think looks good. In deep learning, you create a formal definition of how to compare the result with a natively-rendered version, and then use that as the error term that the training framework tries to minimize through a process of iterative numerical optimization of the filter parameters.
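                As a toy version of that last paragraph (my own illustration, nothing to do with Nvidia's actual training setup): recover an unknown 3-tap filter purely by minimizing an error term against reference output, instead of hand-tuning the taps.

                import numpy as np

                rng = np.random.default_rng(0)

                def apply_filter(signal, w):
                    # out[i] = w[0]*s[i-1] + w[1]*s[i] + w[2]*s[i+1]  (rows wrap around at the edges)
                    return (w[0] * np.roll(signal, 1, axis=1)
                            + w[1] * signal
                            + w[2] * np.roll(signal, -1, axis=1))

                # "Reference" outputs y come from inputs x through an unknown 3-tap filter --
                # the thing someone would otherwise tune by hand and judge by eye.
                x = rng.normal(size=(256, 64))
                true_w = np.array([0.25, 0.5, 0.25])
                y = apply_filter(x, true_w)

                w = np.zeros(3)   # the filter parameters being trained, starting from nothing
                lr = 0.1
                for _ in range(300):
                    err = apply_filter(x, w) - y        # compare against the "native" reference
                    loss = np.mean(err ** 2)            # the formal error term being minimized
                    grad = np.array([np.mean(2 * err * np.roll(x, 1, axis=1)),
                                     np.mean(2 * err * x),
                                     np.mean(2 * err * np.roll(x, -1, axis=1))])
                    w -= lr * grad                      # iterative numerical optimization

                print(loss, w)   # w ends up close to [0.25, 0.5, 0.25]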
                Last edited by coder; 04 June 2021, 09:46 PM.

                Comment


                • #48
                  Originally posted by artivision View Post
                  When I press the default toggle key "Home" to switch it off and on, I see the difference, and it's huge, especially for older games like ACIII.
                  I don't see anyone saying you shouldn't do what looks good to you. However, when you start calling DLSS "inferior trash", I have to wonder if you've seriously looked at it since 2.0.

                  I posted a bunch of links to reviews, but the post got held for moderator approval due to too many links. So, maybe you missed it? It's been released and you can see it now:

                  Comment


                  • #49
                    Originally posted by artivision View Post

                    What are you talking about? It's the same thing: compensation from previous frames, using motion vector data to find out what changed and what remained. Also, it didn't start like that; the original AI thing failed because of too many possible outcomes and the astronomical processing power needed. This one uses only one of the few TAAU algorithms that is analytics-based, for simple cores that cannot do any other job. Let's see what happens when you strafe fast and frames don't match, or when particles don't have motion vector data. Fake AI.
                    So a Ferrari is the same thing as a Toyota Yaris because they both have 4 wheels and an engine? TAAU and DLSS are different and produce quite different results; try it for yourself in UE4.

                    Originally posted by artivision View Post
                    Just use TAAU or CAS, a true image upgrade. I use vkBasalt with CAS at 0.5 via a config file for all DXVK / VKD3D / Vulkan games by default. When I press the default toggle key "Home" to switch it off and on, I see the difference, and it's huge, especially for older games like ACIII. DLSS is inferior trash; Linux could be contaminated.
                    If you call TAAU and CAS (two completely different things, btw) "true image upgrades" but DLSS "inferior trash", it's clear you don't want to look at reality.

                    Comment


                    • #50
                      Originally posted by Stefem View Post

                      So a Ferrari is the same thing as a Toyota Yaris because they both have 4 wheels and an engine? TAAU and DLSS are different and produce quite different results; try it for yourself in UE4.


                      If you call TAAU and CAS (two completely different things, btw) "true image upgrades" but DLSS "inferior trash", it's clear you don't want to look at reality.
                      I have around 30 titles with image quality upgraded via CAS, and it's very serious stuff that goes beyond the available Ultra options. I'm also testing TAAU. DLSS? Not the same as TAAU? Last time I checked, v2 was a subset of TAAU; I never said similar, I said exactly the same, and not NV technology. I said that when the original, actual deep-learning filter didn't make it, it was replaced with a version of TAAU in v2. Unreal 5 will also have an even more advanced version with newer algorithms, better than all of today's solutions; there is no reason to adopt DLSS.

                      Comment
