Intel Publishes Xe Super Sampling "XeSS" 1.0 SDK


  • #21
    Originally posted by WannaBeOCer View Post
    Are you using TAAU as a generic term, or referring to Unreal Engine's old upscaler named TAAU?
    They're all TAAU implementations, just with different trade-offs between performance and quality; see the sketch below for the accumulation core they share.
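
    To illustrate: underneath, every one of them does the same thing — reproject last frame's result with motion vectors, then blend it with the current jittered sample. A minimal sketch; the struct, function names, and blend weight are illustrative, not any engine's actual API:

    Code:
    #include <cstdio>

    struct Color { float r, g, b; };

    // Exponential history blend: the upscalers differ mainly in how they
    // choose `alpha` per pixel and how aggressively they reject a stale
    // history sample, not in this basic accumulation step.
    Color accumulate(Color history, Color current, float alpha) {
        return { history.r + alpha * (current.r - history.r),
                 history.g + alpha * (current.g - history.g),
                 history.b + alpha * (current.b - history.b) };
    }

    int main() {
        Color history = {0.2f, 0.2f, 0.2f};   // reprojected from the previous frame
        Color current = {1.0f, 0.0f, 0.0f};   // this frame's jittered sample
        Color out = accumulate(history, current, 0.1f); // ~0.1 is a typical TAA weight
        std::printf("%.3f %.3f %.3f\n", out.r, out.g, out.b); // 0.280 0.180 0.180
    }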

    Originally posted by WannaBeOCer View Post
    Since TSR looks superior to FSR2.
    Says who? TSR looks worse than UE5's TAA, and UE5's TAA already introduces a ton of motion blur compared to FSR 2; TSR has even more motion blur than the native TAA. The same goes for the pixel jittering of thin lines and similar artifacts.

    Originally posted by WannaBeOCer View Post
    FSR2 does look oversharpened, which is why I don’t use it.
    And this is just plain wrong.

    Originally posted by WannaBeOCer View Post
    Then when I want to use it for FPS, for example in Godfall in performance mode, fences are clipped.
    Godfall isn't even FSR 2, it's FSR 1, which would explain those faulty conclusions: FSR 1 usually applies absurd sharpening strengths to counter the massive blur introduced by its spatial upscaling.



    • #22
      Originally posted by WannaBeOCer View Post

      TSR isn’t TAAU… Game engine developers aren’t just sitting on their asses. Even then, all of the normal temporal-based upscalers, like FSR2 and the game engines' own implementations, look aliased and oversharpened.
      DLSS is a "normal" temporal upscaler as well. It just uses ML to guide its mode decisions (i.e. which samples from past frames shall be used for constructing the current frame). XeSS seems to use a similar approach. It's a good idea, but heuristics aren't that much worse. That is why FSR 2.x is pretty competitive without any ML.
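
      For illustration, the classic non-ML heuristic here is neighborhood clamping: pull the reprojected history sample into the min/max color range of the current frame's 3x3 neighborhood, so history that no longer fits the scene is rejected. A minimal sketch under that assumption; FSR 2's real rejection logic is considerably more involved:

      Code:
      #include <algorithm>
      #include <cstdio>

      struct Color { float r, g, b; };

      // Clamp history into the local color bounding box of the current frame.
      Color clampHistory(Color history, const Color nbhd[9]) {
          Color lo = nbhd[0], hi = nbhd[0];
          for (int i = 1; i < 9; ++i) {
              lo.r = std::min(lo.r, nbhd[i].r); hi.r = std::max(hi.r, nbhd[i].r);
              lo.g = std::min(lo.g, nbhd[i].g); hi.g = std::max(hi.g, nbhd[i].g);
              lo.b = std::min(lo.b, nbhd[i].b); hi.b = std::max(hi.b, nbhd[i].b);
          }
          return { std::clamp(history.r, lo.r, hi.r),
                   std::clamp(history.g, lo.g, hi.g),
                   std::clamp(history.b, lo.b, hi.b) };
      }

      int main() {
          Color nbhd[9];
          for (auto &c : nbhd) c = {0.4f, 0.4f, 0.4f};      // uniform grey neighborhood
          Color h = clampHistory({1.0f, 0.0f, 0.0f}, nbhd); // stale red history sample
          std::printf("%.2f %.2f %.2f\n", h.r, h.g, h.b);   // 0.40 0.40 0.40
      }

      The ML variants effectively learn a smarter version of this accept/reject decision, which mostly buys less ghosting around disocclusions; a clamp like the one above is the reason heuristic upscalers stay competitive.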

      Everything else, sharpening included, is a matter of tuning and tweaking. FSR 2 and XeSS are probably worse than DLSS mostly because they are still immature: DLSS 2.x is older and has gone through a number of revisions to reduce ghosting and other artefacts, and early DLSS 2.0 was itself plagued by pretty bad ghosting.
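
      To see why "oversharpened" is a tuning matter rather than something inherent to an upscaler, here is a bare-bones unsharp-mask pass (not AMD's actual RCAS): the entire look comes down to one strength knob.

      Code:
      #include <cstdio>

      // Push a pixel away from its local average; strength 0 is a no-op.
      float sharpen(float center, float neighborAvg, float strength) {
          return center + strength * (center - neighborAvg);
      }

      int main() {
          // The same pixel under three strength settings.
          std::printf("%.2f %.2f %.2f\n",
                      sharpen(0.6f, 0.5f, 0.0f),   // off      -> 0.60
                      sharpen(0.6f, 0.5f, 0.5f),   // moderate -> 0.65
                      sharpen(0.6f, 0.5f, 2.0f));  // extreme  -> 0.80
      }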

      Originally posted by WannaBeOCer View Post

      This is why neural networks like XeSS and DLSS will continue to look better, since they try to replicate a 16K sample of the frame.
      This isn't how it works at all. DLSS 1.x tried to go that route (sorta) and failed; DLSS 2.x doesn't hallucinate detail toward a 16K reference, it only decides how to weight samples that were actually rendered.
      Last edited by brent; 07 October 2022, 11:56 AM.
