Intel Publishes Xe Super Sampling "XeSS" 1.0 SDK


  • brent
    replied
    Originally posted by WannaBeOCer View Post

    TSR isn’t TAAU… Game engine developers aren’t just sitting on their ass. Even then, all of the normal temporal-based upscalers, like FSR2 and the game engines' own implementations, look aliased and oversharpened.
    DLSS is a "normal" temporal upscaler as well. It just uses ML to guide its mode decisions, i.e. which samples from past frames should be used to construct the current frame (a rough sketch of the heuristic approach is below). XeSS seems to use a similar approach. It's a good idea, but heuristics aren't that much worse, which is why FSR 2.x is pretty competitive without any ML.

    Everything else, sharpening and the like, is a matter of tuning and tweaking. FSR and XeSS are probably mostly worse than DLSS because they are still somewhat immature. DLSS 2.x has been around for a while and went through a bunch of revisions to reduce ghosting and other artefacts; DLSS 2.0 was plagued by pretty bad ones as well.

    This is why neural networks like XeSS and DLSS will continue to look better, since they try to replicate a 16K sample of the frame.
    This isn't how it works at all. DLSS 1.x tried to go that route (sorta), and failed.
    Last edited by brent; 07 October 2022, 11:56 AM.
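
    As an illustration of the "heuristics vs. ML" point above, here is a minimal, hypothetical sketch of the per-pixel decision a temporal upscaler has to make: blend the reprojected history with the current jittered sample, and clamp or reject the history when it disagrees too much with the local neighborhood. The names and the clamping rule are simplified assumptions, not FSR 2/DLSS/XeSS code; an ML-based upscaler essentially replaces this hand-tuned clamp-and-blend logic with network-predicted weights.

    #include <algorithm>
    #include <array>

    struct Color { float r, g, b; };

    // Hypothetical per-pixel temporal resolve. 'current' is this frame's jittered
    // low-res sample, 'history' is the reprojected colour from the previous output,
    // 'neighborhood' is the 3x3 block of current-frame samples around the pixel.
    Color temporalResolve(const Color& current,
                          const Color& history,
                          const std::array<Color, 9>& neighborhood,
                          float blendFactor)          // e.g. 0.1 = keep 90% history
    {
        // Heuristic history rejection: clamp the history sample into the min/max
        // box of the current neighborhood so stale colours (ghosting) cannot survive.
        Color lo = neighborhood[0], hi = neighborhood[0];
        for (const Color& n : neighborhood) {
            lo = { std::min(lo.r, n.r), std::min(lo.g, n.g), std::min(lo.b, n.b) };
            hi = { std::max(hi.r, n.r), std::max(hi.g, n.g), std::max(hi.b, n.b) };
        }
        const Color clamped = { std::clamp(history.r, lo.r, hi.r),
                                std::clamp(history.g, lo.g, hi.g),
                                std::clamp(history.b, lo.b, hi.b) };

        // Exponential blend of the current sample into the clamped history. An
        // ML-guided upscaler would instead predict per pixel how much of the
        // history to trust, rather than relying on this fixed clamp + blend factor.
        return { clamped.r + (current.r - clamped.r) * blendFactor,
                 clamped.g + (current.g - clamped.g) * blendFactor,
                 clamped.b + (current.b - clamped.b) * blendFactor };
    }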



  • aufkrawall
    replied
    Originally posted by WannaBeOCer View Post
    Are you using TAAU as a generic term or referring to Unreal Engine's old upscaler named TAAU?
    They are all TAAU implementations with different balancing for performance etc.

    Originally posted by WannaBeOCer View Post
    Since TSR looks superior to FSR2.
    Says who? TSR looks worse than UE5 TAA, and UE5 TAA already introduces a ton of motion blur vs. FSR 2. And TSR even has way more motion blur than native TAA. Same goes for pixel jittering of thin lines etc.

    Originally posted by WannaBeOCer View Post
    FSR2 does look oversharpened, which is why I don’t use it.
    And this is just plain wrong.

    Originally posted by WannaBeOCer View Post
    Then when I want to use it for FPS, for example in Godfall using performance mode, fences are clipped.
    Godfall isn't even FSR 2, it's FSR 1, which would explain why you come to those faulty conclusions: FSR 1 usually applies absurd sharpening strengths to counter the massive blur introduced by its spatial scaling (see the sketch below).
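
    For context on that last point, FSR 1 pairs a spatial upscale (EASU) with a contrast-adaptive sharpening pass (RCAS), and a high sharpening strength is exactly what produces the crunchy look being blamed on FSR 2 above. A minimal, hypothetical unsharp-mask-style kernel -- not AMD's actual RCAS code -- shows where such a strength parameter bites:

    #include <cstddef>
    #include <vector>

    // Hypothetical single-channel sharpen pass (unsharp-mask style), not AMD's
    // actual RCAS. 'strength' around 0.2 is subtle; values near 1.0 give the
    // crunchy, oversharpened look discussed above.
    std::vector<float> sharpen(const std::vector<float>& img,
                               std::size_t width, std::size_t height,
                               float strength)
    {
        std::vector<float> out(img);
        for (std::size_t y = 1; y + 1 < height; ++y) {
            for (std::size_t x = 1; x + 1 < width; ++x) {
                const std::size_t i = y * width + x;
                // The 4-neighbourhood average approximates the local "blur".
                const float blur = 0.25f * (img[i - 1] + img[i + 1] +
                                            img[i - width] + img[i + width]);
                // Add the detail (centre minus blur) back on top, scaled by strength.
                out[i] = img[i] + strength * (img[i] - blur);
            }
        }
        return out;
    }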



  • WannaBeOCer
    replied
    Originally posted by aufkrawall View Post
    Yes, it is. But go ahead and explain why it wouldn't be.


    TAA in most recent games still looks like trash vs. that found in the Anvil Next engine, e.g. Assassin's Creed Origins from 2017.
    And FSR 2 doesn't produce an oversharpened look; its sharpening is optional, and without it, it's still very blurry vs. ground-truth 64x SSAA.
    Are you using TAAU as a generic term or referring to Unreal Engine's old upscaler named TAAU? Since TSR looks superior to FSR2.

    FSR2 does look oversharpened, which is why I don’t use it. Then when I want to use it for FPS, for example in Godfall using performance mode, fences are clipped. At the end of the day, neural networks will learn and improve quicker.



  • aufkrawall
    replied
    Originally posted by WannaBeOCer View Post
    TSR isn’t TAAU…
    Yes, it is. But go ahead and explain why it wouldn't be.

    Originally posted by WannaBeOCer View Post
    Game engine developers aren’t just sitting on their ass. Even then, all of the normal temporal-based upscalers, like FSR2 and the game engines' own implementations, look aliased and oversharpened. This is why neural networks like XeSS and DLSS will continue to look better, since they try to replicate a 16K sample of the frame.
    TAA in most recent games still looks like trash vs. that found in the Anvil Next engine, e.g. Assassin's Creed Origins from 2017.
    And FSR 2 doesn't produce an oversharpened look; its sharpening is optional, and without it, it's still very blurry vs. ground-truth 64x SSAA.



  • WannaBeOCer
    replied
    Originally posted by aufkrawall View Post
    Except that games' TAA usually already looks bad at native res, and their TAAU accordingly looks even worse vs. FSR 2/DLSS 2. And no need for DL "to fix fences".
    TSR isn’t TAAU… Game engine developers aren’t just sitting on their ass. Even then, all of the normal temporal-based upscalers, like FSR2 and the game engines' own implementations, look aliased and oversharpened. This is why neural networks like XeSS and DLSS will continue to look better, since they try to replicate a 16K sample of the frame.



  • aufkrawall
    replied
    Originally posted by WannaBeOCer View Post
    Practically every game engine already has a temporal upscaler similar to FSR 2.0; AMD just likes to reinvent the wheel and add a catchy name to it. For example, Halo Infinite has a temporal upscaler behind its "Resolution Scale" option, but you’ll still see idiots asking for FSR/DLSS. At least AI-based upscalers like DLSS/XeSS can be trained to fix problems like wires and gates, where AA and regular temporal upscalers like FSR, Unreal’s TSR, etc. fail to keep clarity and break the wires and gates apart.
    Except that games' TAA usually already looks bad at native res, and their TAAU accordingly looks even worse vs. FSR 2/DLSS 2. And no need for DL "to fix fences".



  • olielvewen
    replied
    Michael
    Intel is making a big effort and fully playing the open-source card, at least as far as their Arc Alchemist GPUs are concerned, and not only there.

    On the other hand, I would like to know whether, in addition to the drivers, they intend to provide Intel Arc Control on Linux, and whether it will be open source as well.

    That is the last point I would like to know about, apart from the availability of these cards in Europe, and more notably in France. Apart from the Arc 380, which has only been available for about ten days in Germany and on a single site, we feel a little left out here. I wanted to do like you and buy two, but... it's not looking good.



  • WannaBeOCer
    replied
    Originally posted by NeoMorpheus View Post
    Unless AMD somehow makes FSR proprietary to their hardware, I don’t understand why they'd waste time and effort on this and DLSS.

    Yes, yes, nvidiots can't accept such a concept and will claim that DLSS is so superior that FSR games look like a 2600 game compared to a game on a 5090 Super Ti @ 16K.

    Yes, those aren't wrong numbers; they're on purpose.

    Really disappointed in FOSS followers that turn a blind eye to Nvidia lock-in tech.
    Practically every game engine already has a temporal upscaler similar to FSR 2.0; AMD just likes to reinvent the wheel and add a catchy name to it. For example, Halo Infinite has a temporal upscaler behind its "Resolution Scale" option, but you’ll still see idiots asking for FSR/DLSS. At least AI-based upscalers like DLSS/XeSS can be trained to fix problems like wires and gates, where AA and regular temporal upscalers like FSR, Unreal’s TSR, etc. fail to keep clarity and break the wires and gates apart (a quick illustration of why such thin features break is below).
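
    As a quick illustration of why wires and fences are hard for any upscaler: at "performance"-style scaling factors a thin feature can be narrower than a render-resolution pixel, so whether it gets sampled at all changes from frame to frame with the sub-pixel jitter. The numbers below are made up for illustration; they are not taken from any particular upscaler.

    #include <cstdio>

    // 1-D toy model of why thin features (wires, fence slats) break apart under
    // aggressive upscaling: in "performance" mode the render resolution is roughly
    // half the output resolution per axis, so a wire one output pixel wide covers
    // only ~0.5 of a render pixel and is only hit by the sample grid on some
    // frames, depending on that frame's sub-pixel jitter.
    int main()
    {
        const double renderScale   = 0.5;    // performance mode: 0.5x per axis
        const double wireCenter    = 10.25;  // wire position, in output pixels
        const double wireHalfWidth = 0.5;    // wire is one output pixel wide
        // Placeholder 8-frame jitter offsets in render-pixel units (real upscalers
        // use a low-discrepancy sequence such as Halton).
        const double jitter[8] = { 0.0, 0.5, 0.25, 0.75, 0.125, 0.625, 0.375, 0.875 };

        for (int frame = 0; frame < 8; ++frame) {
            bool hit = false;
            for (int rx = 0; rx < 40 && !hit; ++rx) {
                // Map a jittered render-resolution sample back into output space.
                const double posInOutput = (rx + jitter[frame]) / renderScale;
                hit = posInOutput > wireCenter - wireHalfWidth &&
                      posInOutput < wireCenter + wireHalfWidth;
            }
            std::printf("frame %d: wire %s\n", frame, hit ? "sampled" : "missed");
        }
        // Frames that miss the wire make it flicker or break apart unless the
        // upscaler can reliably carry those samples over from earlier frames.
        return 0;
    }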



  • Linuxxx
    replied
    Originally posted by tenchrio View Post
    As XeSS seems to be vendor-independent, I can actually tolerate it. Considering XeSS is based on AI-trained algorithms, meaning that unlike FSR it first needs to be trained specifically for the game before it can be used, in theory that should mean XeSS can achieve better performance or quality than FSR. I'd argue that for games, FSR should be there on release, while XeSS can be added afterwards for an additional boost.


    DLSS seems to be the worst of them all. DLSS 1 and 2 were locked to the RTX series, but now DLSS 3 is locked to the RTX 4000 series. Why even implement it at that point? As a marketing gimmick for the 5 people that will play your game with an RTX 4000 card? Considering the rumors that Nvidia is deliberately keeping RTX 4000 prices high to sell off the remaining RTX 3000 stock, there is even less of an incentive to bother with DLSS 3.
    Except for the fact that DLSS 3 will work on both Turing and Ampere, with only the new optical-flow-based frame generation part missing, because only Lovelace ships the hardware unit necessary for adequate performance.

    So no, DLSS 3 will continue to work on all RTX GPUs; only the 4000 series will get the full benefit from it (a rough sketch of what the frame-generation part does is below).
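
    For readers unfamiliar with the frame-generation part: the idea is to synthesize an intermediate frame by warping rendered frames along a per-pixel motion (optical flow) field, which is why Nvidia ties it to Lovelace's faster optical-flow hardware. Below is a minimal, hypothetical forward-warp sketch, not DLSS 3's actual pipeline (which also has to resolve occlusions and blend both neighbouring frames):

    #include <cmath>
    #include <cstddef>
    #include <vector>

    struct Flow { float dx, dy; };   // per-pixel motion in pixels per frame

    // Hypothetical frame interpolation: push each pixel of 'frame' halfway along
    // its motion vector to approximate the image at time t + 0.5.
    std::vector<float> warpHalfway(const std::vector<float>& frame,
                                   const std::vector<Flow>& flow,
                                   std::size_t width, std::size_t height)
    {
        std::vector<float> mid(frame);   // fallback where no warped pixel lands
        for (std::size_t y = 0; y < height; ++y) {
            for (std::size_t x = 0; x < width; ++x) {
                const std::size_t i = y * width + x;
                const long tx = std::lround(x + 0.5f * flow[i].dx);
                const long ty = std::lround(y + 0.5f * flow[i].dy);
                if (tx >= 0 && ty >= 0 &&
                    tx < static_cast<long>(width) && ty < static_cast<long>(height))
                    mid[static_cast<std::size_t>(ty) * width + tx] = frame[i];   // splat at midpoint
            }
        }
        return mid;
    }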



  • skeevy420
    replied
    Originally posted by tenchrio View Post
    As XeSS seems to be vendor-independent, I can actually tolerate it. Considering XeSS is based on AI-trained algorithms, meaning that unlike FSR it first needs to be trained specifically for the game before it can be used, in theory that should mean XeSS can achieve better performance or quality than FSR. I'd argue that for games, FSR should be there on release, while XeSS can be added afterwards for an additional boost.


    DLSS seems to be the worst of them all. DLSS 1 and 2 were locked to the RTX series, but now DLSS 3 is locked to the RTX 4000 series. Why even implement it at that point? As a marketing gimmick for the 5 people that will play your game with an RTX 4000 card? Considering the rumors that Nvidia is deliberately keeping RTX 4000 prices high to sell off the remaining RTX 3000 stock, there is even less of an incentive to bother with DLSS 3.
    XeSS isn't much better than DLSS. From an end-user perspective it currently stands somewhere between FSR2 on one side and DLSS & FSR3 on the other: unlike FSR2 it isn't technically open source, and like FSR3 and DLSS it can use special hardware, but unlike with DLSS that hardware isn't a hard requirement for FSR3 and XeSS. Until we get XeSS and FSR3 benchmarks between comparable GPUs from Intel, NVIDIA, and AMD, we won't know how much of an impact AI-assist hardware will have. It may matter a lot. It may only matter with higher upscaler presets. We don't know yet.

    XeSS is implemented using open standards to ensure wide availability on many games and across a broad set of shipping hardware, from both Intel® and other GPU vendors.

    Additionally, the XeSS algorithm can leverage the DP4a and XMX hardware capabilities of Xe GPUs for better performance.
    I'm also not a fan of the phrase "using open standards". Using open standards ≠ being open source.

    Anyone can use open standards when developing proprietary products... and open source ones, for that matter -- that's basically why licenses like MIT and BSD exist.
    Last edited by skeevy420; 28 September 2022, 07:39 AM.
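
    On the DP4a capability mentioned in the quoted Intel text above: DP4a is a dot-product instruction that multiplies four packed 8-bit values against another four and accumulates the result into a 32-bit integer, which is what lets the XeSS network run as int8 inference on GPUs without XMX units. A scalar sketch of what one such operation computes (an illustration of the instruction's semantics, not Intel's implementation):

    #include <cstdint>
    #include <cstdio>

    // Scalar equivalent of one DP4a operation: a 4-element int8 dot product
    // accumulated into a 32-bit integer. GPUs expose this as a single instruction,
    // which is why quantised int8 network inference maps onto it so cheaply even
    // without dedicated matrix (XMX) units.
    int32_t dp4a(const int8_t a[4], const int8_t b[4], int32_t acc)
    {
        for (int i = 0; i < 4; ++i)
            acc += static_cast<int32_t>(a[i]) * static_cast<int32_t>(b[i]);
        return acc;
    }

    int main()
    {
        // Toy example: one output of a quantised network layer is just many of
        // these dot products chained together.
        const int8_t weights[4]     = { 12, -7, 33, 5 };
        const int8_t activations[4] = { 100, 25, -4, 90 };
        std::printf("%d\n", static_cast<int>(dp4a(weights, activations, 0)));
        return 0;
    }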

