AMD FidelityFX Super Resolution 2.0 Debuts


  • TNZfr
    replied
    Originally posted by Melcar View Post
    Setting my desktop to 125% zoom on a Wayland session also caps the resolution used by games to below my monitor's native. I always assumed it was a bug and not how things are supposed to work. Yeah, fps numbers are impressive but the drop in quality is noticeable.
    RSR is officially out in the latest Adrenalin (Windows drivers) ... and guess what? A 3072×1728 resolution appears in games' settings and is automatically managed by the RSR subsystem.



  • linuxgeex
    replied
    Originally posted by aufkrawall View Post
    Are you aware that video isn't undersampled by definition, and that TAAU for real-time graphics doesn't actually work like scaling but more like resampling? Just because there are some analogies to video processing doesn't mean it's comparable...
    I think you're mixing up XeSS with FSR 2.

    AMD has now released more info on FSR 2. It works (as I predicted) with post-render raster buffers plus a depth map and a motion map. It's explicitly after the render pipeline, but before the UI. It doesn't re-project pixels using the scene like TAAU.

    Intel's XeSS on the other hand does operate more like an ML-enhanced TAAU, and as such is between the scene render and post-effects filtering. It can have higher fidelity as a result (they advertise acceptable scaling factor up to 3x) but it doesn't give the same degree of FPS speedup that lighter scaling filters like FSR can achieve.
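
    To make the pipeline position described above concrete (upscaling from the rendered color buffer plus depth and motion data, after the scene render but before the UI), here is a deliberately crude Python/NumPy sketch of temporal upscaling. This is not AMD's algorithm: FSR 2.0 additionally uses jittered sampling, depth-based disocclusion rejection, and far more careful history blending. All names and conventions here are illustrative.

    ```python
    import numpy as np

    def temporal_upscale(color_lo, depth_lo, motion_lo, history_hi, scale=2, alpha=0.1):
        """Toy temporal upscaler (illustrative only).

        color_lo:   (h, w, 3) current low-res frame
        depth_lo:   (h, w) depth buffer -- unused in this toy; a real
                    upscaler uses it to reject disoccluded history samples
        motion_lo:  (h, w, 2) per-pixel (dy, dx) motion since the last frame
        history_hi: (h*scale, w*scale, 3) previous high-res output
        """
        h, w, _ = color_lo.shape
        H, W = h * scale, w * scale
        # Naive spatial upsample of the current frame (nearest neighbour).
        up = np.repeat(np.repeat(color_lo, scale, axis=0), scale, axis=1)
        # Reproject history: fetch each pixel from where it was last frame,
        # using the engine-supplied motion vectors.
        mv = np.repeat(np.repeat(motion_lo, scale, axis=0), scale, axis=1)
        ys, xs = np.mgrid[0:H, 0:W]
        prev_y = np.clip(ys - (mv[..., 0] * scale).astype(int), 0, H - 1)
        prev_x = np.clip(xs - (mv[..., 1] * scale).astype(int), 0, W - 1)
        reprojected = history_hi[prev_y, prev_x]
        # Exponential blend: mostly history, refreshed by the new frame.
        return alpha * up + (1 - alpha) * reprojected
    ```

    The key point survives even in the toy: the upscaler only ever sees 2D buffers handed over after the scene render, so it can run as one more post-process pass before the UI is composited on top.
    
    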
    Last edited by linuxgeex; 25 March 2022, 05:43 PM.



  • artivision
    replied
    Originally posted by aufkrawall View Post
    Are you aware that video isn't undersampled by definition, and that TAAU for real-time graphics doesn't actually work like scaling but more like resampling? Just because there are some analogies to video processing doesn't mean it's comparable...
    What are you talking about? Resampling of what? The term "sample" has a broad meaning. This is no different from the hackish way SVP4 uses motion vectors for a purpose other than their default one. Temporal scaling means that more 2D pixels can be added based on previous 2D frames. The current frame isn't re-evaluated against previous 3D data; there is no way to store 3D data. It isn't re-evaluated at all, beyond adding new pixels. Only sharpening does some re-evaluation, which is why AMD's CAS algorithm saved all those upscalers (Intel's included) from killer blurriness.
    Last edited by artivision; 24 March 2022, 06:11 PM.



  • guglovich
    replied
    I will wait for the vkBasalt update



  • brent
    replied
    The comparison with video upscaling is not very fitting. In video games you have much more data to work with, most importantly a depth buffer and accurate motion vectors. That motion data is far more accurate than motion compensation based on color data alone could ever be! And games can be tailored, to some degree, to produce output more suitable for temporal scalers, too.
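
    The accuracy gap can be made concrete: a color-only method has to *search* for each block's motion, while the engine simply knows it. Here is a small Python/NumPy sketch of exhaustive block matching, the codec-style approach; all names are illustrative, not any codec's actual API.

    ```python
    import numpy as np

    def block_match(prev, curr, by, bx, bs=4, radius=2):
        """Estimate one block's motion by exhaustive SAD search, the way
        color-only motion compensation (as in video codecs) must.
        Returns (dy, dx): the offset into `prev` where the block at
        (by, bx) in `curr` matches best."""
        block = curr[by:by + bs, bx:bx + bs]
        best, best_mv = np.inf, (0, 0)
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                y, x = by + dy, bx + dx
                if y < 0 or x < 0 or y + bs > prev.shape[0] or x + bs > prev.shape[1]:
                    continue
                # Sum of absolute differences between candidate and block.
                sad = np.abs(prev[y:y + bs, x:x + bs] - block).sum()
                if sad < best:
                    best, best_mv = sad, (dy, dx)
        return best_mv
    ```

    On a textured region this recovers the true shift, but on a flat, textureless region every candidate offset ties at zero error and the estimate is arbitrary. Engine-supplied motion vectors, computed from the actual geometry transforms, have no such ambiguity.
    
    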



  • yump
    replied
    Originally posted by linuxgeex View Post
    Sadly, if you watch the linked demo on YouTube, you can see there's a one-frame input lag on the FSR 2.0 side of the image, which equates to an extra 16 ms of input lag (at 60 fps) when you're playing games. That will be OK for RPGs, but it will be terrible for platformers, shmups, and FPS games.
    The videos aren't even frame synced, which you can see if you step through frame-by-frame (with the period key on YouTube). I don't think you can infer anything by frame counting here.



  • linuxgeex
    replied
    Originally posted by aufkrawall View Post
    Will do so after having seen it rendered in real time on my own screen. In the meantime, I expect it not to behave vastly differently from other TAAU implementations, as long as I don't see real evidence suggesting otherwise (which a YT video is not). In the end it is just another postprocessing effect in the pipeline with low render time.
    I agree that should be the case. They should be able to merge with the prior frame, so no rendering lag, just processing lag. They might need motion compensation to align portions of the image, which would require processing similar to the search steps of a video codec: coarse and fine-grained FFTs of the two fields they want to combine, so that related data can be merged to improve quality. Or maybe they can use some sort of geometry comparison to avoid that cost. I don't know. That would be cool, though.

    However, as presented, there's a one-frame lag. If that's not representative, then that's their error. I look forward to them correcting or improving it, one way or the other.



  • aufkrawall
    replied
    Are you aware that video isn't undersampled by definition, and that TAAU for real-time graphics doesn't actually work like scaling but more like resampling? Just because there are some analogies to video processing doesn't mean it's comparable...



  • artivision
    replied
    Originally posted by aufkrawall View Post
    Guess what FSR 2.0 will do. And of course DLSS 2.x also takes motion vectors into account; you can't do effective TSSAA without them without introducing monstrous ghosting artifacts. You can't re-invent the wheel without making it round, just the tweaking is different...
    Motion vectors are background tag data for a 2D image. They show where individual items on a 2D screen moved from their positions in a previous 2D frame. Reading the MPEG / SVP4 documents will help. That data is only useful once you start moving, when frames become inconsistent; the less that data is used, the better the upscaled image is, but that requires not moving at all.



  • aufkrawall
    replied
    Originally posted by linuxgeex View Post
    Have you observed something different? By all means, share your own findings.
    Will do so after having seen it rendered in real time on my own screen. In the meantime, I expect it not to behave vastly differently from other TAAU implementations, as long as I don't see real evidence suggesting otherwise (which a YT video is not). In the end it is just another postprocessing effect in the pipeline with low render time.

