
Intel Publishes Xe Super Sampling "XeSS" 1.0 SDK


  • #11
    As XeSS seems to be vendor independent, I can actually tolerate it. Considering XeSS is based on AI-trained algorithms, meaning that unlike FSR it first needs to be trained specifically for a game before it can be used, it should in theory mean that XeSS can achieve better performance or quality than FSR. I'd argue that for games, FSR should be there on release while XeSS can be added afterwards for an additional boost.


    DLSS seems to be the worst of them all. DLSS 1 and 2 were locked to the RTX series, but now DLSS 3 is locked to the RTX 4000 series. Why even implement it at that point? As a marketing gimmick for the 5 people that will play your game on an RTX 4000 card? Considering the rumors that Nvidia is deliberately keeping RTX 4000 prices high to sell off its stock of RTX 3000 cards, there is even less of an incentive to bother with DLSS 3.



    • #12
      Originally posted by jrch2k8 View Post
      Well, with that price tag I think XeSS will stay in the theoretical realm for a while. Really, 3060 performance for $320+ when the RX 6600 goes for around $230 new and delivers like 90% of the performance?

      The only reason it would make sense is if the Arc 770's ray tracing performance is somehow out of this world, and I mean better-than-Ampere good, to justify the risk when you can go with battle-tested Nvidia for a few bucks more, or even a tier up with the RX 6700 series for like $50 more.
      For starters, XeSS is hardware agnostic; it will run on AMD and Nvidia cards that fulfill the requirements.

      It also depends on what type of performance: in gaming, the RX 6600 XT beats the RTX 3060 on Linux.
      But when it comes to productivity like Blender, or anything that relies on HIP, AMD falls incredibly short.
      This isn't exclusive to Linux either, nor to ray tracing support; even in Blender's Eevee, AMD cards do abysmally. The only time AMD beats Nvidia is when the scene requires CPU compute, and even then the numbers don't exactly blow Nvidia out of the water; the margin is a lot smaller than Nvidia's lead when no CPU is involved.

      And it's not just Blender either; for video editing, the RX series also falls short.

      An argument can be made that this is mostly due to AMD's drivers, but it doesn't really feel like AMD is putting much effort into their productivity performance.
      Blender 3.0 was released with HIP on December 3rd, 2021, yet after more than half a year the RX 6800 XT still struggles to beat the RTX 3060 running OptiX (at least it beats the RTX 3050 now, but come on, you would expect it to beat at least the RTX 3060 Ti in Eevee with ease).
      And that was only on Windows; Linux didn't see HIP support until Blender 3.2, so Blender 3.0 and 3.1 were awkward if you had an AMD card on Linux.
      AMD might do better on pricing and open-source releases, but their graphics driver support is terrible and quite frankly always has been.

      Whether Intel will be better in this regard remains to be seen, but honestly I have more hope for Intel graphics cards right now than for AMD's.
      Blender 3.3 saw support for oneAPI on both Linux and Windows, so Intel already beats AMD there (although I guess the bar was set pretty low).
      If Intel can fill the niche of similar or better productivity and gaming performance than Nvidia at a lower price, then I don't care if AMD is even cheaper.
      Who knows, maybe with Intel around, AMD will finally start taking their productivity performance seriously.



      • #13
        Originally posted by tenchrio View Post
        As XeSS seems to be vendor independent, I can actually tolerate it. Considering XeSS is based on AI-trained algorithms, meaning that unlike FSR it first needs to be trained specifically for a game before it can be used, it should in theory mean that XeSS can achieve better performance or quality than FSR. I'd argue that for games, FSR should be there on release while XeSS can be added afterwards for an additional boost.


        DLSS seems to be the worst of them all. DLSS 1 and 2 were locked to the RTX series, but now DLSS 3 is locked to the RTX 4000 series. Why even implement it at that point? As a marketing gimmick for the 5 people that will play your game on an RTX 4000 card? Considering the rumors that Nvidia is deliberately keeping RTX 4000 prices high to sell off its stock of RTX 3000 cards, there is even less of an incentive to bother with DLSS 3.
        XeSS isn't much better than DLSS. From an end-user perspective it currently sits somewhere between FSR 2 on one side and DLSS and FSR 3 on the other: unlike FSR 2 it isn't technically open source, and like FSR 3 and DLSS it can use special hardware. Unlike with DLSS, though, that hardware isn't a hard requirement for FSR 3 and XeSS. Until we get XeSS and FSR 3 benchmarks between comparable GPUs from Intel, NVIDIA, and AMD, we won't know how much of an impact AI-acceleration hardware will have. It may matter a lot. It may only matter at higher upscaler presets. We don't know yet.

        XeSS is implemented using open standards to ensure wide availability on many games and across a broad set of shipping hardware, from both Intel® and other GPU vendors.

        Additionally, the XeSS algorithm can leverage the DP4a and XMX hardware capabilities of Xe GPUs for better performance.
        I'm also not a fan of the phrase "using open standards". Using open standards ≠ being open source.

        Anyone can use open standards when developing proprietary products... and open source too, for that matter -- that's basically why licenses like MIT and BSD exist.
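
        As for the DP4a capability Intel mentions above: DP4a is just a packed dot product, four signed 8-bit lanes multiplied pairwise and accumulated into a 32-bit integer in a single instruction, and it's the fallback path XeSS uses on GPUs without XMX units. A scalar C++ sketch of the semantics, purely as an illustration (not Intel's code):

        Code:
        #include <cstdint>

        // Scalar reference for DP4a semantics: treat each 32-bit word as four
        // packed signed 8-bit lanes, multiply pairwise, accumulate into acc.
        // Hardware does this in one instruction; XMX units go further and run
        // whole matrix tiles of such dot products at once.
        int32_t dp4a(uint32_t a, uint32_t b, int32_t acc) {
            for (int lane = 0; lane < 4; ++lane) {
                int8_t ai = static_cast<int8_t>(a >> (8 * lane));
                int8_t bi = static_cast<int8_t>(b >> (8 * lane));
                acc += static_cast<int32_t>(ai) * static_cast<int32_t>(bi);
            }
            return acc;
        }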
        Last edited by skeevy420; 28 September 2022, 07:39 AM.



        • #14
          Originally posted by tenchrio View Post
          As XeSS seems to be vendor independent, I can actually tolerate it. Considering XeSS is based on AI-trained algorithms, meaning that unlike FSR it first needs to be trained specifically for a game before it can be used, it should in theory mean that XeSS can achieve better performance or quality than FSR. I'd argue that for games, FSR should be there on release while XeSS can be added afterwards for an additional boost.


          DLSS seems to be the worst of them all. DLSS 1 and 2 were locked to the RTX series, but now DLSS 3 is locked to the RTX 4000 series. Why even implement it at that point? As a marketing gimmick for the 5 people that will play your game on an RTX 4000 card? Considering the rumors that Nvidia is deliberately keeping RTX 4000 prices high to sell off its stock of RTX 3000 cards, there is even less of an incentive to bother with DLSS 3.
          Except for the fact that DLSS 3 will work on both Turing and Ampere, with only the new optical-flow-based frame generation part missing, because only Lovelace ships the hardware unit necessary for adequate performance.

          So no, DLSS 3 will continue to work on all RTX GPUs, but only the 4000 series will benefit the most from it.
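
          For context, frame generation of this kind boils down to warping the two rendered frames toward each other along a per-pixel motion field and blending the results; the job of Lovelace's optical flow accelerator is producing that motion field quickly and accurately enough. A heavily simplified, single-channel C++ sketch of the idea, illustrative only (not Nvidia's actual algorithm):

          Code:
          #include <algorithm>
          #include <cmath>
          #include <vector>

          struct Flow { float dx, dy; };  // per-pixel motion from frame A to frame B

          // Clamped nearest-neighbor image lookup.
          float sample(const std::vector<float>& img, int w, int h, float x, float y) {
              int xi = std::clamp(static_cast<int>(std::lround(x)), 0, w - 1);
              int yi = std::clamp(static_cast<int>(std::lround(y)), 0, h - 1);
              return img[yi * w + xi];
          }

          // Estimate the frame halfway between A and B: pull A forward half a
          // step along the flow, pull B backward half a step, and average.
          std::vector<float> midFrame(const std::vector<float>& a,
                                      const std::vector<float>& b,
                                      const std::vector<Flow>& flow, int w, int h) {
              std::vector<float> out(w * h);
              for (int y = 0; y < h; ++y)
                  for (int x = 0; x < w; ++x) {
                      const Flow f = flow[y * w + x];
                      float fromA = sample(a, w, h, x - 0.5f * f.dx, y - 0.5f * f.dy);
                      float fromB = sample(b, w, h, x + 0.5f * f.dx, y + 0.5f * f.dy);
                      out[y * w + x] = 0.5f * (fromA + fromB);
                  }
              return out;
          }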



          • #15
            Originally posted by NeoMorpheus View Post
            Unless AMD somehow makes FSR proprietary to their hardware, I don't understand why anyone would waste time and effort on this and DLSS.

            Yes, yes, nvidiots can't accept such a concept and will claim that DLSS is so superior that FSR games look like a 2600 game compared to a game on a 5090 Super Ti @ 16K.

            Yes, those numbers aren't wrong; they're on purpose.

            I'm really disappointed in FOSS followers who turn a blind eye to Nvidia's lock-in tech.
            Practically every game engine already has a temporal upscaler similar to FSR 2.0; AMD just likes to reinvent the wheel and slap a catchy name on it. For example, Halo Infinite has a temporal upscaler behind the "Resolution Scale" option, but you'll still see idiots asking for FSR/DLSS. At least AI upscalers like DLSS/XeSS can be trained to fix problem cases like wires and gates, where AA and regular temporal upscalers (FSR, Unreal's TSR, etc.) fail to keep clarity and break the wires and gates apart.
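
            For anyone unfamiliar, the core trick behind all of these temporal upscalers is the same: jitter the camera sub-pixel each frame, reproject the previous frame's accumulated history using motion vectors, and blend in the new low-resolution sample. A stripped-down, single-channel C++ sketch of that accumulation step, as a generic illustration (not any engine's actual code; real implementations add history rejection, neighborhood clamping, etc.):

            Code:
            #include <algorithm>
            #include <utility>
            #include <vector>

            struct MotionVec { float dx, dy; };  // screen-space motion, in pixels

            // One accumulation step: reproject the history buffer along the motion
            // vectors, then exponentially blend in the current jittered sample.
            void accumulate(std::vector<float>& history,
                            const std::vector<float>& current,
                            const std::vector<MotionVec>& motion,
                            int w, int h, float alpha /* e.g. 0.1f */) {
                std::vector<float> next(w * h);
                for (int y = 0; y < h; ++y)
                    for (int x = 0; x < w; ++x) {
                        const MotionVec m = motion[y * w + x];
                        // Fetch where this pixel was last frame (nearest, clamped).
                        int px = std::clamp(static_cast<int>(x - m.dx + 0.5f), 0, w - 1);
                        int py = std::clamp(static_cast<int>(y - m.dy + 0.5f), 0, h - 1);
                        float prev = history[py * w + px];
                        next[y * w + x] = prev + alpha * (current[y * w + x] - prev);
                    }
                history = std::move(next);
            }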



            • #16
              Michael
              Intel is making a big effort and fully playing the open-source card, at least as far as their Arc Alchemist GPUs are concerned, and not only there.

              On the other hand, I would like to know whether, in addition to drivers, they intend to provide Intel Arc Control on Linux, and moreover as open source.

              That is the last point I would like answered, apart from the availability of these cards in Europe, and notably in France. Apart from the Arc 380, which has only been available for about ten days in Germany and on a single site, we feel a little alone here. I wanted to do like you and buy two... it's not a done deal.



              • #17
                Originally posted by WannaBeOCer View Post
                Practically every game engine already has a temporal upscaler similar to FSR 2.0; AMD just likes to reinvent the wheel and slap a catchy name on it. For example, Halo Infinite has a temporal upscaler behind the "Resolution Scale" option, but you'll still see idiots asking for FSR/DLSS. At least AI upscalers like DLSS/XeSS can be trained to fix problem cases like wires and gates, where AA and regular temporal upscalers (FSR, Unreal's TSR, etc.) fail to keep clarity and break the wires and gates apart.
                Except that games' TAA usually already looks bad at native res, and their TAAU accordingly looks even worse vs. FSR 2/DLSS 2. And there's no need for DL "to fix fences".



                • #18
                  Originally posted by aufkrawall View Post
                  Except that games' TAA usually already looks bad at native res, and their TAAU accordingly looks even worse vs. FSR 2/DLSS 2. And there's no need for DL "to fix fences".
                  TSR isn't TAAU… Game engine developers aren't just sitting on their asses. Even then, all of the normal temporal-based upscalers, FSR 2 and the engines' own implementations alike, look aliased and oversharpened. This is why neural networks like XeSS and DLSS will continue to look better, since they try to replicate a 16K sample of the frame.



                  • #19
                    Originally posted by WannaBeOCer View Post
                    TSR isn’t TAAU…
                    Yes, it is. But go ahead and explain why it wouldn't be.

                    Originally posted by WannaBeOCer View Post
                    Game engine developers aren't just sitting on their asses. Even then, all of the normal temporal-based upscalers, FSR 2 and the engines' own implementations alike, look aliased and oversharpened. This is why neural networks like XeSS and DLSS will continue to look better, since they try to replicate a 16K sample of the frame.
                    The TAA of most recent games still looks like trash vs. that found in the Anvil Next engine, e.g. Assassin's Creed Origins from 2017.
                    And FSR 2 doesn't produce an oversharpened look; its sharpening pass is optional, and without it the result is still very blurry vs. the 64x SSAA ground truth.
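
                    To be clear about what "optional" means here: the sharpening is a separate post-pass bolted on after the upscale, so it can simply be toggled off. A crude unsharp-mask sketch in C++ purely for illustration; FSR 2's actual RCAS pass is contrast-adaptive and more careful than this:

                    Code:
                    #include <algorithm>
                    #include <vector>

                    // Crude unsharp mask: push each pixel away from its 4-neighbor
                    // average. strength = 0 is a no-op, which is exactly why a
                    // sharpen pass like this can ship as an optional toggle.
                    std::vector<float> sharpen(const std::vector<float>& img,
                                               int w, int h, float strength) {
                        std::vector<float> out(img);
                        for (int y = 1; y < h - 1; ++y)
                            for (int x = 1; x < w - 1; ++x) {
                                float c = img[y * w + x];
                                float avg = 0.25f * (img[y * w + x - 1] + img[y * w + x + 1]
                                                   + img[(y - 1) * w + x] + img[(y + 1) * w + x]);
                                out[y * w + x] = std::clamp(c + strength * (c - avg), 0.0f, 1.0f);
                            }
                        return out;
                    }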



                    • #20
                      Originally posted by aufkrawall View Post
                      Yes, it is. But go ahead and explain why it wouldn't be.


                      The TAA of most recent games still looks like trash vs. that found in the Anvil Next engine, e.g. Assassin's Creed Origins from 2017.
                      And FSR 2 doesn't produce an oversharpened look; its sharpening pass is optional, and without it the result is still very blurry vs. the 64x SSAA ground truth.
                      Are you using TAAU as a generic term, or referring to Unreal Engine's old upscaler named TAAU? Because TSR looks superior to FSR 2.

                      FSR 2 does look oversharpened, which is why I don't use it. And when I do want to use it for FPS, for example in Godfall, fences get clipped in Performance mode. At the end of the day, neural networks will learn and improve quicker.

