NVIDIA Releasing DLSS Support For Vulkan API Games On Linux Tomorrow

  • #31
    An open implementation that runs on generic hardware (FSR) is unlikely to ever truly rival an implementation that is designed for specific hardware and utilises dedicated cores (DLSS).

    What is good, though, is that AMD/Intel users will now at least have _something_, because DLSS provides huge advantages to users.



    • #32
      Originally posted by scottishduck View Post
      An open implementation that runs on generic hardware (FSR) is unlikely to ever truly rival an implementation that is designed for specific hardware and utilises dedicated cores (DLSS).
      Just because it's an open implementation does not imply that it is designed for generic hardware.

      Just because it doesn't run on _tensor_ cores* does not imply that it is designed for generic hardware.

      * DLSS 2.0's initial release did not even use _tensor_ cores. Nvidia is still busy working on this new technology. It is far from perfect and far from complete.

      Originally posted by scottishduck View Post
      What is good, though, is that AMD/Intel users will now at least have _something_, because DLSS provides huge advantages to users.
      DLSS causes increased product cost and frustrated employees. FSR provides game studios with a solution that they can understand and support in the long term. Nvidia relies on popularity to force game studios to debug blackbox hardware. This is nothing new; Nvidia was known for this behavior long before DLSS existed.

      FSR has been in the works for a while. There are an insane number of game studios and game engines that have already implemented it. Just like DLSS, it will take a few years before it is good enough for the average gamer, but there's no proof yet for the claim that it won't ever truly rival DLSS. It can go either way...

      DLSS has a better chance of coming out on top, not because it can use dedicated cores, but because the gaming industry is driven by marketing, not by technical reviews. Understanding DLSS from an end-user's perspective is extremely complicated. There are very few people with influence who have the ability to review this type of technology. Entertainment consumers want to be entertained, and technical reviews are not entertaining. While this applies to most game-related technology, it is especially pronounced with DLSS.
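
      To make that concrete: FSR-style upscaling belongs to the class of spatial "upscale + sharpen" passes that any GPU (or even a CPU) can run. The sketch below is not FSR's actual algorithm (FSR 1.0 uses an edge-adaptive upscale plus a contrast-adaptive sharpener), just a naive NumPy stand-in to show that nothing in this class of technique needs dedicated ML hardware.

      Code:
import numpy as np

def bilinear_upscale(img, scale):
    """Upscale an HxWxC float image by `scale` using bilinear sampling."""
    h, w, _ = img.shape
    nh, nw = int(h * scale), int(w * scale)
    ys = np.linspace(0, h - 1, nh)
    xs = np.linspace(0, w - 1, nw)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None, None]   # vertical interpolation weights
    wx = (xs - x0)[None, :, None]   # horizontal interpolation weights
    top = img[y0][:, x0] * (1 - wx) + img[y0][:, x1] * wx
    bot = img[y1][:, x0] * (1 - wx) + img[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy

def sharpen(img, amount=0.25):
    """Cheap unsharp-mask style sharpening with a 4-neighbour Laplacian."""
    pad = np.pad(img, ((1, 1), (1, 1), (0, 0)), mode="edge")
    lap = 4 * img - pad[:-2, 1:-1] - pad[2:, 1:-1] - pad[1:-1, :-2] - pad[1:-1, 2:]
    return np.clip(img + amount * lap, 0.0, 1.0)

# Example: present a 720p frame at 1080p (1.5x per axis).
frame_720p = np.random.rand(720, 1280, 3).astype(np.float32)
frame_1080p = sharpen(bilinear_upscale(frame_720p, 1.5))
print(frame_1080p.shape)  # (1080, 1920, 3)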



      • #33
        Originally posted by Jabberwocky View Post
        * DLSS 2.0's initial release did not even use _tensor_ cores.
        It was (more or less officially) called DLSS "1.9" and it looked worse than "2.0" that succeeded it in Control.

        Originally posted by Jabberwocky View Post
        Nvidia is still busy working on this new technology. It is far from perfect and far from complete.
        It has recently improved a lot again regarding smearing artifacts with version 2.2, which was my biggest point of criticism (well, apart from being a vendor-locked blackbox).
        One shouldn't repeat AMD's mistake of underestimating it. It might (imho it will) turn out to be their kryptonite for the years to come...



        • #34
          Originally posted by Alexmitter View Post

          There is a much simpler explanation. He made a wrong buying decision and is simply unable to accept his mistake, forcing him to try to prove everyone else wrong, going so far as to argue against every technology in the Linux graphics stack that is not available with Nvidia's blobby driver. That, by the way, is not an uncommon human behavior.

          When I first came to Linux, I used a GTX 1060 for my first year, and back then I thought the driver was great. Oh boy, was I wrong. The same realization will come to anyone, but it may take a lot longer for bird.
          I had a 1060 Ti without any problems in the past, and I have a 4-year-old 1080 Ti without any problems. Sounds like you are either exaggerating or you were using a shit distribution (probably a combination of both)

          Originally posted by Jabberwocky View Post

          Just because it's an open implementation does not imply that it is designed for generic hardware.

          Just because it doesn't run on _tensor_ cores* does not imply that it is designed for generic hardware.

          * DLSS 2.0's initial release did not even use _tensor_ cores. Nvidia is still busy working on this new technology. It is far from perfect and far from complete.



          DLSS causes increased product cost and frustrated employees. FSR provides game studios with a solution that they can understand and support in the long term. Nvidia relies on popularity to force game studios to debug blackbox hardware. This is nothing new; Nvidia was known for this behavior long before DLSS existed.

          FSR has been in the works for a while. There are an insane number of game studios and game engines that have already implemented it. Just like DLSS, it will take a few years before it is good enough for the average gamer, but there's no proof yet for the claim that it won't ever truly rival DLSS. It can go either way...

          DLSS has a better chance of coming out on top, not because it can use dedicated cores, but because the gaming industry is driven by marketing, not by technical reviews. Understanding DLSS from an end-user's perspective is extremely complicated. There are very few people with influence who have the ability to review this type of technology. Entertainment consumers want to be entertained, and technical reviews are not entertaining. While this applies to most game-related technology, it is especially pronounced with DLSS.
          You just contradicted yourself: on the one hand you say that DLSS is better because it uses dedicated hardware (and also that the deep-learning approach will provide better results), yet in the last paragraph you claim that the decisions made by game development studios are driven by marketing and not by technical considerations (even though DLSS 2.0 is technically much better than FSR despite being closed).

          While it is true that Nvidia does "push" these solutions onto game developers, they are almost always better than the alternative. There is also a good argument to be made that AMD is open source only because of circumstance, since they have been the underdog for so long (one of the few ways underdogs can compete is by going open source). There is a big difference between a company that is "forced" into open source just to compete and a company that starts with open source values (e.g. System76).
          Last edited by mdedetrich; 22 June 2021, 08:45 AM.



          • #35
            I'm not sure why the article acts like it's up in the air whether these will be the 470 drivers. When DLSS for Linux was announced, Nvidia said that it would be coming in the 470 driver.



            • #36
              Amazing what a little competition can do.



              • #37
                Originally posted by mdedetrich View Post

                You just contradicted yourself: on the one hand you say that DLSS is better because it uses dedicated hardware (and also that the deep-learning approach will provide better results), yet in the last paragraph you claim that the decisions made by game development studios are driven by marketing and not by technical considerations (even though DLSS 2.0 is technically much better than FSR despite being closed).
                It's astonishing to see the conclusions that you've drawn. I don't know if I've made grammar mistakes, but I never said DLSS is better because it uses dedicated hardware, NOR did I say that the decisions made by game development studios are driven by marketing. Is it that important to prove me wrong?

                While it is true that Nvidia does "push" these solutions onto game developers, they are almost always better than the alternative. There is also a good argument to be made that AMD is open source only because of circumstance, since they have been the underdog for so long (one of the few ways underdogs can compete is by going open source). There is a big difference between a company that is "forced" into open source just to compete and a company that starts with open source values (e.g. System76).
                I can see why many people buy into this narrative these days; if I hadn't been aware of hardware development for the past few decades, I would likely feel the same way. I would not call it a good argument, however; it's very far from that.

                I'm not going to disagree that Nvidia's solutions are better if you ignore cost. On price to performance I would have a different view.

                There are a few examples of AMD leading with open technology, for example Mantle. Other technologies like FSR do seem like a panicked reaction, yet if you're willing to do your homework you should know about VSR, which was being worked on in 2015, before Nvidia poked at this type of resolution scaling. It sucked for demanding games but was useful for old games that did not support the high resolutions that are standard today.

                Nobody forced AMD to open up Mantle, and AMD did not gain anything from doing so. It's not like Nvidia, which was forced into supporting adaptive sync after failing to vendor-lock the gaming display market.

                I'm not saying that AMD will remain open-source friendly in the future if they manage to take the lead again in the GPU market; anything is possible. What I am saying is that AMD was open-source friendly before Nvidia took the lead in the GPU market around 2015/2016, at least with regard to high-end gaming cards (not price to performance). There's ample proof of that.



                • #38
                  Originally posted by aufkrawall View Post
                  It was (more or less officially) called DLSS "1.9" and it looked worse than "2.0" that succeeded it in Control.


                  It has recently improved a lot again regarding smearing artifacts with version 2.2, which was my biggest point of criticism (well, apart from being a vendor-locked blackbox).
                  One shouldn't repeat AMD's mistake of underestimating it. It might (imho it will) turn out to be their kryptonite for the years to come...
                  You're right. I used Wikipedia as my source...

                  In 2019, the videogame Control shipped with ray tracing and an improved version of DLSS, which didn't use the Tensor Cores

                  In April 2020, Nvidia advertised and shipped with driver version 445.75 an improved version of DLSS named DLSS 2.0, which was available for a few existing games including Control and Wolfenstein: Youngblood, and would be available later for upcoming games. This time Nvidia said that it used the Tensor Cores again, and that the AI did not need to be trained specifically on each game
                  -- https://en.wikipedia.org/wiki/Deep_l...elease_history

                  It appears that the information has changed over time. I'm assuming that this happened as Control was being developed.



                  • #39
                    Originally posted by Jabberwocky View Post

                    It's astonishing to see the conclusions that you've drawn. I don't know if I've made grammar mistakes, but I never said DLSS is better because it uses dedicated hardware, NOR did I say that the decisions made by game development studios are driven by marketing. Is it that important to prove me wrong?
                    The way you communicated it before, you were implying that FSR is better because it's easier to implement (and hence better). To me this is an odd definition of better; it's like saying a filesystem is better than a database for storing relational data atomically (using a filesystem may be easier, but ultimately a proper database like Postgres/SQLite is better).
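
                    To be concrete about what "atomically" buys you here, a minimal sketch using Python's built-in sqlite3 module (table and column names are made up for illustration): both related writes land together or not at all, which a pair of naive file writes cannot guarantee if the process dies in between.

                    Code:
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 0)")
conn.commit()

# The `with` block opens a transaction: it commits on success and rolls back
# on an exception, so both UPDATEs apply together or not at all.
with conn:
    conn.execute("UPDATE accounts SET balance = balance - 40 WHERE name = 'alice'")
    conn.execute("UPDATE accounts SET balance = balance + 40 WHERE name = 'bob'")

print(conn.execute("SELECT * FROM accounts ORDER BY name").fetchall())
# [('alice', 60), ('bob', 40)]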


                    Originally posted by Jabberwocky View Post
                    I can see why many people buy into this narrative these days; if I hadn't been aware of hardware development for the past few decades, I would likely feel the same way. I would not call it a good argument, however; it's very far from that.

                    I'm not going to disagree that Nvidia's solutions are better if you ignore cost. On price to performance I would have a different view.

                    There are a few examples of AMD leading with open technology, for example Mantle. Other technologies like FSR do seem like a panicked reaction, yet if you're willing to do your homework you should know about VSR, which was being worked on in 2015, before Nvidia poked at this type of resolution scaling. It sucked for demanding games but was useful for old games that did not support the high resolutions that are standard today.

                    Nobody forced AMD to open up Mantle, and AMD did not gain anything from doing so. It's not like Nvidia, which was forced into supporting adaptive sync after failing to vendor-lock the gaming display market.

                    I'm not saying that AMD will remain open-source friendly in the future if they manage to take the lead again in the GPU market; anything is possible. What I am saying is that AMD was open-source friendly before Nvidia took the lead in the GPU market around 2015/2016, at least with regard to high-end gaming cards (not price to performance). There's ample proof of that.
                    I'll put it this way: earlier on, AMD wasn't really that open to open source (pun intended). They didn't start as an open source company, and ironically Nvidia, up until the 2010s, was arguably doing more for open source overall than AMD (Nvidia actually bothered making drivers that worked decently on Linux, whereas for ATI/AMD it was bottom priority). I cannot say this definitively, but it wouldn't be out of the ordinary (there are plenty of companies doing this) if AMD started pushing/starting more open source solutions simply because it's one of the most effective ways to compete if you aren't the market leader (i.e. Nvidia forced their hand here).

                    Also, I don't really agree that AMD forced Nvidia to "open up". Nvidia did end up having to adopt some open standards such as Vulkan/FreeSync, but it's not like Nvidia ended up open sourcing their drivers. Even with open standards, it's not that black and white (for example, Vulkan is clearly better than OpenGL/DX11 and earlier in every conceivable way when we are talking about an API for game engines, so there are purely technical reasons for adopting something like Vulkan).

                    Something like G-Sync was definitely a flop, mainly because it acted more as a standard (i.e. it was fulfilling a role similar to VESA's) rather than being an actual feature.
                    Last edited by mdedetrich; 24 June 2021, 06:10 AM.



                    • #40
                      Originally posted by mdedetrich View Post

                      The way you communicated it before, you were implying that FSR is better because it's easier to implement (and hence better). To me this is an odd definition of better; it's like saying a filesystem is better than a database for storing relational data atomically (using a filesystem may be easier, but ultimately a proper database like Postgres/SQLite is better).
                      I could have been more descriptive and spent more time on my phrasing. I agree that just because something is simpler doesn't imply that it's better. It's still early days in terms of how much data we have on FSR, so I'll reserve my final judgement until there's more proof. My arguments are based on performance (limited data), product cost and vendor lock-in.

                      There are many people out there who swear by Apple and make the exact same argument as you do about Nvidia releasing superior solutions. With Apple, your iPhone, iPad, MacBook and Apple TV work really well together. The products are superior to any other solution available. It's easy and fast to move from one device to another, and you can stay more productive while using these products. The problem is that there's a cost to that, not only financial but in lifestyle. Apple decides what you should and should not be able to do. Most Apple users don't know what Vulkan is or why it's beneficial to have it on their systems. The same goes for AV1; the latest 4K Apple TV doesn't support AV1 hardware decoding. I have tried using Apple products in both commercial and private environments. They worked really well commercially for non-technical people, but in my opinion the pros don't outweigh the cons in the long term. Today the only Apple products that I own are an old iPhone and a MacBook Air that I used to publish apps on the iStore.

                      There are many Apple users who are technical enough to comprehend the big-picture impact on the industry of having to support multiple graphics APIs or codecs. However, they choose to stick their heads in the sand, because Apple's ethos is so powerful that it prevents its users from thinking logically about the impact.

                      DLSS is not important enough to spend this much time thinking about, but I decided to explain my perspective anyway, just to show you where my bias comes from. It may also help in understanding my perspective on other things, like proprietary drivers.

                      I'll put it this way: earlier on, AMD wasn't really that open to open source (pun intended). They didn't start as an open source company, and ironically Nvidia, up until the 2010s, was arguably doing more for open source overall than AMD (Nvidia actually bothered making drivers that worked decently on Linux, whereas for ATI/AMD it was bottom priority). I cannot say this definitively, but it wouldn't be out of the ordinary (there are plenty of companies doing this) if AMD started pushing/starting more open source solutions simply because it's one of the most effective ways to compete if you aren't the market leader (i.e. Nvidia forced their hand here).
                      Opinion: There was a time when Nvidia's Linux drivers were a lot better than ATI's. I mostly used Nvidia for gaming on Linux between ~2002 and ~2011. I distinctly remember struggling for days to get a GeForce 6600 GT to work on Ubuntu 4.10; it was very painful, but still easier than getting some of my friends' ATI cards to work. It was the dark ages of having to resort to manually compiling kernel modules in order to fix problems. I would sometimes take breaks by enabling the reverse-engineered "nv" driver to play SuperTux, just to relax a bit before I went back to debugging. Performance on Nvidia wasn't bad either, but that's subjective and depends on the application. World of Warcraft ran better in Wine than on Windows, for example.

                      Factual (with proof): This is where I suspect you are biased. AMD bought ATI in October 2006, and in September 2007 they started opening up hardware specifications according to https://en.wikipedia.org/wiki/Radeon...tation_release . By 2008 the R600 GPU instruction set guide had been released by AMD: https://web.archive.org/web/20090205..._Registers.pdf which shows AMD was pro open source from the beginning. Like I said in my previous comments, I don't know what the future holds; anything can happen. But the track record of AMD (and Intel too) is much better than Nvidia's with regard to doing more for open source, even at that early stage.

                      Opinion: Even here on Phoronix we have some reviews that talk about open source driver development in May 2009: https://www.phoronix.com/scan.php?pa..._hd4770&num=12 There are probably many more, but this was the first and only one that I opened. It appears that the drivers weren't working yet, but the development was underway. In May 2009 I still used a GeForce 9400 GT and upgraded to a 9600 GT later on. AMD remained ahead in some cases, or at the very least competitive, for many years after this. This is why I do not agree with your statement that AMD was "pushing/starting more open source solutions simply because it's one of the most effective ways to compete if you aren't the market leader". In my opinion AMD lost their competitiveness in gaming around 2014/2015*, with Maxwell starting to pull into the lead while everyone was stuck on 28 nm**. This was more than 5 years after AMD started to contribute to open source.

                      * I said 2015/2016 in my previous comment, but I'm changing that based on Windows performance tests, just to play devil's advocate against my own case.
                      ** Research showing how Nvidia's drivers impacted game performance over many years. It's an old video with bad audio but still relevant: https://www.youtube.com/watch?v=nIoZB-cnjc0 In case you think this is selection bias on my end: I did my own research over the years with the actual hardware that I had, and found that this person summarised my technical findings in the shortest, clearest way. It is not that I simply searched for the first thing that matched my perspective, like I did with the HD 4770 benchmark results (due to lack of time).

                      Also, I don't really agree that AMD forced Nvidia to "open up". Nvidia did end up having to adopt some open standards such as Vulkan/FreeSync, but it's not like Nvidia ended up open sourcing their drivers.
                      I did not mean that AMD forced Nvidia to open up, so I agree with you here. Nvidia opening up would mean they would lose too much profit from artificial market segmentation. It would not make sense for them to ever do that in the long run.

                      Even with open standards, it's not that black and white (for example, Vulkan is clearly better than OpenGL/DX11 and earlier in every conceivable way when we are talking about an API for game engines, so there are purely technical reasons for adopting something like Vulkan).
                      Sure, life is complicated. The paradox of choice is real 😆 . Some people will argue that OpenGL is better than Vulkan because it's higher level and easier for indie studios. If we look back a few years, we see the same thing with audio drivers/libraries. Is it better to run bytecode/wasm, or is glibc too much and you want to make your own syscalls? Similarly, we should be able to decide between FSR and DLSS; the problem that I see repeating over and over is blackbox magic, which prevents people, even technical people, from being able to make informed choices. The researchers who spend their own personal time investigating those systems rarely uncover enough to be considered anything other than conspiracy theorists.

                      DLSS could conquer FSR in the long run, but would that be because the fundamental technology is better, or because Nvidia actively trained the algorithm for each game that was published? It could be one or the other; we won't know. If it were the latter, then after FSR died out they could slowly stop the high investment overhead of training for each game, or just keep it at a minimum, and no one would be the wiser.

                      Something like G-Sync was definitely a flop, mainly because it acted more as a standard (i.e. it was fulfilling a role similar to VESA's) rather than being an actual feature.
                      Exactly! I agree with you 100%. Would we have known that it flopped because it tried to fill the role of VESA, if it weren't for the competition? I doubt it, but maybe I'm just too pessimistic.

