NVIDIA's Proprietary Driver Is Moving Closer With Kernel Mode-Setting


  • #21
    Originally posted by birdie View Post
    The hell froze over.
    Really?


    Originally posted by dungeon View Post
    So you can't say the fglrx driver is guilty in that case (nor expect AMD to somehow fix it), but those game porters are lazy because they just add the NVIDIA variable for those games... which is easy for them and good for NVIDIA driver users, but it doesn't mean every other driver is guilty because they do that.
    So what is keeping AMD from implementing their own variable that activates the same (threaded optimisations) for their driver?

    Comment


    • #22
      Originally posted by Licaon View Post

      So what is keeping AMD from implementing their own variable that activates the same (threaded optimisations) for their driver?
      Nothing and everything. But the answer to why they shouldn't do it is simple: engines should be properly multithreaded; it's not the driver's job to work around some (not all) of those that aren't.

      With that non-default "optimization" variable, the NVIDIA driver moves some draw calls out of the main thread, which helps games that aren't properly multithreaded, AFAIK... that is not an "optimization" but a hack that works for their driver, and it mostly helps only big chips too. On other drivers, moving exactly those *some* draw calls might behave entirely differently, and the game porters only tested the NVIDIA variable... So it helps just the NVIDIA driver; there is no guarantee it will work, behave fine, or gain the same performance on any other driver in the same games.
      Last edited by dungeon; 24 May 2015, 04:55 AM.
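      For reference, a minimal C sketch of the launcher-side gating described above. __GL_THREADED_OPTIMIZATIONS is the real NVIDIA environment variable; the glxinfo-based vendor probe and the ./game.bin path are illustrative assumptions, not what any particular port actually ships.

      #include <stdio.h>
      #include <stdlib.h>
      #include <string.h>
      #include <unistd.h>

      /* Crude vendor probe: grep the OpenGL vendor string out of glxinfo.
       * Assumes glxinfo is installed; a real launcher may detect this differently. */
      static int driver_is_nvidia(void)
      {
          FILE *p = popen("glxinfo 2>/dev/null | grep -m1 'OpenGL vendor'", "r");
          char line[256] = "";
          if (!p)
              return 0;
          if (!fgets(line, sizeof line, p))
              line[0] = '\0';
          pclose(p);
          return strstr(line, "NVIDIA") != NULL;
      }

      int main(int argc, char **argv)
      {
          (void)argc;
          if (driver_is_nvidia())
              /* Must be in the environment before the game loads libGL. */
              setenv("__GL_THREADED_OPTIMIZATIONS", "1", 1);
          execv("./game.bin", argv);  /* hypothetical game binary */
          perror("execv");
          return 1;
      }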

      Comment


      • #23
        Originally posted by dungeon View Post

        Nothing and everything. But the answer to why they shouldn't do it is simple: engines should be properly multithreaded; it's not the driver's job to work around some (not all) of those that aren't.

        With that non-default "optimization" variable, the NVIDIA driver moves some draw calls out of the main thread, which helps games that aren't properly multithreaded, AFAIK... that is not an "optimization" but a hack that works for their driver, and it mostly helps only big chips too. On other drivers, moving exactly those *some* draw calls might behave entirely differently, and the game porters only tested the NVIDIA variable... So it helps just the NVIDIA driver; there is no guarantee it will work, behave fine, or gain the same performance on any other driver in the same games.
        In fact, that is the idea behind Mantle/Vulkan, isn't it?

        Comment


        • #24
          Originally posted by dungeon View Post
          Nothing and everything. But the answer to why they shouldn't do it is simple: engines should be properly multithreaded; it's not the driver's job to work around some (not all) of those that aren't.
          Because this is the first and only time NVIDIA, or AMD for that matter, have modified their drivers to suit games, right?

          Oh wait, this happens all the time on Windows, on both sides: drivers are released at game launch with advertised optimisations for that game. And now, when we have something like 2 games that benefit from that one feature (BTW, Metro 2033/Last Light Redux have a performance LOSS with this optimisation, FYI), you say that it should not be done, because driver/game programmers are supposed to be gods: once they release the first driver/game version, that is the one and only version, and they'll amend only new card IDs and nothing more.

          Or maybe drivers should behave correctly only for ideal programs, and the games/applications used in the real (not perfect) world should just STFU or "be properly multithreaded"?

          And I do agree they should "be properly multithreaded", but I understand this will not be the case any time soon, hence any driver tricks that help me as a consumer are welcome.

          And no, this is not a GameWorks vs. AMD thing like we now have on Windows with The Witcher 3/Project CARS.
          Last edited by Licaon; 24 May 2015, 05:43 AM.

          Comment


          • #25
            Glad this conversation is open. I was just about to upgrade to an AMD M290X, hoping to get away from the horrible DE performance I have with my HD 5870M in pretty much any desktop environment. But now I'm worried that the RadeonSI-based M290 is going to be just as problematic. Does anyone here recommend I switch to an NVIDIA card instead? With this latest news, it sounds more interesting. I'm not interested in gaming performance, but desktop performance, browser, etc.
            PS: yeah, Firefox sucks for me right now; I thought that was just Firefox, but someone mentioned it's the driver...

            Comment


            • #26
              Originally posted by ciupenhauer View Post
              Does anyone here recommend I switch to an NVIDIA card instead? With this latest news, it sounds more interesting. I'm not interested in gaming performance, but desktop performance, browser, etc.
              If you are not interested in gaming, well, you should go for Intel or an AMD APU... those have better 2D performance than the aforementioned dedicated chips by nature, and the Intel driver has the most optimized 2D path.

              If you use the radeon driver, you might try glamor acceleration on that HD 5870M; it might now be better than EXA, which is the default for those chips.
              Last edited by dungeon; 24 May 2015, 06:34 AM.
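              If you want to try that, here is a minimal xorg.conf sketch for switching the radeon DDX to glamor; the AccelMethod option is documented in the radeon(4) man page, but check the man page for your driver version before relying on it.

              Section "Device"
                  Identifier "Radeon"
                  Driver     "radeon"
                  # Switch 2D acceleration from the EXA default to glamor
                  Option     "AccelMethod" "glamor"
              EndSection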

              Comment


              • #27
                Originally posted by dungeon View Post

                If you are not interested in gaming, well, you should go for Intel or an AMD APU... those have better 2D performance than the aforementioned dedicated chips by nature, and the Intel driver has the most optimized 2D path.

                If you use the radeon driver, you might try glamor acceleration on that HD 5870M; it might now be better than EXA, which is the default for those chips.
                I have a laptop with a dedicated GPU, so I don't really have those options.
                Believe it or not, glamor is still not as good as EXA for me. Overall performance is OK, but it turns my fan on for the simplest window resize; it's impossibly annoying.
                I recently switched on HyperZ, which also turns the fan on too much, but DE performance is indeed better. Nothing but the standard EXA is usable, and that one is kinda slow.

                So NVIDIA is not a better option?
                Last edited by ciupenhauer; 24 May 2015, 07:00 AM.

                Comment


                • #28
                  Nope, NVIDIA is awful for 2D... I guess people tend to switch and use Intel for 2D, but NVIDIA or AMD dedicated chips for 3D on laptops. The other way around: since you are not a gamer, a dedicated chip is not really needed.

                  For radeon, I guess you tried glamor on a newer distro; glamor from X Server 1.16, or better yet 1.17, should be the best... but I can't speak for 1.15, where the separate glamor module is awful in comparison.
                  Last edited by dungeon; 24 May 2015, 07:01 AM.

                  Comment


                  • #29
                    Originally posted by bridgman View Post

                    Ahh... 17-year-old cards. Yeah, I think it's safe to say we have stopped adding features for those.

                    Yeah, it's the same for all the vendors, unfortunately. Intel only has GL 3.3, NVIDIA only has OpenCL 1.2 vs. 2.0 for AMD, and we only have GL 4.4 instead of NVIDIA's GL 4.5 at the moment.
                    Unless you have a Quadro card, NVIDIA's driver doesn't even fully support OpenGL 3.0. They deliberately disable internal rendering formats above 8 bits per color on non-Quadro cards, yet GL_RGB10 and GL_RGB10_A2 have been required internal formats since OpenGL 3.0. So no OpenGL 3.0 for you.
                    Last edited by carewolf; 24 May 2015, 07:11 AM.
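                    That claim is easy to probe. Here is a minimal C sketch, assuming GLFW just for context creation (link with -lglfw -lGL): allocate a GL_RGB10_A2 texture and ask the driver how many red bits it actually stored; anything below 10 means the format was silently demoted.

                    #include <stdio.h>
                    #include <GLFW/glfw3.h>

                    int main(void)
                    {
                        if (!glfwInit())
                            return 1;
                        glfwWindowHint(GLFW_VISIBLE, GLFW_FALSE);  /* keep the window off screen */
                        GLFWwindow *win = glfwCreateWindow(64, 64, "rgb10a2-probe", NULL, NULL);
                        if (!win)
                            return 1;
                        glfwMakeContextCurrent(win);

                        GLuint tex;
                        glGenTextures(1, &tex);
                        glBindTexture(GL_TEXTURE_2D, tex);
                        /* GL_RGB10_A2 is a required sized internal format since OpenGL 3.0. */
                        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB10_A2, 64, 64, 0,
                                     GL_RGBA, GL_UNSIGNED_INT_2_10_10_10_REV, NULL);

                        GLint red_bits = 0;
                        glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_RED_SIZE, &red_bits);
                        printf("GL_RGB10_A2 red bits actually stored: %d (want 10)\n", red_bits);

                        glfwTerminate();
                        return 0;
                    }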

                    Comment


                    • #30
                      Originally posted by bridgman View Post

                      Ahh... 17-year-old cards. Yeah, I think it's safe to say we have stopped adding features for those.

                      Yeah, it's the same for all the vendors, unfortunately. Intel only has GL 3.3, NVIDIA only has OpenCL 1.2 vs. 2.0 for AMD, and we only have GL 4.4 instead of NVIDIA's GL 4.5 at the moment.

                      I think NVIDIA gained OpenCL 2.0 support a few releases ago, but it did not get much attention. Someone mentioned it on the forums a few days ago.

                      I really hope NVIDIA finishes their work to support Wayland before GNOME 3.18 is out, so that GNOME can merge any necessary changes to support NVIDIA in their Wayland compositor.

                      Comment
