NVIDIA/Radeon Windows 10 vs. Ubuntu Linux Relative Gaming Performance


  • #21
    Originally posted by schmidtbag View Post
    I'm not sure if I'd say these are better than comparing raw frame rates, but these tests absolutely put this in an interesting perspective that we haven't really seen before. What I find especially interesting is how AMD's hardware performs roughly the same (proportionately) as Nvidia's. I don't recall seeing any graphs that made this so clear.

    That being said, it makes me wonder - what if Linux is doing something wrong (besides the obvious outdated parts of X11)? If both AMD and Nvidia hardware suffer roughly the same performance losses, then from one perspective you could say that there is something Linux does less efficiently that the drivers cannot compensate for.
    Yeah, it's called incomplete ports. The fact that people like you and even the porting companies themselves have come to expect driver developers to complete their ports is exactly the problem.



    • #22
      Originally posted by gamerk2 View Post

      It's the scheduler. Here's how Windows and Linux handle thread scheduling:

      Windows:

      The Highest Priority Runnable Thread(s) ALWAYS run, without exception.

      Now, Windows does a lot to manage thread priorities to ensure every thread eventually runs. For example: Running threads get their priority lowered, and waiting threads get their priority bumped. Fullscreen applications get a thread priority boost. And so on.

      Also, and this is key: Windows does not care what CPU core a thread gets assigned to. Windows doesn't care that your bumped thread was running on CPU 1; if CPU 3 has the lowest priority thread, your thread goes on CPU 3. This ensures the highest priority threads ALWAYS run.

      For a singular app that requires a large amount of CPU time, this approach maximizes performance. That application will grab a disproportionate amount of CPU runtime and achieve the maximum possible performance. The downside is that the rest of the system grinds to a halt, since other threads take longer than normal to get executed. In essence, you're trading Latency for Performance.

      Linux:

      Linux schedules threads on per-core run queues. Threads are pre-allocated to cores in a way that tries to achieve load balancing, so all threads (in theory) have a chance to execute over a certain time span.

      This approach works great from a latency perspective, as all threads get an equal share of CPU time. But say you are running a game that has strict 16 ms timing deadlines; do you really want to bump one of those important threads 25% of the time so you can update some low-priority background app? Not really, but hey, your desktop clock is accurate to ~2 ms. You're trading Performance for Latency.


      Make an "unfair" scheduler that biases toward singular applications, get rid of per-core run queues, and performance will fall right in line with Windows.
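
      As a rough illustration of the knobs the quoted post is describing (a simplified sketch for this thread, not code from any game; boost_render_thread is a made-up name), this is how an application can explicitly ask each OS to favour a latency-critical thread:

      Code:
      #ifdef _WIN32
      #include <windows.h>

      /* Ask the Windows scheduler to favour us: the highest-priority
         runnable thread always runs, on whichever core is free. */
      static void boost_render_thread(void)
      {
          SetPriorityClass(GetCurrentProcess(), HIGH_PRIORITY_CLASS);
          SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_HIGHEST);
      }
      #else
      #include <sched.h>
      #include <sys/resource.h>
      #include <sys/time.h>

      /* On Linux the default is fair (CFS) scheduling across per-core run
         queues, so a process has to opt in to favouritism explicitly. */
      static void boost_render_thread(void)
      {
          /* Lower the nice value (needs CAP_SYS_NICE or a matching rlimit). */
          setpriority(PRIO_PROCESS, 0, -10);

          /* Or leave CFS entirely for a real-time class; use with care, as
             a runaway SCHED_FIFO thread can starve the rest of the system. */
          struct sched_param sp = { .sched_priority = 10 };
          sched_setscheduler(0, SCHED_FIFO, &sp);
      }
      #endif

      Neither call changes how a game was ported, but it shows where the two schedulers put the burden: Windows boosts aggressively by default, while Linux expects the application (or the user, via nice/chrt) to ask for special treatment.
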
      That's not entirely correct. The reason per-core run queues are so important, even for games, is that not every "core" is a real processor. Windows doesn't give a tiny shit if it throws the most CPU-intensive load on an SMT core where a different thread already has that pipeline fully maxed out. It also doesn't give a crap whether that thread gets put on a CMT core where a different thread has the integrated NB (northbridge) maxed out.

      In all likelihood, if you run a multithreaded load on Windows you will get lower performance than on Linux, because it will throw heavy loads onto SMT cores, it will let CMT cores thrash the cache, and so on and so forth. If all you ever do is run a single instance of SuperPi, then maybe you could be correct in that one scenario, but as soon as you try to run two instances you automatically become wrong.
      Last edited by duby229; 21 February 2017, 02:01 PM.
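
      To illustrate the SMT/CMT topology point above, here is a rough Linux-only sketch (my example, with a single-socket simplification; the sysfs path is real, pin_to_physical_cores is a made-up name) that keeps one logical CPU per physical core_id and pins the calling thread to that set, so two heavy threads don't end up sharing the same core's pipeline:

      Code:
      #define _GNU_SOURCE
      #include <pthread.h>
      #include <sched.h>
      #include <stdio.h>
      #include <unistd.h>

      static void pin_to_physical_cores(void)
      {
          cpu_set_t set;
          CPU_ZERO(&set);

          int seen[256];            /* core_ids already claimed */
          int nseen = 0;
          long ncpu = sysconf(_SC_NPROCESSORS_ONLN);

          for (long cpu = 0; cpu < ncpu; cpu++) {
              char path[128];
              snprintf(path, sizeof(path),
                       "/sys/devices/system/cpu/cpu%ld/topology/core_id", cpu);
              FILE *f = fopen(path, "r");
              if (!f)
                  continue;
              int core_id = -1;
              if (fscanf(f, "%d", &core_id) != 1)
                  core_id = -1;
              fclose(f);

              int dup = 0;          /* keep only the first sibling per core */
              for (int i = 0; i < nseen; i++)
                  if (seen[i] == core_id)
                      dup = 1;
              if (!dup && core_id >= 0 && nseen < 256) {
                  seen[nseen++] = core_id;
                  CPU_SET(cpu, &set);
              }
          }

          /* Restrict the calling thread to one logical CPU per physical core. */
          pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
      }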



      • #23
        Originally posted by dungeon View Post
        Yup, I think these are the kind of perf comparison articles people want, plus ideally combined with videos like this and it would be superb

        (clipped youtube video)
        I'm a little surprised to notice that Windows, while it has a higher framerate, has texture loading issues the Linux version is avoiding. If you watch the video, you'll see that sometimes Linux has the high quality textures loaded one or even two entire seconds before the textures on Windows are in. Linux did appear to lose some smoothness though, with stutter beyond the basic framerate. Maybe an X issue there.

        I earnestly don't know which I'd prefer. The textures would drive me insane; seeing things "pop" in so obviously like that is almost a deal-breaker on its own. I'd almost rather have consistent visuals at a lower framerate than inconsistent visuals at a higher framerate... Especially if it can cross that 60 fps average, I don't need to go higher, and at 55 fps for high quality... it's close.
        Last edited by Kver; 21 February 2017, 02:20 PM.



        • #24
          Actually, I do wonder why some people are surprised by the performance regressions under Linux. Almost none of the games above were designed for Linux/OpenGL in the first place. Because of that, there's almost no chance of similar or better performance under Linux with OpenGL. OpenGL has more overhead than DirectX. That's not a new fact.


          Vulkan could be the game changer. If you have Vulkan drivers that aren't ready yet (like the current situation) and no Vulkan-optimised games, there's still no chance. But with a clean Vulkan-like codebase it may be possible to outperform DirectX - games might at least have the same performance, if not a better one. You'll have extra overhead for the translation layer but less overhead for accessing the hardware.
          Last edited by cRaZy-bisCuiT; 21 February 2017, 02:54 PM.



          • #25
            Originally posted by duby229 View Post
            Yeah, it's called incomplete ports. The fact that people like you and even the porting companies themselves have come to expect driver developers to complete their ports is exactly the problem.
            Yes, we get it - you (and others) can stop saying that. Everything should always be designed with Linux in mind, everything should be fully open source regardless of economic complications, and the driver devs deserve to just sit back and never optimize a single line of code ever again. As long as I keep telling people these things, they will be true, right? RIGHT? Everything is 100% up to game devs to make sure it runs flawlessly, right?

            You are so caught up in your own tirades that even though I explicitly said drivers may not be the issue, all you interpreted was "hurr durr driver devs need to pick up the slack".
            Last edited by schmidtbag; 21 February 2017, 02:45 PM.



            • #26
              Originally posted by elbuglione View Post
              Popular 1080p resolution tests???

              Popular games tests??? (DOTA 2, CS:GO, Team Fortress 2, ARK, etc...)
              Probably over 50% of all those games are in my collection.

              With regards to DOTA 2 and CS:GO, they're free games - just download them and see for yourself. It would be interesting to see ARK, but even without it, it's very interesting reading.

              Please refrain from acting like a maggot.
              Michael usually works 7 days a week and gave you something NO OTHER SITE ONLINE GIVES YOU.

              I'd love to throw in some tips, but I am broke.



              • #27
                Originally posted by duby229 View Post

                That's not entirely correct. The reason per-core run queues are so important, even for games, is that not every "core" is a real processor. Windows doesn't give a tiny shit if it throws the most CPU-intensive load on an SMT core where a different thread already has that pipeline fully maxed out. It also doesn't give a crap whether that thread gets put on a CMT core where a different thread has the integrated NB (northbridge) maxed out.

                In all likelihood, if you run a multithreaded load on Windows you will get lower performance than on Linux, because it will throw heavy loads onto SMT cores, it will let CMT cores thrash the cache, and so on and so forth. If all you ever do is run a single instance of SuperPi, then maybe you could be correct in that one scenario, but as soon as you try to run two instances you automatically become wrong.
                Windows handles this case *better* as of Vista; Windows at least checks the CPUID flags to determine if the processor uses HTT, and if so, tries to put kernel threads on HTT cores to avoid bumping user-mode threads. Granted, this doesn't always work well, but it's handled a lot better than it was in the Pentium 4 days [where Windows WOULD act in exactly the way you describe].

                This is partly why AMD's CMT stank out of the box; Windows didn't see a CPUID flag for HTT, so it treated all the cores in AMD's CPUs equally. As we now know, there's about a 20-25% performance loss as part of CMT, which ate into performance at launch. This was eventually mitigated via a kernel patch, which essentially treated Bulldozer and its descendants like Hyperthreaded CPUs for the purposes of thread scheduling.
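
                For reference, the flag being described is CPUID leaf 1, EDX bit 28. A minimal GCC/Clang sketch (x86 only, my example) to read it is below - note the bit only means the package may report more than one logical processor, not that SMT is actually present, which is part of why topology-blind scheduling can misfire:

                Code:
                #include <cpuid.h>   /* GCC/Clang wrapper around the CPUID instruction */
                #include <stdio.h>

                int main(void)
                {
                    unsigned int eax, ebx, ecx, edx;

                    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx)) {
                        puts("CPUID leaf 1 not supported");
                        return 1;
                    }

                    int htt = (edx >> 28) & 1;        /* HTT flag                */
                    int logical = (ebx >> 16) & 0xff; /* addressable IDs per pkg */

                    printf("HTT: %d, logical processor IDs per package: %d\n",
                           htt, logical);
                    return 0;
                }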



                • #28
                  Originally posted by schmidtbag View Post
                  That being said, it makes me wonder - what if Linux is doing something wrong (besides the obvious outdated parts of X11)? If both AMD and Nvidia hardware suffer roughly the same performance losses, then from one perspective you could say that there is something Linux does less efficiently that the drivers cannot compensate for.
                  It's the other way around. The games are ports; the performance loss is because the games were designed with DX and Windows in mind.



                  • #29
                    Originally posted by schmidtbag View Post
                    Yes, we get it - you (and others) can stop saying that. Everything should always be designed with Linux in mind
                    Sorry, but ports are ports. Even with consoles <-> Windows, people are always amazed that you need beefy PCs to run Windows games that run pretty decently on consoles with pretty much shit hardware.

                    It's a fact of life: if you take something that was designed to work on one platform and try to adapt it to something different, you'll get crappier performance than if you had targeted the second platform (or both) in the first place.



                    • #30
                      Originally posted by Kver View Post
                      I'm a little surprised to notice that Windows, while it has a higher framerate, has texture loading issues the Linux version is avoiding.
                      Yeah, we can always choose to advertise where we are better... like in that Hitman example, where the Windows version uses both more RAM and VRAM and also shows these high spikes... I laughed at those the most

                      (clipped image)
                      Just by looking at this I can guess about +7% on average at the end, just because of these spikes that no one cares about
                      Last edited by dungeon; 21 February 2017, 03:12 PM.

