NVIDIA/Radeon Windows 10 vs. Ubuntu Linux Relative Gaming Performance

    Phoronix: NVIDIA/Radeon Windows 10 vs. Ubuntu Linux Relative Gaming Performance

    Last week I published some Windows 10 vs. Ubuntu Linux Radeon benchmarks and Windows vs. Linux NVIDIA Pascal tests. Those results were published separately, while for this article the AMD and NVIDIA numbers are merged together and normalized to get a look at the relative Windows vs. Linux gaming performance.


  • #2
    Very interesting comparison. IMO much better than comparing raw frame rates of different hardware on different platforms.



    • #3
      Popular 1080p resolution tests???

      Popular games tests??? (DOTA 2, CS:GO, Team Fortress 2, ARK, etc...)



      • #4
        Originally posted by Tuxee View Post
        Very interesting comparison. IMO much better than comparing raw frame rates of different hardware on different platforms.
        Agreed, and having modern games was really nice.



        • #5
          This is a great comparison, and very useful for people deciding whether they would be able to switch to Linux or not.



          • #6
            I'm not sure if I'd say these are better than comparing raw frame rates, but these tests absolutely put this in an interesting perspective that we haven't really seen before. What I find especially interesting is how AMD's hardware performs roughly the same (proportionately) as Nvidia's. I don't recall seeing any graphs that made this so clear.

            That being said, it gets me to wonder - what if Linux may be doing something wrong (besides the obvious outdated parts of X11)? If both AMD and Nvidia hardware suffer roughly the same performance losses, in one perspective, you could say that there is something Linux does less efficiently that the drivers cannot compensate for.



            • #7
              Originally posted by schmidtbag View Post
              I'm not sure if I'd say these are better than comparing raw frame rates, but these tests absolutely put this in an interesting perspective that we haven't really seen before. What I find especially interesting is how AMD's hardware performs roughly the same (proportionately) as Nvidia's. I don't recall seeing any graphs that made this so clear.

              That being said, it gets me to wonder - what if Linux may be doing something wrong (besides the obvious outdated parts of X11)? If both AMD and Nvidia hardware suffer roughly the same performance losses, in one perspective, you could say that there is something Linux does less efficiently that the drivers cannot compensate for.
              +1000

              Also, it seems to be rather an API comparison than an OS/kernel fight. Dx11/12 seem better (and/or more optimized) than OpenGL.



              • #8
                Very interesting article. It clearly shows that the quality of the Linux ports is not as good as it could be.
                Rob



                • #9
                  Are the drivers bad, or is this Linux's fault? NVIDIA especially should be delivering similar driver blobs to both platforms.
                  Or is the fault to be found in the Linux game ports?



                  • #10
                    Originally posted by schmidtbag View Post
                    I'm not sure if I'd say these are better than comparing raw frame rates, but these tests absolutely put this in an interesting perspective that we haven't really seen before. What I find especially interesting is how AMD's hardware performs roughly the same (proportionately) as Nvidia's. I don't recall seeing any graphs that made this so clear.

                    That being said, it gets me to wonder - what if Linux may be doing something wrong (besides the obvious outdated parts of X11)? If both AMD and Nvidia hardware suffer roughly the same performance losses, in one perspective, you could say that there is something Linux does less efficiently that the drivers cannot compensate for.
                    It's the scheduler. Here's how Windows and Linux handle thread scheduling:

                    Windows:

                    The Highest Priority Runnable Thread(s) ALWAYS run, without exception.

                    Now, Windows does a lot to manage thread priorities to ensure every thread eventually runs. For example: Running threads get their priority lowered, and waiting threads get their priority bumped. Fullscreen applications get a thread priority boost. And so on.

                    Also, and this is key: Windows does not care which CPU core a thread gets assigned to. Windows doesn't care that your bumped thread was running on CPU 1; if CPU 3 has the lowest-priority thread, your thread goes on CPU 3. This ensures the highest-priority threads ALWAYS run.

                    For a singular app that requires a large amount of CPU time, this approach maximizes performance. That application will grab a disproportionate amount of CPU runtime and achieve the maximum possible performance. The downside is that the rest of the system grinds to a halt, since other threads wait longer than normal to get executed. In essence, you're trading Latency for Performance.

                    Linux:

                    Linux schedules threads on per-core run queues. Threads are pre-allocated to cores in such a way as to try to achieve load balancing, so all threads (in theory) have a chance to execute over a certain time span.

                    This approach works great from a latency perspective, as all threads get an equal share of CPU time. But suppose you are running a game that has strict 16 ms timing deadlines; do you really want to bump one of those important threads 25% of the time so you can update some low-priority background app? Not really, but hey, your desktop clock is accurate to ~2 ms. You're trading Performance for Latency.


                    Make an "unfair" scheduler that biases performance toward singular applications, get rid of per-core run queues, and performance will fall right in line with Windows.
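
                    The tradeoff above can be sketched with a toy simulation. To be clear, this is my own deliberately oversimplified model (invented core counts, thread names, and tick counts; no priority aging, preemption cost, or migration penalty), not how either kernel is actually implemented:

                    ```python
                    # Toy comparison of global-priority scheduling ("Windows-like")
                    # vs. per-core fair run queues ("Linux-like"). All numbers are
                    # made up for illustration.

                    CORES = 4
                    TICKS = 1000

                    def windows_like(threads):
                        """Every tick, the CORES highest-priority runnable threads run,
                        regardless of which core they ran on before. (Real Windows also
                        ages priorities so starved threads eventually get CPU time.)"""
                        runtime = {name: 0 for name, _ in threads}
                        winners = sorted(threads, key=lambda t: -t[1])[:CORES]
                        for _ in range(TICKS):
                            for name, _prio in winners:
                                runtime[name] += 1
                        return runtime

                    def linux_like(threads):
                        """Threads are pre-assigned to per-core run queues for load
                        balance, and each core time-slices fairly among its own queue,
                        ignoring priority."""
                        runtime = {name: 0 for name, _ in threads}
                        queues = [[] for _ in range(CORES)]
                        for i, (name, _prio) in enumerate(threads):
                            queues[i % CORES].append(name)      # naive placement
                        for tick in range(TICKS):
                            for q in queues:
                                runtime[q[tick % len(q)]] += 1  # fair slice per core
                        return runtime

                    # One demanding "game" thread plus eight low-priority background threads.
                    threads = [("game", 10)] + [(f"bg{i}", 1) for i in range(8)]
                    print("global-priority game ticks:", windows_like(threads)["game"])  # 1000 of 1000
                    print("per-core-queue game ticks: ", linux_like(threads)["game"])    # 334 of 1000
                    ```

                    Under the global-priority policy the game thread runs every tick; under per-core fair queues it shares its core with two background threads and gets only a third of the ticks, even though it has by far the highest priority.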

