NVIDIA/Radeon Windows 10 vs. Ubuntu Linux Relative Gaming Performance


  • #11
    Originally posted by schmidtbag View Post
    I'm not sure if I'd say these are better than comparing raw frame rates, but these tests absolutely put this in an interesting perspective that we haven't really seen before. What I find especially interesting is how AMD's hardware performs roughly the same (proportionately) as Nvidia's. I don't recall seeing any graphs that made this so clear.

    That being said, it makes me wonder: what if Linux is doing something wrong (besides the obvious outdated parts of X11)? If both AMD and Nvidia hardware suffer roughly the same performance losses, then from one perspective you could say that there is something Linux does less efficiently that the drivers cannot compensate for.
    To reach a valid conclusion, we'd need a clear separation between native software and games that employ a translation layer. You can infer it from the tests, but you have to know beforehand what's native and what isn't.
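
    As a rough sketch of what that split could look like, here is some Python over a hypothetical results table; the games, port-type labels, and Linux/Windows ratios are all made up for illustration, not taken from the article:

    Code:
    from collections import defaultdict

    # (game, port type, Linux FPS / Windows FPS) -- hypothetical rows.
    rows = [
        ("Native Title A",     "native",     0.95),
        ("Native Title B",     "native",     0.90),
        ("Translated Title A", "translated", 0.62),
        ("Translated Title B", "translated", 0.58),
    ]

    by_type = defaultdict(list)
    for game, port_type, ratio in rows:
        by_type[port_type].append(ratio)

    # Average relative performance per port type; a large gap between
    # the two groups would point at the translation layer rather than
    # at Linux itself.
    for port_type, ratios in sorted(by_type.items()):
        print(port_type, round(sum(ratios) / len(ratios), 2))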



  • #12
    Well, the ports are mostly enjoyable, meaning they let you enjoy the game with a few downsides compared to native titles: consuming more RAM, fewer frames, maybe more CPU usage. But in the end you will enjoy the game.
    The good thing is that both video card manufacturers lose the same amount of performance; to me that is more than enough. That said, if I want better than a port, I will prefer native games, at least for recent titles.



  • #13
    Originally posted by adakite View Post

    +1000

    Also, it seems to be rather an API comparison than an OS/kernel fight. DX11/12 seem better (and/or more optimized) than OpenGL.
    DirectX has the advantage of being tied directly to the OS, so under the hood there's much tighter integration, which translates directly into performance. There's also the issue that OpenGL is a dog to program in, which is largely why it's being abandoned. Finally, ATI/AMD's OpenGL drivers have been crap for literally decades now, which doesn't help OpenGL's reputation any.

    Vulkan will be "better" by virtue of not being OpenGL, but DirectX's advantages, plus Windows' dominance in gaming, will likely lead to the same dynamic: games will be developed for DirectX and ported as necessary to Vulkan.



  • #14
    Originally posted by schmidtbag View Post
    That being said, it makes me wonder: what if Linux is doing something wrong (besides the obvious outdated parts of X11)? If both AMD and Nvidia hardware suffer roughly the same performance losses, then from one perspective you could say that there is something Linux does less efficiently that the drivers cannot compensate for.
    I don't know about this. To be fair, the only comparison that would confirm it is the same game, on the same API, with drivers of a similar vintage. In the displayed graphs, only The Talos Principle on Vulkan meets all of those criteria, and in that case both OSes perform closely.



  • #15
    It isn't so hard to understand: Linux ports mostly suck.

    @Michael
    Great article, I was going to ask for exactly this kind of comparison, thanks!
    ## VGA ##
    AMD: X1950XTX, HD3870, HD5870
    Intel: GMA45, HD3000 (Core i5 2500K)



  • #16
    As a member of the PC Windows master race, I can tell you that results above 0.6 of the Windows result are good enough. And what about stability and multitasking gaming compatibility? Those are bigger issues for me with Linux gaming.



  • #17
    Originally posted by gamerk2 View Post

    It's the scheduler. Here's how Windows and Linux handle thread scheduling:

    Windows:

    The highest-priority runnable thread(s) ALWAYS run, without exception.

    Now, Windows does a lot to manage thread priorities to ensure every thread eventually runs. For example: running threads get their priority lowered, and waiting threads get their priority bumped. Fullscreen applications get a thread priority boost. And so on.

    Also, and this is key: Windows does not care what CPU core a thread gets assigned to. Windows doesn't care that your bumped thread was running on CPU 1; if CPU 3 has the lowest-priority thread, your thread goes on CPU 3. This ensures the highest-priority threads ALWAYS run.

    For a singular app that requires a large amount of CPU time, this approach maximizes performance. That application will grab a disproportionate amount of CPU runtime and achieve maximum possible performance. The downside is that the rest of the system grinds to a halt, due to a longer-than-normal wait for other threads to get executed. In essence, you're trading latency for performance.

    Linux:

    Linux schedules threads on per-core run-queues. Threads are pre-allocated to cores in such a way as to try and achieve load balancing, so all threads (in theory) have a chance to execute over a certain time span.

    This approach works great from a latency perspective, as all threads get an equal share of CPU time. But suppose you are running a game that has strict 16 ms timing deadlines; do you really want to bump one of those important threads 25% of the time so you can update some low-priority background app? Not really, but hey, your desktop clock is accurate to ~2 ms. You're trading performance for latency.

    Make an "unfair" scheduler that biases toward singular applications, get rid of per-core run-queues, and performance will fall right in line with Windows.
    This is one of the reasons why I prefer to play games on Linux; the latency seems much better. On the topic, I really like this benchmark: it puts things into perspective, and I am positively surprised by the results, tbh.
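
    For illustration, here's a minimal Python sketch (Linux-only, and it needs root or CAP_SYS_NICE) of what "biasing" the scheduler toward one application can look like, using the standard-library os scheduling calls; the PID and core set are made-up placeholders, not anything from the article:

    Code:
    import os

    GAME_PID = 12345           # hypothetical PID of the game process
    GAME_CORES = {0, 1, 2, 3}  # cores we want to reserve for the game

    # Pin the game to a fixed set of cores so its threads stay out of
    # the per-core run-queues of cores busy with background work.
    os.sched_setaffinity(GAME_PID, GAME_CORES)

    # Give it a real-time FIFO policy: a runnable SCHED_FIFO thread
    # always preempts normal SCHED_OTHER (CFS) threads, which roughly
    # emulates "the highest-priority runnable thread always runs".
    os.sched_setscheduler(GAME_PID, os.SCHED_FIFO, os.sched_param(50))

    # Everything else stays on the default CFS policy and simply
    # waits, trading desktop latency for game throughput.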



  • #18
    There really needs to be some scheduler/governor/hwmon overhaul. I picture this as a "kernel inside a kernel": a framework that keeps variables about each core and lets these parts communicate with each other, making things more efficient. But that is just me, I don't know anything.



  • #19
    Michael, this was an awesome article! I enjoyed it, plus the condensed information in those graphs is quite easy to absorb, compared to loads of graphs with various FPS figures, etc.

    A small piece of (constructive) criticism, and this is IMHO; please feel free to bring your own point of view.

    When you normalize the graphs, they should all be normalized against the same target. My understanding is that you normalized as x / highest FPS. It would have been better (again, IMO) to use x / Windows FPS.
    In general the problem is not so bad, because we mostly have Linux < Windows, so the normalization remains fairly consistent. For the few cases where this is not true, it is more difficult to see how the cards are faring relative to Windows.

    E.g.:
    Suppose that we consistently had Linux > Windows. Then all the cards would show their Linux results as "1.00", with smaller bars for the Windows performance. It would be much harder to read the performance relative to Windows.
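
    To make the difference concrete, here's a tiny Python sketch of the two normalization schemes on made-up FPS numbers (not Michael's actual data):

    Code:
    # Hypothetical results; the second card is a case where Linux wins.
    results = {
        "Card A": {"Windows": 120.0, "Linux": 96.0},
        "Card B": {"Windows": 80.0,  "Linux": 88.0},
    }

    for card, fps in results.items():
        # Scheme used in the article: divide by the highest FPS,
        # so the faster OS is always pinned at 1.00.
        vs_highest = {k: round(v / max(fps.values()), 2) for k, v in fps.items()}
        # Proposed scheme: divide by the Windows FPS, so every bar
        # reads directly as "fraction of Windows performance".
        vs_windows = {k: round(v / fps["Windows"], 2) for k, v in fps.items()}
        print(card, vs_highest, vs_windows)

    # Card B under vs_highest: Linux = 1.00 and Windows = 0.91, so the
    # bar that moves is the Windows one. Under vs_windows: Linux = 1.10,
    # which shows the Linux advantage directly.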



  • #20
    Yup, I think these are the kind of performance comparison articles people want; ideally combined with videos like this, it would be superb.

