Originally posted by schmidtbag View Post
NVIDIA/Radeon Windows 10 vs. Ubuntu Linux Relative Gaming Performance
-
Well, the quality of the ports is mostly enjoyable, meaning they let you enjoy the game with a few downsides compared to native titles: higher RAM consumption, lower frame rates, and perhaps more CPU usage. But in the end you will still enjoy the game.
The good thing is that the performance loss is about the same for both video card manufacturers; to me that is more than enough. As much as I would like better ports, I will still prefer native games, at least for recent titles.
-
Originally posted by adakite View Post
+1000
Also, this seems to be an API comparison rather than an OS/kernel fight. DX11/12 seem better (and/or more optimized) than OpenGL.
Vulkan will be "better" by virtue of not being OpenGL, but DirectX's advantages, plus Windows dominance in gaming will likely lead to the same dynamic: Games will be developed for DirectX, and ported as necessary to Vulkan.
- Likes 1
-
Originally posted by schmidtbag View Post
That being said, it gets me wondering: what if Linux is doing something wrong (besides the obvious outdated parts of X11)? If both AMD and Nvidia hardware suffer roughly the same performance losses, from one perspective you could say there is something Linux does less efficiently that the drivers cannot compensate for.
-
Originally posted by gamerk2 View Post
It's the scheduler. Here's how Windows and Linux handle thread scheduling:
Windows:
The Highest Priority Runnable Thread(s) ALWAYS run, without exception.
Now, Windows does a lot to manage thread priorities to ensure every thread eventually runs. For example: Running threads get their priority lowered, and waiting threads get their priority bumped. Fullscreen applications get a thread priority boost. And so on.
Also, and this is key: Windows does not care which CPU core a thread gets assigned to. Windows doesn't care that your bumped thread was running on CPU 1; if CPU 3 is running the lowest-priority thread, your thread goes to CPU 3. This ensures the highest-priority threads ALWAYS run.
For a single app that requires a large amount of CPU time, this approach maximizes performance: that application grabs a disproportionate amount of CPU runtime and achieves the maximum possible performance. The downside is that the rest of the system grinds to a halt, because other threads take longer than normal to get executed. In essence, you're trading Latency for Performance.
Linux:
Linux schedules threads on per-core run-queues. Threads are pre-allocated to cores in a way that tries to achieve load balancing, so all threads (in theory) get a chance to execute over a certain time span.
This approach works great from a latency perspective, as all threads get an equal share of CPU time. But suppose you are running a game with strict 16 ms timing deadlines; do you really want to preempt one of those important threads 25% of the time so some low-priority background app can update? Not really, but hey, your desktop clock is accurate to ~2 ms. You're trading Performance for Latency.
Make an "unfair" scheduler that biases toward singular applications, get rid of per-core run-queues, and performance will fall right in line with Windows.
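The contrast described above can be sketched as a toy simulation (the thread names, priorities, and slice counts are made up for illustration; this is deliberately simplified, not real kernel code): a global highest-priority-first scheduler versus per-core round-robin run-queues, counting how many time slices a high-priority "game" thread actually receives.

```python
# Toy simulation of the two strategies (hypothetical workloads,
# NOT real kernel code): a "Windows-style" global highest-priority-first
# scheduler vs. "Linux-style" per-core run-queues.

def global_priority_schedule(threads, cores, slices):
    # Windows-style sketch: each tick, the highest-priority runnable
    # threads run on whatever cores are free. (Real Windows also ages
    # priorities so low-priority threads eventually run.)
    runtime = {name: 0 for name, _ in threads}
    for _ in range(slices):
        for name, _prio in sorted(threads, key=lambda t: -t[1])[:cores]:
            runtime[name] += 1
    return runtime

def per_core_round_robin(threads, cores, slices):
    # Linux-style sketch: threads are pre-assigned to per-core queues
    # for load balance, and each queue round-robins regardless of priority.
    queues = [[] for _ in range(cores)]
    for i, (name, _prio) in enumerate(threads):
        queues[i % cores].append(name)
    runtime = {name: 0 for name, _ in threads}
    for tick in range(slices):
        for q in queues:
            runtime[q[tick % len(q)]] += 1
    return runtime

threads = [("game", 10), ("clock", 1), ("indexer", 1), ("updater", 1)]
print(global_priority_schedule(threads, cores=2, slices=100))
# "game" runs in all 100 ticks: the highest-priority thread always runs
print(per_core_round_robin(threads, cores=2, slices=100))
# "game" only runs in 50 ticks, sharing its queue with "indexer"
```

In the first model the game thread never loses a slice; in the second it loses half of them to a low-priority neighbor on the same run-queue, which is the latency-vs-performance trade-off in miniature.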
- Likes 1
-
There really needs to be some scheduler/governor/hwmon overhaul. I picture this as a "kernel inside a kernel": a framework that keeps per-core variables that these parts can share with each other to make things more efficient. But that is just me; I don't know anything.
-
Michael this was an awesome article! I enjoyed it, plus the condensed information in those graphs is quite easy to absorb, compared to loads of graphs with various FPS, etc.
A small (constructive) criticism I have, and this is IMHO; please feel free to bring your own point of view.
When you normalize the graphs, they should all be normalized against the same target. My understanding is that you normalized as x / highest FPS. It would have been better (again, IMO) to use x / Windows FPS.
In general this is not a big problem, because Linux < Windows in most results, so the normalization remains fairly consistent. For the few cases where this is not true, it is more difficult to see how the cards fare relative to Windows.
E.g.: suppose we consistently had Linux > Windows. Then every card would show its Linux result as "1.00", with smaller bars for the Windows performance, and it would be much harder to read performance relative to Windows off the chart.
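The difference between the two normalizations can be shown with made-up FPS numbers (purely illustrative; these are not the article's actual results):

```python
# Hypothetical FPS results for one game on one card (illustrative only).
results = {"Windows": 72.0, "Linux": 90.0}

# Normalizing against the highest result: whichever OS is faster gets
# pinned at 1.00, so "relative to Windows" cannot be read off directly.
vs_highest = {os_: fps / max(results.values()) for os_, fps in results.items()}
print(vs_highest)   # {'Windows': 0.8, 'Linux': 1.0}

# Normalizing against Windows: Windows is always the 1.00 baseline,
# so Linux's bar reads directly as "X times Windows performance".
vs_windows = {os_: fps / results["Windows"] for os_, fps in results.items()}
print(vs_windows)   # {'Windows': 1.0, 'Linux': 1.25}
```

With the first scheme a faster Linux result hides the margin (Windows just looks like "0.80 of something"); with the second, the 1.25 immediately says Linux ran 25% faster than Windows on that card.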
- Likes 3