NVIDIA/Radeon Windows 10 vs. Ubuntu Linux Relative Gaming Performance
-
Originally posted by gamerk2:
It's the scheduler. Here's how Windows and Linux handle thread scheduling:
Windows:
The Highest Priority Runnable Thread(s) ALWAYS run, without exception.
Now, Windows does a lot to manage thread priorities to ensure every thread eventually runs. For example: Running threads get their priority lowered, and waiting threads get their priority bumped. Fullscreen applications get a thread priority boost. And so on.
Also, and this is key: Windows does not care what CPU core a thread gets assigned to. Windows doesn't care that your bumped thread was running on CPU 1; if CPU 3 has the lowest-priority thread, your thread goes on CPU 3. This ensures the highest-priority threads ALWAYS run.
For a singular app that requires a large amount of CPU time, this approach maximizes performance. That application will grab a disproportionate amount of CPU runtime and achieve the maximum possible performance. The downside is that the rest of the system grinds to a halt, due to a longer-than-normal wait for other threads to get executed. In essence, you're trading Latency for Performance.
Linux:
Linux schedules threads on per-core run queues. Threads are pre-allocated to cores in such a way as to try to achieve load balancing, so all threads (in theory) have a chance to execute over a certain time span.
This approach works great from a latency perspective, as all threads get an equal share of CPU time. But suppose you are running a game with strict 16 ms timing deadlines: do you really want to bump one of those important threads 25% of the time so you can update some low-priority background app? Not really, but hey, your desktop clock is accurate to ~2 ms. You're trading Performance for Latency.
Make an "unfair" scheduler that biases performance toward singular applications, get rid of per-core run queues, and performance will fall right in line with Windows.
In all likelihood, if you run a multithreaded load on Windows you will get worse performance than Linux, because it will throw heavy loads on SMT cores and it will allow CMT cores to thrash the cache, and so on and so forth. If all you ever do is run a single instance of SuperPi, then maybe you could be correct in that one scenario, but as soon as you try to run two instances you automatically become wrong.

Last edited by duby229; 21 February 2017, 02:01 PM.
-
Originally posted by dungeon: Yup, I think these are the kind of perf comparison articles people want, plus ideally combined with videos like this, and it would be superb
(clipped youtube video)
I earnestly don't know which I'd prefer. The textures would drive me insane, seeing things "pop" in so obviously like that; it's almost a deal-breaker on its own. I'd almost rather have consistent visuals at a lower framerate than inconsistent visuals at a higher framerate... Especially if it can cross that 60 fps average, I don't need to go higher, and at 55 fps for high quality... It's close.

Last edited by Kver; 21 February 2017, 02:20 PM.
-
Actually, I do wonder why some people wonder about the performance regressions under Linux. Almost none of the games above were designed for Linux/OpenGL in the first place. There's almost no chance of similar or better performance under Linux with OpenGL because of that. OpenGL has more overhead than DirectX does. That's not a new fact.
Vulkan could be the game changer. If you have non-ready Vulkan drivers (like the current situation) and no Vulkan-optimised games, there's still no chance. But with a clean, Vulkan-native codebase it may be possible to outperform DirectX - games might at least have the same performance, if not better. You'll have extra overhead for the translation layer but less overhead for accessing the hardware.

Last edited by cRaZy-bisCuiT; 21 February 2017, 02:54 PM.
-
Originally posted by duby229: Yeah, it's called incomplete ports. The fact that people like you and even the porting companies themselves have come to expect driver developers to complete their ports is exactly what the problem is.
You are so caught up in your own tirades that you missed where I explicitly said drivers may not be the issue; all you interpreted was "hurr durr driver devs need to pick up the slack".

Last edited by schmidtbag; 21 February 2017, 02:45 PM.
-
Originally posted by elbuglione: Popular 1080p resolution tests??? Popular games tests??? (DOTA 2, CS:GO, Team Fortress 2, ARK, etc...)
With regard to DOTA 2 and CS:GO: they're free games, so just download them and see for yourself. It would be interesting to see ARK, but even without it, it's very interesting reading.
Please refrain from acting like a maggot.
Michael usually works 7 days a week and gave you something NO OTHER SITE ONLINE GIVES YOU.
I'd love to throw some tips, but I am broke.
-
Originally posted by duby229:
That's not entirely correct. The reason why per-core run queues are so important, even for games, is that not every "core" is a real processor. Windows doesn't give a tiny shit if it throws the most CPU-intensive load on an SMT core where a different thread may already have that pipeline fully maxed out. It also doesn't give a crap about whether that thread gets put on a CMT core where a different thread may have the integrated NB maxed out.
In all likelihood, if you run a multithreaded load on Windows you will get worse performance than Linux, because it will throw heavy loads on SMT cores and it will allow CMT cores to thrash the cache, and so on and so forth. If all you ever do is run a single instance of SuperPi, then maybe you could be correct in that one scenario, but as soon as you try to run two instances you automatically become wrong.
This is partly why AMD's CMT stank out of the box; Windows didn't see a CPUID flag for HTT, so it treated all the cores in AMD's CPUs equally. As we now know, there's about a 20-25% performance loss as part of CMT, which ate into performance at launch. This was eventually mitigated via a kernel patch, which essentially treated Bulldozer-based CPUs and their descendants like Hyper-Threaded CPUs for the purposes of thread scheduling.
-
Originally posted by schmidtbag: That being said, it makes me wonder - what if Linux is doing something wrong (besides the obvious outdated parts of X11)? If both AMD and Nvidia hardware suffer roughly the same performance losses, from one perspective you could say there is something Linux does less efficiently that the drivers cannot compensate for.
-
Originally posted by schmidtbag: Yes, we get it - you (and others) can stop saying that. Everything should always be designed with Linux in mind
It's a fact of life: if you take something that was designed to work on one platform and try to adapt it to work on something different, you'll get crappier performance than if you had targeted the second platform (or both) in the first place.
-
Originally posted by Kver: I'm a little surprised to notice that Windows, while it has a higher framerate, has texture-loading issues the Linux version is avoiding.
Just by looking at this, I can guess about +7% on average at the end, just because of these spikes that no one cares about.

Last edited by dungeon; 21 February 2017, 03:12 PM.