How Well Modern Linux Games Scale To Multiple CPU Cores
Indeed, what I see here is that games make use of a surprising number of cores. I thought most would still be single-threaded, so it would plateau after 2 cores, but surprisingly many games can make use of over 4 cores. So I suppose it's true that from this point on one ought to start looking beyond 4 cores.
Originally posted by GreatEmerald View Post
Indeed, what I see here is that games make use of a surprising number of cores. I thought most would still be single-threaded, so it would plateau after 2 cores, but surprisingly many games can make use of over 4 cores. So I suppose it's true that from this point on one ought to start looking beyond 4 cores.
Originally posted by Luke_Wolf View Post
In spite of the myths surrounding such things, there's nothing particularly serial about the concept of a game, other than that everything needs to sync with the game state every "instant", which then needs to be read and rendered (though the next instant can be computed while the current one is rendering). There's no reason why, for example, you can't run all the bot decisions for a particular instant in the game at the same time, and then sync the game state to their decisions when they finish. The blockers to making games massively parallel are technological, not conceptual.
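Luke_Wolf's "fan out the bot decisions, then sync" pattern can be sketched in a few lines of C++. This is only an illustrative sketch; the types here (`GameState`, `Bot`, `Decision`) are invented for the example and not from any real engine:

```cpp
#include <future>
#include <vector>

// Invented types for illustration only.
struct GameState { int tick = 0; long long score = 0; };
struct Bot { int id; };
struct Decision { int botId; int points; };

// A pure function of the *previous* game state: safe to run concurrently,
// since no task writes shared data while the others read it.
Decision decide(const Bot& b, const GameState& s) {
    return {b.id, b.id * (s.tick + 1)};  // stand-in for real AI work
}

void runTick(GameState& state, const std::vector<Bot>& bots) {
    // Fan out: one async task per bot, each reading the same frozen state.
    std::vector<std::future<Decision>> futures;
    for (const Bot& b : bots)
        futures.push_back(std::async(std::launch::async, decide, b, std::cref(state)));

    // Fan in: apply decisions serially, so the sync stays deterministic.
    for (auto& f : futures) {
        Decision d = f.get();
        state.score += d.points;
    }
    ++state.tick;
}
```

The serial fan-in is the "sync with the game state" step he describes; only the decision-making itself runs in parallel, so no locks are needed.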
But that's not the real topic here. In games, having the AI react faster or slower is not a real problem. The real problem is rendering a scene in a fast-paced setting, where you have to put out a lot of data that can't really be reused from the previous frame, because things change very quickly. If I remember X3 correctly, that was a game made for a single core, and you had a whole universe AND the player's sector on just a single core. I don't really see a bottleneck here from a computing point of view. And yes, those parts can be put into another thread which you just extract the current data from.
But I'm not an expert... maybe physics grids are very computation-heavy in modern games? But then again, they are already pushed onto another thread. My knowledge about real-world limitations and bottlenecks in games is rather low beside the usual myths flying around. The only thing I can believe is that there are logical systems in games that depend on each other and are thus not threadable, and that the best approach to optimization is to reduce those scenarios to a minimum. Whether the rest gets optimized or not doesn't really matter, as long as it's faster than or about as fast as the non-threaded part. And if you can do that with just 3 or 4 cores... who needs 8 or 16?
Last edited by Shevchen; 08 March 2017, 02:59 PM.
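The "put it in another thread and just extract the current data" approach mentioned above is commonly implemented as a double buffer: the simulation writes frame N+1 into a back buffer while a renderer may still be reading frame N from the front buffer, and an atomic swap publishes the new frame. A minimal sketch, with invented names (no real engine API), exercised here single-threaded:

```cpp
#include <array>
#include <atomic>

// Invented frame type for illustration.
struct Frame { int number = 0; double playerX = 0.0; };

class DoubleBuffer {
    std::array<Frame, 2> frames_;
    std::atomic<int> front_{0};  // index of the published (readable) frame
public:
    // Renderer side: read the last published frame.
    const Frame& front() const { return frames_[front_.load(std::memory_order_acquire)]; }
    // Simulation side: the writable, not-yet-published frame.
    Frame& back() { return frames_[1 - front_.load(std::memory_order_relaxed)]; }
    // Swap: the back buffer becomes visible to readers.
    void publish() {
        front_.store(1 - front_.load(std::memory_order_relaxed), std::memory_order_release);
    }
};

void simulateStep(DoubleBuffer& db) {
    const Frame& prev = db.front();
    Frame& next = db.back();
    next.number = prev.number + 1;      // derive frame N+1 from frame N
    next.playerX = prev.playerX + 1.5;  // stand-in for real simulation work
    db.publish();                       // renderer now sees frame N+1
}
```

Note this sketch assumes one writer and one reader; with more threads, or a reader that holds a frame across a swap, a real engine would need additional synchronization (e.g. triple buffering or reference counting).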
Originally posted by Shevchen View Post
...
But that's not the real topic here. In games, having the AI react faster or slower is not a real problem. The real problem is rendering a scene in a fast-paced setting, where you have to put out a lot of data that can't really be reused from the previous frame, because things change very quickly.

Originally posted by Shevchen View Post
...
The only thing I can believe is that there are logical systems in games that depend on each other and are thus not threadable, and that the best approach to optimization is to reduce those scenarios to a minimum.
...
Of course, I'm sure that a game or game engine which has already been written with a limited degree of multithreadedness is sometimes not so easy to convert to a higher degree of multithreadedness. However, that depends on the specific implementation.
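One concrete answer to the "systems depend on each other" worry quoted above: even if the systems themselves must run in a strict order (say, physics before animation), each system can still fan its per-entity work out across threads. A hypothetical sketch of a chunked parallel physics step; the function and data layout are invented for illustration:

```cpp
#include <algorithm>
#include <thread>
#include <vector>

// Hypothetical "physics" step: integrate positions from velocities.
// The entity arrays are split into disjoint chunks, one worker thread
// each, so no locking is needed even though all threads write into pos.
void integratePositions(std::vector<double>& pos,
                        const std::vector<double>& vel, double dt) {
    unsigned n = std::max(1u, std::thread::hardware_concurrency());
    std::size_t chunk = (pos.size() + n - 1) / n;
    std::vector<std::thread> workers;
    for (unsigned t = 0; t < n; ++t) {
        std::size_t lo = t * chunk;
        std::size_t hi = std::min(pos.size(), lo + chunk);
        if (lo >= hi) break;
        workers.emplace_back([&pos, &vel, dt, lo, hi] {
            for (std::size_t i = lo; i < hi; ++i)
                pos[i] += vel[i] * dt;  // each index touched by exactly one thread
        });
    }
    for (auto& w : workers) w.join();  // barrier: system done before the next one runs
}
```

The join at the end preserves the inter-system ordering the dependency requires; only the work *inside* the system is parallel.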
Originally posted by Shevchen View Post
Phew, I'm not sure you can simply disregard dependencies in the logic by just saying "the tech is not good enough to do it". I'm sure you can take some shortcuts with prediction techniques that may introduce a tiny error, but as long as it's about a few milliseconds, this won't really matter for games - esp. if you can combine it with some sort of "I guess the player would do something similar to what a good AI would do", the error would be very tiny. (Esp. if we combine that with a self-learning AI - the player's reaction time to a new situation alone can give you a lot of time to process future events despite having no real knowledge of the actual reaction, just calculating based on "good" assumptions.)
...
Did nobody look at the kernel scheduler? The scheduler is supposed to keep data near the process and minimize process migrations. There is a good reason for that.
So I am wondering how much the kernel scheduler helped - or worked against - getting the best gaming performance.
If only 4 CPUs are used on a Ryzen platform, it just means I have 4 more cores to work on video conversion or other stuff...
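For anyone wanting to test the scheduler's influence directly: on Linux you can pin a process (or a game, for benchmarking) to specific cores so the scheduler can't migrate it. This is what `taskset` does under the hood, via the affinity syscalls. A Linux-only sketch using `sched_setaffinity` (the helper name is mine):

```cpp
#include <sched.h>  // Linux-only: sched_setaffinity / sched_getaffinity

// Pin the calling process to CPU 0, then verify the mask took effect.
bool pinToCpu0() {
    cpu_set_t wanted;
    CPU_ZERO(&wanted);
    CPU_SET(0, &wanted);  // allow CPU 0 only
    if (sched_setaffinity(0, sizeof(wanted), &wanted) != 0)  // pid 0 = self
        return false;
    cpu_set_t got;
    CPU_ZERO(&got);
    if (sched_getaffinity(0, sizeof(got), &got) != 0)
        return false;
    return CPU_ISSET(0, &got) && CPU_COUNT(&got) == 1;
}
```

From outside, `taskset -c 0-3 ./game` achieves the same restriction without touching the game's code, which makes it easy to compare 4-core vs. all-core runs of the same title.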
Originally posted by indepe View Post
I'm surprised that you dismiss AI so easily. Especially since putting out "a lot of data" is something that multiple threads can usually do very well (even if the data changes).
Originally posted by indepe View Post
Do you have a specific reason to believe that this is actually the case, and not just possible? For example, if logical systems depend on each other, you may still be able to use multiple threads within each system. Is there anything specific that you can think of? Otherwise you might just be repeating a myth, which perhaps came into being because multithreading does involve a significant learning curve.
Originally posted by indepe View Post
...
Originally posted by Luke_Wolf View Post
The answer is modern games. What's been noted in benchmarking is that modern games like Battlefield 1 and Doom will basically max out an i7-7700K in terms of CPU utilization, while a Ryzen 7 will be at near-uniformly half core utilization or less with all cores occupied by the game, which consequently results in the Ryzen 7 having substantially fewer microstutters. Unfortunately, other games do not have nearly as detailed in-game diagnostics, if any at all, but from what I can gather from the task manager it's not unusual for DX11-era titles to thread out to 6 or 8 threads.
While this may push the "good enough computing" part towards better computers, it also feels a little arbitrary to just demand it by artificially ramping up the need for processing power. There are quite a few poor lads out there who only have an i3, and the gaming studios still want to sell their games to them. And as such, those games must run their main stuff on 2 cores, which means there is not much left for an 8C/16T powerhouse to do.
Don't get me wrong, I'd enjoy it if everyone had 8C/16T - but there are not a lot of games out there that justify such an investment besides (I can hear the stones flying) Star Citizen.
Originally posted by Shevchen View Post
So, we are back to basics, where we simply have to wait for game studios to finally implement more-than-4-core utilization without breaking the deal for the 94% of their customers who still run around with 2-4 cores. This would mean that we can only utilize those additional cores if the content of the additional computation is optional (like more and better effects) and doesn't touch the basic game logic. Or you simply raise the minimum requirements to "4 cores, 8 threads" and put "6-8 cores, 12/16T" in the recommendation.
...
Furthermore, it is exceptionally unlikely that Intel won't shift to making i3s and below 4C/4T, i5s 4C/8T, and i7s 6C/12T (as a baseline), or something similar, in order to respond to Ryzen.