Feral's GameMode 1.1 Released For Optimizing Linux Gaming Performance


  • Feral's GameMode 1.1 Released For Optimizing Linux Gaming Performance

    Phoronix: Feral's GameMode 1.1 Released For Optimizing Linux Gaming Performance

    One month ago, Linux game porter Feral Interactive introduced GameMode as a utility/service for dynamically optimizing Linux system performance when running games. The initial focus of GameMode was on ensuring the CPU scaling governor is in its performance mode, and today brought the GameMode v1.1 release...

    http://www.phoronix.com/scan.php?pag...e-1.1-Released
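
    For context, the CPU part of what GameMode does boils down to a sysfs write. Below is a minimal sketch of that mechanism in Python, assuming the standard cpufreq sysfs layout; it illustrates the idea only and is not GameMode's actual code (which is a C daemon):

```python
#!/usr/bin/env python3
"""Minimal sketch of a GameMode-style governor toggle (illustration only).

Uses the standard cpufreq sysfs layout; writing requires root.
"""
import glob

GOV_PATHS = glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_governor")

def read_governors():
    """Return the currently active governor for each logical CPU."""
    return {p: open(p).read().strip() for p in GOV_PATHS}

def set_governor(governor):
    """Write the requested governor to every CPU."""
    for p in GOV_PATHS:
        with open(p, "w") as f:
            f.write(governor)

if __name__ == "__main__":
    saved = read_governors()             # remember the pre-game setting
    set_governor("performance")          # "game started"
    try:
        input("Governor pinned to performance; press Enter to restore... ")
    finally:
        for path, gov in saved.items():  # "game exited": put things back
            with open(path, "w") as f:
                f.write(gov)
```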

  • geearf
    replied
    Originally posted by F.Ultra View Post

    To be completely honest I do not know; I have not made any power measurements comparing Ondemand and Performance for decades now, so my "knowledge" may very well be outdated. One might think that distributions default to Ondemand for a reason, though.

    edit: However, running the CPU at 100% should show no difference between the two governors; the major differences should be with applications that create a varying stress on the CPU, where the Ondemand governor will keep the frequency at a lower rate (by smoothing out the spikes, so to speak) than the Performance governor.
    Oh, as said before, I'd assume cpufreq governors behave quite differently from pstate ones.
    schedutil may be saner power-wise than ondemand or performance, but I have not tested any of these, so this is just a guess without much behind it.
    I could test if you wanted, though I wouldn't do as many tests as last time, as it takes a while...

    As for no difference at 100%, yes, that's also what I'd expect with pstate, but not with cpufreq powersave vs performance. Is the 2w even meaningful, since it's the full draw at the PSU and not just the CPU, and it's not even a 2% difference? I am not sure, but it's interesting that I had the same difference across my 3 perf_bias tests. I think the previous time I did not notice any difference on max, but did on min by 1-3w; not sure why it changed... or if it's meaningful at all, as it's so tiny.


    As for the default, well, it's only the good one until proven otherwise.
    As for ondemand, it definitely is not the default for newer Intel CPUs... and maybe schedutil will become the cpufreq default once matured enough.
    A while back I had some issues with schedutil and interacted with a dev about this (sorry, I forgot whom); in the end I asked what I should use, and he said either schedutil or pstate's powersave, that they would be pretty equivalent in results, at least for my use case.

    I think at this point we've gotten quite far from the original question I was trying to answer.
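
    For reference, it's easy to check which scaling driver and governors a given box actually exposes; a small sketch using the standard cpufreq sysfs files (my illustration, nothing from GameMode):

```python
#!/usr/bin/env python3
"""Show cpu0's scaling driver and governors (standard cpufreq sysfs files).

On intel_pstate systems this typically prints just "performance powersave";
ondemand/schedutil only show up with the generic cpufreq driver.
"""

BASE = "/sys/devices/system/cpu/cpu0/cpufreq"

for name in ("scaling_driver", "scaling_governor", "scaling_available_governors"):
    with open(f"{BASE}/{name}") as f:
        print(f"{name}: {f.read().strip()}")
```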



  • F.Ultra
    replied
    Originally posted by geearf View Post

    The diff was only in mprime, which is using the CPU to the max (well, sort of on option 3...); at that point with gamemode you'd be running performance anyway, so you wouldn't save anything... Actually, from what you're saying, by not using gamemode and sticking to powersave, you'd be better off.
    To be completely honest I do not know; I have not made any power measurements comparing Ondemand and Performance for decades now, so my "knowledge" may very well be outdated. One might think that distributions default to Ondemand for a reason, though.

    edit: However, running the CPU at 100% should show no difference between the two governors; the major differences should be with applications that create a varying stress on the CPU, where the Ondemand governor will keep the frequency at a lower rate (by smoothing out the spikes, so to speak) than the Performance governor.
    Last edited by F.Ultra; 05-15-2018, 01:16 PM.
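
    One way to see that smoothing is simply to watch the reported frequency while starting and stopping a bursty load; a rough sketch, assuming the standard scaling_cur_freq sysfs file (which on some drivers lags the hardware counters slightly):

```python
#!/usr/bin/env python3
"""Sample cpu0's reported frequency every 100 ms.

Run in one terminal while starting/stopping a load in another: under
ondemand the frequency should ramp up and decay around the bursts,
under performance it should sit at or near the maximum.
"""
import time

FREQ = "/sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq"

while True:
    with open(FREQ) as f:
        khz = int(f.read())
    print(f"{time.monotonic():12.1f}  {khz / 1000:6.0f} MHz")
    time.sleep(0.1)
```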



  • geearf
    replied
    Originally posted by F.Ultra View Post

    As you say, 2w is not useless on a larger scale, so even if you have a single computer, there probably exist millions in the country where you live, so the energy savings for society as a whole can be quite significant even if the saving is only 2w per system.

    Sorry about the "might not equal that of others"; I hadn't followed your posts back to your first one.
    The diff was only in mprime, which is using the CPU to the max (well, sort of on option 3...); at that point with gamemode you'd be running performance anyway, so you wouldn't save anything... Actually, from what you're saying, by not using gamemode and sticking to powersave, you'd be better off.



  • F.Ultra
    replied
    Originally posted by geearf View Post

    Well yes, that's obvious; that's why I wrote "at least with a similar configuration to mine" and "at least on a desktop similar to mine".
    All my answers state that, so I'm not sure what your point is.

    Even then, I'd guess that few people actually measured this before starting to say that it matters (well, on a laptop it's obviously easy to count hours).

    To be honest, I used to think it mattered till someone challenged me about it on reddit; I figured, why not measure it since I have the tools, and then realized how useless all this was. (Of course not for a farm; saving 2w per CPU when you have thousands if not millions is a different thing...)
    As you say, 2w is not useless on a larger scale, so even if you have a single computer, there probably exist millions in the country where you live, so the energy savings for society as a whole can be quite significant even if the saving is only 2w per system.

    Sorry about the "might not equal that of others"; I hadn't followed your posts back to your first one.



  • geearf
    replied
    Originally posted by F.Ultra View Post

    The results that you see on your system might not be equal to those of others. E.g., for you and your particular setup, yes, this seems to be a complete waste of time.
    Well yes, that's obvious; that's why I wrote "at least with a similar configuration to mine" and "at least on a desktop similar to mine".
    All my answers state that, so I'm not sure what your point is.

    Even then, I'd guess that few people actually measured this before starting to say that it matters (well, on a laptop it's obviously easy to count hours).

    To be honest, I used to think it mattered till someone challenged me about it on reddit; I figured, why not measure it since I have the tools, and then realized how useless all this was. (Of course not for a farm; saving 2w per CPU when you have thousands if not millions is a different thing...)
    Last edited by geearf; 05-14-2018, 01:43 PM.



  • F.Ultra
    replied
    Originally posted by geearf View Post

    Ok, I understand that, but what's the point?
    I have a desktop test (Chrome opened with ~100 tabs, various services running in the background, etc.), average 2% load, varying between 0 and 5 or so, showing an average 1w difference between powersave and performance.
    That should be somewhat similar to your lowest points in a game.
    Then I have mprime option 3, not the craziest one on the CPU, which should be somewhat similar to your highest points in a game, showing hardly any difference.
    So far it shows pretty much no difference at close to min and max desktop consumption.
    I can try mid desktop consumption, but I'd bet that if min and max are equivalent, the average would be as well. I'm happy to lose the bet, though.

    I wouldn't mind running a game benchmark and measuring power then, but since my meters are not oscilloscopes, just simple meters with only one number displayed, it'd be quite hard for me to get a good average when consumption varies a lot. I guess I could film it with a camera, try to get a reading every 0.x seconds, and plot that, but that's fairly annoying to do, especially if I expect no difference.

    Of course, I wouldn't make that bet for cpufreq, where I expect powersave to behave quite differently than with pstate.

    Changes in perf_bias might matter more than a change of governor, but I don't understand that very well, so I'm not sure what to try.
    The results that you see on your system might not be equal to those of others. E.g., for you and your particular setup, yes, this seems to be a complete waste of time.



  • geearf
    replied
    So I've just run a few more tests changing perf_bias; here's the summary:

    perf_bias: 0
    diff idle: None
    diff mprime 3: 2w

    perf_bias: 6 (the default for my CPU or distro? not sure)
    diff idle: None
    diff mprime 3: 2w
    diff Dolphin Soul Calibur 2: None to 5w (a lot of variation, so hard to say accurately...)

    perf_bias: 15
    diff idle: None
    diff mprime 3: 2 to 4w

    (in this case idle was only Plasma and whatever systemd services run in the background; no browser or other heavy apps open, unlike my previous test with desktop stuff open)

    With differences this small, staying with performance seems fine, at least on a desktop similar to mine with a recent pstate driver.
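
    For anyone wanting to repeat this, perf_bias is exposed through sysfs on newer kernels; a sketch assuming the /sys/devices/system/cpu/cpu*/power/energy_perf_bias files are present (older setups need x86_energy_perf_policy instead):

```python
#!/usr/bin/env python3
"""Read or set the energy/performance bias: 0 = performance .. 15 = powersave.

Assumes the kernel exposes /sys/devices/system/cpu/cpu*/power/energy_perf_bias;
older setups need x86_energy_perf_policy from linux-tools instead.
Pass a value as the first argument to change it (requires root).
"""
import glob
import sys

PATHS = sorted(glob.glob("/sys/devices/system/cpu/cpu[0-9]*/power/energy_perf_bias"))

if len(sys.argv) == 2:
    bias = int(sys.argv[1])          # e.g. 0, 6 or 15 as in the tests above
    for p in PATHS:
        with open(p, "w") as f:
            f.write(str(bias))

for p in PATHS:
    print(p, "=", open(p).read().strip())
```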



  • geearf
    replied
    Originally posted by F.Ultra View Post

    Something like mprime will put max strain on the CPU 100% of the time, while a game will switch between roughly 0% and 100% quickly (depending on how fast your GPU is and how GPU/CPU intensive the game is). So it's not that a game will stress the CPU more than mprime; it's that a game will sometimes (often, depending on game and/or system) put less stress on the CPU.
    Ok, I understand that, but what's the point?
    I have a desktop test (Chrome opened with ~100 tabs, various services running in the background, etc.), average 2% load, varying between 0 and 5 or so, showing an average 1w difference between powersave and performance.
    That should be somewhat similar to your lowest points in a game.
    Then I have mprime option 3, not the craziest one on the CPU, which should be somewhat similar to your highest points in a game, showing hardly any difference.
    So far it shows pretty much no difference at close to min and max desktop consumption.
    I can try mid desktop consumption, but I'd bet that if min and max are equivalent, the average would be as well. I'm happy to lose the bet, though.

    I wouldn't mind running a game benchmark and measuring power then, but since my meters are not oscilloscopes, just simple meters with only one number displayed, it'd be quite hard for me to get a good average when consumption varies a lot. I guess I could film it with a camera, try to get a reading every 0.x seconds, and plot that, but that's fairly annoying to do, especially if I expect no difference.

    Of course, I wouldn't make that bet for cpufreq, where I expect powersave to behave quite differently than with pstate.

    Changes in perf_bias might matter more than a change of governor, but I don't understand that very well, so I'm not sure what to try.
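
    As an aside, on Intel hardware the RAPL counters can give an average without a camera; a sketch assuming the powercap interface at /sys/class/powercap/intel-rapl:0 is available (note it measures the CPU package only, not the full PSU draw a wall meter sees):

```python
#!/usr/bin/env python3
"""Average CPU package power over an interval using the RAPL powercap counter.

Assumes /sys/class/powercap/intel-rapl:0 exists (Intel, reasonably recent
kernels). Measures the CPU package only, not the PSU draw a wall meter sees.
"""
import sys
import time

DOMAIN = "/sys/class/powercap/intel-rapl:0"

def energy_uj():
    """Cumulative energy counter in microjoules."""
    with open(f"{DOMAIN}/energy_uj") as f:
        return int(f.read())

seconds = float(sys.argv[1]) if len(sys.argv) > 1 else 10.0
start = energy_uj()
time.sleep(seconds)
delta = energy_uj() - start
if delta < 0:                         # counter wrapped around; add its range back
    with open(f"{DOMAIN}/max_energy_range_uj") as f:
        delta += int(f.read())
print(f"average package power: {delta / seconds / 1e6:.2f} W over {seconds:.0f}s")
```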



  • F.Ultra
    replied
    Originally posted by Venemo View Post
    Why not just use the tuned daemon for this? It has been around for several years and has the same features and a lot more.
    Because tuned does not have the same features and will impose the very same frequency-scaling latencies that GameMode was designed to avoid, unless tuned is set to the throughput-performance profile, which is the same as manually setting the CPU governor to Performance all the time (and not just when a game is running).

    Could tuned be expanded with a D-Bus endpoint and changed to dynamically switch profiles when a game requests it? Of course, but #1, would that require more or less code than GameMode, and #2, would Red Hat accept such patches? The answer to both might well be yes, I do not know; I just wanted to say that it's not as simple as "use tuned instead".
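
    For comparison, the game-side half of GameMode's design is already just a pair of D-Bus calls. Here's a sketch of what a request looks like, assuming GameMode's session-bus name com.feralinteractive.GameMode and its RegisterGame/UnregisterGame(pid) methods (normally wrapped by the gamemode_client.h helpers), plus the dbus-python package:

```python
#!/usr/bin/env python3
"""Ask a running gamemoded to enter/leave game mode for this process.

Assumes GameMode's session-bus name com.feralinteractive.GameMode and its
RegisterGame/UnregisterGame(pid) methods (normally wrapped by the
gamemode_client.h helpers); requires the dbus-python package.
"""
import os
import dbus

bus = dbus.SessionBus()
proxy = bus.get_object("com.feralinteractive.GameMode",
                       "/com/feralinteractive/GameMode")
gamemode = dbus.Interface(proxy, "com.feralinteractive.GameMode")

pid = os.getpid()
gamemode.RegisterGame(pid)            # governors flip to performance here
try:
    input("GameMode active; press Enter to release... ")
finally:
    gamemode.UnregisterGame(pid)      # daemon restores the previous governor
```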

