Radeon+Ryzen CPUFreq CPU Scaling Governor Benchmarks On Linux 4.15


  • #11
    Originally posted by haagch
    The average FPS aren't the measure that benefits the most. It's the minimum FPS.

    In CSGO you can actually feel the difference. Here are results from watching a bot game, with a patch that adds a "low-fps" graph based only on the slowest frame time in each interval (https://gist.github.com/ChristophHaa...44440b5a922d):
    ondemand: https://i.imgur.com/FJ6rdwv.png
    performance: https://i.imgur.com/5lwFoPH.png
    To be honest, both results are REALLY terrible for a 1600X @ 3.9GHz + RX 480; CSGO is just not really playable until it's ported to Source 2 + Vulkan.
    The frametimes are not very accurate, but you can still see the difference it makes.

    The frame delay graphs (not mainline either) show how many frames were lagging behind the 60/45 FPS target in the interval, and they show it even better. It's interesting that the GPU load becomes more "stable" at a lower level while the GPU clock goes a bit up. That suggests that with ondemand the GPU is not fed commands/data as reliably fast as with performance, which could cause small stalls.

    The question is: What's the power usage if you just always run with the performance governor?
    Michael: Can you measure power usage at idle and under load with the performance governor vs. ondemand?
    I haven't had any issue playing CSGO in Linux since they added the precompiled shader distribution through the Steam client a while back. That fixed all the stuttering it used to have on first-time run of each shader, on Nvidia at least.

    Plus I'm really not very hopeful that CSGO on Source 2 is a real thing that's going to happen. Been waiting too many years for it to believe anymore.
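The "low-fps" metric haagch describes above (an FPS figure derived from only the slowest frame time in each measurement interval, rather than the average) can be sketched roughly like this. This is not the actual patch from the linked gist; the function name and interval handling are my own assumptions:

```python
def low_fps(frame_times_ms, interval_ms=1000.0):
    """For each interval, report FPS based only on the slowest frame in it.

    frame_times_ms: per-frame render times in milliseconds.
    Returns one "low FPS" value per completed interval.
    """
    results = []
    bucket, elapsed = [], 0.0
    for ft in frame_times_ms:
        bucket.append(ft)
        elapsed += ft
        if elapsed >= interval_ms:
            # The single worst frame time dominates the interval's score.
            results.append(1000.0 / max(bucket))
            bucket, elapsed = [], 0.0
    return results
```

For example, a one-second interval of mostly 10 ms frames with a single 50 ms spike scores 20 FPS, even though the average FPS over that interval is close to 100 — which is why this metric exposes stutter that average-FPS graphs hide.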



    • #12
      How does Linux CPU frequency scaling interact with the CPU's inherent frequency boost? For example, I have a 1950X with a base multiplier of x34 (3.4GHz) and a max boost clock of 4.2GHz. When I tested with Prime95 under Windows 10, running on all 32 threads/cores, HWiNFO reported all cores clocked at a x37 multiplier (3.7GHz), and the temperature varied between 48C and 55C (Tctl-27C) as the cooler fans cycled up and down.

      However, in Linux (Ubuntu 17.10 with the 4.15 kernel installed), the information in /sys/devices/system/cpu/cpufreq/policy?/cpuinfo_max_freq or scaling_max_freq, for example, indicates 3.4GHz, which is clearly incorrect since the CPU can scale its actual frequency higher when sufficient cooling is available.

      So, my question is: does CPU frequency scaling in Linux hobble the processor's intrinsic boost capability? If not, how do you obtain the actual CPU frequencies in Linux?
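      As a minimal sketch of inspecting those sysfs attributes programmatically: the base path is the standard cpufreq sysfs location, but the function name and structure here are my own, and the base path is parameterized so it can be pointed elsewhere:

```python
import glob
import os

def read_cpufreq(attr, base="/sys/devices/system/cpu/cpufreq"):
    """Read one cpufreq attribute (values are in kHz) for every policy.

    Returns a dict mapping policy directory name -> integer value,
    e.g. {"policy0": 3400000, "policy1": 2200000, ...}.
    """
    result = {}
    for path in sorted(glob.glob(os.path.join(base, "policy*", attr))):
        policy = os.path.basename(os.path.dirname(path))
        with open(path) as f:
            result[policy] = int(f.read().strip())
    return result
```

      Comparing `read_cpufreq("scaling_cur_freq")` against `read_cpufreq("cpuinfo_max_freq")` across policies would show whether the governor ever requests anything above the base clock.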



      • #13
        What effect do the Linux CPU frequency scaling governors have on the clock boost capabilities of Ryzen processors? Does Linux impose the clock speeds, or are these determined by the CPU itself in terms of boost values?

        For example, I have a 1950X; the base multiplier is x34, with boosts up to x42 given sufficient cooling. In Windows 10, I was running Prime95 on all cores with 32 threads, and the CPU temperature (Tctl-27C) varied between 48C and 55C as the fans on the radiator spun up or down. The CPU clocked all threads at x37, or 3.7GHz, according to the latest version of HWiNFO64 under full load. Single-threaded tasks could go up to 4.2GHz.

        However, turning to Ubuntu Linux 17.10 with the 4.15 kernel installed and looking in /sys/devices/system/cpu/cpufreq/policy#, the contents of cpuinfo_max_freq and scaling_max_freq are both 3.4GHz. In addition, if I look at the contents of policy#/stats/time_in_state, the only frequencies listed are 2.2GHz, 2.8GHz and 3.4GHz. There are no higher values listed for any cores, despite the fact that I was running some CPU-intensive single-threaded applications which should have boosted the clocks on at least a couple of the cores.
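        For reference, time_in_state is a simple two-column text file: each line is a frequency in kHz followed by the residency at that frequency in units of 10 ms. A minimal parser, assuming that layout (the function name is mine):

```python
def parse_time_in_state(text):
    """Parse time_in_state content: one 'freq_khz ticks_10ms' pair per line.

    Returns a dict mapping frequency in kHz -> residency in 10 ms units.
    """
    table = {}
    for line in text.splitlines():
        if not line.strip():
            continue
        freq_khz, ticks = line.split()
        table[int(freq_khz)] = int(ticks)
    return table

# Usage (path is the standard location for policy 0):
# with open("/sys/devices/system/cpu/cpufreq/policy0/stats/time_in_state") as f:
#     print(parse_time_in_state(f.read()))
```

        Note that this file only ever lists the governor's known P-state frequencies, so opportunistic boost above the top P-state would not appear here even if it were happening.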

        Running two CPU-intensive processes and looking at the contents of /sys/devices/system/cpu/cpufreq/policy#/cpuinfo_cur_freq indicates that two cores are running at 3.4GHz while the rest are downclocked. The CPU temperature is 35C (based on the output from sensors) and the fans are on low.

        The value in policy#/bios_limit is 3.4GHz.

        Note: there is a file in /sys/devices/system/cpu/cpufreq called boost that contains a 1, which is supposed to turn on boost, but I am not sure it is doing anything.



        According to the cpufreq sysfs documentation, the file cpuinfo_cur_freq is supposed to contain the current actual operating frequency of the core as read from the hardware. In my case, the maximum value in Linux appears to be 3.4GHz with a single heavy thread running, rather than the expected 4.2GHz.

        So, my question is: do Ryzen processors in Linux (in my case the 1950X Threadripper) properly boost their clock frequencies in response to load, or does Linux prevent this behaviour?

        Is there something I can set to obtain the expected boost behaviour from my Threadripper in Linux? (I could overclock in the BIOS, but to be honest, if the CPU will do 3.7GHz under full load on all cores, and higher on individual cores under a lighter workload, I don't see much point in overclocking.) The problem, from my perspective, is what I need to do to obtain and monitor that expected performance in Linux.
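        One possibility worth checking (not something this thread confirms): on kernels of this era, the per-core "cpu MHz" field in /proc/cpuinfo is derived from hardware counters and may show boost clocks that the cpufreq sysfs files do not. A small sampler, with the path parameterized only so it can be tested against a fake file; the real path is /proc/cpuinfo:

```python
def cpu_mhz(cpuinfo_path="/proc/cpuinfo"):
    """Collect the per-core 'cpu MHz' values from a cpuinfo-format file.

    Returns one float (MHz) per logical CPU, in file order.
    """
    mhz = []
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("cpu MHz"):
                # Lines look like: "cpu MHz\t\t: 3400.000"
                mhz.append(float(line.split(":")[1]))
    return mhz
```

        Sampling this in a loop while a single-threaded load runs would reveal whether any core actually exceeds 3.4GHz, independent of what the cpufreq sysfs interface reports.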
