New Patches Allow More Easily Managing The AMD P-State Linux Driver
-
Originally posted by birdie View Post
that's why I help solve bugs including in the Linux kernel as you might have already noticed. Never mind, I feel like I'm wasting time dealing with you.
First it was a huge bold font, now "works for me". Says everything I need to know.
Comment
-
Originally posted by intelfx View Post
Birdie's off his meds again it would seem.
Yo, dude, maybe it's finally time to admit that you're exactly the same kind of fanboy as the people you endlessly shit on — except that you're also really intolerant of everyone else?
Guess what, a couple of Reported-by tags in the kernel commit log don't make you better than anyone else, and surely not to the point that allows you to verbally abuse everyone in every single fucking thread that you happen to set foot in.
Comment
-
Just to clear up what seems to confuse a lot of people: the cpufreq driver (the old ACPI P-state driver, or the newer CPPC drivers) doesn't actually decide which clocks to use when. It just exposes the range of possible clocks, and that is it. The governors then decide which clocks to use based on the available clocks exposed by the cpufreq driver.
The only difference between the legacy ACPI P-state interface and CPPC is that the former provides 3 discrete clocks while the latter provides a range of clocks between the minimum and maximum supported frequencies. CPPC was also designed around an abstract performance selection mechanism. On Linux, rather than using this directly, it gets converted into a clock-level-like interface for compatibility with existing cpufreq drivers on other platforms.
Arguably there should be a new governor that uses the abstract scale directly, or at least a new governor better tuned for modern interfaces, but writing a new governor and getting everyone to use it is a big task. As a starting point, writing a new governor, or adjusting an existing governor to use just 3 clock levels regardless of how many the cpufreq driver exposes, would give you a better baseline to compare the old ACPI interface and CPPC. Then you could expand the number of levels to see where performance starts to fall off, either due to too much time spent making minor clock adjustments or not enough difference in performance to make it worthwhile.
You'd also want to take voltage into account. There's no reason to select anything less than the max clock supported by the base voltage. CPPC has a provision for this, but I don't know to what extent it's taken into account.
In practice, letting the hardware or firmware manage the clock directly is probably the best approach, because it has direct access to the relevant perf counters as well as power and thermals, and can adjust at a much finer granularity than the OS could. That's why things like EPP work so well. It largely takes the OS out of the picture and relies on the firmware to do the job. This is what we've done on the GPU side for years. It's not always perfect, but it tends to do a better job than letting the OS manage it.
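For anyone wanting to see how this split between driver and governor looks on their own machine, here is a small sketch using the standard cpufreq sysfs interface (the paths are the kernel's documented cpufreq layout; which files exist, e.g. energy_performance_preference, depends on the driver in use, so missing ones print "n/a"):

```shell
# Show which driver exposes the clocks, which governor picks them,
# and the EPP hint (if the driver supports it, e.g. amd-pstate).
cpu=/sys/devices/system/cpu/cpu0/cpufreq

for f in scaling_driver scaling_governor scaling_available_governors \
         scaling_min_freq scaling_max_freq energy_performance_preference; do
    printf '%-34s %s\n' "$f:" "$(cat "$cpu/$f" 2>/dev/null || echo n/a)"
done
```

On an amd-pstate system in active mode you would typically see scaling_driver report amd-pstate-epp, while the legacy path reports acpi-cpufreq with three discrete frequency steps.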
- Likes 6
Comment
-
Originally posted by ffs_ View Post
That's true. Operating at high(er) frequencies doesn't mean that power draw rises proportionally if there is no actual load.
What's the best way to save power then, limiting the max CPU clock speed on battery?
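Two knobs people commonly reach for here, sketched below with the standard cpufreq sysfs paths (whether energy_performance_preference exists depends on the driver; amd-pstate in active mode exposes it, and the 2 GHz cap value is just an example):

```shell
# 1) Bias the hardware toward saving power via EPP (per-policy file):
echo power | sudo tee /sys/devices/system/cpu/cpu*/cpufreq/energy_performance_preference

# 2) Or cap the maximum frequency, e.g. at 2 GHz (the file takes kHz):
echo 2000000 | sudo tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_max_freq
```

If the driver supports EPP, option 1 is usually preferable, since it lets the firmware keep racing to idle instead of hard-limiting the clock.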
Comment
-
Originally posted by gfunk View Post
Learned something new, so if a laptop with a locked 3 GHz clock speed is idle, it's not draining any more than one which is wound down to 400 MHz...
What's the best way to save power then, limiting the max CPU clock speed on battery?
If I'm using powersave + 0.4 GHz, the CPU temperature is lower than using performance + 4.0 GHz, both with CPPC active.
There are several Zen-specific tools (whose names I just forgot) that even use kernel modules to get closer to the metal, and yes, the CPU reports running at lower internal/logical frequencies than the exposed 4.0 GHz, so it's like using x % of the externally reported 4.0 GHz. But still, the temperature is higher idling on performance than idling on powersave.
And when the temps are higher, that tells me there is more power usage (which then gets converted to heat).
So if the power usage were the same, then why aren't the temps?
Is it because of peripherals around the CPU? But IIRC nowadays everything (north bridge, south bridge, ...) has moved into the CPU itself.
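A quick way to check this kind of thing yourself is to compare the frequency the governor requested against the hardware maximum, alongside an idle temperature reading. A sketch, using the standard cpufreq sysfs paths and the first hwmon temperature sensor it finds (on Zen that is typically k10temp, in millidegrees Celsius; zero fallbacks keep it from erroring on other boxes):

```shell
cpu=/sys/devices/system/cpu/cpu0/cpufreq
req=$(cat "$cpu/scaling_cur_freq" 2>/dev/null || echo 0)
max=$(cat "$cpu/cpuinfo_max_freq" 2>/dev/null || echo 0)
# Print the requested clock as a percentage of the hardware maximum.
[ "$max" -gt 0 ] && echo "requested: $req kHz ($((100 * req / max))% of $max kHz)"
# First hwmon temperature sensor, as a rough idle-power proxy.
grep -H . /sys/class/hwmon/hwmon*/temp1_input 2>/dev/null | head -n1
```

Note that scaling_cur_freq is only the requested frequency; the effective clock the core actually runs at (which hardware-managed drivers adjust underneath) is what the Zen-specific tools read from MSRs.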
Comment
-
Originally posted by reba View Post
I've read this more than once but I somehow cannot believe this.
If I'm using powersave + 0.4 GHz, the CPU temperature is lower than using performance + 4.0 GHz, both with CPPC active.
- Likes 1
Comment