Last week we reported that the open-source ATI Linux driver had picked up improved power management in the form of dynamic reclocking and power profiles that can be defined by the end-user. With ATI Linux power management finally coming to fruition within the Linux kernel's kernel mode-setting / DRM driver, we have decided to take a close look at how this support works in the real world.
Read last week's news post for more details on the work that has gone into the open-source ATI Linux power management support up to this point, as it has been a long time in the making. To summarize, the patches posted last week by Alex Deucher of AMD clean up the in-kernel ATI Radeon power management support for all ASICs and introduce new power control options that are currently exposed to the end-user through a sysfs interface.
The new dynamic power management option (enabled by writing "dynpm" to the sysfs power_method node) dynamically adjusts the GPU's clocks depending upon the graphics processor's load. On the R500 (Radeon X1000) series and older, load is determined by counting the number of pending GPU fences. A fence signals when the GPU has executed all instructions before it in the FIFO queue; in other words, the more fences that are pending, the more work the GPU still has to finish. The Radeon HD 2000 (R600) series and newer instead have GUI idle interrupt support, which does not work reliably on the R500 and older ASICs.
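Under the patch series' sysfs interface, enabling dynamic reclocking is a one-line write as root. A minimal sketch follows; note that the exact sysfs path below is our assumption for a typical first radeon card, and only the power_method node name and the "dynpm" value come from the patches themselves:

```shell
#!/bin/sh
# Sketch: enable load-based dynamic reclocking on the (assumed) first
# radeon card. Skips gracefully when the node is absent, e.g. when the
# radeon KMS driver is not loaded or the script is not run as root.
set_radeon_pm() {
    node="$1"; value="$2"
    if [ -w "$node" ]; then
        printf '%s\n' "$value" > "$node"
        echo "$(basename "$node") -> $value"
    else
        echo "skipping: $node is not writable"
    fi
}

# Assumed sysfs location; may differ between drm-radeon-testing snapshots.
set_radeon_pm /sys/class/drm/card0/device/power_method dynpm
```

Reading the same node back afterwards confirms which method is active.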
The profile-based clocking mechanism allows forcing the GPU to always run in either a high or low power state, or to run with the clocks that are set by default. The open-source ATI driver power management stack does not yet support dynamic voltage scaling, so the graphics processor's voltage is not lowered when running at a reduced clock speed, but that is certainly coming soon. This new code also supports forcing the GPU into its lowest possible power state when the connected display is turned off via DPMS (Display Power Management Signaling).
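Pinning the clocks to a profile works the same way through sysfs. Here is a minimal sketch assuming a power_profile node alongside power_method; the node name and path are our assumptions, while the "low", "high", and "default" values mirror the profiles described above:

```shell
#!/bin/sh
# Sketch: force a fixed clock profile rather than dynamic reclocking.
# PM_DIR is an assumed sysfs location for the first radeon card.
PM_DIR=/sys/class/drm/card0/device

pick_profile() {
    # $1 is one of: low, high, default
    if [ -w "$PM_DIR/power_profile" ]; then
        printf '%s\n' "$1" > "$PM_DIR/power_profile"
        cat "$PM_DIR/power_profile"   # read back the active profile
    else
        echo "no writable power_profile node (radeon KMS not active, or not root)"
    fi
}

pick_profile low
```

Switching back to "default" restores the clocks the driver would otherwise program at mode-set time.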
All of this latest ATI/AMD power management code can be found in the drm-radeon-testing branch (along with plenty of other Radeon DRM changes) and may work its way into the Linux 2.6.35 kernel in the next couple of months. Based upon our testing of the latest drm-radeon-testing tree and the reports of others, this power management support is not yet perfect: there can be screen artifacts when the GPU is clocked too low, or screen flashes if the GPU re-clocking does not take place cleanly during a vertical blanking period.
For this ATI Linux power management testing we looked at several different areas while running the drm-radeon-testing kernel on a Lenovo ThinkPad T60 notebook with an ATI Mobility Radeon X1400 (R500 class) graphics processor. We monitored a number of metrics while the system was idling, idling with the LVDS panel off via DPMS, playing back video, and gaming: the battery power consumption, CPU temperature, system temperature, GPU frequency, and the GPU fence counter, to get a better idea of how taxed the ATI Mobility Radeon ASIC was under each scenario. When running the games, we also checked how the frame-rate was affected by the different power management options. We tested the default power management support, the low power profile, the high power profile, and the dynamic power management option.
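The sensor polling itself amounts to periodically reading a handful of sysfs and debugfs files while each test runs. A rough sketch of such a sampling loop follows; the radeon debugfs path and the battery path are assumptions that vary by kernel snapshot and machine (the Phoronix Test Suite handles this detection automatically):

```shell
#!/bin/sh
# Sketch: dump each readable sensor file with a header line, silently
# skipping any path that does not exist on this machine.
sample() {
    for f in "$@"; do
        if [ -r "$f" ]; then
            echo "== $f"
            cat "$f"
        fi
    done
}

# e.g. poll once per second for a minute during a test run
# (both paths below are assumptions for this era of kernels):
# for i in $(seq 60); do
#     sample /sys/kernel/debug/dri/0/radeon_pm_info \
#            /sys/class/power_supply/BAT0/power_now
#     sleep 1
# done

sample /proc/loadavg
```

Logging the samples alongside timestamps makes it straightforward to correlate GPU clock changes with power draw and temperature.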
Besides bearing the ATI Mobility Radeon X1400, the ThinkPad T60 also boasts an Intel Core Duo T2400 clocked at 1.83GHz, 1GB of DDR2 memory, an 80GB Hitachi HTS541080G9SA00 SATA HDD, and a 1400 x 1050 LVDS panel. This notebook was running Ubuntu 10.04 LTS with the stock packages (X.Org Server 1.7.6, xf86-video-ati 6.13.0) except for using the drm-radeon-testing kernel built on 2010-05-07. The drm-radeon-testing branch is tracking the Linux 2.6.34 kernel but with various Radeon DRM patches that are currently in-development for future integration into the mainline Linux kernel. All of the testing and sensor monitoring for this article was done automatically via the Phoronix Test Suite.