
A New Patch For Radeon DRM Power Savings


  • cklein
    replied
    Hello,

    Could anybody tell me how I could test this patch (and KMS) without having to get the source code / recompile the whole kernel tree? I suppose I will have to recompile the DDX and libdrm.

    I'm running Ubuntu 9.10 with Linux 2.6.31.

    Thanks.



  • Zajec
    replied
    Originally posted by Michael View Post
    One thing I would really love to see is exporting the current (and even the default/reference) clock frequencies, and even the voltages, to a sysfs attribute, so it's easy to see what the GPU is currently clocked at. That would also fit really well for hooking into the phoronix-test-suite, to monitor and chart the frequencies as tests are run. It could even help you guys make sure the dynamic clocking is aggressive but not too aggressive.
    That's already done.

    Code:
    mount -t debugfs debugfs /debugfs
    cat /debugfs/dri/0/radeon_pm_info
    You may want to use a different mount point; I was told I should not mount debugfs directly under the root directory.
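
    A minimal sketch of pulling the current engine clock out of that file. The sample text below is made up to illustrate the format, not captured from real hardware; on a real system you would read the debugfs file itself (modern kernels mount debugfs at /sys/kernel/debug).

    ```shell
    # Sketch: extract the current engine clock from radeon_pm_info-style output.
    # The sample text is illustrative only; on a real system read
    # /sys/kernel/debug/dri/0/radeon_pm_info instead (root required).
    sample="default engine clock: 680000 kHz
    current engine clock: 110000 kHz
    default memory clock: 800000 kHz
    current memory clock: 800000 kHz"

    echo "$sample" | awk '/current engine clock/ { print $4, $5 }'   # prints: 110000 kHz
    ```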



  • bridgman
    replied
    Originally posted by FunkyRider View Post
    Some rumour says that R600 leaks 30 Amps(!) power even in idle, which is simply fascinating of how they could bring something like this to the market....
    Remember that modern GPUs run at extremely low voltages, so a 30 A idle current corresponds to maybe 40 W. It sounds like a lot more, though, doesn't it?
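
    The arithmetic behind that point, as a quick sketch; the 1.3 V core voltage is an assumed figure for R600-era parts, not something stated in the thread.

    ```shell
    # Back-of-the-envelope: power = voltage * current. At an assumed ~1.3 V
    # core voltage, a 30 A idle leakage current is only about 39 W.
    awk 'BEGIN { printf "%.0f W\n", 1.3 * 30 }'   # prints: 39 W
    ```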



  • Melcar
    replied
    Originally posted by Michael View Post
    One thing I would really love to see is exporting the current (and even the default/reference) clock frequencies, and even the voltages, to a sysfs attribute, so it's easy to see what the GPU is currently clocked at. That would also fit really well for hooking into the phoronix-test-suite, to monitor and chart the frequencies as tests are run. It could even help you guys make sure the dynamic clocking is aggressive but not too aggressive.

    I second this. As an overclocker, I can't stand not being able to at least look at the frequencies for my chips. It just kills me. Temperature readings would be nice as well.



  • Michael
    replied
    Originally posted by Zajec View Post
    In the case of my HD 34x0, engine downclocking already gave quite a nice result. I did not see much difference when downclocking the memory. I'm not sure what voltage changing will bring. Maybe not much difference in temperature, but some gain in battery life?

    I really hope to get engine downclocking committed for 2.6.33-rc1. Hopefully Alex will release the IRQ code before that, so I will be able to enable memory downclocking for 33-rc1 as well.
    One thing I would really love to see is exporting the current (and even the default/reference) clock frequencies, and even the voltages, to a sysfs attribute, so it's easy to see what the GPU is currently clocked at. That would also fit really well for hooking into the phoronix-test-suite, to monitor and chart the frequencies as tests are run. It could even help you guys make sure the dynamic clocking is aggressive but not too aggressive.
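
    A hypothetical sketch of how such an attribute could be polled for charting. The sysfs path below is invented purely for illustration; no such attribute existed when this was written.

    ```shell
    # Hypothetical polling loop for a sysfs clock attribute. The path is an
    # assumption for illustration; radeon exported no such file at the time.
    attr="/sys/class/drm/card0/device/gpu_engine_clock"
    for i in 1 2 3; do
        if [ -r "$attr" ]; then
            cat "$attr"
        else
            echo "attribute not present"
        fi
    done
    ```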



  • FunkyRider
    replied
    For your interest: R600 is built on TSMC's 80HS process node, which is a very, very leaky one. The R600 ASIC behaves differently from most other GPUs on the market:

    It is voltage and (as a result) frequency alone that decide the heat output; it has almost nothing to do with the actual workload. As a result, the R600 GPU itself can clock really high; someone broke the 1 GHz barrier with an R600. The only thing limiting its performance is the enormous amount of heat generated, which, if not dealt with properly, might melt the core down.

    Experiencing this is easy: get a 2900 XT, ramp up the frequency in some Windows overclocking tool (ATITool), and watch the fan blower take off. Then force it to run at the 2D frequency (and thus low voltage) and run some full-screen games; the fan never spins up!

    Some rumours say that R600 leaks 30 amps(!) of current even at idle, which makes it simply fascinating that they could bring something like this to market....



  • monraaf
    replied
    Originally posted by Zajec View Post
    Nothing that sounds hard to implement, if you just let Alex release the code.
    I hope you can pull it off. From what I've read, AMD had some problems with downclocking GDDR5 memory in the past:

    Even though ATI has implemented a 2D/3D clock switching model, the power consumption in idle is still quite high. One reason for that is that only the core frequency is reduced, while memory keeps running at full speed for the whole time. During testing I noticed that any memory frequency change will make the screen display flicker, which is probably the reason why AMD chose not to allow dynamic memory clock changes. I am surprised that AMD has not fixed this problem in their second GDDR5-capable GPU.
    Today AMD released the world's first GPU produced on a 40 nm process. The HD 4770 is aggressively priced at around $100 and offers great performance for your hard-earned dollars. In our testing it turned out that the card performs almost on par with the HD 4850. With the amazing 30%+ memory overclock we got on our sample, the HD 4850 is easily surpassed.



  • bridgman
    replied
    Yep. Some of the review folks were away this week but will be back Monday; hopefully we can get the IRQ IP released before all the US folks take off for Thanksgiving.



  • Zajec
    replied
    Originally posted by bridgman View Post
    the code to drop memory clocks is tricky because it needs to be sync'ed with display refresh to avoid glitching the display.
    Nothing that sounds hard to implement, if you just let Alex release the code.



  • bridgman
    replied
    Not sure about the formula (power tends to follow frequency squared as well) but voltage definitely makes a difference. Memory clock usually needs to be dropped before you can reduce the voltage, and the code to drop memory clocks is tricky because it needs to be sync'ed with display refresh to avoid glitching the display.
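
    For context, the commonly cited CMOS dynamic-power relation is P = C * V^2 * f (quadratic in voltage, linear in frequency), which is why dropping the voltage, and the memory clock that gates it, pays off so well. A quick illustrative comparison, with made-up round numbers rather than real R600 figures:

    ```shell
    # Illustrative dynamic-power comparison using P = C * V^2 * f. The switched
    # capacitance, voltages, and clocks are invented round numbers.
    awk 'BEGIN {
        C = 30e-9                                        # farads, illustrative
        printf "3D: %.1f W\n", C * 1.30 * 1.30 * 750e6   # 1.30 V at 750 MHz
        printf "2D: %.1f W\n", C * 1.00 * 1.00 * 250e6   # 1.00 V at 250 MHz
    }'
    # prints: 3D: 38.0 W
    #         2D: 7.5 W
    ```

    The frequency drop alone would cut power by 3x; the simultaneous voltage drop is what pushes the reduction to roughly 5x.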

