X.Org ATI Driver Supports New Power Options


  • #31
    Good point -- the X2 boards have a bridge chip between the GPU and the bus connector, so changing the number of PCIE lanes on your card might require reprogramming the bridge chip as well.

    You might want to comment out or skip over the code that changes the # of lanes and see if that helps. You could probably just jimmy the switch(lanes) statement so that 4 bits was treated the same way as 16 bits.
    Last edited by bridgman; 04-15-2009, 08:51 PM.



    • #32
      Originally posted by bridgman View Post
      This code only adjusts the engine clock so far, not the memory clock; changing engine speed is pretty safe because you can block acceleration while changing the clock. Changing the memory clock reliably is more difficult. You also need some pretty complex logic to make sure that the memory clock doesn't get too low for your display bandwidth and latency requirements, or you start to get artifacts on the screen very quickly.
      Very interesting indeed!

      Originally posted by bridgman View Post
      Remember that dynamic power management is governed by Schmidt's law -- "if you mess with something enough, it'll break".
      So power management is not really a property designed into the hardware, but just something that is possible because of good-quality components?

      Originally posted by bridgman View Post
      We'll probably try to identify a subset of the information which gives the biggest power savings without being too painful to code for on all the different GPUs and systems. As with 6xx 3D, we will probably release a mix of documentation and code.

      Probably more than 10 pages but not a lot more, although if you count things like display bandwidth and latency calculations the numbers go up quickly.
      Thanks



      • #33
        Looks like the 3D engine is failing to idle. I assume you only have that problem when you enable ForceLowPowerMode? You might try commenting out the RADEONWaitForIdleMMIO(pScrn); lines in radeon_pm.c.



        • #34
          Originally posted by bridgman View Post
          You could probably just jimmy the switch(lanes) statement so that 4 bits was treated the same way as 16 bits.

          It solved the problem. My card is now a bit quieter.

          I'll try agd5f's solution as well.

          EDIT: still get a hard lock with agd5f's solution.
          Last edited by Xipeos; 04-15-2009, 09:09 PM.



          • #35
            Originally posted by Louise View Post
            So power management is not really a hardware designed property, but just something that is possible because of good quality components?
            It's more the fact that power management ties into everything, so it's tricky to get right and still have everything work properly.



            • #36
              Yeah, nobody ever seems to want to transfer *into* the power management team



              • #37
                Originally posted by agd5f View Post
                It's more the fact that power management ties into everything, so it's tricky to get right and still have everything work properly.
                So it is like programming for multiple cores?



                • #38
                  Originally posted by bridgman View Post
                  Yeah, nobody ever seems to want to transfer *into* the power management team
                  That's humour

                  Reminds me of a Dilbert strip where it was made company policy that if something doesn't work, you blame the intern



                  • #39
                    Originally posted by Louise View Post
                    So it is like programming for multiple cores?
                    It's sort of like programming for a constantly and unpredictably varying number of cores



                    • #40
                      Originally posted by bridgman View Post
                      It's sort of like programming for a constantly and unpredictably varying number of cores
                      hahah

                      Reminds me of a time when I didn't know I should check whether I actually got the memory I allocated.



                      • #41
                        Originally posted by Niagra View Post
                        But nevertheless thank you for bringing power management to R700 based cards on linux! This saves nearly 17W for me in regular system operation, since the Radeon 4670 is only actually powersaving when those powerplay features are activated.
                        My 4850 _should_ clock down to 160 MHz GPU and 500 MHz RAM, since I patched the BIOS. But the funny thing is that mine saves exactly 17 W, too -- with Option "ForceLowPowerMode" on, that is. No power saving without options, but I didn't try the other option.

                        Thanks again to the Xorg team and AMD!

                        PS: could you use someone with an R700 but without any clue for beta testing?
                        Or is this already beta testing?



                        • #42
                          That's very awesome news, though I already BIOS-modded my 4850 for low voltages. Good job, agd5f.



                          • #43
                            Excellent news. Power states are the last thing keeping me on fglrx; I couldn't justify the extra battery usage.

                            To everyone involved, thanks for all your hard work!



                            • #44
                              I was wondering, is there some way to see whether power management is working, aside from checking battery life? (HD 3470 on a laptop; I haven't found a way to check the video card temperature.)
                              And most importantly, thanks for implementing power management on the awesome open driver!



                              • #45
                                How do you read out your GPU's temperature when using one of the open-source drivers? aticonfig --od-gettemperature doesn't work without fglrx.

