In-Kernel Power Management For ATI KMS

  • #21
    Louise;

the KMS drm driver is important in two ways:

1. As you say, it provides the "hook" into all 3D engine use, both from the OpenGL driver (the obvious user of the 3D engine) and the X driver (which uses the 3D engine for EXA and Xv as well). It also provides access to display/modesetting info in the same driver, i.e., it brings all of the required information together in one place.

    2. Some of the registers which control GPIO and I2C lines for reading and writing fan/temp controllers and voltage control are also used by modesetting for reading EDID information, so modesetting and power management need to be in the same driver. Unfortunately that needs to be the drm driver, since changing clocks on the fly requires that the driver also block any use of drawing engines, which can only be done in the drm driver (since direct rendered 3D doesn't go through the userspace X driver).

    DoDoENT;

    The problem with doing dynamic PM in the userspace driver is that the userspace driver can't block drawing calls from 3D. Bad things can happen if the drawing engine is running at the same time you are reprogramming the clock generator for the engine. Doing dynamic PM in the drm means that drawing operations can be temporarily stopped and the drawing engine quiesced before changing the clock.
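The ordering described above can be sketched in a few lines. This is a toy model, not the actual radeon drm code (which is C and considerably more involved); every name here is hypothetical. The point is just the sequence: block new drawing commands, let the engine drain, then touch the clock.

```python
class DrmDevice:
    """Toy model of the quiesce-then-reclock sequence; all names
    here are hypothetical, not the real radeon drm API."""

    def __init__(self):
        self.accepting = True      # gate for new drawing commands
        self.ring = []             # commands queued for the 3D engine
        self.engine_khz = 300000

    def submit(self, cmd):
        # 3D, EXA and Xv submissions would all pass through here
        if not self.accepting:
            raise RuntimeError("submission blocked during reclock")
        self.ring.append(cmd)

    def wait_idle(self):
        self.ring.clear()          # stand-in for draining the ring buffer

    def set_engine_clock(self, khz):
        self.accepting = False     # 1. block further drawing calls
        try:
            self.wait_idle()       # 2. let the drawing engine quiesce
            self.engine_khz = khz  # 3. now safe to reprogram the clocks
        finally:
            self.accepting = True  # 4. resume command submission
```

A userspace driver cannot implement step 1, because direct rendered 3D never passes through it, which is the whole argument for putting this in the drm.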
    Last edited by bridgman; 09-12-2009, 05:04 PM.

    • #22
      Originally posted by bridgman View Post
      Louise;

the KMS drm driver is important in two ways:

1. As you say, it provides the "hook" into all 3D engine use, both from the OpenGL driver (the obvious user of the 3D engine) and the X driver (which uses the 3D engine for EXA and Xv as well). It also provides access to display/modesetting info in the same driver, i.e., it brings all of the required information together in one place.

      2. Some of the registers which control GPIO and I2C lines for reading and writing fan/temp controllers and voltage control are also used by modesetting for reading EDID information, so modesetting and power management need to be in the same driver. Unfortunately that needs to be the drm driver, since changing clocks on the fly requires that the driver also block any use of drawing engines, which can only be done in the drm driver (since direct rendered 3D doesn't go through the userspace X driver).
      Very interesting all the things that KMS can be used for!

      • #23
        Originally posted by bridgman View Post
        DoDoENT;

        The problem with doing dynamic PM in the userspace driver is that the userspace driver can't block drawing calls from 3D. Bad things can happen if the drawing engine is running at the same time you are reprogramming the clock generator for the engine. Doing dynamic PM in the drm means that drawing operations can be temporarily stopped and the drawing engine quiesced before changing the clock.
I see. So the AI in the driver is required after all, and it should make decisions based on user preferences set from userspace (i.e., whether the user wants maximum performance or maximum power saving).

What I have in mind is not dynamic PM from userspace, but static PM instead. The user would request that, from now on, he (or she) wants maximum performance, as they are going to play a game. The PM AI would then make decisions that offer the best performance, but not the best power saving. If the user requests maximum power saving, the PM AI should make decisions that offer the least performance, but the best power saving. Finally, the user would have the option to request smart power saving, which would do what you've described: make smart decisions that optimize the ratio between performance and power saving based on some data it collects. This third part would be the most difficult to make, as it requires relatively complex AI algorithms, but even the first two parts would make a lot of people happy (including me).

Just to make myself clear: I would like to have the famous aticonfig --set-powerstate feature in the radeon driver. When I issued "aticonfig --set-powerstate=1", I got 3 hours of battery life with awful graphics performance (but good enough for some simple jobs), and when I issued "aticonfig --set-powerstate=3", I got less than 1 hour of battery life, but I could play Nexuiz at high detail and high resolution without any problems. This is what I find very useful, and it doesn't look like it requires any complex AI PM algorithms.
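The two static profiles plus a smart mode could be dispatched with logic as simple as this sketch. The profile names, clock values, and the governor callback are all invented for illustration; nothing here is an existing radeon interface.

```python
def apply_profile(profile, min_khz=110000, max_khz=680000, governor=None):
    """Return the engine clock for a user-selected power profile.
    'governor' is a callback standing in for the smart/dynamic mode."""
    if profile == "performance":
        return max_khz             # pin the highest clock; worst battery
    if profile == "powersave":
        return min_khz             # pin the lowest clock; best battery
    if profile == "smart":
        return governor()          # defer to a load-based governor
    raise ValueError("unknown profile: %s" % profile)
```

Only the "smart" branch needs any intelligence; the two static branches are exactly the --set-powerstate behavior described above.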

        • #24
          What do you think about creating something like /sys/class/gpu/ with engine_clock, memory_clock and voltage? Example:
          Code:
          $ cat /sys/class/gpu/engine_clock
          management: auto
          300000 KHz
          
          $ echo maximum > /sys/class/gpu/engine_clock
          $ cat /sys/class/gpu/engine_clock
          management: static
          680000 KHz
          
          $ echo minimum > /sys/class/gpu/engine_clock
          $ cat /sys/class/gpu/engine_clock
          management: static
          110000 KHz
          
          $ echo 50000 > /sys/class/gpu/engine_clock
          $ cat /sys/class/gpu/engine_clock
          management: static
          110000 KHz
          
          $ echo 250000 > /sys/class/gpu/engine_clock
          $ cat /sys/class/gpu/engine_clock
          management: static
          250000 KHz
          
          $ echo auto > /sys/class/gpu/engine_clock
          $ cat /sys/class/gpu/engine_clock
          management: auto
          320000 KHz
          ?
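Assuming the node behaves the way the session above shows (keywords switch the management mode, and out-of-range writes clamp to the hardware limits), the write side could be modeled like this. The path and semantics are this proposal's, not an existing kernel interface, and the limit values are taken from the example output.

```python
def engine_clock_store(value, min_khz=110000, max_khz=680000):
    """Model of a write to the proposed engine_clock attribute.
    Returns (management_mode, clock_khz); in 'auto' mode the clock
    is whatever the driver currently picked, so None stands in."""
    if value == "auto":
        return ("auto", None)      # driver manages the clock itself
    if value == "maximum":
        return ("static", max_khz)
    if value == "minimum":
        return ("static", min_khz)
    khz = int(value)
    # 'echo 50000' in the example reads back as 110000 KHz: requests
    # outside the supported range clamp to the nearest limit
    return ("static", max(min_khz, min(khz, max_khz)))
```

Clamping rather than rejecting the write matches the example, where echoing 50000 silently results in the 110000 KHz minimum.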

          • #25
            Originally posted by Zajec View Post
            What do you think about creating something like /sys/class/gpu/ with engine_clock, memory_clock and voltage? Example:
            Code:
            $ cat /sys/class/gpu/engine_clock
            management: auto
            300000 KHz
            
            $ echo maximum > /sys/class/gpu/engine_clock
            $ cat /sys/class/gpu/engine_clock
            management: static
            680000 KHz
            
            $ echo minimum > /sys/class/gpu/engine_clock
            $ cat /sys/class/gpu/engine_clock
            management: static
            110000 KHz
            
            $ echo 50000 > /sys/class/gpu/engine_clock
            $ cat /sys/class/gpu/engine_clock
            management: static
            110000 KHz
            
            $ echo 250000 > /sys/class/gpu/engine_clock
            $ cat /sys/class/gpu/engine_clock
            management: static
            250000 KHz
            
            $ echo auto > /sys/class/gpu/engine_clock
            $ cat /sys/class/gpu/engine_clock
            management: auto
            320000 KHz
            ?
We would still need a UI for that, like a GNOME applet or Plasma widget.

            • #26
              Originally posted by DoDoENT View Post
I see. So the AI in the driver is required after all, and it should make decisions based on user preferences set from userspace (i.e., whether the user wants maximum performance or maximum power saving).

What I have in mind is not dynamic PM from userspace, but static PM instead. The user would request that, from now on, he (or she) wants maximum performance, as they are going to play a game. The PM AI would then make decisions that offer the best performance, but not the best power saving. If the user requests maximum power saving, the PM AI should make decisions that offer the least performance, but the best power saving. Finally, the user would have the option to request smart power saving, which would do what you've described: make smart decisions that optimize the ratio between performance and power saving based on some data it collects. This third part would be the most difficult to make, as it requires relatively complex AI algorithms, but even the first two parts would make a lot of people happy (including me).

Just to make myself clear: I would like to have the famous aticonfig --set-powerstate feature in the radeon driver. When I issued "aticonfig --set-powerstate=1", I got 3 hours of battery life with awful graphics performance (but good enough for some simple jobs), and when I issued "aticonfig --set-powerstate=3", I got less than 1 hour of battery life, but I could play Nexuiz at high detail and high resolution without any problems. This is what I find very useful, and it doesn't look like it requires any complex AI PM algorithms.
I think we don't need a more complex algorithm for the smart mode than cpufreq has. It is just a bit harder to calculate the load level for a GPU.
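An ondemand-style step in the spirit of cpufreq could look like this sketch. The clock levels and thresholds are invented, and the hard part, as noted, is producing the `load` number for a GPU in the first place.

```python
def governor_step(load, current_khz,
                  levels=(110000, 300000, 680000),
                  up=0.80, down=0.30):
    """One sampling step of a cpufreq-ondemand-style governor:
    raise the clock when load crosses an upper threshold, lower it
    when load falls under a lower one. All values are made up."""
    i = levels.index(current_khz)
    if load > up and i < len(levels) - 1:
        return levels[i + 1]   # busy: step up one level
    if load < down and i > 0:
        return levels[i - 1]   # idle: step down one level
    return current_khz         # in the comfort band: hold
```

The gap between `up` and `down` gives hysteresis, so the clock doesn't oscillate when the load sits near a single threshold.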

              • #27
                Originally posted by suokko View Post
We would still need a UI for that, like a GNOME applet or Plasma widget.
Sure, that should eventually be the last step.

                • #28
                  Originally posted by Zajec View Post
                  What do you think about creating something like /sys/class/gpu/ with engine_clock, memory_clock and voltage? Example:
                  Code:
                  $ cat /sys/class/gpu/engine_clock
                  management: auto
                  300000 KHz
                  
                  $ echo maximum > /sys/class/gpu/engine_clock
                  $ cat /sys/class/gpu/engine_clock
                  management: static
                  680000 KHz
                  
                  $ echo minimum > /sys/class/gpu/engine_clock
                  $ cat /sys/class/gpu/engine_clock
                  management: static
                  110000 KHz
                  
                  $ echo 50000 > /sys/class/gpu/engine_clock
                  $ cat /sys/class/gpu/engine_clock
                  management: static
                  110000 KHz
                  
                  $ echo 250000 > /sys/class/gpu/engine_clock
                  $ cat /sys/class/gpu/engine_clock
                  management: static
                  250000 KHz
                  
                  $ echo auto > /sys/class/gpu/engine_clock
                  $ cat /sys/class/gpu/engine_clock
                  management: auto
                  320000 KHz
                  ?
                  Exactly my idea!

                  • #29
                    Originally posted by suokko View Post
We would still need a UI for that, like a GNOME applet or Plasma widget.
                    That's not difficult. I mean: That's _really_ not difficult.

                    • #30
                      Originally posted by suokko View Post
I think we don't need a more complex algorithm for the smart mode than cpufreq has. It is just a bit harder to calculate the load level for a GPU.
So, what information does the driver have for calculating the GPU load?
