Making A Code Compiler Energy-Aware

  • #21
    Originally posted by erendorn View Post
    Hey, "max out the CPU" doesn't mean you use 100% of each instruction instead of 50% of each instruction. It means you use 100% of instructions instead of 50% of instructions.
    If your program is faster, it uses fewer instructions for the same result, and as such A LOWER PERCENTAGE OF CPU USAGE (as usage is calculated against the available instructions, whose number is fixed).

    Also, if you have a task that:
    - must run on 100% CPU all the time
    - doesn't matter what it does during this time
    you can underclock your CPU so that it calculates one instruction per year, and sleep the rest of the time. You should get a pretty good efficiency like that.
    I thought I added a comment about this exact situation in my original post, but I guess I forgot to put it in. The underclocking is a nice idea but a little inconvenient for an end-user. If you're using a server, then that ends up being a waste of good hardware.

    Comment


    • #22
      Originally posted by Ericg View Post
      I wish there was a way to tell the CPU (combined with thermal sensors) "Screw the frame rate, never get above xyz degrees in temperature."
      It's possible, but using sensors would probably not be a good idea.
      A lot of games already have a frame-rate limit setting that (I assume) adds a small sleep to the main game loop to slow it down. I use it in any game that has it available during the summer because my PC is way too hot.
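
      As a rough sketch of what such a cap amounts to (illustrative only, not taken from any particular game's code): time the frame, then sleep away whatever is left of the frame budget so the CPU idles instead of rendering as fast as it can.

      ```c
      /* Illustrative only: a 60 FPS cap on a generic game loop. Any time left
       * in the frame budget is spent sleeping, so the CPU can drop into an
       * idle state instead of rendering frames nobody will see. */
      #include <time.h>

      static double now_seconds(void)
      {
          struct timespec ts;
          clock_gettime(CLOCK_MONOTONIC, &ts);
          return ts.tv_sec + ts.tv_nsec / 1e9;
      }

      static void update_and_render(void)
      {
          /* one frame's worth of game work goes here */
      }

      int main(void)
      {
          const double frame_budget = 1.0 / 60.0;        /* cap at 60 FPS */
          for (;;) {
              double start = now_seconds();
              update_and_render();
              double spare = frame_budget - (now_seconds() - start);
              if (spare > 0) {
                  struct timespec nap = {
                      .tv_sec  = (time_t)spare,
                      .tv_nsec = (long)((spare - (time_t)spare) * 1e9)
                  };
                  nanosleep(&nap, NULL);                 /* let the CPU idle */
              }
          }
      }
      ```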

      Comment


      • #23
        Originally posted by peppercats View Post
        It's possible, but using sensors would probably not be a good idea.
        A lot of games already have a frame-rate limit setting that (I assume) adds a small sleep to the main game loop to slow it down. I use it in any game that has it available during the summer because my PC is way too hot.
        Example of a game with it, pepper? Because I've never once seen a game with that option available.
        All opinions are my own not those of my employer if you know who they are.

        Comment


        • #24
          Originally posted by Ericg View Post
          Example of a game with it, pepper? Because I've never once seen a game with that option available.
          right off the top of my head, World of Warcraft has a slider to let you set your max FPS.

          Comment


          • #25
            Originally posted by peppercats View Post
            right off the top of my head, World of Warcraft has a slider to let you set your max FPS.
            Yea, same with Unreal (although there it's not a slider, but rather an INI entry).

            Comment


            • #26
              Originally posted by peppercats View Post
              right off the top of my head, World of Warcraft has a slider to let you set your max FPS.
              ...Huh, must be new with Mists. I stopped playing when Mists hit (originally started like 2 months after launch lol...) and I never saw that setting o.O
              All opinions are my own not those of my employer if you know who they are.

              Comment


              • #27
                Originally posted by GreatEmerald View Post
                I think it's called -march=native. Or -mtune, at least.
                See http://lkml.org/lkml/2013/1/26/161
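
                For what it's worth, the practical difference: -march=native lets GCC emit every instruction-set extension the build machine supports (and defines the matching predefined macros), while -mtune=native keeps the baseline instruction set and only tunes scheduling for the host CPU. A tiny sketch of my own to check which one a build actually got:

                ```c
                /* My own illustration, not from the linked thread: build this once with
                 * "gcc -O2 -march=native" and once with "gcc -O2 -mtune=native" on an
                 * AVX-capable machine. Only the -march build defines __AVX__ and may
                 * auto-vectorise the loop with AVX; -mtune only changes instruction
                 * scheduling for the host. */
                #include <stdio.h>

                void scale(float *dst, const float *src, float k, int n)
                {
                    for (int i = 0; i < n; ++i)
                        dst[i] = src[i] * k;   /* candidate for auto-vectorisation */
                }

                int main(void)
                {
                #ifdef __AVX__
                    puts("AVX code generation enabled (e.g. -march=native on an AVX CPU)");
                #else
                    puts("baseline instruction set only");
                #endif
                    return 0;
                }
                ```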

                Comment


                • #28
                  On phones...

                  The CPU isn't the highest power consumer, the display is. When I get the result I'm waiting for, I turn off the screen.

                  Here a faster CPU is a better gain than a slower one with considerably better efficiency, because power use isn't only about the CPU.

                  But with pipelining, continuous wakeup tasks can spend more power on CPU start-up than on doing their actual work. In those cases, an efficient core (slow = ok) seems best.
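
                  A purely illustrative sketch of the batching idea (my own, not from the post above): do the same total work with one wake-up per second rather than ten, so the core gets long idle stretches instead of paying the start-up cost over and over.

                  ```c
                  /* Same total work, far fewer CPU wake-ups: each nanosleep() gives the
                   * core a long stretch in which it can drop into a deep idle state. */
                  #include <time.h>

                  static void do_unit_of_work(void)
                  {
                      /* a small piece of periodic work */
                  }

                  int main(void)
                  {
                      const struct timespec one_second = { .tv_sec = 1, .tv_nsec = 0 };

                      for (;;) {
                          for (int i = 0; i < 10; ++i)    /* batch: 10 units per wake-up   */
                              do_unit_of_work();          /* instead of 1 unit every 100ms */
                          nanosleep(&one_second, NULL);
                      }
                  }
                  ```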

                  Comment


                  • #29
                    I'm still waiting for -Olinus, BTW: -O3 that also takes cache size into account.

                    Comment


                    • #30
                      Hi everyone,

                      I'm a PhD student involved in the ENTRA project from the University of Bristol. I'm also a sysadmin at a big UK technology news website, but we won't talk about that. I've been working on energy modelling of software applications for multi-threaded embedded systems for nearly three years, and so some of my work is relevant to ENTRA, which kicked off in 2012.

                      Ask me anything you like - I'll answer if I can. Pre-emptively, however, a few comments on things already discussed here:

                      1. "Faster = more energy efficient" is of course true, by virtue of being able to do more work. So writing more performance-efficient code will, for the same data set, invariably be more energy efficient.

                      2. Better still, if your idle power is low enough and you can do DVFS, then race-to-idle can sometimes work in your favour. The relationship isn't that straightforward, though. The dynamic energy consumption of a CMOS device is proportional to the voltage squared, so if you need to raise the voltage in order to get a higher frequency, energy consumption can rise quicker than the speed gain you get (there's a rough numerical sketch of this trade-off after point 5 below). The specifics of this behaviour, the sweet spots of voltage/frequency operation, and whether dynamic or static power is dominant are dictated by the process technology's feature size as well as various other fabrication options. For example, standard 45nm may have one typical static/dynamic behaviour, whereas 45nm-LP (low-power) may have a smaller static power, but you're more limited in operating frequency.

                      3. Considering the inefficiencies of things like displays and power supplies is important at a system level. Saving 20% of energy in some IP block within the processor is one thing, but if that block contributes only 3% of the processor energy, which is itself only 10% of the system, you're saving roughly 0.06% of the system's energy - you're not changing the world. That said, I work with embedded systems - they might have a network interface (sometimes), but my systems have no display.

                      4. Compiler optimisation for speed = compiler optimisation for energy is typically true, because of point 1. That's not to say there aren't other optimisations that specifically improve energy with no performance impact (by smoothing out unavoidable slack, for example), but that hasn't typically been the goal of a compiler writer when searching for new optimisation passes.

                      5. ENTRA stands for ENergy TRAnsparency - it's not just about the tools providing you with optimisations; it's about helping programmers understand where the energy is going. There are programmers who care about performance, and they can profile it relatively easily and see where system time is being spent. The energy behaviour of a system in relation to the software somebody is writing is much harder to get a handle on. So one of our motivators is that if we can give people more information on the energy behaviour of their program, they can start to understand how to code with energy efficiency in mind.
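
                      As a back-of-the-envelope illustration of point 2 (all numbers below are invented for the example - the capacitance, voltages and idle power aren't measurements from any real chip), the voltage-squared term can outweigh the time saved by racing:

                      ```c
                      /* Dynamic power modelled as C * V^2 * f, so dynamic energy per cycle is
                       * C * V^2; the race-to-idle case pays a higher V^2 per cycle plus idle
                       * power for the rest of the window. Leakage while busy is ignored to
                       * keep the sketch short. Illustrative numbers only. */
                      #include <stdio.h>

                      int main(void)
                      {
                          const double work   = 1e9;   /* cycles the task needs              */
                          const double window = 1.0;   /* seconds available for the task     */
                          const double c_eff  = 1e-9;  /* effective switched capacitance (F) */

                          /* Scenario A: slow and steady - 1 GHz at 0.9 V, busy all window. */
                          double v_a = 0.9, f_a = 1e9;
                          double e_a = c_eff * v_a * v_a * f_a * (work / f_a);

                          /* Scenario B: race-to-idle - 2 GHz at 1.2 V, then idle at 0.1 W. */
                          double v_b = 1.2, f_b = 2e9, p_idle = 0.1;
                          double busy = work / f_b;
                          double e_b  = c_eff * v_b * v_b * f_b * busy + p_idle * (window - busy);

                          printf("slow and steady: %.2f J\n", e_a);   /* ~0.81 J */
                          printf("race to idle:    %.2f J\n", e_b);   /* ~1.49 J */
                          return 0;
                      }
                      ```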

                      I hope I'm making myself useful.

                      Comment
