Making A Code Compiler Energy-Aware


  • Making A Code Compiler Energy-Aware

    Phoronix: Making A Code Compiler Energy-Aware

    There's a discussion on the LLVM development mailing list about making the compiler become energy-aware to provide an optimization level that would provide the most power-efficient binaries. However, it isn't clear whether this would make sense over simply trying to assemble the fastest binary...

    http://www.phoronix.com/vr.php?view=MTM1MzE

  • #2
    Couldn't agree more that "faster = more energy efficient" holds on all of today's, and a lot of yesterday's, hardware.

    • #3
      Saying "faster=more energy efficient" is very narrow-minded. That is undoubtedly true, but that's only when you look at a single-instance task. For example, if you have a task running 24/7 that does not ever max out the CPU, optimizing the code for speed might in fact use up more power, because the CPU is going all-out on a task that will never end; in other words, you can't "hurry up" an infinite procedure that is already going as fast as it can. As long as the CPU doesn't get maxed out and as long as the program keeps up with it's task, I'm sure reducing instruction sets needed would help increase power efficiency.

      Overall, I'm sure the end result of this will be minor. But other articles on Phoronix have shown that, for example, updates to GPU drivers can both increase performance and power efficiency on the same hardware. Who says a compiler can't do the same?

      • #4
        I'm afraid there's not much to be gained this way.
        There are greater gains to be had from teaching people to program efficiently.

        • #5
          Originally posted by schmidtbag View Post
          Saying "faster=more energy efficient" is very narrow-minded. That is undoubtedly true, but that's only when you look at a single-instance task. For example, if you have a task running 24/7 that does not ever max out the CPU, optimizing the code for speed might in fact use up more power, because the CPU is going all-out on a task that will never end; in other words, you can't "hurry up" an infinite procedure that is already going as fast as it can. As long as the CPU doesn't get maxed out and as long as the program keeps up with it's task, I'm sure reducing instruction sets needed would help increase power efficiency.

          Overall, I'm sure the end result of this is minor. But, other articles on Phoronix have shown that, for example, updates to GPU drivers can both increase performance and power efficiency on the same hardware. Who says a compiler can't do the same?
          Same thing with games. The game's main logic loop runs forever, forever pegging the CPU. I wish there was a way to tell the CPU (combined with thermal sensors) "Screw the frame rate, never get above xyz degrees in temperature."

          • #6
            Originally posted by plonoma View Post
            I'm afraid there's not much to be gained this way.
            There are greater gains to be had from teaching people to program efficiently.
            Good luck with that...
            Seeing the world move to the cloud and web browsers, I don't really see that happening for client side applications.

            Same thing with games. The game's main logic loop runs forever, forever pegging the CPU. I wish there was a way to tell the CPU (combined with thermal sensors) "Screw the frame rate, never get above xyz degrees in temperature."
            This is the only reason I'm looking for an alternative to xbmc... For now, I'm using an ugly combination of STOP/CONT signals.

            Serafean
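
            For reference, the STOP/CONT duty cycling Serafean mentions can be sketched roughly as below. This is a hypothetical standalone throttler, not his actual setup; it assumes a Linux/POSIX system, a target PID passed on the command line, and an arbitrary 50% duty cycle:

            /* throttle.c - crude duty-cycle throttling of another process via
             * SIGSTOP/SIGCONT. Hypothetical sketch; usage: ./throttle <pid> */
            #include <signal.h>
            #include <stdio.h>
            #include <stdlib.h>
            #include <sys/types.h>
            #include <time.h>

            int main(int argc, char **argv)
            {
                if (argc != 2) {
                    fprintf(stderr, "usage: %s <pid>\n", argv[0]);
                    return 1;
                }
                pid_t pid = (pid_t)atoi(argv[1]);
                struct timespec half = { 0, 500 * 1000 * 1000 }; /* 500 ms */

                for (;;) {
                    kill(pid, SIGSTOP);     /* pause the target...          */
                    nanosleep(&half, NULL);
                    kill(pid, SIGCONT);     /* ...then let it run again,    */
                    nanosleep(&half, NULL); /* giving a ~50% CPU duty cycle */
                }
            }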

            • #7
              Originally posted by Serafean View Post
              This is the only reason I'm looking for an alternative to xbmc... For now, I'm using an ugly combination of STOP/CONT signals.
              XBMC has mobile versions, correct? Such crap coding would not stand there. That means it's not a design issue in XBMC, merely bad coding in the Linux port -> something you can fix, or pay someone to fix.

              • #8
                I'm still waiting for -Olinus. BTW, an -O3 that also takes cache size into account would be nice.

                • #9
                  Originally posted by Ericg View Post
                  Same thing with games. The game's main logic loop runs forever, forever pegging the CPU. I wish there was a way to tell the CPU (combined with thermal sensors) "Screw the frame rate, never get above xyz degrees in temperature."
                  That is a good point, although I suppose the easiest way to do it is for the game to detect its own frame rate and realize it doesn't need to go beyond the refresh rate of the monitor. Detecting temperatures would be too much of a headache, and besides, some systems run at 70°C idle; that could make a game run at 1 FPS when it could otherwise be beyond 100.
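
                  A frame-rate cap of that kind is essentially a sleep in the render loop. A minimal sketch, assuming a hypothetical 60 Hz target and plain POSIX timing (render_frame() is a stand-in for the game's real work):

                  /* Cap a render loop at ~60 FPS by sleeping away the leftover frame
                   * time, so the CPU can idle instead of spinning flat out. */
                  #include <time.h>

                  #define TARGET_FPS 60
                  #define FRAME_NS   (1000000000L / TARGET_FPS)

                  static void render_frame(void) { /* game update + draw would go here */ }

                  int main(void)
                  {
                      struct timespec start, end;
                      for (;;) {
                          clock_gettime(CLOCK_MONOTONIC, &start);
                          render_frame();
                          clock_gettime(CLOCK_MONOTONIC, &end);

                          long spent = (end.tv_sec - start.tv_sec) * 1000000000L
                                     + (end.tv_nsec - start.tv_nsec);
                          if (spent < FRAME_NS) {
                              struct timespec rest = { 0, FRAME_NS - spent };
                              nanosleep(&rest, NULL); /* idle for the rest of the frame */
                          }
                      }
                  }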

                  • #10
                    Originally posted by Ericg View Post
                    Same thing with games. The game's main logic loop runs forever, forever pegging the CPU. I wish there was a way to tell the CPU (combined with thermal sensors) "Screw the frame rate, never get above xyz degrees in temperature."
                    There is and it is called underclocking. You can tune it to temperature by running something like Prime95 and adjusting the clock until the thermals are in the right place.

                    My opinion is that if your hardware is overheating then you didn't put enough cooling on it. It is never the program's fault because it is actually using the available hardware. If you have a six core CPU clocked at 4 GHz there should not be any problem running six computation threads at 100% CPU. If there is, then you need to slow it down to 3.6 GHz or give it a bigger heatsink.

                    It always annoys me when I read people complaining about some game making their system shut down. It's never the complainer's fault, oh no, because it runs World of Warcraft just fine, so their system must be perfect. As if WoW were the ultimate game.

                    • #11
                      Originally posted by Ericg View Post
                      Same thing with games. The game's main logic loop runs forever, forever pegging the CPU. I wish there was a way to tell the CPU (combined with thermal sensors) "Screw the frame rate, never get above xyz degrees in temperature."
                      Interesting thing about that - I have made a game (a pretty simple 2D card game based on Might and Magic's Arcomage) which uses OpenGL, but I only update the window when the mouse is moving (or when something else is happening, of course, like animations playing). For a simple 2D game it works really well, and at least the GPU sleeps a lot while playing it. I have yet to implement a framerate limit, though, so it's possible to get some crazy framerates by moving the mouse around in the window very quickly. (A rough sketch of this event-driven approach follows at the end of this post.)

                      Originally posted by curaga View Post
                      I'm still waiting for -Olinus. BTW, an -O3 that also takes cache size into account would be nice.
                      I think it's called -march=native. Or -mtune, at least.
                      Last edited by GreatEmerald; 04-16-2013, 02:50 PM. Reason: Silly vBulletin thinking that ":P" is bad, ":p" is good
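
                      The redraw-only-on-input approach described above is easy to sketch with an event loop that blocks instead of polling. This is a rough illustration assuming SDL2; the window title, size, and the stubbed-out redraw() are placeholders, not GreatEmerald's actual code:

                      /* Event-driven redraw: block in SDL_WaitEvent() so the CPU and GPU
                       * can idle whenever nothing is happening. Hypothetical sketch. */
                      #include <SDL2/SDL.h>

                      static void redraw(SDL_Window *win) { /* draw scene, swap buffers, etc. */ (void)win; }

                      int main(void)
                      {
                          SDL_Init(SDL_INIT_VIDEO);
                          SDL_Window *win = SDL_CreateWindow("card game", SDL_WINDOWPOS_UNDEFINED,
                                                             SDL_WINDOWPOS_UNDEFINED, 800, 600, 0);
                          SDL_Event ev;
                          int running = 1;
                          while (running && SDL_WaitEvent(&ev)) {     /* blocks until there is input */
                              if (ev.type == SDL_QUIT)
                                  running = 0;
                              else if (ev.type == SDL_MOUSEMOTION || ev.type == SDL_WINDOWEVENT)
                                  redraw(win);                        /* only render when something changed */
                          }
                          SDL_DestroyWindow(win);
                          SDL_Quit();
                          return 0;
                      }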

                      • #12
                        Originally posted by schmidtbag View Post
                        Saying "faster=more energy efficient" is very narrow-minded. That is undoubtedly true, but that's only when you look at a single-instance task. For example, if you have a task running 24/7 that does not ever max out the CPU, optimizing the code for speed might in fact use up more power, because the CPU is going all-out on a task that will never end; in other words, you can't "hurry up" an infinite procedure that is already going as fast as it can. As long as the CPU doesn't get maxed out and as long as the program keeps up with it's task, I'm sure reducing instruction sets needed would help increase power efficiency.

                        Overall, I'm sure the end result of this is minor. But, other articles on Phoronix have shown that, for example, updates to GPU drivers can both increase performance and power efficiency on the same hardware. Who says a compiler can't do the same?
                        This makes no sense at all. If you have a task that's running 24/7 but doesn't require much CPU time (i.e. the CPU is doing other work or sleeping in between), making it faster will simply let the CPU sleep more in between and thus save energy.
                        Think about it: what you are saying is that your CPU calculates something faster but actually does the same work in the same time as before. Then it's by definition not faster.
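
                        To put numbers on that race-to-idle argument (purely hypothetical figures): suppose a job recurs every 10 s, idle power is 2 W, and the fast binary needs 2 s at 20 W while the slow one needs 4 s at 15 W. Then:

                        E_fast = 2 s x 20 W + 8 s x 2 W = 56 J
                        E_slow = 4 s x 15 W + 6 s x 2 W = 72 J

                        Even though the fast binary draws more power while active, it finishes sooner, and the longer idle stretch more than makes up for it.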

                        • #13
                          Originally posted by Goderic View Post
                          This makes no sense at all. If you have a task that's running 24/7 but doesn't require much CPU time (i.e. the CPU is doing other work or sleeping in between), making it faster will simply let the CPU sleep more in between and thus save energy.
                          Think about it: what you are saying is that your CPU calculates something faster but actually does the same work in the same time as before. Then it's by definition not faster.
                          I could be totally wrong on this but I think what he's saying is... He has a program (like a daemon) that is running through a loop that will keep the CPU awake and not idling 24/7. I THINK most governors would push the CPU to max frequencies until he killed the program, because it just keeps going forever. And he wants a way to tell the CPU "I know this program is keeping you awake for the rest of eternity, and you can't idle because of it, but you don't need to run it at max; running at minimum is just fine."

                          Take a program that just spits out "Hello World, the time is $TIME" to stdout forever: the CPU wouldn't need to run at max frequency just because that program is keeping the system under load.
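
                          On Linux a knob like that does exist at the cpufreq level: with the traditional acpi-cpufreq driver, forcing the "powersave" governor locks the cores at their minimum frequency regardless of load (behaviour differs under intel_pstate). A minimal sketch, assuming the sysfs cpufreq interface and root privileges:

                          /* Force every CPU's cpufreq governor to "powersave" so a
                           * never-ending, low-importance workload doesn't hold the
                           * clocks at maximum. Sketch only; needs root. */
                          #include <stdio.h>

                          int main(void)
                          {
                              char path[128];
                              for (int cpu = 0; ; cpu++) {
                                  snprintf(path, sizeof path,
                                           "/sys/devices/system/cpu/cpu%d/cpufreq/scaling_governor", cpu);
                                  FILE *f = fopen(path, "w");
                                  if (!f)
                                      break;               /* no such CPU: we're done */
                                  fputs("powersave\n", f);
                                  fclose(f);
                              }
                              return 0;
                          }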

                          • #14
                            Originally posted by GreatEmerald View Post
                            Interesting thing about that - I have made a game (a pretty simple 2D card game based on Might and Magic's Arcomage) which uses OpenGL, but I only update the window when the mouse is moving (or when something else is happening, of course, like animations playing). For a simple 2D game it works really well, and at least the GPU sleeps a lot while playing it. I have yet to implement a framerate limit, though, so it's possible to get some crazy framerates by moving the mouse around in the window very quickly.
                            For 2D games that works well, but I play a lot of older games that are still 3D based (Knights of the Old Republic 1 and 2 come to mind) where they peg the GPU to max just because it's a continuous load, even though the game would still run just fine without stuttering if the GPU toned down its clocks. But there's no way to TELL the GPU that.

                            • #15
                              Originally posted by Ericg View Post
                              For 2D games that works well, but I play a lot of older games that are still 3D based (Knights of the Old Republic 1 and 2 come to mind) where they peg the GPU to max just because it's a continuous load, even though the game would still run just fine without stuttering if the GPU toned down its clocks. But there's no way to TELL the GPU that.
                              Doesn't locking video to VSync in the GPU configuration do that for you? I may be thinking Windows here. If WINE doesn't offer this, then it needs to add it.

                              On my Windows gaming computer I have the ATI configuration set for triple-buffer and forced VSync on all applications.
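
                              From the application side, the equivalent of that forced-VSync driver setting is a one-line swap-interval request. A small sketch, assuming an SDL2/OpenGL program that already has a current GL context (this is the per-application version of the control-panel setting mentioned above):

                              /* Ask the driver to sync buffer swaps to the monitor's refresh
                               * rate, so the render loop blocks in the swap call instead of
                               * spinning flat out. Sketch only. */
                              #include <SDL2/SDL.h>

                              void enable_vsync(void)
                              {
                                  if (SDL_GL_SetSwapInterval(1) != 0)   /* 1 = sync to vertical retrace */
                                      SDL_Log("VSync not supported: %s", SDL_GetError());
                              }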
