Making A Code Compiler Energy-Aware

  • #11
    Originally posted by Ericg View Post
    Same thing with games. The game's main logic loop runs forever, forever pegging the CPU. I wish there was a way to tell the CPU (combined with thermal sensors) "Screw the frame rate, never get above xyz degrees in temperature."
    Interesting thing about that - I have made a game (pretty simple, a 2D card game based on Might and Magic Arcomage), which uses OpenGL, but I only update the window when the mouse is moving (or when something else is happening, of course, like animations playing). For a simple 2D game it works really well, and at least the GPU sleeps a lot while playing it. I have yet to implement a framerate limit, though, so it's possible to get some crazy framerates by moving the mouse around in the window very quickly.
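    Roughly, the idea is an event-driven render loop. A minimal sketch of it (GLFW is assumed here purely for illustration; the actual game may do this differently): the loop blocks in glfwWaitEvents() until input or an expose event arrives, so nothing is rendered, and the GPU can sleep, while the screen isn't changing.
    Code:
    #include <GLFW/glfw3.h>

    int main(void)
    {
        if (!glfwInit())
            return 1;
        GLFWwindow *win = glfwCreateWindow(640, 480, "card game", NULL, NULL);
        if (!win) { glfwTerminate(); return 1; }
        glfwMakeContextCurrent(win);

        while (!glfwWindowShouldClose(win)) {
            glfwWaitEvents();   /* block until mouse/keyboard/expose events */
            /* redraw only now that something actually changed */
            glClear(GL_COLOR_BUFFER_BIT);
            glfwSwapBuffers(win);
        }
        glfwTerminate();
        return 0;
    }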

    Originally posted by curaga View Post
    I'm still waiting for -Olinus BTW -O3 that also takes cache size into account.
    I think it's called -march=native. Or -mtune, at least.
    Last edited by GreatEmerald; 16 April 2013, 02:50 PM. Reason: Silly vBulletin thinking that ":P" is bad, ":p" is good



    • #12
      Originally posted by schmidtbag View Post
      Saying "faster=more energy efficient" is very narrow-minded. That is undoubtedly true, but that's only when you look at a single-instance task. For example, if you have a task running 24/7 that does not ever max out the CPU, optimizing the code for speed might in fact use up more power, because the CPU is going all-out on a task that will never end; in other words, you can't "hurry up" an infinite procedure that is already going as fast as it can. As long as the CPU doesn't get maxed out and as long as the program keeps up with it's task, I'm sure reducing instruction sets needed would help increase power efficiency.

      Overall, I'm sure the end result of this is minor. But, other articles on Phoronix have shown that, for example, updates to GPU drivers can both increase performance and power efficiency on the same hardware. Who says a compiler can't do the same?
      This makes no sense at all. If you have a task that's running 24/7 but doesn't require much CPU time (i.e. the CPU is doing other work or sleeping in between), making it faster will simply let the CPU sleep more in between and thus save energy.
      Think about it: what you are saying is that your CPU calculates something faster but actually does the same work in the same time as before. Then it's by definition not faster.
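      To make that concrete, here is a minimal sketch (the 1-second period and the do_work() placeholder are assumptions for illustration): if the compiler makes do_work() finish sooner, the loop simply spends more of every period inside clock_nanosleep(), where the CPU can drop into an idle state.
      Code:
      #define _POSIX_C_SOURCE 200112L
      #include <time.h>

      static void do_work(void) { /* the daemon's actual job goes here */ }

      int main(void)
      {
          struct timespec next;
          clock_gettime(CLOCK_MONOTONIC, &next);
          for (;;) {
              do_work();            /* finishes sooner when better optimized */
              next.tv_sec += 1;     /* wake up once per second               */
              clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
          }
      }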



      • #13
        Originally posted by Goderic View Post
        This makes no sense at all. If you have a task that's running 24/7 but doesn't require much CPU time (i.e. the CPU is doing other work or sleeping in between), making it faster will simply let the CPU sleep more in between and thus save energy.
        Think about it: what you are saying is that your CPU calculates something faster but actually does the same work in the same time as before. Then it's by definition not faster.
        I could be totally wrong on this, but I think what he's saying is: he has a program (like a daemon) that is running through a loop that keeps the CPU awake and not idling, 24/7. I THINK most governors would push the CPU to max frequencies until he killed the program, because it just keeps going forever. And he wants a way to tell the CPU, "I know this program is keeping you awake for the rest of eternity, and you can't idle because of it, but you don't need to run it at max; running at minimum is just fine."

        Such as a program that just spits out "Hello World, the time is $TIME" to stdout for the rest of forever: the CPU wouldn't need to run at max frequencies just because that program is keeping the system under load.
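        The closest thing to that today is capping the frequency by hand through cpufreq. A minimal sketch (needs root; the 800000 kHz value and the cpu0-only scope are placeholder assumptions):
        Code:
        #include <stdio.h>

        int main(void)
        {
            /* clamp cpu0's maximum frequency via the standard cpufreq knob */
            const char *path =
                "/sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq";
            FILE *f = fopen(path, "w");
            if (!f) { perror(path); return 1; }
            fprintf(f, "800000\n");   /* value is in kHz, i.e. 800 MHz here */
            return fclose(f) ? 1 : 0;
        }
        The same file can also be written from a shell as root; repeat for every cpuN.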
        All opinions are my own not those of my employer if you know who they are.



        • #14
          Originally posted by GreatEmerald View Post
          Interesting thing about that - I have made a game (pretty simple, 2D card game based on Might and Magic Arcomage), which is using OpenGL, but I only update the window when the mouse is moving (or something else is happening, of course, like animations are playing). For a simple 2D game it works really well, and at least the GPU sleeps a lot while playing it. I am yet to implement a framerate limit, though, so it's possible to get some crazy framerates by moving the mouse around in the window very quickly.
          For 2D games that works well, but I play a lot of older games that are still 3D-based (Knights of the Old Republic 1 and 2 come to mind) where they peg the GPU to max just because it's a continuous load, even though if the GPU toned down its clocks it would still run just fine without stuttering. But there's no way to TELL the GPU that.
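          The one manual knob that does exist for this on the open drivers is radeon's profile-based power management in sysfs. A minimal sketch, assuming an older radeon card at card0 (other drivers expose different knobs or none at all; needs root):
          Code:
          #include <stdio.h>

          static int write_sysfs(const char *path, const char *value)
          {
              FILE *f = fopen(path, "w");
              if (!f) { perror(path); return -1; }
              fprintf(f, "%s\n", value);
              return fclose(f);
          }

          int main(void)
          {
              /* switch to profile-based power management, then pick "low" */
              write_sysfs("/sys/class/drm/card0/device/power_method", "profile");
              write_sysfs("/sys/class/drm/card0/device/power_profile", "low");
              return 0;
          }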
          All opinions are my own not those of my employer if you know who they are.



          • #15
            Originally posted by Ericg View Post
            For 2D games that works well, but I play a lot of older games that are still 3D-based (Knights of the Old Republic 1 and 2 come to mind) where they peg the GPU to max just because it's a continuous load, even though if the GPU toned down its clocks it would still run just fine without stuttering. But there's no way to TELL the GPU that.
            Doesn't locking video to VSync in the GPU configuration do that for you? I may be thinking Windows here. If WINE doesn't offer this, then it needs to add it.

            On my Windows gaming computer I have the ATI configuration set for triple-buffer and forced VSync on all applications.



            • #16
              Originally posted by Zan Lynx View Post
              Doesn't locking video to VSync in the GPU configuration do that for you? I may be thinking Windows here. If WINE doesn't offer this, then it needs to add it.

              On my Windows gaming computer I have the ATI configuration set for triple-buffer and forced VSync on all applications.
              I can tell it to sync to VSync, but I'm not sure if that actually lowers the clocks to ONLY render at 60 Hz. And for this laptop I am on Windows due to (as I've said in other threads) overheating problems under Linux.
              All opinions are my own not those of my employer if you know who they are.



              • #17
                Originally posted by Ericg View Post
                I can tell it to sync to VSync, but I'm not sure if that actually lowers the clocks to ONLY render at 60 Hz. And for this laptop I am on Windows due to (as I've said in other threads) overheating problems under Linux.
                No, it won't lower the clocks. However, the control panel tool can probably do that. At least, on mine I have options intended to overclock the card, but I can use them to turn it down too. Changing clock speeds may void your warranty, but if your laptop still has a warranty, you should get the overheating fixed.

                What VSync will do is, once the GPU has finished rendering the next frame (or two with triple buffering), stop rendering and wait. This is like the idle loop on a CPU: the clock speed stays up, but the hardware is not doing any work. It is simply idling very quickly, ready to react as soon as more work comes in.
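                The other option is for the game itself to cap its frame rate and actually sleep between frames instead of leaning on the swap wait. A minimal sketch (the 60 FPS target and the render_frame() placeholder are assumptions):
                Code:
                #define _POSIX_C_SOURCE 200112L
                #include <time.h>

                static void render_frame(void) { /* draw + swap buffers here */ }

                int main(void)
                {
                    const long frame_ns = 1000000000L / 60;   /* 60 frames/second */
                    struct timespec next;
                    clock_gettime(CLOCK_MONOTONIC, &next);

                    for (;;) {
                        render_frame();
                        /* advance the deadline by one frame, sleep until then */
                        next.tv_nsec += frame_ns;
                        if (next.tv_nsec >= 1000000000L) {
                            next.tv_nsec -= 1000000000L;
                            next.tv_sec  += 1;
                        }
                        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
                    }
                }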



                • #18
                  Originally posted by Zan Lynx View Post
                  No, it won't lower the clocks. However, the control panel tool can probably do that. At least, on mine I have options intended to overclock the card, but I can use them to turn it down too. Changing clock speeds may void your warranty, but if your laptop still has a warranty, you should get the overheating fixed.
                  It's not a hardware issue =P It's a "Linux isn't working properly" issue, since it doesn't happen under Windows 7 even if I purposefully block the vents, lol
                  All opinions are my own not those of my employer if you know who they are.



                  • #19
                    Originally posted by Ericg View Post
                    I could be totally wrong on this, but I think what he's saying is: he has a program (like a daemon) that is running through a loop that keeps the CPU awake and not idling, 24/7. I THINK most governors would push the CPU to max frequencies until he killed the program, because it just keeps going forever. And he wants a way to tell the CPU, "I know this program is keeping you awake for the rest of eternity, and you can't idle because of it, but you don't need to run it at max; running at minimum is just fine."

                    Such as a program that just spits out "Hello World, the time is $TIME" to stdout for the rest of forever: the CPU wouldn't need to run at max frequencies just because that program is keeping the system under load.
                    That's not something that can be handled at the compiler level; it would have to be part of the OS, or more likely part of the app/daemon itself, to be really useful.
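                    At the app/daemon level there is already a crude version of this: the process can renice itself, and the ondemand governor's ignore_nice_load tunable can be set so that nice'd load no longer pushes the clocks up. A minimal sketch of the daemon side (the work loop is a placeholder):
                    Code:
                    #include <sys/resource.h>
                    #include <unistd.h>

                    int main(void)
                    {
                        /* lowest priority; with ondemand's ignore_nice_load=1 this
                           load no longer drives the governor to higher frequencies */
                        setpriority(PRIO_PROCESS, 0, 19);

                        for (;;) {
                            /* the daemon's endless low-importance work goes here */
                            sleep(1);
                        }
                    }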



                    • #20
                      Originally posted by schmidtbag View Post
                      Saying "faster=more energy efficient" is very narrow-minded. That is undoubtedly true, but that's only when you look at a single-instance task. For example, if you have a task running 24/7 that does not ever max out the CPU, optimizing the code for speed might in fact use up more power, because the CPU is going all-out on a task that will never end; in other words, you can't "hurry up" an infinite procedure that is already going as fast as it can. As long as the CPU doesn't get maxed out and as long as the program keeps up with it's task, I'm sure reducing instruction sets needed would help increase power efficiency.

                      Overall, I'm sure the end result of this is minor. But, other articles on Phoronix have shown that, for example, updates to GPU drivers can both increase performance and power efficiency on the same hardware. Who says a compiler can't do the same?
                      Hey, "max out the CPU" doesn't mean you use 100% of each instruction instead of 50% of each instruction. It means you use 100% of instructions instead of 50% of instructions.
                      If your program is faster, it uses less instructions for the same result, and as such A LOWER PERCENTAGE OF CPU USAGE (as it is calculated as per available instructions, whose number is fixed).

                      Also, if you have a task that:
                      - must run at 100% CPU all the time
                      - it doesn't matter what it does during this time
                      then you can underclock your CPU so that it calculates one instruction per year and sleeps the rest of the time. You should get pretty good efficiency like that.
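                      A back-of-the-envelope version of that whole argument, with made-up wattage numbers purely for illustration: the same job done faster leaves more idle time, and idle power is far below active power, so the faster build wins on energy.
                      Code:
                      #include <stdio.h>

                      int main(void)
                      {
                          const double p_active = 20.0;  /* assumed power while busy (W) */
                          const double p_idle   = 2.0;   /* assumed power while idle (W) */
                          const double period   = 1.0;   /* the job must run once per second */

                          double t_slow = 0.8;           /* unoptimized build: 0.8 s of work */
                          double t_fast = 0.5;           /* optimized build:   0.5 s of work */

                          double e_slow = p_active * t_slow + p_idle * (period - t_slow);
                          double e_fast = p_active * t_fast + p_idle * (period - t_fast);

                          printf("energy per period: slow %.1f J, fast %.1f J\n", e_slow, e_fast);
                          /* prints: slow 16.4 J, fast 11.0 J -- same work, less energy */
                          return 0;
                      }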

