Linux 3.1 Kernel Draws More Power With Another Regression


  • #21
    Originally posted by lgstoian View Post
    Not like the 3.0 kernel has good power consumption to begin with.
    No matter how you look at it, power consumption under Linux seems to be pretty much ignored. A second-class citizen.
    I guess it really depends on the laptop. With my SB, with 3.0 I typically get 8-10 hours of battery life doing web browsing; with power-saving options activated it drains about 10 W. I do not have Windows to compare against, but from what I read in reviews that is pretty comparable. That being said, the stock install drains much more power.

    Comment


    • #22
      IBM, Google, RedHat? Where are you?
      Why would they care about laptops? IBM only sells servers, Google only uses servers or mobile phones (platforms where this power regression doesn't come into play), and Red Hat mainly focuses on servers. I think the big question is where Ubuntu is; they should be the ones with the most users affected by this issue right now.

      Comment


      • #23
        Originally posted by lgstoian View Post
        Really another power regression. The kernel just sucks in more and more power with each release, and no one seems to do anything about it. I got a new netbook that came with Windows 7, and the reason I'm sticking with that OS is battery life: I get a whole extra hour. At this rate I'll have better power consumption under Haiku OS.

        Yeah. Sitting at a terminal with most hardware lacking drivers would make for a pretty efficient system, j/k

        Comment


        • #24
          I can see Linus releasing RCs leading up to 3.1 and saying "oh, nothing interesting in this release" and other bullcrap, instead of fixing the regressions, which would actually benefit regular users.

          Comment


          • #25
            Wow, all you people are retarded. This was an Intel issue, and the reason they disabled that by default is that it was still buggy. You're blaming Linus for something that was clearly a smart decision by the Intel DRM devs. They chose stability over power consumption, it's as simple as that. It's the same thing with the ASPM power regression: with it defaulting to enabled, issues arose, so they had to fix them. Yes, it sucks for those who didn't have those stability issues with it enabled, but at least it has a better chance of working for those who did.
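For context, the kernel exposes the current ASPM policy under /sys/module/pcie_aspm/parameters/policy, with the active entry shown in brackets. A minimal sketch for reading it (the sample string below is illustrative, and the sysfs path assumes a kernel built with PCIe ASPM support):

```python
def active_aspm_policy(policy_line: str) -> str:
    """Return the bracketed (active) entry from the kernel's ASPM
    policy file, e.g. 'powersave' out of 'default performance [powersave]'."""
    for token in policy_line.split():
        if token.startswith("[") and token.endswith("]"):
            return token[1:-1]
    raise ValueError("no active policy marked")

# On a live system you would read the sysfs file instead:
#   with open("/sys/module/pcie_aspm/parameters/policy") as f:
#       print(active_aspm_policy(f.read()))
print(active_aspm_policy("default performance [powersave]"))  # powersave
```

Forcing ASPM back on despite the firmware blacklist is possible with the pcie_aspm=force boot parameter, at exactly the stability cost described above.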

            Also, I know for a fact that Michael has said he doesn't report bugs or bisect things because it would take time away from writing articles and gathering the data within them. He rarely digs deeper into the issues by bisecting or by finding and linking to relevant upstream bugs; in fact, he rarely even links to where to find more info, though once in a while he'll include a link to the relevant mailing list. The articles' only links are to his older articles, which is fine, but he spams them so much, without linking offsite or to sources, that it seems he is just trying to get more page views, and thus more ad revenue, rather than trying to actually inform the reader.

            Comment


            • #26
              Originally posted by Med_ View Post
              I guess it really depends on the laptop. With my SB, with 3.0 I typically get 8-10 hours of battery life doing web browsing; with power-saving options activated it drains about 10 W. I do not have Windows to compare against, but from what I read in reviews that is pretty comparable. That being said, the stock install drains much more power.
              What are your system specifics, and how are you determining draw?

              Comment


              • #27
                Stop calling this a regression. Seriously, it is retarded. As you found out by bisecting earlier, the "regression" was introduced by a patch that sets values _according to what the hardware reports_. I would hardly call that a regression. If hardware is faulty, it's not the software's responsibility to second-guess it. So until the hardware has been tested and whitelisted, this is the way it will stay, and that's how it should be; if you think otherwise, like Michael, you are a fool. So stop spewing garbage about regressions when you have no clue what you are talking about.

                Comment


                • #28
                  Originally posted by netrage View Post
                  Stop calling this a regression. Seriously, it is retarded. As you found out by bisecting earlier, the "regression" was introduced by a patch that sets values _according to what the hardware reports_. I would hardly call that a regression. If hardware is faulty, it's not the software's responsibility to second-guess it. So until the hardware has been tested and whitelisted, this is the way it will stay, and that's how it should be; if you think otherwise, like Michael, you are a fool. So stop spewing garbage about regressions when you have no clue what you are talking about.
                  A regression happens when things were working and then they break. In this case, things were "working" precisely because they were broken, and fixing them broke them. Regardless of whether or not the new behavior is correct, it is still a regression.

                  It is just like the situation in the WINE project, where the developers fix code to match Microsoft's documentation and everything breaks because the quirks that Windows programs depended on are no longer there. Those are regressions too.
                  Last edited by Shining Arcanine; 22 August 2011, 10:56 PM.

                  Comment


                  • #29
                    Originally posted by liam View Post
                    What are your system specifics, and how are you determining draw?
                    It is a Dell Latitude E6420. Core [email protected] GHz. Intel graphics. 1600x900 screen. 97 Wh battery. Standard 7200 RPM hard drive. To see the consumption I just use powertop2. When AC is not plugged in, it gives the power drawn from the battery and the estimated lifetime (the latter is a bit exaggerated, though, for some reason). When completely idle it is under 9 W; with some standard browsing, more like 10 W, as long as you do not visit a Flash-infested website that eats your CPU, of course. Note that I activated all the power-saving options I could reasonably find (powertop2 helps a lot with that too). I had freezes once every few days, but they seem to be gone with 3.0 (fingers crossed).
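For anyone wanting to sanity-check powertop's numbers: on battery, the draw can be computed from the battery's sysfs readings. A hedged sketch, assuming a BAT0 that reports voltage_now/current_now in micro-units (paths and units are driver-dependent); it also shows why roughly 10 W squares with 8-10 hours on a 97 Wh battery:

```python
def watts(voltage_uv: int, current_ua: int) -> float:
    """Convert sysfs micro-volt / micro-amp readings to watts."""
    return (voltage_uv / 1e6) * (current_ua / 1e6)

def hours_remaining(capacity_wh: float, draw_w: float) -> float:
    """Naive lifetime estimate: full battery energy / current draw."""
    return capacity_wh / draw_w

# Illustrative values: 11.1 V at ~0.9 A discharge is roughly 10 W.
draw = watts(11_100_000, 900_000)
print(round(draw, 1))                       # 10.0
print(round(hours_remaining(97, draw), 1))  # 9.7 -> consistent with 8-10 h

# On a live system (paths are an assumption, driver-dependent):
#   /sys/class/power_supply/BAT0/voltage_now  (microvolts)
#   /sys/class/power_supply/BAT0/current_now  (microamps)
```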

                    Comment


                    • #30
                      Originally posted by Med_ View Post
                      It is a Dell Latitude E6420. Core [email protected] GHz. Intel graphics. 1600x900 screen. 97 Wh battery. Standard 7200 RPM hard drive. To see the consumption I just use powertop2. When AC is not plugged in, it gives the power drawn from the battery and the estimated lifetime (the latter is a bit exaggerated, though, for some reason). When completely idle it is under 9 W; with some standard browsing, more like 10 W, as long as you do not visit a Flash-infested website that eats your CPU, of course. Note that I activated all the power-saving options I could reasonably find (powertop2 helps a lot with that too). I had freezes once every few days, but they seem to be gone with 3.0 (fingers crossed).
                      Cool, thanks.
                      I use powertop as well, and my system is pretty similar to yours (you didn't mention the screen size, but I'll assume 14"-15").
                      Mine is a T510 with an M620 (i7, 2.66 GHz), 15" screen @ 1600x900, Intel HD graphics, and I'm generally in the 18-20 W range while idle (with FF/Xchat/evince/etc. open). Really odd that I'm running through so much more power.
                      What configuration did you do?

                      Best/Liam

                      Comment
