GNOME-Usage Program Still Striving To Report Per-Program Power Analytics

  • GNOME-Usage Program Still Striving To Report Per-Program Power Analytics

    Phoronix: GNOME-Usage Program Still Striving To Report Per-Program Power Analytics

    Work on reporting system power information within the GNOME-Usage utility started back in 2018 during Google Summer of Code. While some user-interface elements were fleshed out and other engineering completed, the code isn't yet merged or ready for users, as the approach for accomplishing the per-program power reporting is still being devised...

    http://www.phoronix.com/scan.php?pag...age-Power-2020

  • #2
    This is how you actually reduce the carbon footprint of GNOME.

    • #3
      Wouldn't it be easier to at least report resource usage per app/process (cgroup?) instead?

      I mean, if power estimates are difficult or impossible to get accurate due to numerous issues outside their control, while resource monitoring isn't so hard, which type of information is going to be more useful and more likely to correlate with what's actually draining power?
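      For what it's worth, on cgroup v2 the kernel already exposes exactly that per-cgroup accounting under /sys/fs/cgroup, so a monitor could read it directly. A minimal sketch (the helper names and paths are my own illustration, not anything from GNOME-Usage):

```python
from pathlib import Path

def parse_cpu_stat(text: str) -> dict:
    """Parse a cgroup v2 cpu.stat blob ('key value' per line) into ints."""
    stats = {}
    for line in text.splitlines():
        key, _, value = line.partition(" ")
        if value.strip().isdigit():
            stats[key] = int(value.strip())
    return stats

def cgroup_cpu_usage(cgroup: str, root: str = "/sys/fs/cgroup") -> int:
    """Return the cumulative CPU time (microseconds) charged to a cgroup,
    e.g. a per-app systemd scope. Requires a cgroup v2 mount."""
    text = Path(root, cgroup, "cpu.stat").read_text()
    return parse_cpu_stat(text)["usage_usec"]
```

      Sampling usage_usec twice and diffing gives a per-app CPU-time rate, which is exactly the kind of proxy metric suggested here.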

      • #4
        This kind of issue will only come to the fore with increasing rapidity as the computing world goes ARM, from supercomputers to IoT. I'll make this claim right now: by 2030 MORE than 50% of all full-stack Windows-based consumer desktops and laptops will be ARM based. That's NOT including MacOS-based offerings and NOT including Chromebooks.

        The ARM ISA world reached the tipping point this year, 2020, with the announcement of Apple going full ARM, along with Microsoft and Google developing their own ARM CPUs for their upcoming consumer gear. Not to mention that as of right now the world's fastest supercomputer is ARM based.

        The ARM world has from the beginning been concerned with power and the measuring of it by devices that use ARM. Not so much x-86. Nor has the x-86 world been concerned with built-in sensor hubs and exposing them as ARM has. We will look back 10 years from now and say Apple and ARM killed x-86 on the consumer desktop. x-86 is now the 21st Century version of a Mainframe chip.

        • #5
          By the way... the Linux world along with Debian, Ubuntu, GNOME and KDE better wake the hell up right now and initiate a full-on moonshot program to fully engage the ARM ISA platform: write the entire Linux stack from kernel to userspace AND refactor all major Linux DE programs and major 3rd-party apps like LibreOffice to all run natively on ARM, as performant as on x-86 if not better. x-86 should now be considered a deprecated ISA and computing platform for anything other than high-end content creation and HPC/Supercomputing.

          • #6
            Originally posted by Jumbotron View Post
            By the way... the Linux world along with Debian, Ubuntu, GNOME and KDE better wake the hell up right now and initiate a full-on moonshot program to fully engage the ARM ISA platform: write the entire Linux stack from kernel to userspace AND refactor all major Linux DE programs and major 3rd-party apps like LibreOffice to all run natively on ARM, as performant as on x-86 if not better. x-86 should now be considered a deprecated ISA and computing platform for anything other than high-end content creation and HPC/Supercomputing.
            Take a breath there, buddy. ARM might be the future... or not. Linux has run on it since it existed, along with dozens of other architectures. As far as userspace goes, well, we have the source, so it's *mostly* just a matter of recompilation. It might be a big deal for Apple or Microsoft; it's old hat for Linux. No need for moonshot hyperbole. I personally think competition in the CPU architecture department is great... and I wouldn't be so quick to count out x86...

            • #7
              Originally posted by Jumbotron View Post
              By the way... the Linux world along with Debian, Ubuntu, GNOME and KDE better wake the hell up right now and initiate a full-on moonshot program to fully engage the ARM ISA platform: write the entire Linux stack from kernel to userspace AND refactor all major Linux DE programs and major 3rd-party apps like LibreOffice to all run natively on ARM, as performant as on x-86 if not better. x-86 should now be considered a deprecated ISA and computing platform for anything other than high-end content creation and HPC/Supercomputing.
              Take a breath. "Write the entire Linux stack from kernel to userspace AND refactor all major Linux DE programs", roflmao. Do you even understand your own words?

              Free software can be recompiled. 99% of general-purpose free software does not care the slightest bit about the underlying ISA.

              • #8
                You really think it's hyperbole? Quick question: what has been and still is the #1 video game platform in the world? X-Bone... PS<insert # here>? Nope... it's ARM-based phones and tablets. And that's NOT even accounting for the number of Nintendo Switches, which of course run on ARM. Even if you include desktop x-86 game sales along with x-86 console game sales, it STILL doesn't surpass the combined sales of ARM-compiled video games for phones, tablets and the Nintendo Switch.

                Hardly ANYONE can comprehend what Apple is going to unleash when they introduce their desktop A14 ARM CPU, fully unfettered by low-power requirements, next year for their PowerMac desktops. But trust me... Microsoft knows. And as we have seen for the last 40 years, Apple is nothing more than Microsoft's R&D department. They'll let Apple trail-blaze and then merrily copy as best (badly) as they can.

                Once Microsoft turns ARM with a whole line of Microsoft-branded gaming desktops, all-in-ones, laptops AND Surfaces, all with custom Microsoft-designed ARM chips in them, competing directly with old, slow, tired, played-out x-86 Dells, HPs and Lenovos, then that's it for x-86 except for high-end DCC, HPC and supercomputers. This will happen by 2025, guaranteed. Why? Because ARM has introduced the ARM X series, which allows device manufacturers to make high-performance ARM-based products without being saddled with an ARM CPU or GPU designed strictly for low-power use. What Apple chose to do in house, ARM will allow Qualcomm or MediaTek or Microsoft or Google or Samsung to do without having to build an entire custom CPU/GPU design division.

                2020: Apple threw down the gauntlet like they did to Flash. 2025: Microsoft overturns the x-86 hegemony. 2030: ARM is the leading silicon provider for desktops, laptops, phones and tablets. The Linux world needs to dual-channel everything right now. Parallel-track development of everything. The paradigm shift is fully upon us.

                • #9
                  Jumbotron - as others have already pointed out, we don't "need to" "dual channel" anything, and your panic is unfounded even if your position actually turns out to be correct.
                  There is no effort involved for anyone: the work is already either done, or doesn't need doing. (With the exception of a fraction of a percent of a tiny number of performance-critical pieces of code, and anyone who's had to write that sort of thing will have no trouble transitioning. I picked up NEON in a weekend, and that was a decade after I stopped writing asm on a regular basis).

                  • #10
                    Intel had a program back in the mid-to-late '90s called VTune which could measure the cost of a program on system resources.

                    We used to benchmark various application types with the tool to see where the "costs" were. Visual Basic, Delphi, C, didn't matter, we looked at them all (if it ran on Windows).

                    It helped us find inefficiencies in our code so we could work with the developers on where to optimize it.

                    VTune mutated into some other product that cost a bundle and we dropped it, but if they were able to measure process cost per application as far back as the 1990s, I can't see why that can't be translated into a per-application power cost today.
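                    On Linux that translation is at least approachable today: the powercap sysfs interface exposes cumulative RAPL package energy in energy_uj files, and one crude way to get a "per-application power cost" is to apportion the measured package energy by each application's share of CPU time over the sampling interval. A rough sketch of that apportionment idea (my own illustration, not anything GNOME-Usage has committed to; it ignores idle power, the GPU, and I/O wake-ups):

```python
def apportion_energy(package_joules: float, cpu_time_by_app: dict) -> dict:
    """Split measured package energy across apps in proportion to the
    CPU time each consumed over the same sampling interval.
    A crude model: idle/GPU/display power is not attributed."""
    total = sum(cpu_time_by_app.values())
    if total == 0:
        return {app: 0.0 for app in cpu_time_by_app}
    return {app: package_joules * t / total
            for app, t in cpu_time_by_app.items()}

# e.g. 12 J measured over the interval; firefox used 3 s of CPU time,
# gnome-shell 1 s -> firefox is charged 9 J, gnome-shell 3 J
estimate = apportion_energy(12.0, {"firefox": 3.0, "gnome-shell": 1.0})
```

                    The hard part the article alludes to is everything this model leaves out, which is presumably why the approach is still being devised.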
