Power & Memory Usage Of GNOME, KDE, LXDE & Xfce


  • Originally posted by << ⚛ >> View Post
    But the cache may share some physical memory pages with executables and libraries. Did you know about that?

    So, when you do (TOTAL_USED_MEMORY - MEMORY_USED_BY_CACHE) you might in fact be subtracting away memory belonging to executables and libraries.
    So let's look at the source code and see what files are loaded into RAM.

    Then compile Gnome and KDE under the same conditions with the same settings and compare the sizes of the loaded binary files.

    Then compare them in percentages to each other (because each compile may differ in the size of its binaries...)
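A minimal sketch of the "used minus cache" arithmetic the post above is questioning. The /proc/meminfo field names are the real ones on Linux, but the sample numbers are invented for illustration. The caveat in the post is real: pages counted under "Cached" include file-backed pages of running executables and shared libraries, so subtracting the whole cache can under-count memory the desktop actually needs.

```python
def meminfo_to_kib(text):
    """Parse /proc/meminfo-style text into a dict of kiB values."""
    fields = {}
    for line in text.splitlines():
        key, _, rest = line.partition(":")
        if rest:
            fields[key.strip()] = int(rest.split()[0])
    return fields

# Invented sample figures in the /proc/meminfo format.
SAMPLE = """\
MemTotal:        1003428 kB
MemFree:          523412 kB
Buffers:           31200 kB
Cached:           228816 kB
"""

m = meminfo_to_kib(SAMPLE)
used = m["MemTotal"] - m["MemFree"]                   # everything not free
used_minus_cache = used - m["Buffers"] - m["Cached"]  # the debated figure:
# this subtraction also removes cached pages that back running executables
# and libraries, which is exactly the objection raised in the post.
print(used, used_minus_cache)
```

On a real system you would read the same fields from open("/proc/meminfo").read() instead of the sample string.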

    Comment


    • Monkey intelligence

      Originally posted by V!NCENT View Post

      Originally posted by << ⚛ >> View Post

      Originally posted by BlackStar View Post
      That said, you seem to miss the fact that syslog and similar processes add a constant amount of overhead that affects all measurements. They don't skew results one way or another when comparing DEs (as was done here).
      What is your IQ, my dear?
      I'm sorry, but that rhetorical question, in itself, is extremely stupid and says more about you than about BlackStar...
      Fact 1: BlackStar wrote that adding a constant does not skew results.
      Fact 2: Phoronix uses percentages in the article.
      Fact 3: Adding a constant to a divisor or a dividend skews results.
      Conclusion: BlackStar is wrong.

      Now you are going to reply that an average monkey can learn in two weeks what I just said and can contemplate it on internet forums. OK, simply go on, make a reply about those monkeys.
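Fact 3 above is easy to check numerically. The sketch below uses made-up numbers (not figures from the article) to show that adding the same constant overhead to two measurements changes their ratio, so percentage comparisons are skewed:

```python
# Hypothetical "true" DE memory use in MB (invented for illustration).
kde, gnome = 300.0, 200.0
overhead = 50.0  # constant background processes (syslog etc.)

true_ratio = kde / gnome                                 # "50% more"
measured_ratio = (kde + overhead) / (gnome + overhead)   # "40% more"
print(true_ratio, measured_ratio)
```

The constant shrinks the apparent gap; the larger the shared overhead relative to the DEs themselves, the closer the measured ratio drifts toward 1.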

      Comment


      • Originally posted by << ⚛ >> View Post
        Fact 1: BlackStar wrote that adding a constant does not skew results.
        Fact 2: Phoronix uses percentages in the article.
        Fact 3: Adding a constant to a divisor or a dividend skews results.
        Conclusion: BlackStar is wrong.
        Fact 4: This has nothing to do with IQ
        Fact 5: You just want to win
        Fact 6: You failed

        Comment


        • Non-deterministic compilers?

          Originally posted by V!NCENT View Post
          Then compare them in percentages to each other (because each compile may differ in the size of its binaries...)
          Can you show me an example of a non-deterministic Linux compiler? (Different timestamps, version infos, etc., do not count.)

          Comment


          • Monkey intelligence

            Originally posted by V!NCENT View Post
            Fact 4: This has nothing to do with IQ
            Fact 5: You just want to win
            Fact 6: You failed
            I don't see any monkeys in your reply. Monkeys are essential in making a solid argument, you know ...

            Comment


            • Originally posted by << ⚛ >> View Post
              I don't see any monkeys in your reply.
              That's correct...

              Monkeys are essential in making a solid argument, you know ...
              No, because they are not essential. They can, however, be used to explain solid arguments.

              Which leads me to why BlackStar is winning:
              We are engaging in a social discussion; BlackStar is awesome at trolling, and you failed to grasp that. A social event (quantum physics -> overruling) can't be concluded, because it is not logic but the result of logic and averages.

              I (1) and BlackStar (2) versus you (3).
              (2/3) > (1/3) = you lose.

              Comment


              • Originally posted by << ⚛ >> View Post
                Can you show me an example of a non-deterministic Linux compiler? (Different timestamps, version infos, etc., do not count.)
                Shall we also include spacetime while we are at it? Just use GCC...

                Oh, and the CPU cache is also deterministic, so while we are at it: do you have a non-deterministic compute unit to test it on? Because by your logic your own RAM tests fail as well.

                Comment


                • Originally posted by << ⚛ >> View Post
                  No, it is not clear to me. I agree it is maybe clear to yourself, but if you publish it, subjective clarity is not sufficient.
                  It would only be unclear to someone who hadn't read my previous posts on the subject, somebody exercising a wanton desire to cloud the current discussion with irrelevance, or someone unable to grasp the entirety of what is being put forward.

                  Originally posted by << ⚛ >> View Post
                  "Used memory" is a very broad term.
                  Not for the purposes of this discussion. It's quite well understood from the original article what it is trying to determine. And given the gravity (or mostly lack thereof) of the results on the DE choice of the readers here, some inaccuracies can be tolerated. The biggest problem I had with the original figures was that they were quite far off from what I had personally experienced, so I thought there was likely some large error. As it turns out, there was.

                  Originally posted by << ⚛ >> View Post
                  But the cache may share some physical memory pages with executables and libraries. Did you know about that?
                  If absolute accuracy were required I'd be passing a dump of /proc/meminfo. (No jokes about passing a dump.)
                  Given the size of the error introduced by the 'cached' element relative to the other values, I felt it reasonable to go with my current methodology. Remember what we are trying to determine.

                  Surely it's mostly to inform the readers whether a particular desktop environment has a meaningful impact on the quality of experience for an end user due to excessive memory consumption relative to the other choices available to an individual.

                  Comment


                  • Originally posted by mugginz View Post
                    Surely it's mostly to inform the readers whether a particular desktop environment has a meaningful impact on the quality of experience for an end user due to excessive memory consumption relative to the other choices available to an individual.
                    Is your subconscious also hinting to your conscious mind that we are dealing with an autistic person here? X'D

                    Comment


                    • Originally posted by V!NCENT View Post
                      Is your subconscious also hinting to your conscious mind that we are dealing with an autistic person here? X'D
                      That's one possibility.

                      An inability to grasp the greater goings-on and instead a focus on minutiae to the detriment of the larger argument.

                      (apologies for the rambling sentences.)

                      Comment


                      • Basically it is possible that KDE 4 effects (or Compiz) can expose memory leaks in graphics drivers; in that case the memory usage of Xorg will increase dramatically. Mostly older NVIDIA drivers seemed to be affected, so running KDE 4 on old machines with little memory was much slower than with KDE 3.5. I tested my own live systems now with a very simple test: show memory usage directly after boot using the integrated infobash tool (inside VirtualBox with 980 MB RAM). The figures are rounded and change a bit with each run, but you can see the tendency.

                        KDE 3.5:

                        32-bit: 129 MB
                        64-bit: 212 MB

                        KDE 4.3 (backport for Lenny):

                        32-bit: 179 MB
                        64-bit: 276 MB
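The figures above can be put in the percentage terms suggested earlier in the thread. A small sketch computing the increase from KDE 3.5 to the 4.3 backport (values in MB, taken straight from the post):

```python
# (kde 3.5, kde 4.3) memory use after boot, MB, from the post above.
figures = {
    "32 bit": (129, 179),
    "64 bit": (212, 276),
}

for arch, (old, new) in figures.items():
    pct = 100.0 * (new - old) / old
    print(f"{arch}: +{pct:.1f}%")
```

So the backport costs roughly 30-40% more memory than 3.5 on this setup, with the relative increase a bit larger on 32-bit.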

                        Comment


                        • Originally posted by mugginz View Post
                          That's one possibility.

                          An inability to grasp the greater goings-on and instead a focus on minutiae to the detriment of the larger argument.
                          Which is, in turn, the result of not constantly looping a function in the brain that tries to trace what caused (as a result of logic) the sentences spoken by the other people taking part in the conversation he is part of; instead the person only observes the sentences and tries to correct them, even though they are not part of this 'greater whole' or "larger argument".

                          He or she needs to execute this loop constantly in the future (in software mode, so to speak) so that it becomes part of his or her hardwired functionality (hardware acceleration, so to speak) to get rid of his or her mental shortcomings.

                          Taking part in social activity is much like a parser that translates back and forth between human behavior and logic.

                          Comment


                          • PS: Or... "What the fsck are we doing here?"

                            Comment


                            • Originally posted by V!NCENT View Post
                              Which is, in turn, the result of not constantly looping a function in the brain that tries to trace what caused (as a result of logic) the sentences spoken by the other people taking part in the conversation he is part of; instead the person only observes the sentences and tries to correct them, even though they are not part of this 'greater whole' or "larger argument".

                              He or she needs to execute this loop constantly in the future (in software mode, so to speak) so that it becomes part of his or her hardwired functionality (hardware acceleration, so to speak) to get rid of his or her mental shortcomings.

                              Taking part in social activity is much like a parser that translates back and forth between human behavior and logic.
                              I couldn't have put that better myself!

                              In fact, I don't think I could've done that at all....

                              A post for the hall of fame perhaps.

                              Comment


                              • As far as memory use goes it looks like the GNOME guys are creaming the KDE guys. At least when it comes to the *buntus, anyway.



                                Initial blimpage in memory for Kubuntu 10.04 might be due to the Nepomuk/Strigi indexer auto-starting on login every session.

                                Comment
