Fedora 7 to 10 Benchmarks

  • #21
    The source of the problem is quite obvious to anyone looking at the benchmarks:

    The person doing the test screwed up.

    The tests all need to be run again. You need to stop using erroneous, spurious results until you can explain -exactly- what happened. Not guess, not conjecture, and certainly not blind acceptance.

    Do the tests properly. Stop using the obviously faulty results. Stop. Really. Stop embarrassing the community you supposedly represent by showing that new distributions are half as fast as older ones.

    It's a problem seen throughout the OSS community: Other than a handful of major projects, the kids don't want to finish their work. They don't want to debug, they don't want to write documentation, or they don't want to do proper benchmarking.



    • #22
      Michael, did you update your BIOS to the most recent version?

      I've had problems with Lenovos (the C2D versions) running XP & Vista; the problem seemed to have been solved with later BIOS versions. The problem I had was that while the OS would report both cores running at full load, only one core was actually doing the work.



      • #23
        Originally posted by Rendus View Post
        The source of the problem is quite obvious to anyone looking at the benchmarks:

        The person doing the test screwed up.

        The tests all need to be run again. You need to stop using erroneous, spurious results until you can explain -exactly- what happened. Not guess, not conjecture, and certainly not blind acceptance.

        Do the tests properly. Stop using the obviously faulty results. Stop. Really. Stop embarrassing the community you supposedly represent by showing that new distributions are half as fast as older ones.

        It's a problem seen throughout the OSS community: Other than a handful of major projects, the kids don't want to finish their work. They don't want to debug, they don't want to write documentation, or they don't want to do proper benchmarking.
        That's completely the wrong attitude to take -- it amounts to:
        "If you can't explain it, then it doesn't exist."

        Just as most users who file bugs don't fix or debug them, it's not Michael's job to do so either. He just has to deliver a reproducible test case.

        Now, I'm open as to the cause -- maybe it's his config or his test suite somehow -- but it's unfair to either blindly accept or blindly deny his results. What matters at this point is to try this comparison on other hardware and to have other users also try to reproduce it.



        • #24
          Originally posted by daveerickson View Post
          Per this link:
          http://www.ubuntu.com/getubuntu/releasenotes/704tour
          Gnome was version 2.18.
          Whoops, you got me there. It was not 1.8 but 2.18. Anyway, GNOME was MUCH faster back then compared to what it is now. Seriously.



          • #25
            Originally posted by Rendus View Post
            The source of the problem is quite obvious to anyone looking at the benchmarks:

            The person doing the test screwed up.

            The tests all need to be run again. You need to stop using erroneous, spurious results until you can explain -exactly- what happened. Not guess, not conjecture, and certainly not blind acceptance.

            Do the tests properly. Stop using the obviously faulty results. Stop. Really. Stop embarrassing the community you supposedly represent by showing that new distributions are half as fast as older ones.

            It's a problem seen throughout the OSS community: Other than a handful of major projects, the kids don't want to finish their work. They don't want to debug, they don't want to write documentation, or they don't want to do proper benchmarking.

            You're wrong - Michael is doing a great job. If you're so sure that he screwed up, please tell us where and why he did - as long as you don't have such answers, you're only contradicting yourself.



            • #26
              Very interesting, but I don't believe the Ubuntu 7.04 results!

              I see no reason why the RAM sequential read bandwidth should nearly double simply by changing distribution. Such a test is neither OS dependent nor compiler dependent (even unoptimized code should easily reach the maximum memory bandwidth). Therefore I suspect something wrong, and 7.04-specific, in the time measurements....
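A quick userspace sanity check makes the point concrete. This is a minimal Python sketch (not the benchmark the article used, and interpreter overhead makes the numbers only ballpark), timing a bulk copy of a large buffer:

```python
import time

def estimate_read_bandwidth(size_mb=64, repeats=3):
    """Very rough sequential-read bandwidth estimate: time a bulk
    copy of a large buffer (bytes(buf) reads the whole buffer once)."""
    buf = bytearray(size_mb * 1024 * 1024)
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        _ = bytes(buf)  # essentially one full sequential read (memcpy)
        best = min(best, time.perf_counter() - start)
    return size_mb / best  # MB/s, counting the read side only

if __name__ == "__main__":
    print(f"~{estimate_read_bandwidth():.0f} MB/s sequential read")
```

Any Core 2 Duo era machine should report on the order of thousands of MB/s; a figure that doubles between distributions on identical hardware points at the measurement, not the memory.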



              • #27
                Originally posted by urfe View Post
                You're wrong - Michael is doing a great job. If you're so sure that he screwed up, please tell us where and why he did - as long as you don't have such answers, you're only contradicting yourself.
                There is more than enough proof (see also this other thread: http://www.phoronix.com/forums/showt...t=13486&page=6) that the results for all distributions except Ubuntu 7.04 must be wrong. The numbers are far too low for a notebook of this type.

                1. If a user with the same type of notebook (Lenovo T60) gets results for Ubuntu 6.06, Ubuntu 8.04 and Ubuntu 8.10 that are twice as fast as the ones from the Phoronix test, and these results are nearly the same as the ones for Ubuntu 7.04 from the Phoronix test, there must be something wrong.

                2. If users with slower hardware (for example, my 1GHz P3 notebook) get better or nearly the same results for Ubuntu 8.10 in the audio-encoding tests as the T60 from the Phoronix test, then the results can't be right.

                We can't say what went wrong, but there is enough proof that something went wrong.

                P.S.:

                http://en.wikipedia.org/wiki/Falsifiability



                • #28
                  Originally posted by gctt View Post
                  Very interesting, but I don't believe the Ubuntu 7.04 results!

                  I see no reason why the RAM sequential read bandwidth should nearly double simply by changing distribution. Such a test is neither OS dependent nor compiler dependent (even unoptimized code should easily reach the maximum memory bandwidth). Therefore I suspect something wrong, and 7.04-specific, in the time measurements....
                  Hi everyone,

                  I agree; there is no distribution-specific reason why a RAM bandwidth benchmark should differ so significantly.

                  A possible reason for the 2x difference in RAM throughput (and other benchmarks): the CPU was underclocked (via SpeedStep), maybe because the laptop was running on battery, or because of a different scaling governor (or both). This is the most likely explanation in my opinion. Is there a way you can check this in your results, Michael?
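For anyone who wants to check this on their own machine: the governor and the current/maximum frequency are exposed through the standard Linux cpufreq sysfs interface. A small Python sketch (assumes a kernel with cpufreq support; it simply reports nothing if the interface is absent, as in some VMs):

```python
from pathlib import Path

def cpufreq_status():
    """Read scaling governor and current/max frequency for each CPU
    from the standard cpufreq sysfs interface; returns [] if absent."""
    report = []
    cpus = Path("/sys/devices/system/cpu").glob("cpu[0-9]*")
    for cpu in sorted(cpus, key=lambda p: int(p.name[3:])):
        freq = cpu / "cpufreq"
        if not freq.is_dir():
            continue
        try:
            gov = (freq / "scaling_governor").read_text().strip()
            cur = int((freq / "scaling_cur_freq").read_text())
            top = int((freq / "scaling_max_freq").read_text())
        except OSError:
            continue  # attribute unreadable (e.g. restricted container)
        report.append((cpu.name, gov, cur, top))
    return report

if __name__ == "__main__":
    for name, gov, cur, top in cpufreq_status():
        flag = "  <-- well below max!" if cur < 0.9 * top else ""
        print(f"{name}: {gov}, {cur // 1000}/{top // 1000} MHz{flag}")
```

If the current frequency sits far below the maximum while a benchmark is running, SpeedStep (or the chosen governor) is the likely culprit.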

                  Another reason could be a change in memory configuration, i.e. one memory channel being turned off.

                  An unused core (i.e. 1 core instead of 2) doesn't seem a valid explanation; the RAM benchmark is single-threaded, isn't it?



                  • #29
                    Originally posted by glasen View Post
                    There is more than enough proof (see also this other thread: http://www.phoronix.com/forums/showt...t=13486&page=6) that the results for all distributions except Ubuntu 7.04 must be wrong. The numbers are far too low for a notebook of this type.

                    1. If a user with the same type of notebook (Lenovo T60) gets results for Ubuntu 6.06, Ubuntu 8.04 and Ubuntu 8.10 that are twice as fast as the ones from the Phoronix test, and these results are nearly the same as the ones for Ubuntu 7.04 from the Phoronix test, there must be something wrong.

                    2. If users with slower hardware (for example, my 1GHz P3 notebook) get better or nearly the same results for Ubuntu 8.10 in the audio-encoding tests as the T60 from the Phoronix test, then the results can't be right.

                    We can't say what went wrong, but there is enough proof that something went wrong.

                    P.S.:

                    http://en.wikipedia.org/wiki/Falsifiability
                    I also think something went wrong - I am just irritated by Rendus's comments.

                    But even if there's a hardware issue, an incompatibility, or some wrong default setting in a configuration file or compiled kernel, the tests are still valid: they show a decrease in performance.
                    Such deltas between updated distros are... well, interesting.



                    • #30
                      With EIST disabled, did you check the frequency? Could it be that on the non-7.04 distros/kernels, it was operating at the lowest frequency by default?

                      Usually, without OS-controlled cpufreq, the BIOS handles the CPU frequency policy and may run at the lowest speed until it detects enough load. The problem in such cases is that the load may never appear high enough (there's just enough idleness in CPU %) to trigger a frequency increase. This typically happens when you're not 100% spinning on the CPU, e.g. when doing lots of memory access or even just small but frequent I/O.

                      That's my 2 cents' worth of a guess, anyway...
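One way to test that guess: read the frequency, spin the CPU for a moment, and read it again. If the value doesn't rise, the firmware-controlled policy isn't reacting to load. A rough Python sketch, assuming the standard cpufreq sysfs path for cpu0:

```python
import time
from pathlib import Path

CUR_FREQ = Path("/sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq")

def freq_khz():
    """Current frequency of cpu0 in kHz, or None if sysfs is absent."""
    return int(CUR_FREQ.read_text()) if CUR_FREQ.exists() else None

def spin(seconds):
    """Busy-loop to put ~100% load on one core."""
    end = time.perf_counter() + seconds
    while time.perf_counter() < end:
        pass

if __name__ == "__main__":
    before = freq_khz()
    spin(1.0)
    after = freq_khz()
    if before is None:
        print("cpufreq sysfs not available on this system")
    elif after <= before:
        print(f"{before} -> {after} kHz: frequency did not ramp up under load")
    else:
        print(f"{before} -> {after} kHz: frequency scaled up as expected")
```

A pure busy-loop should ramp the frequency quickly; the memory- and I/O-bound workloads described above may not, which would fit the benchmark numbers.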

