One Month Of Monitoring The Linux Kernel Performance


  • One Month Of Monitoring The Linux Kernel Performance

    Phoronix: One Month Of Monitoring The Linux Kernel Performance

    For those who may have forgotten, at the start of December we launched the Phoronix Kernel Test Farm to begin benchmarking the Linux kernel on a daily basis using the automated tools that we provide via the Phoronix Test Suite and Phoromatic. Towards the middle of December we then unveiled the Phoromatic Tracker, which exposes these test results to the public in real time. Well, it's now been a month of monitoring the kernel's performance and the entire Linux 2.6.33 kernel development cycle thus far, with many interesting findings.

  • #2
    I always miss some more statistical information about these benchmarks. What's the RMSD (root-mean-square deviation) like with, for example, 10 runs per day?
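
    For concreteness, a minimal sketch (Python, with invented timings) of the kind of per-day spread statistic being asked about here:

    import statistics

    # Ten hypothetical timings (seconds) from one day's benchmark runs.
    runs = [41.2, 40.8, 41.5, 41.1, 40.9, 41.3, 41.0, 41.4, 40.7, 41.2]

    mean = statistics.mean(runs)
    # Root-mean-square deviation from the mean == population standard deviation.
    rmsd = statistics.pstdev(runs)

    print(f"mean = {mean:.2f} s")
    print(f"RMS deviation = {rmsd:.3f} s ({100 * rmsd / mean:.2f}% of the mean)")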



    • #3
      Originally posted by crispy:
      I always miss some more statistical information about these benchmarks. What's the RMSD (root-mean-square deviation) like with, for example, 10 runs per day?
      crispy, I'm missing some statistical information about the RMSD itself: what would the power of the RMSD (root-mean-square deviation) be with just 10 runs per day?

      Is the RMSD the correct tool for this? Is the data normally distributed? Do you want to find outliers? What do you want to do? Maybe some more robust statistics are the way to go here: http://en.wikipedia.org/wiki/Robust_statistics.
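
      A minimal sketch of both checks, assuming Python with SciPy is acceptable (the data is invented, with one deliberate outlier):

      import numpy as np
      from scipy import stats

      # Hypothetical daily results; the last value is a deliberate outlier.
      data = np.array([41.2, 40.8, 41.5, 41.1, 40.9, 41.3, 41.0, 41.4, 40.7, 55.0])

      # Shapiro-Wilk normality test: a small p-value suggests the data is not
      # normally distributed, so standard-deviation reasoning is questionable.
      stat, p = stats.shapiro(data)
      print(f"Shapiro-Wilk p-value: {p:.4f}")

      # Robust outlier check: flag points far from the median in MAD units.
      median = np.median(data)
      mad = stats.median_abs_deviation(data)
      print("possible outliers:", data[np.abs(data - median) > 3 * mad])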



      • #4
        Sorry, it's been some time since I've done these error calculations; what I was thinking of was the standard deviation.



        • #5
          Originally posted by crispy:
          Sorry, it's been some time since I've done these error calculations; what I was thinking of was the standard deviation.
          Don't worry, you are not alone.

          Unfortunately, most people do use the standard deviation, ordinary least squares regression, etc.

          However, these methods are very sensitive to outliers. Their high statistical power only applies under quite restrictive assumptions that rarely hold in practice. They can be good for detecting outliers, but many people want to determine a trend or a reliable central value from a few observations, and for that they are really bad.

          Look at http://www.mathworks.com/access/help.../robustfit.gif to get a picture.

          The R application (http://www.r-project.org/) and its robust packages (http://cran.r-project.org/web/views/Robust.html) are a good start for getting a feel for how these methods behave on different data sets.

          Please, have a peek.

          Trust me, you will never want to go back to standard-deviation-based statistics!
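
          To make the difference concrete, here is a small sketch with Python's statsmodels standing in for R's robust packages (invented data, one gross outlier): an ordinary least squares fit versus a Huber robust fit.

          import numpy as np
          import statsmodels.api as sm

          rng = np.random.default_rng(0)
          x = np.arange(20, dtype=float)
          y = 2.0 * x + 1.0 + rng.normal(0.0, 0.5, size=x.size)
          y[-1] += 30.0  # a single gross outlier

          X = sm.add_constant(x)
          ols = sm.OLS(y, X).fit()                               # classical fit
          rlm = sm.RLM(y, X, M=sm.robust.norms.HuberT()).fit()   # robust fit

          print("OLS slope:", ols.params[1])  # dragged toward the outlier
          print("RLM slope:", rlm.params[1])  # stays near the true slope of 2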



          • #6
            So you agree we should get at least some kind of statistical information about the data, right?



            • #7
              Originally posted by crispy:
              So you agree we should get at least some kind of statistical information about the data, right?
              Some are available already. Look at http://www.phoromatic.com/kernel-tracker.php; there you can set thresholds for relative changes.



              • #8
                I'm not interested in regressions, and the threshold for changes is not what I'm looking for either.

                But with a larger dataset it would be very nice to calculate the standard deviation for each data point and plot it something like this: http://www.access-board.gov/research...ge002_0004.gif

                That would give us an idea of whether the performance changes between kernel versions are a result of the code or just normal variation in the dataset...

                I mean, in any field of science you do this kind of simple data analysis...
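
                Something like the following sketch (Python with matplotlib, invented numbers) would produce that kind of error-bar plot; where the bars do not overlap, a change is more likely real than noise:

                import numpy as np
                import matplotlib.pyplot as plt

                # Hypothetical: several runs per daily kernel snapshot.
                kernels = ["rc1", "rc2", "rc3", "rc4"]
                runs = [[41.2, 40.8, 41.5, 41.1],
                        [40.9, 41.3, 41.0, 41.4],
                        [43.0, 42.6, 43.3, 42.9],
                        [41.1, 40.7, 41.2, 41.0]]

                means = [np.mean(r) for r in runs]
                stds = [np.std(r, ddof=1) for r in runs]  # sample standard deviation

                plt.errorbar(range(len(kernels)), means, yerr=stds, fmt="o-", capsize=4)
                plt.xticks(range(len(kernels)), kernels)
                plt.ylabel("runtime (s)")
                plt.title("Benchmark per kernel snapshot, mean ± 1 std dev")
                plt.show()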



                • #9
                  @crispy

                  I'd really like to see this, too. I am interested in the range of the values.



                  • #10
                    Originally posted by crispy:
                    I'm not interested in regressions, and the threshold for changes is not what I'm looking for either.

                    But with a larger dataset it would be very nice to calculate the standard deviation for each data point and plot it something like this: http://www.access-board.gov/research...ge002_0004.gif

                    That would give us an idea of whether the performance changes between kernel versions are a result of the code or just normal variation in the dataset...

                    I mean, in any field of science you do this kind of simple data analysis...

                    The Phoronix Test Suite already does this if you run a result file through "phoronix-test-suite analyze-all-runs"; it just is not implemented in the Phoromatic web interface at this time. Though you can assume the deviation is always less than 3.5%, as otherwise the run count dynamically increases (see an earlier Phoronix posting about statistical significance and the Phoronix Test Suite). Other stats can be added if you 1) provide patches or 2) explain quite well what you would like and how.
                    Michael Larabel
                    https://www.michaellarabel.com/
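
                    A rough sketch of the dynamic run-count behavior described above (hypothetical simplified logic, not the actual Phoronix Test Suite implementation):

                    import statistics

                    def run_until_stable(run_benchmark, min_runs=3, max_runs=15,
                                         threshold=0.035):
                        # Keep adding runs until the relative sample standard
                        # deviation drops below 3.5%, or max_runs is reached.
                        results = [run_benchmark() for _ in range(min_runs)]
                        while len(results) < max_runs:
                            rel_dev = statistics.stdev(results) / statistics.mean(results)
                            if rel_dev < threshold:
                                break
                            results.append(run_benchmark())
                        return results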
