  • OpenBenchmarking.org Launch Statistics

    Phoronix: OpenBenchmarking.org Launch Statistics

    OpenBenchmarking.org and Phoronix Test Suite 3.0 "Iveland" were released over the weekend (press release) from the Southern California Linux Expo during our talk entitled Making More Informed Linux Hardware Choices. Here are some statistics as of this morning, now that OpenBenchmarking.org has been public for a few days...

  • #2
    Just having a quick look around some of the graphs on OBo, and I see something funky. Your graphs are supposed to be showing percentages, but for the ones by month, Feb seems to be consistently too low, e.g.:
    [link to OpenBenchmarking.org statistics graph]

    and
    [link to another OpenBenchmarking.org statistics graph]

    (The graphs at the bottom.)
    All the lines drop off in Feb. Looks... wrong?!?

    • #3
      How can I run tests and upload data?

      I can't figure it out from the on-site instructions.

      • #4
        Originally posted by kiwi_kid_aka_bod:
        Just having a quick look around some of the graphs on OBo, and I see something funky. Your graphs are supposed to be showing percentages, but for the ones by month, Feb seems to be consistently too low, e.g.:
        [link to OpenBenchmarking.org statistics graph]

        and
        [link to another OpenBenchmarking.org statistics graph]

        (The graphs at the bottom.)
        All the lines drop off in Feb. Looks... wrong?!?
        Also, who the hell is GPU vendor 2? Whoever it is, it's leading the pack. There are also some problems with the colors being too similar, as in the case of InnoTek, Matrox, and NVIDIA.

        • #5
          I would really love to add my numbers to the pile, but the problem, Michael, is that the test downloads are WAY too large. I have a capped internet connection, and I can't donate like 20% of my monthly allowance to Phoronix just to run a couple of benchmarks.

          I wouldn't mind running the CPU- and disk-based tests on my dedicated server, but I'm starting to populate my new server with production services, and I don't want to have to bring them down to make a clean testbed for PTS. So I think I missed that opportunity, sadly. You'd have liked it, too; my new server has a Core i7 980X.

          I will try to see if there are some test suites or individual tests with a very low download cost... but running basically any of the 3D graphics tests means a download of several hundred megabytes. No. Can't do it. Sorry.

          • #6
            Following up on the post I just made (brought to you by 1 Minute Edit Limit):

            Ideas for ways to reduce download sizes!

            1. If a test is based on a real game, the full game distribution has many, many assets that are not used in the benchmarks. On a per-game basis, figure out which assets can be excluded and just rm them. If the game freaks out because it can't find every asset it expects, find out how to bypass that. If it's open source, there's always a way; if it's closed source, you can give up if it seems that expecting certain resources is built into the game binaries with no bypass.

            2. For all tests (except maybe "untarring the Linux kernel"), switch compression over to LZMA2, using the .tar.xz file format. This codec has an excellent compression ratio and very fast decompression time. The only drawback of LZMA2 is compression time, but that's a one-time constant cost at the time of writing the test, so you can be patient. (See the repacking sketch after this list.)

            3. Strip out docs and help files from test source distributions. If compiling from source, you can (hopefully) use something equivalent to ./configure --disable-docs to conveniently tell it not to worry about docs. I guess the only doc you need to leave in is the license file, for legality's sake. But certain programs have megabytes of plaintext, HTML, or XML-based docs.

            4. Make sure as many tests as possible (especially those in popular suites) use the same version of each program for any tests that use the same program. So you don't want to download foo-1.4.0.tar.xz in one test, then foo-1.4.1.tar.xz in the next test, especially when you can avoid this within a suite (since most users will run suites, not individual tests).
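
            To make ideas 2 and 3 concrete, here is a minimal Python sketch, using the foo-1.4.0 name from idea 4 as a stand-in for a real test package: it re-packs a gzip source archive as .tar.xz with LZMA2 at the highest preset and drops documentation directories while keeping license files.

            import lzma
            import os
            import tarfile

            # Hypothetical input tarball; any test's gzip source archive would do.
            SRC = "foo-1.4.0.tar.gz"
            DST = "foo-1.4.0.tar.xz"

            # Directories that are usually safe to drop before benchmarking (idea 3);
            # this set is a guess and would need adjusting per test profile.
            SKIP_DIRS = {"doc", "docs", "man", "manual", "examples"}

            def keep(member: tarfile.TarInfo) -> bool:
                """Keep license files, drop documentation paths, keep everything else."""
                parts = member.name.split("/")
                if parts[-1].lower().startswith(("license", "copying")):
                    return True
                return not any(p.lower() in SKIP_DIRS for p in parts)

            # Re-pack: read the gzip tarball, write an xz (LZMA2) tarball at the
            # highest preset (idea 2), skipping documentation members (idea 3).
            with tarfile.open(SRC, "r:gz") as src, \
                 tarfile.open(DST, "w:xz", preset=9 | lzma.PRESET_EXTREME) as dst:
                for member in src:
                    if not keep(member):
                        continue
                    fileobj = src.extractfile(member) if member.isfile() else None
                    dst.addfile(member, fileobj)

            print(f"{SRC}: {os.path.getsize(SRC) / 1e6:.1f} MB")
            print(f"{DST}: {os.path.getsize(DST) / 1e6:.1f} MB")

            The xz compression step is slow at that preset, but as noted above it only has to happen once, when the test profile is packaged.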

            Also, while disk space isn't a concern on my desktop, it is on my laptop. It makes it much harder to do a graphics benchmark on my laptop when I have such space constraints and the tests are so huge (especially the ones that compile from source; all the intermediate objects plus the final build are quite massive).

            If I had an uncapped connection, I would happily run tests on my desktop where space isn't an issue.

            Maybe you can add a parameter to openbenchmarking.org letting users query tests by test download size?

            • #7
              Originally posted by allquixotic:
              I would really love to add my numbers to the pile, but the problem, Michael, is that the test downloads are WAY too large. I have a capped internet connection, and I can't donate like 20% of my monthly allowance to Phoronix just to run a couple of benchmarks.

              I wouldn't mind running the CPU- and disk-based tests on my dedicated server, but I'm starting to populate my new server with production services, and I don't want to have to bring them down to make a clean testbed for PTS. So I think I missed that opportunity, sadly. You'd have liked it, too; my new server has a Core i7 980X.

              I will try to see if there are some test suites or individual tests with a very low download cost... but running basically any of the 3D graphics tests means a download of several hundred megabytes. No. Can't do it. Sorry.
              Some interesting tests with low download impact: c-ray, openssl, gcrypt, graphics-magick, john-the-ripper, iozone, etc. Most of the CPU tests.
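
              For what it's worth, a minimal sketch of scripting a couple of those low-download tests, assuming the phoronix-test-suite client is installed and that c-ray and openssl refer to the test profiles of those names; the benchmark subcommand installs a test if needed, runs it, and then prompts about saving and uploading the results.

              import subprocess

              # Low-download CPU tests mentioned above; the names are assumed to
              # match the corresponding Phoronix Test Suite test profiles.
              TESTS = ["c-ray", "openssl"]

              for test in TESTS:
                  # "benchmark" installs the test if needed and then runs it;
                  # the interactive prompts handle saving/uploading the results.
                  subprocess.run(["phoronix-test-suite", "benchmark", test], check=True)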
              Michael Larabel
              https://www.michaellarabel.com/

              • #8
                Originally posted by allquixotic:
                Following up on the post I just made (brought to you by 1 Minute Edit Limit):

                Ideas for ways to reduce download sizes!

                1. If a test is based on a real game, the full game distribution has many, many assets that are not used in the benchmarks. On a per-game basis, figure out which assets can be excluded and just rm them. If the game freaks out because it can't find every asset it expects, find out how to bypass that. If it's open source, there's always a way; if it's closed source, you can give up if it seems that expecting certain resources is built into the game binaries with no bypass.

                2. For all tests (except maybe "untarring the Linux kernel"), switch compression over to LZMA2, using the .tar.xz file format. This codec has an excellent compression ratio and very fast decompression time. The only drawback of LZMA2 is compression time, but that's a one-time constant cost at the time of writing the test, so you can be patient.

                3. Strip out docs and help files from test source distributions. If compiling from source, you can (hopefully) use something equivalent to ./configure --disable-docs to conveniently tell it not to worry about docs. I guess the only doc you need to leave in is the license file, for legality's sake. But certain programs have megabytes of plaintext, HTML, or XML-based docs.

                4. Make sure as many tests as possible (especially those in popular suites) use the same version of each program for any tests that use the same program. So you don't want to download foo-1.4.0.tar.xz in one test, then foo-1.4.1.tar.xz in the next test, especially when you can avoid this within a suite (since most users will run suites, not individual tests).

                Also, while disk space isn't a concern on my desktop, it is on my laptop. It makes it much harder to do a graphics benchmark on my laptop when I have such space constraints and the tests are so huge (especially the ones that compile from source; all the intermediate objects plus the final build are quite massive).

                If I had an uncapped connection, I would happily run tests on my desktop where space isn't an issue.

                Maybe you can add a parameter to openbenchmarking.org letting users query tests by test download size?
                Unfortunately, making changes to the source packages to reduce their download sizes would require us to host all of the test files ourselves rather than taking advantage of a plethora of mirrors. It wouldn't be possible for us to host all of the files, and with fewer servers (and slower connections than many university mirrors have) it would end up impairing more users than the minority with capped connections.
                Michael Larabel
                https://www.michaellarabel.com/
