Phoronix Test Suite 9.2 Released For Open-Source, Cross-Platform Benchmarking

  • #11
    Originally posted by Michael View Post

    Ahhh, so you are referring to it from that perspective, which is separate from PTS releases/versions. Yes, for simple end-users wanting to compare their performance to others on the web, that can be a pain/challenge, particularly due to the vast spectrum of tests, the versions of those tests, and all of the test options: many different possible combinations. From that perspective there isn't an easy answer for an open-source project that by nature seeks to offer a diverse collection of tests, beyond working more on 'recommended' test suites (sets of test profiles; test suites can version-lock to specific test profile versions) and encouraging users to run said suites.
    I guess you could drive that a little more. Maybe a line at the end of the usage output pointing to a standard <year> comparison test suite?

    Recommended First Benchmark Runs:
    phoronix-test-suite benchmark pts/standard-long-2019
    phoronix-test-suite benchmark pts/standard-short-2019

    which have a nice selection of appropriate tests (CPU/memory/GPU/primary storage) for us plebs to compare our systems against "all year" (and also to look back and see how our shiny new rigs stack up against past years).

    You could then have an openbenchmarking.org page dedicated just to those scores, so we can see various stats.

    I know it would be a bunch of work initially, but I could see you using these as a standard comparison for systems all year round (potential time saving?).
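    (For reference: suites like the ones proposed here are, in PTS, plain XML definitions that can pin exact test profile versions, which is what would keep an "all year" comparison stable. A rough sketch of what such a suite definition might look like follows; all titles, versions, and test names below are purely illustrative, not the actual contents of any pts/standard-* suite.)

    ```xml
    <?xml version="1.0"?>
    <!-- Hypothetical suite-definition.xml sketch; every name and version here is illustrative -->
    <PhoronixTestSuite>
    	<SuiteInformation>
    		<Title>Standard Short 2019</Title>
    		<Version>1.0.0</Version>
    		<TestType>System</TestType>
    		<Description>A small yearly comparison suite covering CPU, memory, GPU, and primary storage.</Description>
    	</SuiteInformation>
    	<Execute>
    		<!-- Pinning a specific test profile revision version-locks the suite -->
    		<Test>pts/compress-7zip-1.7.1</Test>
    	</Execute>
    	<Execute>
    		<Test>pts/ramspeed-1.4.1</Test>
    	</Execute>
    </PhoronixTestSuite>
    ```

    Running such a suite would then just be `phoronix-test-suite benchmark <suite-name>`, as in the two examples above.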

    Comment


    • #12
      Originally posted by boxie View Post

      I guess you could drive that a little more. Maybe a line at the end of the usage output pointing to a standard <year> comparison test suite?

      Recommended First Benchmark Runs:
      phoronix-test-suite benchmark pts/standard-long-2019
      phoronix-test-suite benchmark pts/standard-short-2019

      which have a nice selection of appropriate tests (CPU/memory/GPU/primary storage) for us plebs to compare our systems against "all year" (and also to look back and see how our shiny new rigs stack up against past years).

      You could then have an openbenchmarking.org page dedicated just to those scores, so we can see various stats.

      I know it would be a bunch of work initially, but I could see you using these as a standard comparison for systems all year round (potential time saving?).
      Thanks, most (or even all, besides the UI work) of that is already in place. But even then it's still somewhat difficult to provide recommended tests, simply due to people running PTS on everything from a Raspberry Pi to CPUs with hundreds of cores. Granted, the use-case for following such a recommendation will likely fall to the smaller core count / enthusiast / desktop range. Will do some thinking and play with at least adding a recommendation from that perspective.

      Right now, running the PTS benchmark sub-command without any options shows:

      ./phoronix-test-suite benchmark


      [PROBLEM] Phoronix Test Suite Argument Missing

      CORRECT SYNTAX:
      phoronix-test-suite benchmark [Test | Suite | OpenBenchmarking ID | Test Result] ...


      Recently Saved Test Results:
      xxxx

      New + Updated Tests:
      libgav1 dav1d build2 blender

      Recent OpenBenchmarking.org Results From This IP:
      xxxx
      Michael Larabel
      https://www.michaellarabel.com/

      Comment


      • #13
        Originally posted by Michael View Post

        Thanks, most (or even all, besides the UI work) of that is already in place. But even then it's still somewhat difficult to provide recommended tests, simply due to people running PTS on everything from a Raspberry Pi to CPUs with hundreds of cores. Granted, the use-case for following such a recommendation will likely fall to the smaller core count / enthusiast / desktop range. Will do some thinking and play with at least adding a recommendation from that perspective.

        Right now, running the PTS benchmark sub-command without any options shows:
        Then maybe break the recommended tests up into something like:

        2019-desktop-light
        2019-desktop-heavy
        2019-server-light
        2019-server-heavy

        with the differences being things like the desktop suites having games / production tests (darktable, OpenCL, etc.).

        I feel that with a dedicated test suite like this you could also push distros in a given direction, e.g. making sure OpenCL is up and running by default.

        Comment


        • #14
          Originally posted by boxie View Post
          Then maybe break the recommended tests up into something like:

          2019-desktop-light
          2019-desktop-heavy
          2019-server-light
          2019-server-heavy

          with the differences being things like the desktop suites having games / production tests (darktable, OpenCL, etc.).
          What about a set of workstation/HPC tests? I think the scientific & simulation tests don't belong in either the server or desktop category. Server tests should be things like database, file serving, and web serving.

          Probably put code compilation into the workstation category? Or maybe put it in desktop, under the assumption that most workstations would also run the desktop tests?

          Originally posted by boxie View Post
          I feel that with a dedicated test suite like this you could also push distros in a given direction, e.g. making sure OpenCL is up and running by default.
          OpenCL-based tests should be included in both desktop & workstation categories. Server installs tend to be more minimal, and they tend to lack GPUs.

          Also, instead of "light" and "heavy", what about "quick" and "full"?

          Comment


          • #15
            Originally posted by coder View Post
            What about a set of workstation/HPC tests? I think the scientific & simulation tests don't belong in either the server or desktop category. Server tests should be things like database, file serving, and web serving.

            Probably put code compilation into the workstation category? Or maybe put it in desktop, under the assumption that most workstations would also run the desktop tests?


            OpenCL-based tests should be included in both desktop & workstation categories. Server installs tend to be more minimal, and they tend to lack GPUs.

            Also, instead of "light" and "heavy", what about "quick" and "full"?
            Workstation tests could go into desktop-heavy.
            At least one OpenCL test in -light.

            The idea of -light is to run quickly. If I can bench my system and have it done in 20 minutes, that's cool. One test per subsystem kind of thing.

            -heavy gives you a more complete look, with multiple tests per subsystem.

            Comment
