where do I find git sources for a test?


  • #1

    I'd like to write some tests for eventual submission, but it's not clear to me how I should develop a test to run locally or submit it "upstream". Specifically, I'd like to play around with arrayfire benchmarks on PTS for CUDA/OpenCL performance testing.

    Thanks

  • #2
    Test profiles in ~/.phoronix-test-suite/test-profiles/local/ are where to mess around with local tests outside of OpenBenchmarking.org.
    Michael Larabel
    https://www.michaellarabel.com/
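    For anyone following along, a local profile is just a directory under that path (e.g. ~/.phoronix-test-suite/test-profiles/local/arrayfire-bench/) holding a test-definition.xml alongside, typically, an install.sh and a results-definition.xml. As a rough sketch only - the profile name and values below are placeholders, and test-profile.xsd in the PTS Git tree is the authoritative schema - a stripped-down test-definition.xml looks something like this:

    <?xml version="1.0"?>
    <!-- Hypothetical local profile: all values here are illustrative placeholders -->
    <PhoronixTestSuite>
      <TestInformation>
        <Title>ArrayFire Benchmark</Title>
        <AppVersion>3.4</AppVersion>
        <Description>Runs the ArrayFire micro-benchmarks.</Description>
        <ResultScale>Milliseconds</ResultScale>
        <Proportion>LIB</Proportion>
        <TimesToRun>3</TimesToRun>
      </TestInformation>
      <TestProfile>
        <Version>1.0.0</Version>
        <SupportedPlatforms>Linux</SupportedPlatforms>
        <SoftwareType>Utility</SoftwareType>
        <TestType>Graphics</TestType>
        <License>Free</License>
        <ExternalDependencies>build-utilities</ExternalDependencies>
      </TestProfile>
      <TestSettings>
        <Default>
          <Arguments></Arguments>
        </Default>
      </TestSettings>
    </PhoronixTestSuite>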



    • #3
      So the defined pathway to OpenBenchmarking.org is to fling you a file or a GitHub URL?

      Also, are there any skeleton tests you'd recommend basing a new one off of? Any docs for the various files and how they should work? I think I can get the gist of it, but it would help to have a reference.

      Comment


      • #4
        Originally posted by nevion:
        So the defined pathway to OpenBenchmarking.org is to fling you a file or a GitHub URL?

        Also, are there any skeleton tests you'd recommend basing a new one off of? Any docs for the various files and how they should work? I think I can get the gist of it, but it would help to have a reference.
        Sending me the file or posting it here in the forums. OpenBenchmarking.org test profiles aren't in a Git repository, since each test is versioned separately, etc.

        Basically, base a test skeleton off something close: if doing a game test, base it off xonotic; if doing a CPU test, try basing it off a simple test like c-ray. Unfortunately there's not much documentation aside from what's in documentation/ in the Git tree.
        Michael Larabel
        https://www.michaellarabel.com/
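        For reference, a profile like c-ray boils down to an install script that wraps the binary plus a one-line results parser. A minimal sketch of a results-definition.xml, assuming the wrapped benchmark prints a single summary line for the template to match (#_RESULT_# is the token PTS replaces with the captured number):

        <?xml version="1.0"?>
        <PhoronixTestSuite>
          <ResultsParser>
            <!-- Pulls the number out of a line such as: "Total time: 12.34 seconds" -->
            <OutputTemplate>Total time: #_RESULT_# seconds</OutputTemplate>
            <LineHint>Total time</LineHint>
          </ResultsParser>
        </PhoronixTestSuite>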



        • #5
          Hey Michael - alright, I understand how the ResultsParser and OutputTemplate work to a degree now, and it seems every menu entry in test-definition.xml invokes a binary with its respective argument values and uses that single result template. Now here's the interesting bit with arrayfire - they already have what seems to be a pretty good benchmarking program, and it outputs a CSV table to a file plus a table to the console UI, with many simple stats. By the way, I talked to ArrayFire and have their blessing to add their library as a GPU benchmark in PTS - they have a few concerns I have to make sure to address, though (I just resolved one of them).

          Anyway, the part I'm not understanding with PTS benchmarks is this: if there's already a great benchmarking program producing a bunch of results in a way that's usually trivial to hook up to plotting, how do I take advantage of that with PTS? The way it is now, it looks like I'd have to write copious amounts of XML re-implementing the Celero-based benchmark coverage they already have in C++, just to limit it down to a single test per iteration. Check out the Celero GitHub page to see some examples of the data they record. I think I can do that, but is there a more direct way that's driven by the CSV file?

          I guess it's also possible that this is too many tests and that I should limit it to fewer test cases anyway - they benchmark ~65 different functions, split across several data-type implementations (f32 and f64 now, probably adding int32/int64), 1D/2D, problem sizes, and of course the algorithms themselves. Maybe we can/should have a mini and a full profile?

          Finally, there are several backends we'd want to run these tests against as another kind of "mode" over the same test sources: CUDA, OpenCL, and CPU. CPU wouldn't normally be super useful or comparable to GPU performance, but for the fp64 and int64 variants it will highlight just how badly desktop cards do relative to CPUs. And on OpenCL we have multiple implementation providers - POCL/Beignet/ROCm/fglrx and whatever else - for when we want to show relative performance differences.
          Last edited by nevion; 06 August 2016, 02:13 AM.
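          One way to bridge the CSV gap, as a sketch only (the flags below are invented and would map to whatever switches arrayfire-benchmark actually accepts): have each PTS invocation run exactly one benchmark case and print a single summary line for the OutputTemplate in results-definition.xml to match, rather than re-encoding the whole Celero CSV in XML. In test-definition.xml that's just a matter of the default arguments, with the per-case selection handled by menu options:

          <TestSettings>
            <Default>
              <!-- One benchmark case per run; the wrapper script prints a single
                   "Median time: N ms" style line for the results parser to pick up -->
              <Arguments>--single-case --report-median</Arguments>
            </Default>
          </TestSettings>

          A mini vs. full split could then just be two local profiles (or two option menus) exposing different subsets of the ~65 functions.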



          • #6
            Hey Michael - yep another update.

            I've got WIP XML for the test here: https://github.com/nevion/arrayfire-pts . Not too much there, but I wanted to show you it's being worked on. So far I've been adding pieces to make the flow PTS expects possible with their existing benchmarking program, and that was an adventure in itself. But I've got this part working, so I now have the capability of running a single algorithm benchmark with a single problem size per execution of the binary.

            I need some help from you though:

            Perhaps you can explain how I should let this test be started for the CPU, CUDA, or OpenCL, given that the benchmark applies to each?

            Also, I have problem sizes for each algorithm/benchmark that I need to enter and have apply only to that test (so it's like a pair that goes together - two options to one program). I don't see how to do this from going over pts_test_profile_parser.php and test-profile.xsd. How can I accomplish this?

            Here are some additional GitHub URLs where I've done the work to get arrayfire-based benchmarks into PTS. Other than games, it should be about the highest-value and best GPU benchmark around - I hope you'll support its creation a bit more.
            arrayfire-benchmark work: https://github.com/nevion/arrayfire-benchmark
            Celero changes: https://github.com/nevion/Celero
            Last edited by nevion; 15 August 2016, 12:14 AM.
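            On the pairing question, one workaround - just a sketch, not an official mechanism from test-profile.xsd - is to fold each algorithm and its matching problem size into a single menu entry, so the two always travel together as one Value appended to the command line. The names and flags here are placeholders:

            <Option>
              <DisplayName>Benchmark / Problem Size</DisplayName>
              <Identifier>benchmark-size</Identifier>
              <Menu>
                <!-- Each entry carries both the algorithm and the size that belongs to it -->
                <Entry>
                  <Name>FFT 1M</Name>
                  <Value>--benchmark fft --size 1048576</Value>
                </Entry>
                <Entry>
                  <Name>Convolve 4096x4096</Name>
                  <Value>--benchmark convolve --size 4096x4096</Value>
                </Entry>
              </Menu>
            </Option>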



            • #7
              Originally posted by nevion:
              Perhaps you can explain how I should let this test be started for the CPU, CUDA, or OpenCL, given that the benchmark applies to each?

              Also, I have problem sizes for each algorithm/benchmark that I need to enter and have apply only to that test (so it's like a pair that goes together - two options to one program). I don't see how to do this from going over pts_test_profile_parser.php and test-profile.xsd. How can I accomplish this?

              Here are some additional GitHub URLs where I've done the work to get arrayfire-based benchmarks into PTS. Other than games, it should be about the highest-value and best GPU benchmark around - I hope you'll support its creation a bit more.
              arrayfire-benchmark work: https://github.com/nevion/arrayfire-benchmark
              Celero changes: https://github.com/nevion/Celero
              Have you looked at the Caffe test profile? Or the luxmark test profile? Basically they expose CUDA, OpenCL, etc. each as their own option, alongside the other test options.

              When you are running a test, you can select multiple options to run by simply delimiting the values with a comma - e.g. when you see the list of OpenCL, CUDA, etc., input something like 1,2. The Phoronix Test Suite will then run all combinations of those.
              Michael Larabel
              https://www.michaellarabel.com/
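              Concretely, following the caffe/luxmark pattern described above, the backend becomes its own Option block in test-definition.xml next to whatever other options the profile exposes; the values are placeholders for whatever switches arrayfire-benchmark actually takes:

              <Option>
                <DisplayName>Backend</DisplayName>
                <Identifier>backend</Identifier>
                <Menu>
                  <Entry>
                    <Name>CPU</Name>
                    <Value>--backend cpu</Value>
                  </Entry>
                  <Entry>
                    <Name>CUDA</Name>
                    <Value>--backend cuda</Value>
                  </Entry>
                  <Entry>
                    <Name>OpenCL</Name>
                    <Value>--backend opencl</Value>
                  </Entry>
                </Menu>
              </Option>

              Answering the Backend prompt with e.g. 2,3 at run time would then queue both the CUDA and OpenCL variants, and PTS runs every combination with the other selected options.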



              • #8
                It would have been so helpful to have looked at Caffe and luxmark earlier. I had been working from a number of the CPU and GPU tests found here: http://openbenchmarking.org/suite/pts/opencl

                In particular, I didn't realize I could hack the binary invocation the way the Caffe test does.

                How do options work - combinatorially? So 2 options of 3 entries each = 9 tests?

                Following these two as examples though, I'm still unsure how I should handle CUDA - we can't expect CUDA to run on AMD machines. Any ideas there? Like a guard or something? On that subject, how do I handle it with external dependencies (e.g. CUDA on an AMD machine) - or do I even need to handle it there too?



                • #9
                  Originally posted by nevion:
                  It would have been so helpful to have looked at Caffe and luxmark earlier. I had been working from a number of the CPU and GPU tests found here: http://openbenchmarking.org/suite/pts/opencl

                  In particular, I didn't realize I could hack the binary invocation the way the Caffe test does.

                  How do options work - combinatorially? So 2 options of 3 entries each = 9 tests?
                  Yes, it would test all combinations, so 9 is how it would currently be. I haven't figured out any other, more selective way to present it intuitively from the CLI.

                  Originally posted by nevion:
                  Following these two as examples though, I'm still unsure how I should handle CUDA - we can't expect CUDA to run on AMD machines. Any ideas there? Like a guard or something? On that subject, how do I handle it with external dependencies (e.g. CUDA on an AMD machine) - or do I even need to handle it there too?
                  Then the test will fail; there isn't any foolproof way, unfortunately. Theoretically it might be possible to get the CUDA tests running on AMD through their new compiler and such, but yeah, if the user is running the test on unsupported hardware, it's best to just give them an error. http://openbenchmarking.org/innhold/...e363a013f3de9d is about the extent of the checking I do for CUDA - usually just seeing whether it's on the system in the standard location when it's not part of PATH.
                  Michael Larabel
                  https://www.michaellarabel.com/



                  • #10
                    Michael, how do you deal with a test that crashes or hard-locks the system?

                    What about tests that take too long (because of a bug or performance issue) and won't complete in a practical amount of time, if at all?

