PGI 18.10 Compiler Benchmarks Against GCC 8.2, LLVM Clang 7.0

  • PGI 18.10 Compiler Benchmarks Against GCC 8.2, LLVM Clang 7.0

    Phoronix: PGI 18.10 Compiler Benchmarks Against GCC 8.2, LLVM Clang 7.0

    Given the recent release of the PGI 18.10 Community Edition compiler by NVIDIA, I was curious to see how this proprietary compiler's CPU performance looks on Linux. For those curious as well, here are some benchmarks of the PGI 18.10 C/C++ compiler against the GCC 8.2.0 and LLVM Clang 7.0 open-source compilers.

  • #2
    Very interesting results, but I have to wonder how much real use NVidia's compiler gets.

    • #3
      Whoa, what happened with GCC on the C-Ray benchmark? It has always won that by a good margin, and now suddenly it's insanely slow?

      • #4
        Originally posted by Grinch View Post
        Whoa, what happened with GCC on the C-Ray benchmark? It has always won that by a good margin, and now suddenly it's insanely slow?
        To me it still behaves as it should:
        $ /aux/hubicka/8-install/bin/gcc -O3 -lm -pthread c-ray-mt.c
        $ ./a.out <scene >/dev/null
        c-ray-mt v1.1
        Rendering took: 0 seconds (382 milliseconds)
        $ ./a.out <scene >/dev/null
        c-ray-mt v1.1
        Rendering took: 0 seconds (384 milliseconds)
        $ /aux/hubicka/llvm7-install/bin/clang -O3 -lm -pthread c-ray-mt.c
        $ ./a.out <scene >/dev/null
        c-ray-mt v1.1
        Rendering took: 0 seconds (476 milliseconds)
        $ ./a.out <scene >/dev/null
        c-ray-mt v1.1
        Rendering took: 0 seconds (476 milliseconds)

        It is a very bad benchmark though.

        • #5
          Originally posted by hubicka View Post
          To me it still behaves as it should

          It is a very bad benchmark though.
          Ok, something is wrong with Michael's benchmark setup (not the first super weird result we've seen), thanks for testing. Why do you think it's a very bad benchmark?

          • #6
            Originally posted by Grinch View Post

            Ok, something is wrong with Michael's benchmark setup (not the first super weird result we've seen), thanks for testing. Why do you think it's a very bad benchmark?
            Because it claims to be a multithreaded raytracer, but it is unrealistically simple for that. It only supports spheres, and it mostly tests one particular capability of the inliner:

            There is a function trace that iterates over the spheres for a given ray and calls ray_sphere for each of them. ray_sphere calculates some values based on the ray only and some based on the sphere. The key to good performance is for the inliner to realize that inlining ray_sphere, even though it is not small, will make most of the ray-only calculations loop invariant.

            So it tests one particular feature of the inliner and not much of the overall code generation quality. SPEC contains povray, which is a much more realistic test of raytracing performance.
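
            A minimal C sketch of that pattern (types, field names, and formulas here are assumed for illustration only, not taken from the real c-ray-mt.c):

            /* Simplified sketch of the pattern described above; types, field
               names and formulas are illustrative, not from c-ray-mt.c. */
            #include <stdio.h>
            #include <stddef.h>

            struct vec3 { double x, y, z; };
            struct ray { struct vec3 orig, dir; };
            struct sphere { struct vec3 pos; double rad; };

            static double dot(struct vec3 a, struct vec3 b) {
                return a.x * b.x + a.y * b.y + a.z * b.z;
            }

            /* Not a small function, but a = dot(dir, dir) depends only on the
               ray.  Once this is inlined into the loop in trace(), that
               ray-only work becomes loop invariant and can be hoisted out. */
            static int ray_sphere(const struct sphere *sph, struct ray r, double *t) {
                double a = dot(r.dir, r.dir);                  /* ray only */
                struct vec3 oc = { r.orig.x - sph->pos.x,      /* ray + sphere */
                                   r.orig.y - sph->pos.y,
                                   r.orig.z - sph->pos.z };
                double b = 2.0 * dot(r.dir, oc);
                double c = dot(oc, oc) - sph->rad * sph->rad;
                double d = b * b - 4.0 * a * c;
                if (d < 0.0) return 0;
                *t = -b / (2.0 * a);   /* crude nearest-hit estimate for the sketch */
                return 1;
            }

            /* Iterates over all spheres for one ray; the benchmark rewards a
               compiler whose inliner spots the opportunity described above. */
            static int trace(const struct sphere *s, size_t n, struct ray r, double *nearest) {
                int hit = 0;
                *nearest = 1e30;
                for (size_t i = 0; i < n; i++) {
                    double t;
                    if (ray_sphere(&s[i], r, &t) && t < *nearest) {
                        *nearest = t;
                        hit = 1;
                    }
                }
                return hit;
            }

            int main(void) {
                struct sphere spheres[2] = { { { 0, 0, -5 }, 1.0 },
                                             { { 2, 0, -8 }, 1.5 } };
                struct ray r = { { 0, 0, 0 }, { 0, 0, -1 } };
                double t;
                if (trace(spheres, 2, r, &t))
                    printf("hit at t=%f\n", t);
                return 0;
            }
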
            Last edited by hubicka; 24 December 2018, 08:21 AM.

            • #7
              I also can't reproduce the GCC vs Clang results for blogbench, which is a filesystem benchmark, so it would be odd if it were this compiler sensitive.

              • #8
                These results once again smell fishy and don't match similar data posted elsewhere. One thing that is surely off is that -march is not used at all here.

                I realized this when trawling through the data dumps of the last few days: the GCC 9 benchmarks happen to use the same hardware and partially matching compilers, but show completely different performance. Is there any analysis of the data pumped out? There is simply no point to these auto-generated articles, which show nothing more than bar plots for some often arbitrary configuration. Why not use compiler tuning here? Why use Ubuntu 18.10 here and Clear Linux in the GCC 9 benchmark?
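
                For reference, assuming the same c-ray-mt.c source as in hubicka's transcript above, enabling CPU-specific tuning is just a matter of adding -march=native to the otherwise identical commands (shown without timings, since none were measured here):

                $ gcc -O3 -march=native -lm -pthread c-ray-mt.c
                $ ./a.out <scene >/dev/null
                $ clang -O3 -march=native -lm -pthread c-ray-mt.c
                $ ./a.out <scene >/dev/null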
