AMD Threadripper 2950X Offers Great Linux Performance At $900 USD


  • #21
    Thinking more about this: this is in a makefile, so the default variables/rules would actually be those set by make. You can check them with 'make -p -f /dev/null'. Here, if I grep that output for CC I get 'CC = cc', which is a symlink to gcc. But maybe it is not defined on your end.
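
    Something along these lines should confirm both make's built-in default and what the symlink resolves to (the grep pattern is just one way to filter the fairly long output):

    Code:
    # Dump make's built-in variables/rules without reading any makefile
    make -p -f /dev/null 2>/dev/null | grep '^CC '
    # Resolve what 'cc' actually points to on this system
    readlink -f "$(command -v cc)"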

    • #22
      Originally posted by GreenReaper
      Thinking more about this: this is in a makefile, so the default variables/rules would actually be those set by make. You can check them with 'make -p -f /dev/null'. Here, if I grep that output for CC I get 'CC = cc', which is a symlink to gcc. But maybe it is not defined on your end.
      Good point. For 'make -p -f /dev/null' I also get 'CC = cc'.

      Sorry for all the edits in my previous post, but I finally got it to compile and run by setting the MPI-related environment variables. I still didn't specify the 'LA_*' variables.

      The surprising thing is that the install.sh I linked to has all the definitions, including for RHEL. I guess Fedora isn't specifically handled, so maybe that's why it didn't work by default?
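
      In case it helps anyone else on Fedora, the MPI_* paths I passed can be confirmed from the packages themselves; something like this should list them (assuming OpenMPI came from the stock openmpi/openmpi-devel packages):

      Code:
      # Locate the compiler wrapper, headers and shared library that the
      # MPI_CC / MPI_INCLUDE / MPI_LIBS variables point at
      rpm -ql openmpi openmpi-devel 2>/dev/null | grep -E 'bin/mpicc$|/mpi\.h$|libmpi\.so'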

      • #23
        The test performance was quite bad without specifying the 'LA_*' environment variables. It did run, but each run of the G-HPL tests took ~3 hours to complete (~9 hours total) and averaged ~10.5 GFLOPS, roughly 10X less than the article's result. This is on a 2700X.

        Compiled against openblas instead (hints again taken from the install.sh in the previous link), each G-HPL test ran in 20 minutes at 190 GFLOPS. That's also strange, since it's more than the 1950X in the benchmark comparison. Differences in libraries?

        For the record, this is what I used to install successfully with openblas:

        Code:
        env MPI_CC=/usr/lib64/openmpi/bin/mpicc \
            MPI_INCLUDE=/usr/include/openmpi-x86_64/ \
            MPI_LIBS=/usr/lib64/openmpi/lib/libmpi.so \
            MPI_PATH=/usr/lib64/openmpi \
            LA_PATH=/usr/lib64 \
            LA_INCLUDE=/usr/include/openblas \
            LA_LIBS=/usr/lib64/libopenblas.so.0 \
            phoronix-test-suite install-test hpcc
        I initially also tried to compile against plain blas (not openblas), which installed without error but did NOT help (still ~3 hours per run):

        Code:
        env MPI_CC=/usr/lib64/openmpi/bin/mpicc \
            MPI_INCLUDE=/usr/include/openmpi-x86_64/ \
            MPI_LIBS=/usr/lib64/openmpi/lib/libmpi.so \
            MPI_PATH=/usr/lib64/openmpi \
            LA_PATH=/usr/lib64 \
            LA_INCLUDE=/usr/include \
            LA_LIBS="-lblas" \
            phoronix-test-suite install-test hpcc
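
        To double-check which BLAS each build actually linked against, inspecting the installed hpcc binary with ldd should settle it (the path below assumes the default PTS install location; the versioned directory name will differ per system):

        Code:
        # Find the hpcc binary that PTS built and list its BLAS-related dependencies
        find ~/.phoronix-test-suite/installed-tests/pts/hpcc* -type f -name hpcc \
            -exec ldd {} \; | grep -i blas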
        =====

        As an aside, I didn't even know what the following were, or that I was using them:

        - ccache: `which cc` returns /usr/lib64/ccache/cc
        - clang: PTS reports "Compiler: Clang 6.0.1 + LLVM 6.0.1 + CUDA 9.1"

        From what I've read, ccache caches the output of repeated compilations, so it should only affect compile times and not the benchmark runs themselves, correct?
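
        For anyone else wondering, these two checks should show whether ccache is involved at all and which real compiler its 'cc' wrapper hands off to (just a quick sanity check, nothing PTS-specific):

        Code:
        # Show ccache hit/miss statistics; only compile results are cached, not runtime behaviour
        ccache -s
        # The ccache wrapper forwards to the real compiler, so this reveals gcc vs clang
        cc --version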

        Apparently the performance differences between Clang and GCC are an ongoing debate, as evidenced by the Phoronix articles going back years that I just read! Short of uninstalling clang, how would I tell PTS to use GCC for consistency? My goal has been to compare my newly built system against this article's benchmark results as a general performance check.
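
        Partly answering my own question: since make picks CC/CXX up from the environment unless a makefile hardcodes them, prefixing the install with CC=gcc CXX=g++ (alongside the MPI_*/LA_* variables above) should steer the build toward GCC, and for hpcc specifically the mpicc wrapper decides which compiler really gets invoked. This is my assumption rather than documented PTS behaviour, so treat it as a sketch:

        Code:
        # Check which underlying compiler the Open MPI wrapper invokes
        /usr/lib64/openmpi/bin/mpicc --showme
        # Ask for GCC explicitly; environment variables override make's built-in 'CC = cc'
        # default as long as the test's makefile doesn't hardcode its own compiler
        env CC=gcc CXX=g++ phoronix-test-suite install-test hpcc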
        Last edited by blueweb; 27 August 2018, 03:35 PM.
