8-Way Linux Distribution Benchmarks On The AMD EPYC 7742 2P Server

  • Phoronix: 8-Way Linux Distribution Benchmarks On The AMD EPYC 7742 2P Server

    A few days ago I published benchmarks showing how Intel's open-source Clear Linux can deliver significant speed-ups over Ubuntu Linux on AMD EPYC Rome, but how do other Linux distributions compare on AMD's new Zen 2 server processors? Here is an eight-way benchmark comparison on the AMD EPYC 7742 2P Daytona server with its 128 cores / 256 threads.

    http://www.phoronix.com/vr.php?view=28218

  • some_canuck
    replied
    okay, now try gentoo



  • Michael
    replied
    Originally posted by brownsr View Post
    I think something is wrong with the 18.04 geometric mean result. It doesn't match the pattern of the individual tests
    As shown on the OB page, Ubuntu 18.04's performance is really hurt in the glibc tests, which are one of the big contributors to its overall lower result.



  • Michael
    replied
    Originally posted by StefanBruens View Post

    Fedora definition:
    <GenericName>yasm</GenericName>
    <PackageName>yasm nasm</PackageName>

    openSUSE definition:
    <GenericName>yasm</GenericName>
    <PackageName>yasm</PackageName>
    Good catch, thanks. I've dropped the x265 result and am adding nasm to the openSUSE XML.
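For reference, the fix presumably just mirrors the Fedora definition quoted above; a sketch of what the corrected openSUSE entry in the PTS distro-packages XML would look like (the surrounding `<Package>` element is assumed from the quoted fragments, not confirmed against the actual file):

```xml
<!-- openSUSE definition after the assumed fix: the "yasm" generic
     dependency now maps to both yasm and nasm, as Fedora's does -->
<Package>
	<GenericName>yasm</GenericName>
	<PackageName>yasm nasm</PackageName>
</Package>
```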



  • kylew77
    replied
    Originally posted by Michael View Post

    FreeBSD 12 UEFI wasn't booting on the Daytona server, so for now I left it at that and didn't poke at the other BSDs with FreeBSD not booting.
    Thank you for trying at least, appreciate the work!



  • skeevy420
    replied
    Since I didn't see a chart I'll just ask -- how well did these manage to hold the boost frequency and how warm do they get? I have to imagine that 128c/256t has to get pretty warm during some of these tests.

    But 64c/128t per socket... It seems like only yesterday that I got my first Athlon 64 X2 and had not one, but two cores. Two cores. Can you imagine it? My PC was awesome.

    Now we have 32-in-1 Athlon 64 X2 CPUs. And this damn monster here has two of those, so it's like having 64 of my first dual-core PC in one box. My dual quad-cores from Intel are hiding in shame.

    bridgman Y'all should release a desktop/workstation variant of these under the Athlon 64 name so we can have an actual Athlon 64 X2 PC.



  • brownsr
    replied
    I think something is wrong with the 18.04 geometric mean result. It doesn't match the pattern of the individual tests



  • sunnyflunk
    replied
    Originally posted by Michael View Post
    All compiler flags were at their default. Clear does have more aggressive compiler flag tuning as pointed out countless times in many articles
    I'm curious about your methodology when undertaking a distribution comparison (I couldn't find a reference to one). I mean, you can't just be throwing some benchmark profiles at each distribution and whacking up the results... that would be... crazy!

    Questions like: what do you consider part of the distribution you are trying to test? In general, default distribution flags are exposed only in the build system, so compiling a program on a distribution doesn't pick up the default compiler flags that distribution ships with. CL is the only distribution I know of that even exposes CFLAGS in the environment, so you are comparing a bunch of distributions all compiling with the same default flags of the package (other than CL). Is that how you intended to represent a distribution for testing? Setting CFLAGS in the environment does nothing to affect the performance of packages shipped with a distribution (which is surely the main performance metric for a user, as most people install the provided packages), but it can help PTS test performance.
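The distinction above can be seen from a shell: on most distributions the default flags live in the packaging toolchain and never reach a login environment, which is what PTS-compiled tests inherit. A minimal sketch (assumes a Debian-family or RPM-family system; on Clear Linux the final line would instead show the exported flags):

```shell
# Distribution default compiler flags are queried from the packaging
# toolchain, not from the environment:
dpkg-buildflags --get CFLAGS 2>/dev/null || true   # Debian/Ubuntu
rpm --eval '%{optflags}' 2>/dev/null || true       # Fedora/openSUSE

# ...whereas what a PTS-driven build actually inherits is just this,
# which is empty on everything except Clear Linux:
echo "CFLAGS in environment: '${CFLAGS:-<unset>}'"
```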

    Compiling software outside of the distribution-provided packages also ignores any integration work or performance tuning a distribution may have done (for example a missing build flag that hurts performance, dependency differences, package-specific compiler flags, LTO, PGO, and AVX-optimized builds, which can actually understate the performance difference between CL and the others). Having the PTS compile the software creates its own issues too, since it resolves different dependencies on each distribution (and yes, you probably don't have nasm on openSUSE, though I haven't tested that myself). Yet the article tells the audience:
    Strangely with x265 the openSUSE performance was very poor compared to the other Linux distributions
    It's neither strange nor unexpected that building without CPU-optimized assembly leads to poor performance. Despite being told as much in the forums (which would have made validating it fairly quick!), the issue was neither fixed in the PTS (yes, the issue would 100% be with the PTS and not openSUSE if that is the case), nor were a few minutes spent investigating whether the result is valid; instead it was written off as an issue with the distribution!

    This means tests can only be interpreted individually, once you understand what the benchmark is doing and what causes the differences in the results. Some comparisons in the previous week even had a single test swing the geometric mean by 10%, by including a benchmark not appropriate for anything!

    I did write up a bunch of issues I've noticed with tests on the CL forum, which I'll link to rather than write up again: https://community.clearlinux.org/t/c...ardware/1479/4 This doesn't even scratch the surface of creating a robust benchmark methodology for distribution comparison.



  • StefanBruens
    replied
    Originally posted by Michael View Post

    The "yasm" external dependency should actually be pulling in both yasm and nasm. The naming convention is just a bit off: some program (I forget which, as it's been a number of years) silently switched from yasm to nasm, and for simplicity's sake we then just began also including nasm whenever yasm is requested.
    Fedora definition:
    <GenericName>yasm</GenericName>
    <PackageName>yasm nasm</PackageName>

    openSUSE definition:
    <GenericName>yasm</GenericName>
    <PackageName>yasm</PackageName>



  • sunnyflunk
    replied
    Originally posted by StefanBruens View Post
    You should include the output of configure/cmake/meson/whatever for all tools you compile yourself, so results can be verified.
    That sounds like work...

    If you want it fixed, I'd suggest a PR adding nasm at https://github.com/phoronix-test-sui...kages.xml#L235 like was done for Ubuntu (and probably others): https://github.com/phoronix-test-sui...kages.xml#L290

