Intel Xeon Platinum 8380 Ice Lake Linux Performance vs. AMD EPYC Milan, Cascade Lake

  • #1

    Phoronix: Intel Xeon Platinum 8380 Ice Lake Linux Performance vs. AMD EPYC Milan, Cascade Lake

    Last month Intel launched its 3rd Gen Xeon Scalable "Ice Lake" processors, 10nm server parts with SKUs up to 40 cores, while boasting around a 20% IPC improvement overall and big reported gains for AI workloads and more. We recently received an Intel Ice Lake reference server with dual Xeon Platinum 8380 processors so we could carry out our own performance tests. This initial article is our first look at Xeon Platinum 8380 Linux support in general along with a number of performance benchmarks.

    https://www.phoronix.com/vr.php?view=30184

  • #2
    The reason the Ice Lake based Xeons perform so well in the SVT-AV1 and x265 benchmarks is that Ice Lake has excellent AVX-512 SIMD units, and both encoders make good use of that instruction set.
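    For anyone curious which AVX-512 extensions their own CPU advertises, here's a minimal sketch that parses a `/proc/cpuinfo`-style "flags" line. The sample string below is an illustrative subset, not lifted from the 8380 in the article.

    ```python
    # Sketch: list the AVX-512 feature flags in a cpuinfo-style flags line.
    # The sample flags string is hypothetical, for illustration only.

    def avx512_extensions(flags_line: str) -> list[str]:
        """Return the sorted AVX-512 feature flags found in a flags line."""
        return sorted(f for f in flags_line.split() if f.startswith("avx512"))

    # Hypothetical excerpt of the flags an Ice Lake server part might report
    sample = "fpu sse2 avx avx2 avx512f avx512dq avx512cd avx512bw avx512vl avx512vnni"
    print(avx512_extensions(sample))
    # On a live Linux box you could feed it the real line from open("/proc/cpuinfo")
    ```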



    • #3
      Thanks, Michael! I particularly enjoyed the power consumption plot, and would love to see more benchmark results presented as joules per benchmark completed. Bean-counters seem better able to understand such numbers.



      • #4
        But what about the performance-per-watt between the two?



        • #5
          Weirdly enough, the results were *far* better than my initial expectations. I expected 3rd Gen EPYC to completely wipe the floor with Ice Lake because of the core-count disparity, but it seems that was not the case, and it's genuinely a halfway decent product in some workloads.



          • #6
            Originally posted by iskra32:
            Weirdly enough, the results were *far* better than my initial expectations. I expected 3rd Gen EPYC to completely wipe the floor with Ice Lake because of the core-count disparity, but it seems that was not the case, and it's genuinely a halfway decent product in some workloads.
            Intel, and AMD, and ARM, and .... all have wins in specific use cases. That is why the smart money is not on a specific vendor, but on workload-specific solutions.



            • #7
              It performs very well in HPC workloads, with about a 43% improvement over 2nd Gen.



              • #8
                It's nice to see Intel with competitive performance again!



                • #9
                  Michael, did you enable NUMA node per CCX on the EPYC server? I did some testing with that and gained a lot of performance. The reason is that the Xen scheduler is NUMA-aware and places hot threads in the same NUMA node, which benefits cache-sensitive workloads a lot. Not sure whether the Linux kernel has schedulers like that?


                  https://wiki.tnonline.net/w/Blog/Xen...A_on_EPYC_CPUs
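                  The placement idea is easy to sketch: treat each CCX as its own domain and keep threads that share data on cores within one domain, so they share that CCX's L3. A toy model (core count is made up for illustration; Zen 3 CCXs are 8 cores):

                  ```python
                  # Sketch of "NUMA node per CCX" thread placement: group cores into
                  # CCX-sized domains and fill one domain before spilling to the next,
                  # so hot threads share an L3 cache. Numbers are illustrative only.

                  def ccx_domains(n_cores: int, ccx_size: int = 8) -> list[list[int]]:
                      """Split core IDs into consecutive CCX-sized groups."""
                      return [list(range(i, i + ccx_size)) for i in range(0, n_cores, ccx_size)]

                  def place_threads(n_threads: int, domains: list[list[int]]) -> dict[int, int]:
                      """Map each thread to a core, filling one CCX before the next."""
                      cores = [c for dom in domains for c in dom]
                      return {t: cores[t % len(cores)] for t in range(n_threads)}

                  doms = ccx_domains(32)         # 4 CCX domains of 8 cores each
                  plan = place_threads(6, doms)  # 6 hot threads all land in CCX 0
                  print(doms, plan)
                  ```

                  On a real system you'd enforce the plan with CPU affinity (e.g. taskset/numactl) rather than this toy mapping, but it shows why cache-sensitive workloads gain from the per-CCX topology being visible to the scheduler.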



                  • #10
                    Originally posted by iskra32:
                    Weirdly enough, the results were *far* better than my initial expectations. I expected 3rd Gen EPYC to completely wipe the floor with Ice Lake because of the core-count disparity, but it seems that was not the case, and it's genuinely a halfway decent product in some workloads.
                    It could very well be a case of "there are only so many cores you can throw at a problem" or "not optimized for more than X cores". It seems like some of the compile benchmarks simply hit a wall where more threads either don't help or bring diminishing returns.

                    Like CommunityMember said, different hardware is optimized for different things. I'd hope that anyone spending the kind of money it costs to build one of these systems is looking into what runs better on which platform.
