Intel Xeon Platinum 8380 Ice Lake Linux Performance vs. AMD EPYC Milan, Cascade Lake
Written by Michael Larabel in Processors on 12 May 2021.

Last month Intel launched their 3rd Gen Xeon Scalable "Ice Lake" processors, 10nm server parts with SKUs up to 40 cores, while boasting around a 20% IPC improvement overall and big reported gains for AI workloads and more. Recently we received an Intel Ice Lake reference server with dual Xeon Platinum 8380 processors so we could carry out our own performance tests. This initial article is our first look at Xeon Platinum 8380 Linux support in general along with a number of performance benchmarks.

The Intel 3rd Gen Xeon Scalable Ice Lake processors are a big improvement over the 2nd Gen Cascade Lake processors with the transition to the 10nm Sunny Cove architecture and processors now available with up to 40 cores rather than topping out at 28, though still lower than the likes of EPYC at 64 cores or Ampere Altra at even higher core counts. The new Xeon Scalable processors also now support eight channels of DDR4-3200, 64 lanes of PCI Express 4.0 per socket, and other improvements as outlined in the launch-day article.

For evaluating the Ice Lake server Linux performance (with BSD tests coming too), we were recently provided by Intel with a reference server sporting dual Xeon Platinum 8380 processors, 16 x 32GB of DDR4-3200 Hynix memory, and multiple Intel SSDs. The Xeon Platinum 8380, as a reminder, is a 40-core / 80-thread processor with a 2.3GHz base clock and 3.4GHz maximum turbo frequency, offers a 60MB L3 cache, carries a 270 Watt TDP, and has a recommended customer price of $8099 USD.

Ice Lake Server Linux Support

Prior to getting to the benchmarks, one of the areas where Intel is highly regarded is their open-source, upstream Linux support prior to launch. Even setting aside any production/supply delays, Intel for years across their desktop and server portfolios has almost always been spot-on with launch-day Linux support: in good shape not only upstream but out far enough ahead of time that it's often baked into already-shipping major Linux distributions and back-ported where necessary to the notable enterprise Linux distributions. With Ice Lake Xeon this is still the case and the hardware enablement support has been out there for a while now. Testing of all the usual Linux distributions has been working out well on Ice Lake Xeon with no obstacles encountered yet.

AMD has been improving their Linux support at launch and has generally been on the mark for recent generations, but there is still room for improvement: being more timely at launch and ensuring new feature support is out there at least upstream, but ideally already in the major distributions.

With AMD's EPYC 7003 "Zen 3" launch they did provide the same-day AOCC compiler, but for those preferring the upstream open-source compilers, that was one of the sore points where they could do better. It was only after the actual EPYC 7003 announcement that their partners at SUSE began posting more of the Znver3 tuning patches. That more tuned Znver3 support then appeared in the very recent release of GCC 11 and was also back-ported to GCC 10.3. So while it is now out in released form after launch, not many major Linux distributions are quick to adopt new GCC releases or even the point releases, except for the likes of Fedora. It won't be until Ubuntu 21.10 this autumn that Ubuntu moves to GCC 11 out-of-the-box with that proper Znver3 support. On the LLVM Clang compiler front, the Zen 3 scheduler model was finally merged last week and that in turn won't be out in released form until the next LLVM release around September or October.

With Intel, they began adding "icelake-server" support to the compilers back in 2018. So at launch, the current stable GCC compilers and what the major Linux distributions have already been shipping for the past year or so have that icelake-server support in place.
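As a quick way of seeing whether an installed compiler recognizes these targets, you can probe GCC with the relevant -march= values. A minimal sketch, assuming the upstream GCC target names (icelake-server landed in GCC 8, znver3 only in GCC 10.3 / GCC 11):

```shell
# Probe the installed gcc for the server -march targets discussed above.
for arch in icelake-server znver3; do
    if gcc -march="$arch" -E -x c /dev/null >/dev/null 2>&1; then
        echo "$arch: supported by GCC $(gcc -dumpversion)"
    else
        echo "$arch: not supported by this gcc"
    fi
done
```

The same check works for Clang by swapping in `clang` for `gcc`.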

Outside of the compiler tangent, other AMD Linux kernel bits have arrived late for non-critical areas, such as the recent MCE driver refactoring, and even that only covering Rome and newer. The new AMD SEV-SNP "Secure Nested Paging" functionality is still going through the upstream review process but is at least out in source form for those interested. Or more broadly, and as a new problem, the AMD energy driver was just removed from the Linux kernel, ultimately due to a disagreement between AMD engineers and the HWMON subsystem maintainer.

At least from my testing over the past two to three weeks with the Ice Lake server, one of the only support pain points coming to mind is that the upstream SGX support was a relatively late addition. Ice Lake Xeon processors add Software Guard Extensions (SGX), which were already found in non-Xeon-Scalable processors. Intel had been working for years on upstreaming SGX support to the Linux kernel, but it only landed at the end of last year with Linux 5.11. More recently, Linux 5.13 adds bits for Intel SGX support with KVM guests. So that's "late" by Intel standards, where we are used to seeing everything squared away well ahead of launch, if that functionality is important to you. But besides that, no other real Linux support obstacles come to mind from my Ice Lake server usage thus far.
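For those wanting to verify SGX availability on a given system, a minimal sketch, assuming the sgx CPU flag in /proc/cpuinfo and the /dev/sgx_enclave device node exposed by the upstream driver that landed in Linux 5.11:

```shell
# Check for the SGX CPU flag and the enclave device node from the
# upstream SGX driver merged in Linux 5.11.
if grep -qw sgx /proc/cpuinfo 2>/dev/null; then
    echo "CPU advertises SGX"
else
    echo "no SGX flag in /proc/cpuinfo"
fi
if [ -e /dev/sgx_enclave ]; then
    echo "kernel SGX driver is active"
else
    echo "no /dev/sgx_enclave (pre-5.11 kernel, or SGX disabled in firmware)"
fi
```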

So while the latest x86_64 server processors from both Intel and AMD are working on Linux at launch, Intel tends to have their support out earlier, ensuring new features are ready to deploy out-of-the-box in modern Linux distributions and that compiler support is squared away, while at least at the moment AMD tends to have more "loose ends" that ultimately get settled post-launch. Put another way, Intel was already posting Sapphire Rapids enablement patches last year for the Linux kernel, GCC, and other toolchain bits (as covered in numerous Phoronix articles already) while AMD was still focused on Zen 3 enablement. In AMD's case it's largely a function of their engineering resources, and now that they are ramping up more Linux staff, that will hopefully address such shortcomings for future launches. Intel, meanwhile, has for years been known for their extremely large pool of open-source talent within the organization, working not only on hardware enablement but on optimized open-source libraries, interesting projects like Embree and SVT-AV1, and more.

Another area worth highlighting in Intel's open-source support is generally having premier BSD support, especially with FreeBSD. Among the CPU vendors (Arm included), Intel generally leads with BSD hardware support and has developers focused on FreeBSD. I'll have an article dedicated to the BSD support on the Ice Lake Xeon server in the coming weeks, but at least from a quick try with FreeBSD 13.0 it seemed to be working out fine. Stay tuned for that later article along with tests from DragonFlyBSD 6.0 and others.

Now for the area we have been most eager to look at with 3rd Gen Xeon Scalable... the Linux performance. For this preliminary Xeon Platinum 8380 2P benchmarking, the comparison is against the launch-day testing of the AMD EPYC 7763 / 7713 / 75F3 2P processors, which also included the prior-generation Cascade Lake Xeon Platinum 8280 2P. All of these results come from using Ubuntu 20.04 LTS with the Linux 5.11 kernel upgrade and the performance CPU frequency scaling governor across all systems, but otherwise sticking to the default package versions of Ubuntu 20.04 LTS. Follow-up testing will look at the performance with the very latest Ubuntu 21.04 and other Linux software modifications, such as trying out Linux 5.13 Git, Intel's Clear Linux performance on Ice Lake, the newest GCC and LLVM Clang compilers, and the assortment of other latest Linux benchmarking I usually do.
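The performance governor mentioned above is applied through the standard Linux cpufreq sysfs interface. A minimal sketch, assuming root access and a kernel with cpufreq support:

```shell
# Switch every CPU to the "performance" cpufreq governor via sysfs.
for gov in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
    if [ -w "$gov" ]; then
        echo performance > "$gov"
    else
        echo "cannot write $gov (run as root, or no cpufreq support)"
    fi
done
# Show the resulting governor(s) in use.
cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor 2>/dev/null | sort -u
```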

Each of the servers was running with memory populated at the maximum rated memory channels and frequency supported by its platform. A Micron 9300 3.8TB NVMe SSD with Ubuntu 20.04 LTS was used as the drive for all of the testing across servers.

110 tests were run for our initial Xeon Platinum 8380 2P vs. Xeon Platinum 8280 2P vs. EPYC 7763/7713/75F3 benchmarking.

