Amazon EC2 M6i Performance For Intel Ice Lake In The Cloud Benchmarks
First up is looking at the generational improvement with the "8xlarge" instance type when going from Intel Cascade Lake with the m5.8xlarge to Ice Lake with the new m6i.8xlarge. Ubuntu 20.04 LTS was used on both instances with all other software kept the same during testing.
With the same instance type and in turn the same vCPU count, Ice Lake immediately showed its strength in the cloud. There were a lot of hefty wins for the m6i.8xlarge instance particularly around workloads that could make adequate use of Intel's latest instruction set extensions, such as when running Intel's own oneDNN deep learning library that is part of their oneAPI toolkit.
If leveraging Amazon EC2 as a virtual build farm, the m6i.8xlarge instance was 34% faster than the m5.8xlarge for code compilation -- the test profiles used covered compiling the Linux kernel, LLVM, FFmpeg, and Node.js.
Meanwhile across a mix of different "creator" workloads there was also 34% better performance, with the geometric mean of those tests including OSPRay, YafaRay, appleseed, SVT-HEVC, Open Image Denoise, and OpenVKL.
In HPC benchmarks including the likes of HPCG, GROMACS, LULESH, Pennant, Incompact3D/Xcompact3D, and TNN, the new Ice Lake instance was about 33% faster overall than the prior generation Cascade Lake-based instance.
Or across all our different test profiles using Intel oneAPI software components for benchmarking, it's again at about a 34% improvement.
Or taking the geometric mean of all 56 different benchmarks carried out on the m5.8xlarge and m6i.8xlarge instances, the new Ice Lake-based instance provided a 35% boost overall. Not bad considering the on-demand hourly rate, at least for the moment, is the same between M5 and M6i.
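For those curious how an overall figure like that 35% is derived from dozens of individual results, a geometric mean of the per-benchmark speedup ratios is the standard approach, since it avoids letting any one outlier test dominate the average. Below is a minimal sketch; the speedup values are purely illustrative placeholders, not the article's actual 56 results (those are in the OpenBenchmarking.org result file).

```python
import math

def geometric_mean(values):
    """Geometric mean: the nth root of the product of n values,
    computed in log space for numerical stability."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Hypothetical per-benchmark speedups (m6i.8xlarge time or throughput
# relative to m5.8xlarge); real runs would use all measured ratios.
speedups = [1.34, 1.33, 1.35, 1.40, 1.28]

overall = geometric_mean(speedups)
print(f"Overall generational uplift: {(overall - 1) * 100:.1f}%")
```

The log-space formulation shown here is equivalent to multiplying all the ratios and taking the nth root, but it avoids overflow/underflow when aggregating many benchmarks.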
Those wanting to see all of the individual m5.8xlarge vs. m6i.8xlarge Ubuntu 20.04 EC2 cloud benchmarks can do so from this OpenBenchmarking.org result file.