Benchmarking Amazon EC2's New C6a Instances Powered By 3rd Gen EPYC
While AWS doesn't offer a directly-comparable Intel-based "C6i" instance type (Edit: Err whoops, yes there is C6i, but it wasn't available in the region being tested at the time...), they do have the EC2 M6i instances launched last year and built on latest-generation Intel Xeon Scalable "Ice Lake" processors (Xeon Platinum 8375C). The M6i.8xlarge instance offers 32 vCPUs like the C6a/C6g 8xlarge instances but, coming from a higher-memory instance family, has 128GB of RAM. The M6i.8xlarge instance is also significantly more expensive at $1.536 USD per hour of on-demand pricing.
When it comes to the M6i.8xlarge performance against the C6a.8xlarge instance, the C6a instance generally came out ahead, though the workloads tested aren't really the memory-intensive ones the M6 series is designed for. In some of the workloads able to make effective use of AVX-512, the Intel Ice Lake instance did offer competitive performance against the C6a's Zen 3, while at other times it was far behind.
On a performance-per-dollar basis, the C6a offered much better value.
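To illustrate how such a performance-per-dollar comparison works, here is a minimal sketch: take an aggregate (e.g. geometric mean) of higher-is-better benchmark scores and divide by the on-demand hourly price. The benchmark scores below are made-up placeholders, and the C6a.8xlarge price is an assumption; only the M6i.8xlarge price comes from the article.

```python
from math import prod

def geomean(scores):
    # Geometric mean, commonly used to aggregate benchmark results
    return prod(scores) ** (1 / len(scores))

instances = {
    # instance: (hypothetical higher-is-better scores, USD per on-demand hour)
    "c6a.8xlarge": ([100, 110, 95], 1.224),  # price is an assumption
    "m6i.8xlarge": ([90, 105, 80], 1.536),   # price quoted in the article
}

for name, (scores, price) in instances.items():
    value = geomean(scores) / price
    print(f"{name}: {value:.1f} perf per dollar-hour")
```

With placeholder numbers like these, a cheaper instance that also scores higher per benchmark pulls further ahead once price is factored in, which is the general shape of the value comparison described above.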
That's the brief story on the C6a performance from my initial testing. The C6a certainly offers the hefty generational improvement we are accustomed to seeing from Zen 2 to Zen 3, the C6a versus C6g performance varies with the particular workload, and the EPYC 7R13 instance generally delivered better performance than the Intel Ice Lake-based M6i class.