Benchmarking Amazon EC2's New C6a Instances Powered By 3rd Gen EPYC
Written by Michael Larabel in Processors on 17 February 2022.

Last year Amazon launched the EC2 M6a instances powered by AMD EPYC 7003 series processors, and this week it expanded its range of AMD Zen 3 offerings with the EC2 C6a series. The C6a instances are designed for compute-intensive workloads (hence the "C" series), and AWS promotes them as offering up to 15% better price-performance than the prior-generation C5a instances and up to 10% lower cost than comparable x86-based EC2 instances. I've run benchmarks of the new EC2 C6a instances to see how they perform against the prior 2nd Gen EPYC based C5a instances, against the Intel Ice Lake competition in the M6i stack, and against Amazon's own Graviton2-based C6g type.

The C6a line-up ranges from the c6a.large with 2 vCPUs and 4GB of RAM up through the c6a.48xlarge with 192 vCPUs and 384GB of RAM. The list of instance types and other information on the new C6a series can be found at aws.amazon.com. The new C6a series comes just days after Google launched their C2D compute-optimized cloud instances also built on AMD EPYC 7003 series hardware.

For my initial testing today, to save on time and cloud costs, just the "8xlarge" instance type was evaluated, with 32 vCPUs and 64GB of RAM. Ubuntu 20.04 LTS was the Linux distribution used across all of the Amazon EC2 testing. All testing and pricing data came from Amazon's US West (Oregon) cloud region, with the Oregon on-demand pricing table used for the price-performance calculations. The C6a series uses AMD EPYC 7R13 processors.

First up in this article is a look at c5a.8xlarge versus c6a.8xlarge performance to see the generational improvement in the compute-optimized instances powered by AMD EPYC. Both generations have 32 vCPUs: the c5a.8xlarge uses the EPYC 7R32 (Zen 2) processor while the new c6a.8xlarge has the EPYC 7R13 (Zen 3). Both 8xlarge instances offer 64GB of system memory. Let's see how the performance and performance-per-dollar look generationally...
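The performance-per-dollar figures referenced above come down to dividing each benchmark result by the instance's on-demand hourly rate. A minimal sketch of that calculation is below; the scores and hourly prices are illustrative placeholders, not the actual Oregon on-demand rates or benchmark results from this article.

```python
def perf_per_dollar(result, hourly_price, higher_is_better=True):
    """Return a price-performance figure for one benchmark result.

    For 'higher is better' results (e.g. ops/sec) this is result / price;
    for 'lower is better' results (e.g. seconds) the result is inverted
    first so that a larger figure is always better.
    """
    score = result if higher_is_better else 1.0 / result
    return score / hourly_price

# Hypothetical throughput scores and on-demand hourly prices (USD) for the
# two compared instance types -- placeholder values only.
instances = {
    "c5a.8xlarge": {"score": 100.0, "price": 1.23},
    "c6a.8xlarge": {"score": 125.0, "price": 1.22},
}

for name, d in instances.items():
    print(f"{name}: {perf_per_dollar(d['score'], d['price']):.1f} per $/hour")
```

This is why a new generation can win on price-performance even at a similar hourly rate: any throughput gain at roughly equal cost lifts the ratio directly.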
