Google Cloud Tau T2A Ampere Altra vs. T2D AMD EPYC Performance

Written by Michael Larabel in Processors on 21 September 2022. Page 5 of 5.

While CFD isn't one of the main focuses for the Tau VMs, I couldn't help but have some fun with OpenFOAM... The Tau VMs delivered competitive performance with this open-source computational fluid dynamics package, and the T2D series in particular showed great scaling up through the available 60 vCPU instance size.

The Tau T2D series also delivered consistently better performance with the GROMACS and GPAW scientific computing packages, though again these aren't workloads the Tau VMs are optimized for.

If considering Tau VMs for a build farm or frequent code compilation tasks, such as a continuous integration (CI) setup, the Tau T2D VMs delivered faster build times... Each compilation test targeted the native ISA of the host system, but if you are simply after a CI pipeline and not too concerned about the underlying CPU architecture, the T2D series turned around builds the fastest. If needing an AArch64-based CI setup, Tau T2A is obviously a great option.
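To illustrate the native-ISA point, here is a minimal sketch of a hypothetical CI build step (not from the article's test setup) that picks compiler flags based on the architecture reported by uname, so the same pipeline builds natively on both T2D (x86_64) and T2A (aarch64) instances:

```shell
#!/bin/sh
# Hypothetical CI step: choose native-ISA compiler flags per host.
# x86_64 covers Tau T2D (AMD EPYC "Milan"); aarch64 covers Tau T2A (Ampere Altra).
ARCH=$(uname -m)
case "$ARCH" in
    x86_64)
        # GCC on x86_64 uses -march=native for the host CPU's ISA extensions.
        CFLAGS="-O2 -march=native"
        ;;
    aarch64)
        # GCC on AArch64 prefers -mcpu=native to tune for the host core.
        CFLAGS="-O2 -mcpu=native"
        ;;
    *)
        # Fall back to generic optimization on any other architecture.
        CFLAGS="-O2"
        ;;
esac
echo "Building for $ARCH with CFLAGS=$CFLAGS"
```

Whether this matters for you depends on the workload: benchmark binaries benefit from native tuning, while generic CI artifacts shipped to users would typically stick with baseline flags.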

Google's launch of the Tau T2D series last year, powered by AMD EPYC 7003 "Milan" processors, was very successful, and now Google has prepared its first foray into the Arm cloud space. The T2A series is a nice complementary offering to the T2D series with competitive pricing for scale-out workloads that are well supported on AArch64. Overall the T2D series came out slightly ahead of the T2A series thanks to the more mature x86_64 Linux software ecosystem. OpenJDK Java performance was competitive between T2A and T2D, while SPECjbb 2015 performance sided with the T2D instances. A number of the database workloads showed very healthy competition between T2A and T2D, while workloads like Redis, RocksDB, Cassandra, DragonFlyDB, and especially PostgreSQL came out in favor of the AMD EPYC powered instances. While not a key focus for Tau VMs, the T2D series was also stronger in HPC workloads like GROMACS, GPAW, and OpenFOAM CFD. The T2D series likewise delivered faster build times when targeting the native ISA of the host, relevant if looking at CI-type setups without being too concerned about the underlying architecture.

At the end of the day, it certainly pays to evaluate both of the Google Tau VM classes to see how your particular workload(s) compare. The Tau T2A instances show that Arm/AArch64 performance has certainly come a long way and can compete with modern x86_64 hardware in various workloads, while in other areas the software ecosystem still has room for improvement and further AArch64 tuning. With the likes of the Apple M1/M2 beginning to run Linux as affordable developer boxes, the open-source world continues to enhance AArch64 software support, which may help close the gap in some areas; in others, x86_64 with AMD EPYC on T2D delivered sharply better performance.

It will also be really interesting to see how the next-generation Google Cloud instances perform, considering Ampere One and AMD EPYC Zen 4 processors are also set to debut in the months ahead. Thanks to Google for providing gratis early access to the T2A instances for early benchmarking.

