Intel Xeon Max Performance Delivers A Powerful Combination With AMX + HBM2e

Written by Michael Larabel in Processors on 7 July 2023 at 03:00 PM EDT.
[Graph: OpenVINO, Weld Porosity Detection FP16 on CPU — Xeon Max 9480 2P (HBM Only) was the fastest.]
[Graph: OpenVINO, Weld Porosity Detection FP16 on CPU — EPYC 9554 2P was the fastest.]
[Graph: OpenVINO, Weld Porosity Detection FP16-INT8 on CPU — Xeon Max 9480 2P (HBM Only) was the fastest.]

The weld porosity model with OpenVINO showed more impressive wins for Xeon Max when making use of AMX and HBM2e.

[Graph: ONNX Runtime, GPT-2 on CPU with the Standard executor — Xeon Max 9480 2P (HBM Only) was the fastest.]

For AI workloads outside of OpenVINO that are not yet optimized for AMX, the HBM2e on Xeon Max still shows significant advantages for workloads where 64GB of memory per CPU/socket is sufficient for the 56-core Xeon Max 9480. The Xeon Max series with HBM2e is very exciting, but as noted, this first-generation product will run into bottlenecks where roughly 1~2GB of RAM per core/thread isn't enough. In any case, particularly for AI workloads where the Advanced Matrix Extensions come into play, it can mean outperforming the current EPYC 9004 series by lofty margins.
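To put that capacity constraint in concrete terms, here is a quick arithmetic sketch. The 64GB-per-socket and 56-core figures are from this article; the thread count assumes Hyper Threading is enabled:

```python
# Rough HBM2e capacity per core/thread on the Xeon Max 9480 (illustrative arithmetic only).
hbm_gb_per_socket = 64        # HBM2e capacity per socket, per the article
cores = 56                    # Xeon Max 9480 core count
threads = cores * 2           # assuming Hyper Threading enabled

gb_per_core = hbm_gb_per_socket / cores
gb_per_thread = hbm_gb_per_socket / threads
print(f"{gb_per_core:.2f} GB per core, {gb_per_thread:.2f} GB per thread")
```

That works out to a little over 1GB per core (and about half that per thread), which is why workloads with larger per-core footprints spill beyond the HBM2e.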

[Graph: Geometric Mean Of All Test Results, Intel Xeon Max AMX + HBM2e Performance Benchmark — Xeon Max 9480 2P (HBM Only) was the fastest.]

When taking the geometric mean of the small set of tests run for today's article, going from AVX-512 only to AMX yielded an incredible 2.5x the performance, which was enough to put Xeon Max slightly ahead of the EPYC 9654. Going from DDR5-only to HBM-only then widened the lead by an additional 25%, for 3.13x the original baseline.
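The way those two speedups compose can be sketched with a few lines of arithmetic. The 2.5x and 25% figures are the article's reported numbers; the `geomean` helper is just a reference definition, not part of the article's methodology:

```python
import math

# Composing the article's reported geometric-mean speedups (illustrative only).
avx512_to_amx = 2.5    # AMX vs. AVX-512-only, per the article
ddr5_to_hbm = 1.25     # additional ~25% from HBM-only over DDR5-only
total = avx512_to_amx * ddr5_to_hbm
print(f"{total}x the AVX-512/DDR5 baseline")   # 3.125x, rounded to 3.13x above

# For reference, a geometric mean over per-test speedups is computed as:
def geomean(values):
    return math.prod(values) ** (1 / len(values))
```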

It will be interesting to revisit this in the months ahead as more software is adapted for AMX where relevant, and to see what further Linux software optimizations Intel may have for amplifying the HBM2e impact. Additionally, hopefully future-generation Xeon Max products will be able to go beyond 64GB of HBM2e per socket. It will also be interesting to see how Intel Xeon Max compares against the forthcoming AMD EPYC Genoa-X processors. In any case, today we have seen that HBM2e memory and Advanced Matrix Extensions make for a very powerful combination. It took AMD quite some time to implement AVX-512; it will be interesting to see whether AMD implements AMX instructions in the future. For now, if you have software able to make use of AMX -- or that relies on libraries like oneDNN or libxsmm with integrated Advanced Matrix Extensions support -- and can benefit from increased memory performance while fitting in the roughly 1~2GB of available high bandwidth memory per core, Intel's Xeon Max 9480 has proven to be a very capable contender for AI and HPC.
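For readers wanting to verify that their own CPU exposes AMX before expecting oneDNN or libxsmm to dispatch to it, a minimal Linux sketch follows. The `amx_tile`, `amx_bf16`, and `amx_int8` flag names are what recent Linux kernels report in /proc/cpuinfo:

```python
# Hedged sketch: detect AMX feature flags from a Linux /proc/cpuinfo dump.
def amx_flags(cpuinfo_text):
    """Return which AMX-related flags appear in the given /proc/cpuinfo text."""
    wanted = {"amx_tile", "amx_bf16", "amx_int8"}
    flags = set()
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            # The "flags" line lists all CPU feature flags after the colon.
            flags.update(line.split(":", 1)[1].split())
            break
    return sorted(wanted & flags)

# Usage:
#   with open("/proc/cpuinfo") as f:
#       print(amx_flags(f.read()))
```

An empty result on a Sapphire Rapids-era system would suggest the kernel is too old to report the flags or AMX is unavailable.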

Thanks again to Intel and Supermicro for providing the hardware to make this Xeon Max testing possible. If you missed it, check out the earlier Xeon Max 9468/9480 HBM impact comparison benchmarks too.


