AMD & Intel Team Up For UALink As Open Alternative To NVIDIA's NVLink
It's rare for an advance media briefing to involve representatives from both AMD and Intel, but that happened yesterday. AMD and Intel, along with Broadcom, have formed Ultra Accelerator Link "UALink", a new open standard they hope will take on NVIDIA's proprietary NVLink interface.
Last summer brought news of Ultra Ethernet, a new industry standard backed by Intel, AMD, Meta, HPE, and others for high-performance networking. Now there is Ultra Accelerator Link, a more specialized standard for linking GPUs/accelerators within either the same system or a group of systems forming a pod.
UALink aims to be an open standard for scaling to tens or hundreds of GPUs/accelerators within pods. The simplest way to explain it is as an open alternative to NVIDIA's NVLink. In addition to Intel, AMD, and Broadcom, Cisco, Google, HPE, Meta, and Microsoft have also been involved in the formation of Ultra Accelerator Link. Ultra Ethernet high-speed networking will still have its role for scale-out purposes.
UALink is intended to be an open ecosystem for performance-sensitive, scale-up connections among AI/GPU accelerators. The hope is to have the initial Ultra Accelerator Link 1.0 specification ready in Q3'2024, while an update one quarter later (Q4'2024) will focus on additional bandwidth capabilities. UALink will leverage AMD's Infinity Fabric protocol. The UALink 1.0 specification will allow for scaling up to 1,024 accelerators.
UALink will allow for direct load, store, and atomic operations between AI accelerators/GPUs and serve as a high-bandwidth, low-latency fabric able to handle hundreds of accelerators.
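UALink hardware and a public API do not exist yet, but the memory-semantic model the consortium describes (direct loads, stores, and atomics into a peer accelerator's memory) can be illustrated with today's CUDA peer-to-peer access as a rough analogy. The sketch below is not UALink code; the device numbering and buffer layout are hypothetical.

```cuda
// Minimal sketch using CUDA peer-to-peer access as an analogy for the
// load/store/atomic semantics UALink aims to standardize across vendors.
// Not a UALink API; device indices and the buffer layout are hypothetical.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void touch_peer(int *peer_buf) {
    int v = peer_buf[0];        // direct load from the remote accelerator's memory
    peer_buf[1] = v + 1;        // direct store into it
    atomicAdd(&peer_buf[2], 1); // atomic operation on remote memory
}

int main() {
    int *buf = nullptr;

    cudaSetDevice(1);                    // allocate the buffer on GPU 1
    cudaMalloc(&buf, 3 * sizeof(int));
    cudaMemset(buf, 0, 3 * sizeof(int));

    cudaSetDevice(0);
    cudaDeviceEnablePeerAccess(1, 0);    // let GPU 0 load/store/atomic into GPU 1's memory

    touch_peer<<<1, 1>>>(buf);           // kernel running on GPU 0 touching GPU 1's buffer
    cudaDeviceSynchronize();

    int host[3];
    cudaMemcpy(host, buf, sizeof(host), cudaMemcpyDeviceToHost);
    printf("%d %d %d\n", host[0], host[1], host[2]);
    return 0;
}
```

NVLink provides this kind of memory-semantic access today within NVIDIA's ecosystem; the point of UALink is to offer the same model over an open, multi-vendor interconnect.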
It's exciting to see basically everyone take on NVIDIA's NVLink and to see an aggressive Ultra Accelerator Link v1.0 milestone already set for next quarter. But in reality, hardware implementing UALink 1.0 likely won't be in customer hands until around 2026, while NVLink is already pervasive throughout NVIDIA's data center products. In any event, we love open standards at Phoronix, and it's great to see this UALink industry collaboration; we look forward to covering it further on Phoronix.