AMD Job Posting Confirms More Details Around Their AI GPU Compute Stack Plans

Written by Michael Larabel in AMD on 12 October 2024 at 06:44 AM EDT.
A Friday evening job posting has confirmed and reinforced details around AMD's future AI GPU compute stack, presumably what's been referred to as the Unified AI Software Stack.

The Unified AI Software Stack is to support AMD's full range of hardware, from CPUs to GPUs and, most recently, the NPUs within Ryzen AI parts. It will help offload work to the most appropriate device/accelerator and provide a more cohesive developer experience than AMD's software currently offers.

Unified AI Software Stack


The job posting, published to the LLVM Discourse, shows AMD recruiting an AI GPU compiler engineer with an MLIR and LLVM focus. We've known that MLIR -- LLVM's Multi-Level Intermediate Representation -- is to serve as the common IR of the Unified AI Software Stack. MLIR has also played a role in their Peano compiler for Ryzen NPUs and the like.

The job posting also notes that IREE is to play a central role in their future AI compute stack. IREE, the Intermediate Representation Execution Environment, is built atop MLIR for lowering machine learning models into a unified IR. IREE already supports ONNX, PyTorch, TensorFlow, JAX, and more. It also has an AMD ROCm back-end and can additionally target Vulkan, Apple Metal, and NVIDIA CUDA.
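As a rough illustration of what "lowering machine learning models into a unified IR" looks like in practice, below is a trivial MLIR function of the sort IREE's compiler consumes. This is a hedged sketch, not taken from the job posting; the exact dialects and tool flags vary across IREE releases.

```mlir
// A minimal element-wise addition over 4-element float tensors,
// written in MLIR's standard func/arith dialects. IREE progressively
// lowers functions like this through its own dialects down to a
// deployable module for the chosen back-end (CPU, ROCm, Vulkan, etc.).
func.func @add(%a: tensor<4xf32>, %b: tensor<4xf32>) -> tensor<4xf32> {
  %0 = arith.addf %a, %b : tensor<4xf32>
  return %0 : tensor<4xf32>
}
```

With IREE's tooling installed, a file like this can typically be compiled with something along the lines of `iree-compile --iree-hal-target-backends=llvm-cpu add.mlir -o add.vmfb` and then executed via `iree-run-module`, though the flag names have shifted between IREE versions.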

IREE diagram


Those unfamiliar with IREE can visit IREE.dev. I have covered IREE in the past, such as for machine learning acceleration with Vulkan, and it has been mentioned in the context of AMD's AI software efforts. AMD's Nod.ai acquisition was in part to recruit engineering talent around not only MLIR but IREE too.

Anyhow, the job posting sums up the new AI GPU compiler development engineer position as:
"We are building IREE as an open-source compiler and runtime solution to productionize ML for a variety of use cases and hardware targets: https://iree.dev/. In particular, we aim to provide broad and performant GPU coverage, from datacenter to mobile, via a unified open-source software stack. Our team develops an end-to-end AI solution: from ML framework integration, down to generating efficient kernels."

It's great seeing AMD reaffirm their interest from "datacenter to mobile," and thus this compiler/run-time software is likely part of the Unified AI Software Stack effort. And, of course, this in-development software stack will be open-source. It's going to be very interesting to see how well this future AMD AI compute stack performs and exactly how well-rounded the support will be across their different product lines.
About The Author
Michael Larabel

Michael Larabel is the principal author of Phoronix.com and founded the site in 2004 with a focus on enriching the Linux hardware experience. Michael has written more than 20,000 articles covering the state of Linux hardware support, Linux performance, graphics drivers, and other topics. Michael is also the lead developer of the Phoronix Test Suite, Phoromatic, and OpenBenchmarking.org automated benchmarking software. He can be followed via Twitter, LinkedIn, or contacted via MichaelLarabel.com.
