AMD Unified Inference Frontend 1.1 Released
AMD in February quietly released version 1.1 of their in-development Unified Inference Frontend (UIF) that aims to be their catch-all solution for AI inference spanning CPUs, GPUs, FPGAs, and other IP from their recent Xilinx acquisition.
The AMD Unified Inference Frontend aims to be their single solution for inference across their growing spectrum of hardware, consolidating what previously required multiple different compute platforms. AMD UIF targets AMD EPYC processors, AMD Ryzen processors, AMD Instinct accelerators, Xilinx Versal Adaptive Compute Acceleration Platform (ACAP) targets, and Xilinx FPGAs. It is with this recent v1.1 release that support for AMD Instinct hardware has been wired up.
It was only in passing that I noticed UIF v1.1 had been quietly tagged three weeks ago without crossing my radar until now. The AMD UIF 1.1 release notes simply mention:
UIF 1.1 extends the support to AMD Instinct GPUs in addition to EPYC CPUs starting from UIF 1.0. Currently, MIGraphX is the acceleration library for Instinct GPUs for Deep Learning Inference. UIF 1.1 provides 45 optimized models for Instinct GPUs and 84 for EPYC CPUs. The Vitis™ AI Optimizer tool is released as part of the Vitis AI 3.0 stack. UIF Quantizer is released in the PyTorch and TensorFlow Docker® images. Leveraging the UIF Optimizer and Quantizer enables performance benefits for customers when running with the MIGraphX and ZenDNN backends for Instinct GPUs and EPYC CPUs, respectively. This release also adds MIGraphX backend for AMD Inference Server. This document provides information about downloading, building, and running the UIF 1.1 release.
...
UIF 1.1 also introduces tools for optimizing inference models. GPU support includes the ability to use AMD GPUs for optimizing inference as well the ability to deploy inference using the AMD ROCm™ platform. Additionally, UIF 1.1 has expanded the set of models available for AMD CPUs and introduces new models for AMD GPUs as well.
The CPU support for Ryzen and EPYC processors relies on ZenDNN, AMD's fork of Intel's oneDNN project. The GPU support makes use of the existing AMD ROCm GPU compute platform, with Instinct accelerators being the primary focus alongside select Radeon GPUs.
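For those curious what the MIGraphX deployment path looks like in practice, below is a minimal sketch of running an ONNX model through the MIGraphX Python API on a ROCm GPU. This is based on general MIGraphX usage rather than the UIF documentation, and the model file name and input tensor name are placeholders.

import numpy as np
import migraphx

# Parse an ONNX model and compile it for the ROCm GPU target.
# "resnet50.onnx" is a placeholder file name, not from the UIF release.
prog = migraphx.parse_onnx("resnet50.onnx")
prog.compile(migraphx.get_target("gpu"))

# "data" is a placeholder input name; prog.get_parameter_shapes() reports
# the actual input names and shapes expected by the compiled model.
x = np.random.rand(1, 3, 224, 224).astype(np.float32)
results = prog.run({"data": migraphx.argument(x)})

# The first returned argument holds the model's output tensor.
print(np.array(results[0]).shape)

Within UIF, the optimized Instinct GPU models are served through this MIGraphX backend while the EPYC CPU models go through ZenDNN, per the release notes quoted above.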
Downloads and more details on the AMD UIF 1.1 release via GitHub.