Intel Open-Sources Its Python NPU Acceleration Library
Intel has open-sourced its NPU Acceleration Library (intel-npu-acceleration-library), a user-space library for Windows and Linux systems that interfaces with the Neural Processing Unit (NPU) found initially in its new Meteor Lake laptop processors.
Intel has long been developing the iVPU Linux kernel driver, which is upstream in the mainline kernel to support Intel NPUs (formerly known as VPUs) beginning with Meteor Lake. This NPU Acceleration Library is a convenient Python library for leveraging the NPU's potential.
The Intel NPU Acceleration Library supports 8-bit quantization, float16 inference, torch.compile, static shape inference, and other features. The library can be downloaded from GitHub or conveniently installed via pip.
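For those wanting to try it, getting started is a pip install followed by compiling an existing PyTorch model for the NPU. The sketch below follows the pattern shown in the project's README, though the exact compile() signature, the dtype argument, and the resnet18 stand-in model are illustrative assumptions rather than verified against a particular library version.

```python
# Install the library from PyPI first (assumes a supported NPU driver is present):
#   pip install intel-npu-acceleration-library

import torch
import intel_npu_acceleration_library
from torchvision.models import resnet18  # illustrative stand-in model

model = resnet18().eval()

# Offload the model to the NPU; per the project README, an optional dtype
# (e.g. torch.int8) requests quantized inference. Treat the exact signature
# as an assumption against your installed version.
npu_model = intel_npu_acceleration_library.compile(model, dtype=torch.int8)

with torch.no_grad():
    out = npu_model(torch.rand(1, 3, 224, 224))
```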
The Intel NPU Acceleration Library GitHub page has Python code samples showing a single matrix multiply on the NPU, compiling a model for the NPU, and even running a TinyLlama model on the NPU. The v1.0 release marks the first stable release of a library that will become increasingly important in the AI era as NPUs work their way into more Intel processors.
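To give a flavor of the lower-level API, the single matrix-multiply sample on the GitHub page looks roughly like the following sketch; the MatMul class, its import path, and the argument order mirror the repository's example but should be treated as assumptions against any given release.

```python
import numpy as np
from intel_npu_acceleration_library.backend import MatMul

# Matrix dimensions (illustrative values, not from the article).
inC, outC, batch = 128, 128, 32

# The README example feeds the NPU backend float16 inputs.
X1 = np.random.uniform(-1, 1, (batch, inC)).astype(np.float16)
X2 = np.random.uniform(-1, 1, (outC, inC)).astype(np.float16)

# Build the MatMul operation for these shapes, then run it on the NPU.
mm = MatMul(inC, outC, batch)
result = mm.run(X1, X2)

print(result.shape)  # expected: (batch, outC)
```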