Intel Spins Up Revised GNA Driver For AI Neural Co-Processor
While Intel and the rest of the tech industry continue pouring immense resources into AI and talking it up, one effort that has been slow to materialize on the Linux side is enabling Intel's Gaussian and Neural Accelerator (GNA) with the mainline Linux kernel. This week the latest Intel GNA driver patches were posted for this neural co-processor.
The Intel Gaussian and Neural Accelerator (GNA) co-processor has been around since the botched Cannon Lake days and continues to be found in the latest Intel Core processors, as well as various Pentium (Gemini Lake / Elkhart Lake) parts, the Intel Speech Enabling Developer Kit, and the Amazon Alexa Premium Far-Field Developer Kit. The GNA co-processor is intended for continuous inference workloads like noise reduction, speech recognition, and similar tasks in order to free up CPU resources.
Intel engineers began working to upstream their GNA driver back in February 2021. In the better part of two years since, they have reached the fourth version of the driver as they work to eventually get it into the mainline kernel.
The Intel GNA v4 Linux driver posted on Thursday has been adapted to make use of the kernel's Direct Rendering Manager (DRM) framework. The user-space GNA library has also been adapted to this shift for tying the AI driver into the DRM kernel subsystem. This has been something sought after by DRM maintainers, with many AI accelerators fitting closely to the DRM GPU subsystem and its semantics.
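For a sense of what the DRM move means from user-space, here is a minimal C++ sketch, assuming the intel_gna driver ends up exposing a conventional DRM render node, that walks /dev/dri and uses libdrm's drmGetVersion() to report which kernel driver backs each node. The node naming and the reported driver name are assumptions for illustration rather than details confirmed by the patches.

    // Minimal sketch (not from the patches themselves): probing DRM render
    // nodes via libdrm and printing which kernel driver backs each one.
    // Whether intel_gna exposes a render node and what name it reports
    // are assumptions here for illustration. Build with: g++ probe.cpp -ldrm
    #include <fcntl.h>
    #include <unistd.h>
    #include <xf86drm.h>
    #include <cstdio>

    int main() {
        // DRM render nodes conventionally live at /dev/dri/renderD128 and up.
        for (int minor = 128; minor < 192; ++minor) {
            char path[64];
            std::snprintf(path, sizeof(path), "/dev/dri/renderD%d", minor);

            int fd = open(path, O_RDWR | O_CLOEXEC);
            if (fd < 0)
                continue;

            // drmGetVersion() reports the driver bound to the node,
            // e.g. "i915" for the GPU -- a GNA node would show up here too.
            if (drmVersionPtr ver = drmGetVersion(fd)) {
                std::printf("%s: driver %s\n", path, ver->name);
                drmFreeVersion(ver);
            }
            close(fd);
        }
        return 0;
    }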
The v4 patches have also been tested against the Linux 6.0 kernel, address various fixes and items raised during prior review, and carry other updates.
It's too late to arrive in 2022 with the 6.1 kernel, but we'll see if the Intel GNA driver is finally ready for mainline come 2023. The intel_gna v4 patches are out for review now on the kernel mailing list.
The open-source GNA user-space library is what applications use to make use of this neural co-processor, and Intel's OpenVINO deep learning toolkit is one example of user-space software ready to leverage the GNA once enabled on Linux. Just this past week marked the v3.0 release of the Intel GNA user-space library.
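For those wondering what GNA usage looks like from the OpenVINO side, below is a hedged C++ sketch that selects OpenVINO's GNA device plugin by name; the "model.xml" path is a placeholder and whether the device actually shows up depends on the OpenVINO build and on the kernel driver being enabled.

    // Hedged sketch of driving the GNA through OpenVINO's C++ runtime.
    // "model.xml" is a placeholder IR file; availability of the "GNA"
    // device depends on the OpenVINO build and an enabled kernel driver.
    #include <openvino/openvino.hpp>
    #include <iostream>

    int main() {
        ov::Core core;

        // "GNA" appears in this list when the plugin and hardware are usable.
        for (const auto& device : core.get_available_devices())
            std::cout << device << "\n";

        // Compile the network for the GNA co-processor rather than the CPU.
        auto model = core.read_model("model.xml");       // placeholder path
        auto compiled = core.compile_model(model, "GNA");
        auto request = compiled.create_infer_request();
        // ...fill input tensors and call request.infer() as usual.
        return 0;
    }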