OpenVINO 2022.3 Released With Full Support For Sapphire Rapids, Intel dGPUs
Intel's OpenVINO deep learning toolkit is out with a major release ahead of the holidays, now offering full support for Xeon Scalable "Sapphire Rapids" processors as well as for Intel's discrete GPUs.
Earlier this week Intel released oneDNN 3.0 as their oneAPI component for assisting in building deep learning software, which can be used by the likes of PyTorch, ONNX, MATLAB, and other software. OpenVINO 2022.3 is out today as another of Intel's hugely successful open-source AI projects.
While prior OpenVINO releases have carried Sapphire Rapids support/optimizations, today's OpenVINO 2022.3 release is deemed to have complete support for 4th Gen Xeon Scalable "Sapphire Rapids" for running deep learning inference workloads from the edge to the cloud. Similarly, OpenVINO 2022.3 has full support for Intel's discrete GPUs -- both the Data Center GPU Flex Series and Arc Graphics.
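For those wanting to try out the CPU and discrete GPU targets, below is a minimal sketch using the OpenVINO Python runtime API. The "model.xml" IR file and the input shape are placeholder assumptions; swap in your own converted model and pick "CPU" or "GPU" depending on the hardware being targeted.

```python
# Minimal OpenVINO 2022.x Python sketch: load an IR model and run inference
# on a chosen device. "model.xml" and the 1x3x224x224 input shape are
# placeholder assumptions -- substitute your own converted model.
import numpy as np
from openvino.runtime import Core

core = Core()
print("Available devices:", core.available_devices)   # e.g. ['CPU', 'GPU']

model = core.read_model("model.xml")                   # IR model (.xml + .bin)
compiled = core.compile_model(model, device_name="CPU")  # or "GPU" for Flex/Arc

input_tensor = np.random.rand(1, 3, 224, 224).astype(np.float32)
result = compiled([input_tensor])[compiled.output(0)]
print("Output shape:", result.shape)
```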
OpenVINO 2022.3 is faster on Intel consumer CPUs too, thanks to new optimizations for Alder Lake and Raptor Lake processors.
In addition to the performance/support improvements for Intel's latest wares, OpenVINO 2022.3 also expands model coverage, introduces new APIs and various integration enhancements, and adds support for Apple M1 Macs.
These improvements are all bundled up in OpenVINO 2022.3, which Intel is also declaring a Long Term Support (LTS) release with plans to support it for two years.
OpenVINO 2022.3 can be downloaded from GitHub.
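For those installing the Python runtime via pip rather than building from the GitHub sources, a quick sanity check like the following (assuming the standard openvino package) confirms which version is in use:

```python
# Quick sanity check, assuming the "openvino" Python package is installed
# (e.g. via pip): print the runtime version string to confirm 2022.3.
from openvino.runtime import get_version

print(get_version())
```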
I'll be working on some new OpenVINO benchmarks soon against the new OpenVINO 2022.3 LTS release (hopefully I'll be getting Sapphire Rapids soon to make the CPU benchmark race more competitive...). Those not familiar with OpenVINO can learn more about this wonderful open-source project via Intel.com.