FFmpeg Now Supports GPU Inference With Intel's OpenVINO
Earlier this summer Intel engineers added an OpenVINO back-end to the FFmpeg multimedia framework. OpenVINO, Intel's toolkit for optimized neural network inference on Intel hardware, was added to FFmpeg for the same reason TensorFlow and other back-ends are supported: powering DNN-based video filters and other deep learning processing.
The support added back in July is opt-in under the --enable-libopenvino build switch and requires first building OpenVINO with its C API enabled. This Intel inference engine can run models trained with TensorFlow, Caffe, ONNX, MXNet, and other frameworks once they are converted into OpenVINO's intermediate representation format.
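As a rough sketch of what that build process can look like (the repository layout, CMake flags, and install paths below are assumptions that vary by OpenVINO release; consult the toolkit's documentation for specifics):

    # Build OpenVINO from source so its C API library is available;
    # exact CMake options differ between OpenVINO releases.
    git clone https://github.com/openvinotoolkit/openvino.git
    cd openvino && git submodule update --init --recursive
    mkdir build && cd build
    cmake -DCMAKE_BUILD_TYPE=Release ..
    make -j"$(nproc)"

    # Convert a trained model (e.g. a frozen TensorFlow graph) into
    # OpenVINO's IR format with the toolkit's Model Optimizer script.
    python3 mo.py --input_model frozen_model.pb --output_dir models/

    # Then configure FFmpeg against the OpenVINO install; the include
    # and library paths here are placeholders for your system.
    ./configure --enable-libopenvino \
        --extra-cflags="-I/opt/openvino/include" \
        --extra-ldflags="-L/opt/openvino/lib"
    make -j"$(nproc)"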
What's new this past week is code landing in FFmpeg's OpenVINO DNN back-end to support inference on Intel GPUs.
Details on setting up FFmpeg with the OpenVINO GPU inference support can be found via this commit. The default behavior for now with FFmpeg's OpenVINO support remains CPU-based inference, so the GPU has to be requested explicitly.
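As a hedged usage sketch, GPU inference is selected through the back-end's device option (the model file, tensor names, and input/output files here are placeholders; the key was spelled options= around the time of this commit, while later FFmpeg releases use backend_configs= for the same purpose):

    # Run a DNN filter through the OpenVINO back-end on the GPU;
    # omitting device=GPU falls back to the default CPU inference.
    ffmpeg -i input.mp4 \
        -vf "dnn_processing=dnn_backend=openvino:model=srcnn.xml:input=x:output=y:options=device=GPU" \
        output.mp4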