Linux 5.9 Bringing Mellanox VDPA Driver For Newer ConnectX Devices
The latest Mellanox driver going mainline in the Linux kernel is a VDPA (Virtual Data Path Acceleration) driver for their ConnectX-6 Dx and newer devices.
The VDPA standard is an abstraction layer on top of SR-IOV: it allows a single, hardware-agnostic VirtIO driver in the guest while still delivering wire-speed performance on the data plane. VDPA is also more versatile than full VirtIO hardware offloading. More details for those interested are in this Red Hat post.
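A quick illustration of that abstraction: since vDPA presents a standard VirtIO device to the guest, the VM needs no vendor-specific driver at all. A minimal sketch, assuming the guest interface is named eth0 (the interface name is an assumption):

```shell
# Inside the guest, the NIC binds to the generic virtio_net module,
# regardless of which vendor's silicon backs it on the host side.
ethtool -i eth0
# The "driver:" field reports virtio_net rather than a vendor driver
# such as mlx5_core.
```

The data plane still goes straight to the hardware; only the control plane is mediated, which is where the wire-speed claim comes from.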
The Mellanox ConnectX VDPA support works with the ConnectX-6 Dx and newer devices. Currently just a single queue is supported; multi-queue support will come later, along with a new block device driver. With a single queue, this VDPA driver's performance as measured via iperf is around 12 Gbps. The driver builds on the existing Mellanox MLX5 driver code already in the mainline tree. Using it with QEMU and VirtIO networking currently requires a branched version of QEMU, though that support will ultimately work its way into mainline.
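For reference, wiring a vDPA device into a guest goes through QEMU's vhost-vdpa network backend. A minimal sketch, assuming the vendor vDPA driver has already bound the device and exposed a /dev/vhost-vdpa-0 node (the device path and disk image name are assumptions; at the time of the article this needed the branched QEMU rather than a release build):

```shell
# Boot a guest whose virtio-net device is backed by the vDPA device
# node, so the guest's data path is offloaded to the ConnectX hardware.
qemu-system-x86_64 \
    -M q35 -enable-kvm -m 4G \
    -drive file=guest.qcow2,if=virtio \
    -netdev type=vhost-vdpa,vhostdev=/dev/vhost-vdpa-0,id=vdpa0 \
    -device virtio-net-pci,netdev=vdpa0
```

Inside the guest this appears as an ordinary virtio-net NIC, which is how a generic iperf run between endpoints can produce the roughly 12 Gbps single-queue figure cited above.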
This MLX5 VDPA code was sent in with the VirtIO updates for Linux 5.9. Also notable from the VirtIO updates is IRQ bypass support for VDPA and IFC. The IRQ offloading for VDPA is said to shave around 0.1 ms off the ping latency between two VFs.