Linux 5.9 Bringing Mellanox VDPA Driver For Newer ConnectX Devices
There are a few changes worth mentioning out of the VirtIO updates submitted today for the Linux 5.9 kernel.
The latest Mellanox driver going mainline in the Linux kernel is a VDPA (Virtual Data Path Acceleration) driver for their ConnectX-6 Dx and newer devices.
The VDPA standard is an abstraction layer on top of SR-IOV that allows a single, hardware-agnostic VirtIO driver in the guest while still delivering wire-speed performance on the data plane. VDPA is more versatile than full VirtIO hardware offloading. More details for those interested via this Red Hat post.
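For those wanting to experiment once running a Linux 5.9 kernel, enabling the relevant options is the first step. Below is a minimal sketch of the kernel configuration involved; the exact Kconfig symbol names are assumptions drawn from the VDPA and mlx5 driver trees rather than anything spelled out in the pull request:

    # Sketch of a minimal kernel config for trying the mlx5 VDPA driver.
    # Symbol names assumed from the vdpa/mlx5 Kconfig files.
    CONFIG_VDPA=y              # core VDPA bus support
    CONFIG_VHOST_VDPA=m        # exposes /dev/vhost-vdpa-* nodes to user-space
    CONFIG_MLX5_CORE=m         # existing Mellanox mlx5 core driver
    CONFIG_MLX5_VDPA_NET=m     # the new ConnectX VDPA network driver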
The Mellanox ConnectX VDPA support works with the ConnectX-6 Dx and newer devices. Currently just a single queue is supported; multi-queue support will come later along with a new block device driver. Even off a single queue, this VDPA driver delivers around 12 Gbps as measured via iperf. The driver builds off the existing Mellanox MLX5 driver code already in the mainline tree. For using this driver with QEMU and VirtIO networking, a branched version of QEMU is necessary at the moment, although the support will ultimately work its way into mainline QEMU.
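As a rough sketch of what the QEMU side of that looks like, the vhost-vdpa backend exposes the hardware to the guest as a plain virtio-net device. The command below is illustrative only: it assumes the vhost-vdpa-enabled QEMU branch and that the host kernel has created a /dev/vhost-vdpa-0 node for the ConnectX device:

    # Illustrative QEMU invocation; assumes the vhost-vdpa QEMU branch
    # and a /dev/vhost-vdpa-0 node created by the mlx5 VDPA driver.
    qemu-system-x86_64 -M q35 -enable-kvm -m 4G \
        -drive file=guest.qcow2,if=virtio \
        -netdev type=vhost-vdpa,vhostdev=/dev/vhost-vdpa-0,id=vdpa0 \
        -device virtio-net-pci,netdev=vdpa0

Inside the guest, the NIC then binds to the stock virtio-net driver with no Mellanox-specific code needed, which is the whole point of the VDPA approach.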
This MLX5 VDPA code was sent in with the VirtIO updates for Linux 5.9. Also notable from the VirtIO updates is IRQ bypass support for VDPA and IFC. The IRQ offloading for VDPA is said to shave around 0.1 ms off the ping latency between two VFs.
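For those curious how such figures are gathered, the tools involved are just iperf and ping; a hypothetical check between two guests backed by separate VFs might look like this (the 10.0.0.2 address is a placeholder for the peer):

    # Hypothetical measurement between two VF-backed guests.
    iperf -s                    # on the first guest
    iperf -c 10.0.0.2 -t 30     # on the second: ~12 Gbps off a single queue
    ping -c 100 10.0.0.2        # round-trip latency; IRQ bypass shaves ~0.1 ms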