NVMe/TCP Offload Bits Coming For Linux 5.14 For Lower CPU Utilization, Better Latency
Adding to the other networking changes queuing up for the upcoming Linux 5.14 cycle, NVMe/TCP Offload has begun landing in "net-next" ahead of this next kernel merge window.
Queued this week into net-next is the NVMe/TCP Offload ULP (upper layer protocol) host layer support, part of the broader ongoing effort toward complete NVMe/TCP Offload infrastructure for use by relevant network drivers/hardware. NVMe/TCP Offload will provide full offloading of the NVMe/TCP protocol, including the TCP layer itself.
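While the offload path requires capable hardware and a vendor driver, the host-facing usage model is the existing NVMe/TCP one. As a rough illustration only, here is how a host typically connects to an NVMe/TCP target using nvme-cli; the address, port, and NQN below are placeholder values, and nothing here is specific to the offload series:

```
# Load the software NVMe/TCP host transport (the offload ULP plugs into
# the same host-side NVMe stack).
modprobe nvme-tcp

# Discover NVMe subsystems exported by a remote target; the IP address,
# port, and NQN below are placeholders for illustration.
nvme discover -t tcp -a 192.168.1.100 -s 4420

# Connect to a discovered subsystem over TCP.
nvme connect -t tcp -a 192.168.1.100 -s 4420 \
    -n nqn.2014-08.org.example:nvme:target1
```

The point of the offload work is that, on capable adapters, the NVMe/TCP and TCP processing behind such a connection moves off the host CPU and into the NIC.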
All the technical details on this NVMe/TCP Offload work can be found via this merge message.
What excites us are the performance results: with the offloading, CPU utilization on an AMD EPYC server dropped from 15.1% to 4.7%, and on a Xeon server from 16.3% to 1.1%. Latency also improved markedly, with the average falling from 105 usec to 39 usec and the 99.99% tail latency from 570 usec to 91 usec.
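For readers unfamiliar with how average and 99.99th-percentile latency figures like these are typically produced, below is a minimal, hypothetical fio invocation of the sort commonly used for such measurements. The device path, block size, and queue depth are assumptions for illustration, not the actual benchmark configuration behind the quoted numbers:

```
# Hypothetical fio job capturing average and 99.99th-percentile latency;
# /dev/nvme0n1 and the parameters here are placeholders, not the setup
# used for the results cited above.
fio --name=nvmetcp-lat \
    --filename=/dev/nvme0n1 \
    --ioengine=libaio --direct=1 \
    --rw=randread --bs=4k --iodepth=1 \
    --runtime=60 --time_based \
    --lat_percentiles=1 --percentile_list=99.99
```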
This initial work is being done by Marvell and is thus focused on its drivers/hardware as the initial users.
Those wanting to learn more about the NVMe/TCP specification can do so via NVMExpress.org.