In this talk, we would like to give an update on the development status of the SPDK user space NVMe/TCP transport and the performance optimizations of the NVMe/TCP transport in both software and hardware areas. Over the past year, there have been significant efforts to optimize NVMe-oF transport performance in software, especially with the kernel TCP/IP stack, such as:
Trading additional memory copy cost for fewer system calls to achieve optimal performance of the NVMe/TCP transport on top of the kernel TCP/IP stack
Using asynchronous writev to improve IOPS
Using libaio/liburing to implement group-based I/O submission for write operations.
We have also investigated user space TCP/IP stacks (e.g., Seastar) to explore further performance optimization opportunities.
In this talk, we also share Intel's latest effort to optimize the NVMe/TCP transport in SPDK using Application Device Queue (ADQ) technology from Intel 100G NICs. We will describe how SPDK exposes the ADQ feature provided by Intel's new NIC through our common Sock layer library to accelerate NVMe-oF TCP performance, and share performance data with Intel's latest 100Gb NIC (i.e., E810). ADQ significantly improves NVMe/TCP transport performance in SPDK, including reduced average latency, a significant reduction in long tail latency, and much higher IOPS.
Learning Objectives
Present the past year's development status of the SPDK user space NVMe-oF TCP transport
Share lessons learned on how to accelerate the NVMe-oF TCP transport.
Describe optimization methods for the kernel TCP stack (e.g., asynchronous writev, group-based I/O submission)
Outline possible optimization directions using a user space TCP/IP stack.
Share performance results from using Intel's 100Gb NIC with ADQ (Application Device Queue) technology to accelerate the NVMe-oF TCP transport.
Presented by
Ziye Yang, Staff Cloud Software Engineer, Intel
Yadong Li, Lead Software Architect, Intel