The FUSE module, which allows file-systems to be run from user-space, can now process direct I/O asynchronously. This asynchronous direct I/O can lead to very noticeable performance improvements for FUSE-based file-systems like ZFS.
One of the most common complaints about the FUSE project for running file-systems in user-space has been its poor performance. While FUSE offers portability between operating systems, simplified file-system implementations, a stable API, and a way to run file-systems whose licensing would not permit inclusion in the GPL-licensed Linux kernel, performance has always lagged behind file-systems implemented natively within the kernel.
There have been FUSE performance improvements over the years, but it remains an active complaint; Linus Torvalds has famously dismissed FUSE as a toy for misguided people. The latest I/O performance improvement for FUSE is the ability to process direct I/O asynchronously.
Maxim Patlasov submitted a set of six patches that allow direct I/O to be done asynchronously rather than synchronously as it is done currently. Patlasov explained:
Existing fuse implementation always processes direct IO synchronously: it submits next request to userspace fuse only when previous is completed. This is suboptimal because: 1) libaio DIO works in blocking way; 2) userspace fuse can't achieve parallelism processing several requests simultaneously (e.g. in case of distributed network storage); 3) userspace fuse can't merge requests before passing it to actual storage.
The idea of the patch-set is to submit fuse requests in non-blocking way (where it's possible) and either return -EIOCBQUEUED or wait for their completion synchronously. The patch-set to be applied on top of for-next of Miklos' git repo.
Performance tests done by Patlasov show dd read speeds on FUSE going up by 19%, dd writes by about 4%, AIO-Stress reads by 21%, and AIO-Stress writes by 11%.