Originally posted by jacob
You trigger the read synchronously. But POSIX does not say that, in non-blocking mode, the buffer cannot be filled in the background. Think of a COW mmap'd file where blocks of the file are transferred only when the application accesses that memory area.
Non-blocking mode should be magically transparent: it should only report that it would block when there is no possibility of satisfying the request quickly.
O_NONBLOCK as POSIX defines it is kind of backwards. With asynchronous non-blocking I/O you get a SIGIO signal when the descriptor is ready.
An O_NONBLOCK operation that returned success when the data was not in fact ready should then trigger a SIGIO. That requires more complex page-fault handling on the kernel side.
So O_ASYNC sends you a SIGIO when data is ready. O_NONBLOCK would send you back two different answers: 1) it is going to block, because the data is not ready; 2) it tells you the data is ready, and if it turns out not to be, it then gives you a SIGIO. The second part, the optimistic O_NONBLOCK, is currently not a Linux kernel feature, not even for sockets.
An optimistic O_NONBLOCK can result in fewer syscalls between kernel and application than pure synchronous non-blocking I/O.
Fully implementing O_NONBLOCK and O_ASYNC per POSIX has both upsides and downsides. It is not a clear-cut advantage over implementations that build O_NONBLOCK on synchronous non-blocking I/O.
Think about an optimistic O_NONBLOCK when the application is reaching the end of its time slice. If the call were processed right now, EWOULDBLOCK could be the correct answer; but since the application is about to be descheduled, answering success could also be correct, because by the time the application gets rescheduled the data may have been transferred. If that guess turns out wrong, that is what the SIGIO is for. This could have O_NONBLOCK behaving a lot like a futex. These kinds of extra behaviours need asynchronous non-blocking I/O implemented kernel-side, as well as scheduler cooperation.
The performance difference between an optimistic O_NONBLOCK and asynchronous blocking I/O should be nothing most of the time. It is the lack of an optimistic O_NONBLOCK that makes them differ most of the time.
Originally posted by jacob
The reality is that implementing I/O that performs well in most cases is pure evil all around, because part of getting good performance means telling lies to applications.