IO_uring Bringing Better Send Zero-Copy Performance With Linux 6.10


  • IO_uring Bringing Better Send Zero-Copy Performance With Linux 6.10

    Phoronix: IO_uring Bringing Better Send Zero-Copy Performance With Linux 6.10

    Linux I/O expert and subsystem maintainer Jens Axboe has submitted all of the IO_uring feature updates ahead of the imminent Linux 6.10 merge window...


  • #2
    I keep hearing about all these benefits and perf improvements of io_uring, but is there some real-world use case for it at the moment? Genuine question.



    • #3
      Originally posted by bezirg View Post
      I keep hearing about all these benefits and perf improvements of io_uring, but is there some real-world use case for it at the moment? Genuine question.
      I have one at work: a network send and receive workload that is heavily CPU-bottlenecked on I/O (typically around 400 Gb/s, off-peak).
      Even with all of the awesome capabilities in the Linux kernel, our CPUs can barely keep up with this insane throughput. Any improvement is always welcome.



      • #4
        Originally posted by aviallon View Post

        I have one at work: a network send and receive workload that is heavily CPU-bottlenecked on I/O (typically around 400 Gb/s, off-peak).
        Even with all of the awesome capabilities in the Linux kernel, our CPUs can barely keep up with this insane throughput. Any improvement is always welcome.
        But is that application using IO_uring?



        • #5
          Originally posted by bezirg View Post
          I keep hearing about all these benefits and perf improvements of io_uring, but is there some real-world use case for it at the moment? Genuine question.
          From my broad understanding, io_uring is just so different from the current kqueue and epoll implementations that it's hard to fit into runtimes like Node (libuv), Java, and .NET. The second question is whether an implementation of a typical HTTP server using io_uring is really that much faster than epoll.
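
          To make the difference concrete, here is a minimal sketch in C with liburing (my own illustration, not anything from libuv or the article): instead of epoll telling you a socket is readable and leaving the recv() to you, you queue the receive itself and get back a completion with the result.

          ```c
          /* Minimal sketch of io_uring's completion model using liburing.
           * Assumes liburing is installed and `sockfd` is an already-connected
           * socket; error handling is trimmed for brevity. */
          #include <liburing.h>

          int recv_once(int sockfd, char *buf, size_t len)
          {
              struct io_uring ring;
              if (io_uring_queue_init(8, &ring, 0) < 0)
                  return -1;

              /* Queue the operation itself, not an interest in readiness. */
              struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
              io_uring_prep_recv(sqe, sockfd, buf, len, 0);
              io_uring_submit(&ring);

              /* Wait for the kernel to finish the recv and post a completion. */
              struct io_uring_cqe *cqe;
              io_uring_wait_cqe(&ring, &cqe);
              int received = cqe->res;   /* bytes received, or -errno */
              io_uring_cqe_seen(&ring, cqe);

              io_uring_queue_exit(&ring);
              return received;
          }
          ```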



          • #6
            Originally posted by ptrwis View Post

            From my broad understanding, io_uring is just so different from the current kqueue and epoll implementations that it's hard to fit into runtimes like Node (libuv), Java, and .NET. The second question is whether an implementation of a typical HTTP server using io_uring is really that much faster than epoll.
            Add io_uring support for several asynchronous file operations: read, write, fsync, fdatasync, stat, fstat, lstat. io_uring is used when the kernel is new enough, otherwise libuv simply falls back to...

            It has been in libuv for a year or so.

            There are tricky parts, though. Like:
            - It uses some io_uring; I don't know whether it uses all of its features.
            - It's opt-in (i.e. disabled by default) because of "security", and projects using libuv tend to go with the defaults. On Node.js there is a runtime environment variable you can set to enable it (yes, it is again disabled by default): https://nodejs.org/api/cli.html#uv_use_io_uringvalue

            Support for io_uring is probably much more widespread than you think, but disabled by default.
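
            For what it's worth, the application-side code doesn't change either way. A rough sketch of an async file read through libuv in C follows ("data.txt" is just a placeholder path I made up); whether libuv services it via io_uring or falls back to its thread pool is invisible at this level.

            ```c
            /* Sketch: an async file read through libuv. Whether libuv backs this
             * with io_uring (new enough kernel, feature enabled) or with its
             * thread-pool fallback is not visible to the application. */
            #include <uv.h>
            #include <fcntl.h>
            #include <stdio.h>

            static char buffer[1024];
            static uv_buf_t iov;
            static uv_fs_t open_req, read_req;

            static void on_read(uv_fs_t *req) {
                if (req->result >= 0)
                    printf("read %zd bytes\n", (ssize_t)req->result);
                uv_fs_req_cleanup(req);
            }

            static void on_open(uv_fs_t *req) {
                if (req->result >= 0) {
                    iov = uv_buf_init(buffer, sizeof(buffer));
                    uv_fs_read(uv_default_loop(), &read_req, (uv_file)req->result,
                               &iov, 1, -1, on_read);
                }
                uv_fs_req_cleanup(req);
            }

            int main(void) {
                /* "data.txt" is a placeholder path for this example. */
                uv_fs_open(uv_default_loop(), &open_req, "data.txt", O_RDONLY, 0, on_open);
                return uv_run(uv_default_loop(), UV_RUN_DEFAULT);
            }
            ```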



            • #7
              Well, in which scenarios does the end user benefit?



              • #8
                Originally posted by MorrisS. View Post
                Well, in which scenarios does the end user benefit?
                `send` is used to, well, send data through a socket. This means that every scenario where data is sent through a socket will benefit from this improvement, once it's implemented over io_uring. Remember that the benefit of `io_uring` is already in reducing overhead compared to traditional syscall-based workloads, by delegating work to asynchronous, ring-buffer-based processing. This means that a typical application that does very little I/O (and in this case, most notably very little output) will not see any significant improvement.

                However, don't take this as a small thing. Any app that relies on proxies, or that writes out significant data volumes, for example video streaming, can benefit significantly, as data encoding is usually CPU intensive (except when it can be offloaded to the GPU, obviously), and competing syscalls could lead to stuttering or dropped frames.
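
                As a rough illustration of what's being optimized here, a zero-copy send over io_uring looks something like the sketch below with liburing (an assumption on my part: liburing >= 2.3, a kernel with SEND_ZC support, and an already-connected socket). The key wrinkle is that the buffer can't be reused until a second "notification" completion arrives.

                ```c
                /* Sketch of a zero-copy send via io_uring (liburing). The kernel
                 * reads the payload directly from `buf`, so `buf` must stay
                 * untouched until the separate notification completion arrives. */
                #include <liburing.h>

                int send_zc_once(struct io_uring *ring, int sockfd,
                                 const void *buf, size_t len)
                {
                    struct io_uring_sqe *sqe = io_uring_get_sqe(ring);
                    io_uring_prep_send_zc(sqe, sockfd, buf, len, 0, 0);
                    io_uring_submit(ring);

                    /* First CQE: result of the send (bytes sent, or -errno).
                     * IORING_CQE_F_MORE means a second CQE will follow. */
                    struct io_uring_cqe *cqe;
                    io_uring_wait_cqe(ring, &cqe);
                    int sent = cqe->res;
                    unsigned more = cqe->flags & IORING_CQE_F_MORE;
                    io_uring_cqe_seen(ring, cqe);

                    /* Second CQE: notification that the kernel no longer references
                     * `buf`, i.e. the buffer may now be reused or freed. */
                    if (more) {
                        io_uring_wait_cqe(ring, &cqe);
                        io_uring_cqe_seen(ring, cqe);
                    }
                    return sent;
                }
                ```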



                • #9
                  Originally posted by hkupty View Post

                  `send` is used to, well, send data through a socket. This means that every scenario where data is sent through a socket will benefit from this improvement, once it's implemented over io_uring. Remember that the benefit of `io_uring` is already in reducing overhead compared to traditional syscall-based workloads, by delegating work to asynchronous, ring-buffer-based processing. This means that a typical application that does very little I/O (and in this case, most notably very little output) will not see any significant improvement.

                  However, don't take this as a small thing. Any app that relies on proxies, or that writes out significant data volumes, for example video streaming, can benefit significantly, as data encoding is usually CPU intensive (except when it can be offloaded to the GPU, obviously), and competing syscalls could lead to stuttering or dropped frames.
                  In this case the upcoming TCP improvements will bring further benefit. I assume that when many web links are opened in the browser, io_uring affects data transfer positively.



                  • #10
                    Originally posted by hkupty View Post
                    Streaming is more about sending huge chunks of data; I wouldn't say it's syscall-heavy. Multiplayer game servers are more like it.

