
Linux No-Copy Bvec Patches Revved For The New Year As Another I/O Optimization


  • Linux No-Copy Bvec Patches Revved For The New Year As Another I/O Optimization

    Phoronix: Linux No-Copy Bvec Patches Revved For The New Year As Another I/O Optimization

    The Linux kernel has been seeing incredible innovations and optimizations in the I/O area in recent times from IO_uring to numerous performance enhancements. One of the recent performance enhancements seeing activity and promising results is the no-copy bvec behavior...

    http://www.phoronix.com/scan.php?pag...-Bvcec-Patches

  • #2
    I like these enhancements a lot.

    Comment


    • #3
      What is a bvec?

      Comment


      • #4
        Originally posted by MadeUpName View Post
        What is a bvec?
        bio_vec, often called bvec, is just a structure describing a contiguous range of PHYSICAL addresses that is the target of a read or write operation. It is mostly used for DMA in the block device layer. Instead of just a pointer into virtual memory plus a length, it is basically a list of pages that are contiguous in physical memory, plus a start offset. It is an internal structure used all over the kernel for block I/O, especially when reading more than 4 KB at a time. When userspace provides its own buffer, as with direct I/O, writes go straight into that userspace buffer instead of going into the file system or block cache first and then being copied (or mapped using copy-on-write techniques). Because a userspace buffer is contiguous in virtual memory but not necessarily in physical memory, the kernel needs this kind of translation structure; most block devices access memory using physical addresses. And even when something like an IOMMU is in use, the translation still needs to be validated for security reasons.

        The new patches reduce the amount of copying done by the CPU, which can be a lot of overhead: when we are talking about 3 GB/s+ of data coming in, that copying can consume almost all of the CPU time and memory bandwidth, all of it wasted.

        Comment


        • #5
          Excellent description baryluk. Thanks for posting.

          Comment


          • #6
            it always strikes me how these sorts of things are touted as some ingenious enhancement. you figured out this amazing new method of doing the same thing, except without doing all the unnecessary copying? you didn't invent an ingenious new method or optimization. you fixed a bug. if the route you've taken to the grocery store your whole life starts with first driving 15 miles in the opposite direction, then one day you figure out to just drive straight to the store instead, you didn't just make some ingenious contribution to graph theory, you are an idiot who has been doing it wrong all along.

            Comment


            • #7
              Originally posted by quaz0r View Post
              it always strikes me how these sorts of things are touted as some ingenious enhancement. you figured out this amazing new method of doing the same thing, except without doing all the unnecessary copying? you didn't invent an ingenious new method or optimization. you fixed a bug. if the route you've taken to the grocery store your whole life starts with first driving 15 miles in the opposite direction, then one day you figure out to just drive straight to the store instead, you didn't just make some ingenious contribution to graph theory, you are an idiot who has been doing it wrong all along.
              Incremental progress is how most things get done. Basically nothing comes out perfectly correctly and perfectly optimized from the start.

              Comment


              • #8
                Originally posted by Pentarctagon View Post

                Incremental progress is how most things get done. Basically nothing comes out perfectly correctly and perfectly optimized from the start.
                Unfortunately, people who have never written a line of code in their lives don't understand that, hence these illogical posts.

                Comment


                • #9
                  Originally posted by quaz0r View Post
                  if the route you've taken to the grocery store your whole life starts with first driving 15 miles in the opposite direction, then one day you figure out to just drive straight to the store instead, you didn't just make some ingenious contribution to graph theory, you are an idiot who has been doing it wrong all along.
                  I suppose you could say that, if you're planning a new city on an open plain, but if you're navigating one-way streets and humongous hills in an older city like San Francisco, maybe the quickest route isn't so direct. And maybe there are some additional security checkpoints to navigate that were set up to keep the hackers out or enable the city to be transposed to another location, further complicating matters. And if you're trying to get everything working correctly before the next code freeze, then you do what you can in the time you have and defer some optimizations for a subsequent iteration.

                  I don't honestly know the details of this particular case, but I'm all too familiar with the constraints of development in the real world to take such a dismissive attitude to sub-optimal code. Sometimes the optimal solution just doesn't fit the timeframe or budget, but we don't have the luxury of letting perfection obstruct progress.

                  Furthermore, if we're talking about I/O, what's a small improvement today might have been insignificant, in hardware only a few generations ago. The gospel of premature optimization says don't optimize what you can't measure. In the absence of notable performance differences, the simplest solution is better. And while copying sounds like more work, it can serve to make code much simpler.

                  All of that is to say: if you don't know the specifics, best withhold judgement on the matter. There are probably reasons why it wasn't done optimally, from the outset.
                  Last edited by coder; 03 January 2021, 09:54 AM.

                  Comment


                  • #10
                    Originally posted by coder View Post
                    I suppose you could say that, if you're planning a new city on an open plain, but if you're navigating one-way streets and humongous hills in an older city like San Francisco, maybe the quickest route isn't so direct. And maybe there are some additional security checkpoints to navigate that were set up to keep the hackers out or enable the city to be transposed to another location, further complicating matters. And if you're trying to get everything working correctly before the next code freeze, then you do what you can in the time you have and defer some optimizations for a subsequent iteration.

                    I don't honestly know the details of this particular case, but I'm all too familiar with the constraints of development in the real world to take such a dismissive attitude to sub-optimal code. Sometimes the optimal solution just doesn't fit the timeframe or budget, but we don't have the luxury of letting perfection obstruct progress.

                    Furthermore, if we're talking about I/O, what's a small improvement today might have been insignificant, in hardware only a few generations ago. The gospel of premature optimization says don't optimize what you can't measure. In the absence of notable performance differences, the simplest solution is better. And while copying sounds like more work, it can serve to make code much simpler.

                    All of that is to say: if you don't know the specifics, best withhold judgement on the matter. There are probably reasons why it wasn't done optimally, from the outset.
                    That is most definitely the most diplomatic way of putting it. Nicely done!

                    Comment
