New readfile() System Call Under Review For Reading Small~Medium Files Faster


  • New readfile() System Call Under Review For Reading Small~Medium Files Faster

    Phoronix: New readfile() System Call Under Review For Reading Small~Medium Files Faster

    Back in May we reported on work being done for a readfile() system call to read small files more efficiently. Greg Kroah-Hartman has now volleyed those patches for review on the kernel mailing list for this improvement for reading small to medium file sizes on Linux systems...


  • #2
For everybody who is wondering why you'd need to pass a file descriptor: it is the file descriptor of the directory the file lives in. You have to open the directory first. Source: the patches.



    • #3
      Originally posted by atomsymbol

      At first, I thought that the proposed system call is capable of reading multiple small files using a single system call - which would help increase HDD/SSD queue utilization and increase IOPS (I/O operations per second) - but that isn't the case and the proposed system call can read just a single file. Without the ability to read multiple small files using a single system call, it is impossible to increase IOPS (unless using multiple threads).

      https://en.wikipedia.org/wiki/Native_Command_Queuing
It is not too late; comment on those patches on the mailing list.



      • #4
        Originally posted by atomsymbol

        At first, I thought that the proposed system call is capable of reading multiple small files using a single system call - which would help increase HDD/SSD queue utilization and increase IOPS (I/O operations per second) - but that isn't the case and the proposed system call can read just a single file. Without the ability to read multiple small files using a single system call, it is impossible to increase IOPS (unless using multiple threads).

        https://en.wikipedia.org/wiki/Native_Command_Queuing
There are no queues and no IOPS in sysfs and procfs. This syscall is not about better hardware utilization; it's about doing fewer syscalls.

        If you are at the point of being bothered by queue utilization, then you should already be using async I/O (cf. io_uring).



        • #5
          Originally posted by oleid View Post
For everybody who is wondering why you'd need to pass a file descriptor: it is the file descriptor of the directory the file lives in. You have to open the directory first. Source: the patches.
          I can think of two reasons to do it that way off the top of my head:
1. Linux itself has no total path length limit. Some legacy POSIX APIs cap paths at 4096 bytes (PATH_MAX), but filesystems like ext4 impose no such limit, and you can work around the API cap by using relative paths.
2. It's getting more popular to compile code against ABIs where all paths must be relative to file descriptors passed in by the sandbox host (i.e., the binary gets no syscalls that accept absolute paths).



          • #6
            Originally posted by atomsymbol
            Is there a real-world scenario in which open/read/close of sysfs and procfs files is a bottleneck?
Not sure, but I didn't make this up; it's just the rationale from the LKML thread:

            <...>
            This is especially helpful for tools that poke around in procfs or sysfs, making a little bit of a less system load than before, especially as syscall overheads go up over time due to various CPU bugs being addressed.
            <...>



            • #7
              Guest I've run precisely into this sysfs bottleneck when doing fast switching of GPIOs in an industrial PC. I ended up keeping those fds open and doing seek()/write() over open()/write()/close() - it cut down my CPU utilization by 90%.



              • #8
                Originally posted by jaskij View Post
                Guest I've run precisely into this sysfs bottleneck when doing fast switching of GPIOs in an industrial PC. I ended up keeping those fds open and doing seek()/write() over open()/write()/close() - it cut down my CPU utilization by 90%.
Sorry, but that's just stupid. Doing high-volume GPIO switching from userspace through sysfs...
I see these tendencies all over the place: "I have installed a Raspberry Pi. I know embedded!"
Sysfs was never meant as a high-speed, high-volume interface, and this syscall does not change that.
It just reduces the number of syscalls made from userspace.
The kernel still has to do the VFS work and open/seek/read/close for every state change.
And that's for something that should be a single memory access away in a CPU GPIO bank register.



                • #9
Can anyone tell me the maximum file size for this syscall?
Yes, I did read the code and the new manpage. Nothing seems to indicate what qualifies as "small"; the post also mentions "small and medium sizes".

                  It even has testcases:

                  + test_filesize(0x10);
                  + test_filesize(0x100);
                  + test_filesize(0x1000);
                  + test_filesize(0x10000);
                  + test_filesize(0x100000);
                  + test_filesize(0x1000000);

where the largest testcase is 16 MiB.

I'm asking because I want to know if this syscall can be used to, for example, load icons and config files. Config files are likely sub-megabyte, but icons can be multiple megabytes (think of .ico files that contain multiple images). Beyond that, think of reading thumbnails for display in, say, Dolphin or Gwenview.

Now I'm not betting on this being "substantially faster" than three syscalls (open/read/close), as the syscalls themselves are probably just a tiny fraction of the cost of actually doing something with the file data (say, a JPEG/PNG/SVG/WebP decoder). But still, spread over large folders it might add up to a notable difference.

Lastly, and this one is specifically for Phoronix: Greg is going to post benchmarks with the next version of this patch: https://lore.kernel.org/lkml/[email protected]/



                  • #10
                    Originally posted by jaskij View Post
                    Guest I've run precisely into this sysfs bottleneck when doing fast switching of GPIOs in an industrial PC. I ended up keeping those fds open and doing seek()/write() over open()/write()/close() - it cut down my CPU utilization by 90%.
You are abusing the platform. Industrial use and a Raspberry Pi do not mix well.
You should be using an RTOS platform; even an ESP8266 would be better suited in your case.
