READFILE System Call Rebased For More Efficient Reading Of Small Files


  • #21
    Originally posted by rene View Post
    This has nothing to do with nanokernels but with reasonable OS design; maybe ask the BSD folks or Apple's macOS what they think about this POSIX extension ;-)
    I don't think this is meant to be a POSIX extension, just a Linux syscall.

    But maybe it should be, since in POSIX terms readfile() actually replaces four calls: an open, two calls to read, and a close. Only a read call returning zero guarantees that you have reached the end of the file; otherwise, a read call is allowed to return a partial number of bytes (for example, if interrupted by a signal). So readfile() would prevent anyone from making that mistake.

    But readfile() on Linux (apparently) knows that it needs just one read call, based on the internal API and implementation that it uses.

    So another advantage of readfile(), for apps that read lots (perhaps thousands) of possibly cached small files or sysfs values, is that they can expect readfile() to use the best internal API for performance and the least resource consumption: who knows, maybe avoiding the waste of globally visible file descriptors, recycling memory without going through malloc/free, or something else. Even in cases where there is no actual performance advantage, these apps won't need to make measurements to ensure that this is really the case.

    EDIT: That is, at least two read calls; in general, a loop. Even if you limit the read to a fixed total (buffer) size, you still need to update the count of remaining space on each iteration.
    Last edited by indepe; 04 April 2021, 03:42 PM.

    Comment


    • #22
      Originally posted by rene View Post
      That you believe readfile() can in any way, shape or form speed up shell script execution is laughable at best, as reading (small) files is not a bottleneck there and does not even register in a profile.
      Just so we're clear, what I'm talking about is the impact on shell script speed by execution of external commands -- not execution of shell built-ins.

      Originally posted by rene View Post
      I'll skip commenting on all the other non-technical personal attacks.
      That's great and I commend you for it, but I think you're just using it as a cheap way out of answering my other technical points. So, maybe your raising of an effectively moot point about io_uring wasn't simply out of desperation, but you've offered no better explanation.

      Originally posted by rene View Post
      maybe ask the BSD folks or Apple's macOS what they think about this POSIX extension ;-)
      If you have anything from them that you can cite, please do. Otherwise, it's unfair to appeal to an authority that hasn't actually weighed in on your side.

      Comment


      • #23
        Originally posted by rene View Post
        That you believe readfile() can in any way, shape or form speed up shell script execution is laughable at best, as reading (small) files is not a bottleneck there and does not even register in a profile. I'll skip commenting on all the other non-technical personal attacks.
        This is both right and wrong; it depends on how the shell script interfaces with the small files. For example, if you are making particular system changes on Linux, your shell script could be using the sysctl command, and that command could be using readfile() to get current values.

        Another thing: bash and other shells need to read some particular small files at start-up, so there could be a shell start-up speed improvement from using readfile() over the old open/read/close sequence.

        Small files are used a hell of a lot on Unix-based systems; remember, the Unix idea started with "everything is a file". Yes, I would agree that reading small files in a general application, once it is running, is not a large bottleneck. But there are a lot of applications that read many small files in their init stage to get information about how the kernel is configured, mostly so they don't attempt operations that the kernel build does not support. Gains here will mostly be in start-up speed.

        A profile of small-file usage on Unix/Linux/BSD systems shows the majority happening in system/application init stages. Not all performance improvements are across-the-board things.

        Comment


        • #24
          Originally posted by oiaohm View Post

          This is both right and wrong; it depends on how the shell script interfaces with the small files. For example, if you are making particular system changes on Linux, your shell script could be using the sysctl command, and that command could be using readfile() to get current values.

          Another thing: bash and other shells need to read some particular small files at start-up, so there could be a shell start-up speed improvement from using readfile() over the old open/read/close sequence.

          Small files are used a hell of a lot on Unix-based systems; remember, the Unix idea started with "everything is a file". Yes, I would agree that reading small files in a general application, once it is running, is not a large bottleneck. But there are a lot of applications that read many small files in their init stage to get information about how the kernel is configured, mostly so they don't attempt operations that the kernel build does not support. Gains here will mostly be in start-up speed.

          A profile of small-file usage on Unix/Linux/BSD systems shows the majority happening in system/application init stages. Not all performance improvements are across-the-board things.
          Repeating random wishful guesswork numbers without profiling does not make them true. Most programs do not load more than a handful of small files (which shells or apps use thousands of small files?), and even if they did, it would barely register next to spawning a new process, dynamically linking it, and, in the case of shells, interpreting and executing gazillions of programs or, heck, manipulating environment variables. Everything there is a magnitude or two more expensive than loading some small files.
          Last edited by rene; 05 April 2021, 04:21 AM.

          Comment


          • #25
            Originally posted by rene View Post
            This has nothing to do with nanokernels but with reasonable OS design; maybe ask the BSD folks or Apple's macOS what they think about this POSIX extension ;-)
            Nobody cares about the opinion of people who are years behind.

            Comment


            • #26
              Originally posted by Volta View Post
              Nobody cares about the opinion of people who are years behind.
              Linux could sure do with something like macOS's GCD, though. And I don't just mean the userspace shim library used for compatibility with software that uses it; I mean a standard, process-global work-stealing facility that's natively supported by the kernel, so that performance doesn't go to crap when you have more than one process running that thinks it should try to harness all your cores.

              Overall, you might be right. But it's worth having a bit of humility, because other OSes do have their strengths as well.

              Comment


              • #27
                Originally posted by coder View Post
                Linux could sure do with something like macOS's GCD, though. And I don't just mean the userspace shim library used for compatibility with software that uses it; I mean a standard, process-global work-stealing facility that's natively supported by the kernel, so that performance doesn't go to crap when you have more than one process running that thinks it should try to harness all your cores.

                Overall, you might be right. But it's worth having a bit of humility, because other OSes do have their strengths as well.
                That would be nice. Of course, they chose a GPL-incompatible license for the library. Btw, isn't OpenMP an alternative to it?

                Comment


                • #28
                  Originally posted by rene View Post
                  Repeating random wishful guesswork numbers without profiling does not make them true. Most programs do not load more than a handful of small files (which shells or apps use thousands of small files?), and even if they did, it would barely register next to spawning a new process, dynamically linking it, and, in the case of shells, interpreting and executing gazillions of programs or, heck, manipulating environment variables. Everything there is a magnitude or two more expensive than loading some small files.
                  It's not wishful thinking. When I said start-up speed, that does not have to refer to a single application or a single script. When you are starting a system, you have a lot of programs in their init stage.

                  https://lwn.net/Articles/813827/
                  Many of the utilities in util-linux (tools like ps and top, for example) spend a lot of time reading information from small /proc and sysfs files; having a readfile() call would make them quite a bit more efficient.
                  There are programs in the start-up process that go over a few thousand, sometimes millions, of small files.

                  Also, you have not read the history of why readfile is coming. fsinfo was proposed as a syscall before the recent readfile push, due to the excess overhead of getting file system information out of sysfs.

                  Yes, in the first stage we have readfile as a single syscall. The second stage, performing multiple readfile operations through a single io_uring so that applications needing many small files are reduced to one setup syscall and one teardown syscall with no syscalls in between, is going to be a harder thing to create.

                  The end result is going to be something like a vectored syscall, but it's being done in stages. The io_uring solution will end up using fewer syscalls than vectored syscalls would.

                  The error-handling problems are why going in small steps is better than big ones. A single open with a single read and a single close is a much simpler thing to handle errors for. Many instances of that single open/read/close pattern are also simpler to get error handling right for than one open followed by multiple reads and a single close.

                  Comment


                  • #29
                    Originally posted by coder View Post
                    Just so we're clear, what I'm talking about is the impact on shell script speed by execution of external commands -- not execution of shell built-ins.


                    That's great and I commend you for it, but I think you're just using it as a cheap way out of answering my other technical points. So, maybe your raising of an effectively moot point about io_uring wasn't simply out of desperation, but you've offered no better explanation.


                    If you have anything from them that you can cite, please do. Otherwise, it's unfair to appeal to an authority that hasn't actually weighed in on your side.
                    How many small files do shell built-ins open? I don't need an authority on my side; however, advancements to Unix and POSIX should target a wider, more open audience. We already have enough obscure and barely used GNU and Linux extensions. A good solution, one that people will actually use by porting their code to it, needs more than a quick Linux hack.

                    Comment


                    • #30
                      Originally posted by oiaohm View Post

                      It's not wishful thinking. When I said start-up speed, that does not have to refer to a single application or a single script. When you are starting a system, you have a lot of programs in their init stage.

                      https://lwn.net/Articles/813827/

                      There are programs in the start-up process that go over a few thousand, sometimes millions, of small files.

                      Also, you have not read the history of why readfile is coming. fsinfo was proposed as a syscall before the recent readfile push, due to the excess overhead of getting file system information out of sysfs.

                      Yes, in the first stage we have readfile as a single syscall. The second stage, performing multiple readfile operations through a single io_uring so that applications needing many small files are reduced to one setup syscall and one teardown syscall with no syscalls in between, is going to be a harder thing to create.

                      The end result is going to be something like a vectored syscall, but it's being done in stages. The io_uring solution will end up using fewer syscalls than vectored syscalls would.

                      The error-handling problems are why going in small steps is better than big ones. A single open with a single read and a single close is a much simpler thing to handle errors for. Many instances of that single open/read/close pattern are also simpler to get error handling right for than one open followed by multiple reads and a single close.
                      Good luck porting the plethora of applications to io_uring and readfile. I will be surprised if your system boot-up improves by more than the measurement error ;-) Maybe it will even come out slower with all the io_uring setup overhead ;-)

                      Comment
