
EXT4 "Fast Commits" Coming For Big Performance Boost In Ordered Mode


  • #21
    I took a few moments to try to render the text with greater clarity, with some added commas, rephrasing, sentence breaks, and paragraph breaks. I hope it helps you learn the unholy mess that is English grammar -- hopefully I didn't unthinkingly inject much of my local dialect of it in the process!

    Originally posted by Azrael5 View Post

    "I have noted that Linux file systems are generally not as efficient as XP at the same operations. When I used to run both XP and various Linux operating systems in dual boot, the Linux systems struggled with hard drive operations.

    For example, if the entirety of available RAM is full, once the swap system is invoked, the slowdown makes it impossible for the RAM to become available again.

    A further issue I noted is that no noise is heard during hard drive operations on XP, unlike on any kind of Linux operating system."

    I'd note that Windows XP was likely using FAT32 or, maybe, NTFS. The former definitely doesn't have the extensive logging and data-integrity operations that most Linux/UNIX filesystems of the time (and since) perform, and the caching algorithms are very different, so there wouldn't be nearly as many I/O operations sending the drive head noisily skittering about.

    Additionally, it's technically not accurate to say that the RAM will never become available again. If the swap space approaches full, the kernel's Out Of Memory code will start killing processes. Whether it manages to do so before it no longer has enough memory to perform even that operation, with the hard drive being thrashed all the while, is another matter.

    Upgrading to a dual-core CPU helped me greatly with Linux I/O in such conditions. I suspect that distributions' devs getting off their behinds and implementing sane cgroup and quota limits for common offenders would help massively in avoiding these situations; it's what those kernel features are for in the first place!
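    The OOM-killer behaviour mentioned above can be illustrated with a toy model. This is a hypothetical sketch of the victim-selection idea (roughly: memory footprint biased by each process's oom_score_adj), not the kernel's actual mm/oom_kill.c heuristic; the process names and numbers are invented:

```python
def badness(rss_pages: int, total_pages: int, oom_score_adj: int) -> int:
    """Toy score: memory share in thousandths, biased by oom_score_adj."""
    return max(rss_pages * 1000 // total_pages + oom_score_adj, 0)

def pick_victim(procs: dict, total_pages: int) -> str:
    """procs: {name: (rss_pages, oom_score_adj)} -> name with highest score."""
    return max(procs, key=lambda n: badness(procs[n][0], total_pages, procs[n][1]))

if __name__ == "__main__":
    total = 1_000_000  # total pages of RAM + swap
    procs = {
        "browser": (400_000, 0),    # biggest consumer, neutral bias
        "indexer": (100_000, 500),  # smaller, but marked expendable
        "sshd":    (1_000, -900),   # tiny and protected
    }
    print(pick_victim(procs, total))  # the expendable indexer is chosen
```

    The point of the bias term is that a smaller process marked expendable dies before a larger, neutral one.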

    Comment


    • #22
      Originally posted by ultimA View Post
      I'm not sure why unzipping on your system is noticeably slower, but there can be countless reasons: different decompressor applications, a driver issue, filesystem mount options, immature FS (like btrfs) etc. It also depends somewhat on the distribution you used for Linux, as the second link shows. If you're talking personal impressions, then mine is the opposite of yours: I always felt Linux IO is at least as fast as Windows (on my Laptop I dual boot), but this was never a methodical test. For adequate tests, see my links in the previous paragraphs.
      I actually meant that unzipping is a lot faster on Linux, sometimes around 10 times faster. It might have something to do with the extractor software as well. The same zip takes roughly 10x as long to unzip on Vista or newer.

      Comment


      • #23
        Originally posted by bezirg View Post
        That sounds like a big performance boost, especially when taking into account that `data=ordered` is the default option of mounting ext4 filesystems. I am looking forward to a new round of Phoronix's FS benchmarks (particularly f2fs/btrfs/xfs/ext4) for linux 5.10!
        Q4os-winsetup.zip is a version of Linux (Q4os-3.12-x64.r3) that runs from a folder on a Microsoft NTFS partition, inside a Windows operating system.
        When installed, the user can choose to boot either Windows or Q4OS. Both run on the same physical and virtual NTFS drive(s).
        This new EXT4 should therefore also be tested on Microsoft NTFS partitions. Those partitions can be NTFS-compressed or not; if compression is selected, it would be worth comparing against the other partition types that allow compression.
        Compression varies with several factors: CPU, memory speeds, file sizes and types, etc. There may be other read/write options that can be selected as well. Generally, we should expect that the more of these extra options are enabled, the more they affect benchmark results. Is there a way to generalize the effect of these different factors on speed, reliability, etc.?
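        The interplay of compression level and data type can be sketched with a quick, filesystem-agnostic experiment. This uses Python's zlib rather than NTFS compression, purely to illustrate why such benchmarks vary so much with the input:

```python
import os
import zlib

def ratio(data: bytes, level: int) -> float:
    """Compressed size as a fraction of the original (lower is better)."""
    return len(zlib.compress(data, level)) / len(data)

if __name__ == "__main__":
    text = b"the quick brown fox jumps over the lazy dog " * 1000
    noise = os.urandom(len(text))  # incompressible stand-in for media files
    for level in (1, 6, 9):
        print(f"level {level}: text {ratio(text, level):.3f}, "
              f"random {ratio(noise, level):.3f}")
```

        Repetitive text shrinks to a few percent of its size, while already-compressed or random data actually grows slightly, which is why file type dominates any compression benchmark.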

        Comment


        • #24
          Originally posted by cl333r View Post

          Been using Linux for 12 years, but I'd be using windows if not for its viruses.
          In most cases the spread of viruses is a PEBKAC problem, not an OS problem.

          Comment


          • #25
            Originally posted by Azrael5 View Post


            tl;dr

            add

            vm.swappiness=1
            vm.vfs_cache_pressure=25

            to your /etc/sysctl.conf file
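            For reference, entries like these can be applied without a reboot via `sudo sysctl -p`. As a minimal sketch, here is a parser for sanity-checking such lines before deploying them; the helper and its accepted format are illustrative, not any distribution's tooling:

```python
def parse_sysctl(text: str) -> dict:
    """Parse sysctl.conf-style 'key = value' lines, skipping blanks and comments."""
    settings = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # strip trailing comments
        if not line:
            continue
        key, _, value = line.partition("=")
        settings[key.strip()] = int(value.strip())
    return settings

if __name__ == "__main__":
    conf = """
    # desktop tuning for low-RAM machines
    vm.swappiness=1
    vm.vfs_cache_pressure = 25
    """
    print(parse_sysctl(conf))
```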

            Comment


            • #26
              Originally posted by szymon_g View Post
              Yeah, I assume the issue involves the limited cache of the hard disk used. However, it doesn't happen in the same situation on XP; in that case there is a different way of managing files. The management seems far better on XP than on other Microsoft operating systems such as Vista or Windows 10, and on Linux operating systems as well. Probably because they are much more complex operating systems, and because of the different cache management for mass storage. This happens above all during boot or when closing programs. Linux operating systems seem fast enough at transferring files; the transfer between one folder and another is immediate. I read a post saying that progress in io_uring could improve the speed a lot, which means the developers are improving file system management. One of the most annoying problems affecting the management of my hard drives was a system crash once the RAM's capacity was overloaded. That problem seems to have been fixed by recent kernels.

              Comment


              • #27
                Originally posted by Azrael5 View Post
                Yeah, I assume the issue involves the limited cache of the hard disk used. However, it doesn't happen in the same situation on XP; in that case there is a different way of managing files. The management seems far better on XP than on other Microsoft operating systems such as Vista or Windows 10, and on Linux operating systems as well. Probably because they are much more complex operating systems, and because of the different cache management for mass storage. This happens above all during boot or when closing programs. Linux operating systems seem fast enough at transferring files; the transfer between one folder and another is immediate. I read a post saying that progress in io_uring could improve the speed a lot, which means the developers are improving file system management. One of the most annoying problems affecting the management of my hard drives was a system crash once the RAM's capacity was overloaded. That problem seems to have been fixed by recent kernels.
                There is a problem here: you presume it does not happen on XP because XP's defaults are about right for a desktop. You have never used Windows 2003 and Windows XP 64-bit, both as desktops: they are the same kernel with different configurations. Windows 2003 is heavier on swap usage, and you have to adjust priorities if you are going to use 2003 Server as a desktop.

                You are partly right that the hard disk's cache is an issue. Most Linux and Windows Server editions come out of the box with swap configured for something other than desktop drives, which have smaller caches and lower general performance. There are also a few SMR drives in device-managed mode on the desktop market that don't like being used heavily for swap.

                The Linux kernel has a lot of tunables, and so does Windows. If either is not configured in a way that suits your use case, it's going to stall out.

                Linux kernel cache management is generally not the cause of these stalls.

                There are many factors that can cause Linux trouble.

                The biggest general difference between Windows and Linux is that Linux allows memory overcommit, and that is not a small difference.

                It's a true double-edged sword: in a lot of cases overcommit gives better performance, but there is a price. If you truly run out of memory, the result is more than a little out-of-memory trouble. You will also run into programs on Linux that expect overcommit to be there.

                The reality is that all the work on file systems, like io_uring and Fast Commit, does nothing to alter the low-memory problem. The low-memory problem (RAM overloaded) is 100% the virtual memory subsystem in trouble; small changes to file systems are like trying to fix a badly baked cake by hiding it under icing.

                To deal with RAM being overloaded, you are looking at options like oomd and related work:

                https://www.phoronix.com/scan.php?pa...OOMD-April-WIP
                https://www.phoronix.com/scan.php?pa...es-Bad-Low-RAM


                Yes, you can disable swap completely, run Linux on the fastest drives on earth, and still have it stall out, because the system simply doesn't have enough RAM.

                The difference in the way Linux manages files compared to Windows has next to no effect on Linux stalling when RAM is overloaded or insufficient. Please note I said stalling, not crashing: people coming from Windows react to a Linux stall by force-rebooting instead of waiting. When Windows stalls, it is locked or crashed; when Linux stalls, it is generally in the process of working out what the heck it has to kill to get memory back.

                There are settings you can change that alter when Linux stalls. But the correct long-term fix is really to implement the equivalent of Windows process priorities on Linux. That is what GNOME and systemd are up to with cgroups:
                https://linuxplumbersconf.org/event/...management.pdf


                One of the biggest causes of out-of-memory on a GNOME desktop is the Tracker file indexer going stupid. Increasing file system speed in fact makes this case even more effective at creating an out-of-memory event. So work on file systems will sometimes help the out-of-memory problem, but at other times make it far worse, because it's not the correct fix.

                The correct fix is better resource management, with better detection of processes gone bad. Part of that is marking services like file indexing as non-critical, so they can be terminated instantly when you hit low memory (Windows does have this feature via its priority system). Notice that the Windows fix has nothing to do with file systems either.
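                An oomd-style userspace monitor boils down to: watch MemAvailable, and kill a pre-declared expendable service when it drops below a threshold. Below is a hypothetical sketch of that decision logic; the service names and the 10% threshold are invented examples, not oomd's actual policy:

```python
def parse_meminfo(text):
    """Parse /proc/meminfo-style 'Key:  value kB' lines into {key: kB}."""
    info = {}
    for line in text.splitlines():
        if ":" in line:
            key, rest = line.split(":", 1)
            info[key.strip()] = int(rest.split()[0])
    return info

def pick_expendable(info, expendable, threshold=0.10):
    """Return the first expendable service to kill when memory is low, else None."""
    if info["MemAvailable"] / info["MemTotal"] < threshold:
        return expendable[0]
    return None

if __name__ == "__main__":
    sample = "MemTotal:       8000000 kB\nMemAvailable:    400000 kB\n"
    info = parse_meminfo(sample)
    # 5% available is under the 10% threshold, so the indexer is picked first
    print(pick_expendable(info, ["tracker-miner-fs", "baloo_file"]))
```

                A real daemon would poll this in a loop and actually send the kill signal; the sketch only shows the "non-critical services die first" decision.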


                Comment


                • #28
                  Originally posted by oiaohm View Post


                  Yeah, the system stalls, the hard drive runs all the time, and the system hangs. This issue doesn't happen on XP. As you state, management work must be done to fix this annoying issue. I think it can be fixed and, above all, that it would be an important improvement.
                  Last edited by Azrael5; 13 October 2020, 05:28 AM.

                  Comment


                  • #29
                    Originally posted by Azrael5 View Post
                    Yeah, the system stalls, the hard drive runs all the time, and the system hangs. This issue doesn't happen on XP. As you state, management work must be done to fix this annoying issue. I think it can be fixed and, above all, that it would be an important improvement.
                    There is a version of Windows that will do the same thing on low-RAM systems: Windows Vista, because its out-of-box priority system is set wrong. It is also possible to make XP behave the same as Linux; I did it when I had to run an Apache web server with a database on a low-memory Windows XP machine, because things got assigned poor priorities.

                    So your idea that the issue does not happen under XP is partly wrong. It does not happen under XP while everything is configured right; install the wrong things and XP can end up with a bad configuration, doomed to stall just like Linux.

                    On my system with 32 GB of RAM, the hard drive I/O caused by swap is basically zero. The issue you are describing is not having enough RAM while attempting to run software that uses more memory than you have.

                    On low-memory desktop systems, the following settings are recommended on Linux.

                    Setting vm.swappiness=1 attempts to prevent actively used applications from being shoved into swap, to reduce disk thrash. The default value of 60 is good for people like me with large amounts of RAM; setting it to 1 on my system would not be ideal for the usage it gets.

                    vm.vfs_cache_pressure also needs to be adjusted on lower-RAM systems, to change how readily the standard file system cache gives up RAM. Again, on a system like mine with my usage, this is bad for performance.

                    These two settings also help explain why 64-bit XP does not gain performance from extra RAM the way it should: it is still configured for sub-4 GB systems.

                    Also, overcommit:

                    https://www.kernel.org/doc/Documenta...mit-accounting

                    It's a feature you can turn off, which will keep Linux from digging itself into as deep a hole.
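                    With overcommit accounting enforced (vm.overcommit_memory=2), the kernel computes a hard CommitLimit of roughly swap plus a configurable fraction of RAM, per the kernel's overcommit-accounting documentation. A small sketch of that arithmetic (ignoring hugetlb reservations):

```python
def commit_limit_kb(ram_kb, swap_kb, overcommit_ratio=50):
    """CommitLimit under vm.overcommit_memory=2:
    swap + ram * overcommit_ratio / 100 (hugetlb reservations ignored)."""
    return swap_kb + ram_kb * overcommit_ratio // 100

if __name__ == "__main__":
    # 8 GiB RAM, 2 GiB swap, default vm.overcommit_ratio of 50:
    print(commit_limit_kb(8 * 1024 * 1024, 2 * 1024 * 1024), "kB")
```

                    Allocations beyond that limit fail up front instead of succeeding and triggering the OOM killer later, which is exactly the "shallower hole" trade-off described above.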

                    Don't run old kernels either: there was a long-since-fixed bug where the kernel would stall out due to slow storage, and that includes your desktop 5400 RPM hard drives.

                    All of this can be changed on existing distributions. One size fits all with no adjustments is simply not possible. The issue is that the Linux kernel's default configuration, and most distributions' defaults, presume you have plenty of RAM in your system and optimise for that use case.

                    Gnome and KDE are both working on integrating systemd user mode for better session control.

                    Comment


                    • #30
                      Originally posted by oiaohm View Post

                      This is an interesting argument. Based on it, I could ask why developers don't integrate an index that parameterizes the values you mentioned to the amount of RAM a system has, to make the operating system more adaptive: if the amount of RAM is low, one set of values is applied; if it is medium, different parameters; and so on. A simple arithmetic proportion could solve this issue.
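                      The proportional scheme suggested here could look something like the following sketch, which linearly interpolates the two tunables between a low-RAM and a high-RAM endpoint. The endpoint values and RAM breakpoints are invented for illustration; no distribution ships these defaults:

```python
def interpolate(x, x0, x1, y0, y1):
    """Linear interpolation of y over [x0, x1], clamped at the ends."""
    x = min(max(x, x0), x1)
    return round(y0 + (x - x0) / (x1 - x0) * (y1 - y0))

def suggest_vm_tunables(ram_gb):
    """Hypothetical adaptive defaults: gentle swapping on small machines,
    stock kernel behaviour once RAM is plentiful."""
    return {
        "vm.swappiness": interpolate(ram_gb, 2, 32, 1, 60),
        "vm.vfs_cache_pressure": interpolate(ram_gb, 2, 32, 25, 100),
    }

if __name__ == "__main__":
    for gb in (2, 8, 32):
        print(gb, "GB ->", suggest_vm_tunables(gb))
```

                      An installer or first-boot service could emit the result into a sysctl.d drop-in, which is about as far as the "arithmetic proportion" idea needs to go.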
                      Last edited by Azrael5; 13 October 2020, 05:30 AM.

                      Comment
