EXT4 "Fast Commits" Coming For Big Performance Boost In Ordered Mode


  • #31
    Originally posted by Azrael5 View Post
    This is an interesting argument. Based on it, I could ask why developers don't integrate a mechanism that scales the values you mentioned to the amount of RAM a given system has, to make the operating system much more adaptive. If the amount of RAM is low, one set of values is applied; if the RAM is medium, different parameters are applied, and so on. A simple arithmetic proportion could solve this issue.
    It's not simple. I can put a workload on my system that will run it out of RAM as well. Right up until you run out of memory, having the settings the other way gives better overall performance. The heuristics the Linux kernel uses are good right up until you hit a use case they get wrong.

    vm.swappiness and vm.vfs_cache_pressure can be changed while the system is running. The Linux kernel has provided the means to change these settings on the fly from the start, because it has long been known that different workloads need different values feeding into the system's arithmetic.
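    For illustration, a minimal sketch of flipping those knobs at runtime by writing /proc/sys directly (equivalent to sysctl; needs root, and the values 10 and 50 are arbitrary examples, not recommendations):

    Code:
    # Adjust VM sysctls at runtime by writing /proc/sys.
    def set_sysctl(name: str, value: int) -> None:
        path = "/proc/sys/" + name.replace(".", "/")  # vm.swappiness -> vm/swappiness
        with open(path, "w") as f:
            f.write(str(value))

    set_sysctl("vm.swappiness", 10)          # lean toward dropping cache over swapping
    set_sysctl("vm.vfs_cache_pressure", 50)  # hold dentry/inode caches longer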



    https://www.kernel.org/doc/html/late...nting/psi.html

    The PSI (pressure stall information) interface that the Linux kernel provides since 4.20 (yes, released December 2018) exists so that user-mode tools get information when you are getting close to stall points.
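    A minimal sketch of reading that interface, assuming a kernel built with PSI enabled (the 0.1 threshold is an arbitrary example):

    Code:
    # Read memory pressure from PSI. /proc/pressure/memory looks like:
    #   some avg10=0.12 avg60=0.05 avg300=0.01 total=123456
    #   full avg10=0.00 avg60=0.00 avg300=0.00 total=0
    def memory_pressure() -> dict:
        out = {}
        with open("/proc/pressure/memory") as f:
            for line in f:
                kind, rest = line.split(None, 1)
                fields = dict(kv.split("=") for kv in rest.split())
                out[kind] = float(fields["avg10"])  # % of time stalled, 10s average
        return out

    if memory_pressure()["some"] > 0.1:
        print("approaching the stall cliff; consider shedding load")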

    This is the problem: the RAM problem is very much like sneaking up to the edge of a cliff to see more; if you put a handrail or anything else on the cliff, you can see less. Performance with this stuff is the same way. oomd, which is to come with systemd in future, will be picking up the PSI information; that is the safety rail saying you are getting close to the stall cliff and had better start adjusting settings and stopping any processes you don't really need to run right now.

    This is really a takes-two-to-tango problem. The Linux kernel has been providing user space with controls over this memory stuff, but on the desktop side there has been nothing in user space picking them up.

    The Linux kernel also has no means to crystal-ball very far into the future on memory management, but a userspace application taking the driver's seat and using the kernel-provided controls can. The little trick of writing logs of what has gone wrong, then reading those logs back to adjust for the future, is something a user-space program or a user can do but the Linux kernel itself cannot.

    Linux cgroups also provide userspace with ways to give the Linux kernel more information to make memory-management choices with, if they are used.


    The problem goes back to bad resource management from user space. Cgroups, the Linux kernel's equivalent of the Windows NT priority system, have not been getting filled out from userspace, so the Linux kernel cannot make choices as good as it could. Tunables like vm.swappiness and vm.vfs_cache_pressure are left at their defaults by most distributions; they are provided as tunables precisely because they are known not to suit all workloads.

    This is the problem: the Linux kernel provides all the resource-management tools, and userspace has not been using them. When you look at Windows, you find that the priority information is always filled out.



    • #32
      Originally posted by oiaohm View Post

      Given these limitations, would it be possible to develop a kernel that makes calculations to optimize the operating system for the hardware capabilities of the machine? If hardware is added or removed, the kernel gets new input and recalculates during the boot phase, simply comparing a table of the existing hardware against the updated table produced by the newly installed part.



      • #33
        Originally posted by Azrael5 View Post
        The problem is that hardware capabilities are way less than half the information needed to make correct choices. It is not possible to make a new calculation in the boot phase and be 100 percent right; that is the problem. The calculation needs to be ongoing, and it needs information from user space as well as the hardware to make correct, or at least correct-enough, choices.

        Let's say I have two applications that each allocate 100GiB of memory. One of them will work fine with 128MiB because it never really uses what it allocated; the other will require the full 100GiB. How are you, as the kernel, going to tell which application is which without more information from user space? This is the start of the problem: if the kernel guesses wrong on something like this, it runs itself into trouble, and overcommit means a mistake like this is really big trouble.
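        A minimal sketch of why the allocation number alone tells the kernel so little; on Linux, an untouched anonymous mapping costs almost nothing until the pages are actually written:

        Code:
        # A big allocation is cheap until you touch it (Linux overcommit).
        import mmap

        def rss_kib() -> int:
            with open("/proc/self/status") as f:   # VmRSS = resident memory, KiB
                for line in f:
                    if line.startswith("VmRSS:"):
                        return int(line.split()[1])
            return 0

        SIZE = 1 << 30                    # 1 GiB; shrink this on small machines
        buf = mmap.mmap(-1, SIZE)         # anonymous mapping, no RAM backing yet
        print("after allocating:", rss_kib(), "KiB resident")

        for off in range(0, SIZE, 4096):  # write one byte per 4 KiB page
            buf[off] = 1
        print("after touching:  ", rss_kib(), "KiB resident")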

        Next question: how does the kernel know which applications should never be pushed out to swap? It turns out you need memory-priority information. One of the reasons graphical Linux stalls so badly is that Linux has misguessed what memory should be pushed to swap, pushed out something like your desktop environment's memory, and then needed it back. Desktop environments have not been using cgroups around their core tasks to tell the Linux kernel's memory system that these processes' memory must never be pushed to swap. There are ways to prioritise memory under Linux, but they have not been used.



        https://www.kernel.org/doc/html/late...cgroup-v2.html

        The framework for memory-priority information exists.
        If memory.swap.max in cgroup v2 is set to 0, everything in that cgroup must remain in RAM. This allows protecting important interface pieces from being placed in swap, so the interface is protected from stalling out in swap. On MS Windows, your basic interface never places anything in swap because the memory priority it is tagged with forbids it; the historic Linux desktop has totally allowed key interface memory to be pushed out to swap under memory pressure, so we need memory.swap.max set on particular cgroups to stop stalls being as bad.

        memory.oom.group: if set to 1, this lets the Linux kernel's OOM killer know that when it is out of resources, everything in this cgroup can be terminated in one big hit instead of slowly, process by process, so resources are freed faster.

        You will notice there is also a stack of options for userspace to say that the process or processes in this cgroup should not be using more than X memory, and to stop them if they do.
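        A minimal sketch of wiring those knobs up by hand, assuming a cgroup v2 mount at /sys/fs/cgroup with the memory controller enabled and root privileges; the "ui" group name and the 2G ceiling are made up for illustration:

        Code:
        # Pin a process's memory in RAM via cgroup v2 interface files.
        import os

        CG = "/sys/fs/cgroup/ui"          # hypothetical cgroup for core UI processes
        os.makedirs(CG, exist_ok=True)    # mkdir on cgroupfs creates the cgroup

        def cgwrite(name: str, value: str) -> None:
            with open(os.path.join(CG, name), "w") as f:
                f.write(value)

        cgwrite("memory.swap.max", "0")   # nothing in this cgroup may go to swap
        cgwrite("memory.oom.group", "1")  # on OOM, kill the whole group in one hit
        cgwrite("memory.max", "2G")       # hard ceiling: stop it at 2 GiB
        cgwrite("cgroup.procs", str(os.getpid()))  # move this process into the group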

        The problem here is that the Linux kernel is configured for performance, which means it runs close to the edge, so a wrong guess under low memory causes a stall. The probability of a wrong guess is very high when user space is not filling out the cgroup information relating to memory management.

        Please note that setting all the cgroup v2 stuff does not mean stalls will 100 percent never happen, but it moves a swap thrash to be like a Windows one, where the interface stays workable while the disc goes stupid and the application stalls.


        https://answers.microsoft.com/en-us/...c-e7d39b979938

        When a process references a virtual-memory page that is on the disc, because it has been paged out, the referenced page must be paged in, and this might cause one or more pages to be paged out using a page-replacement algorithm. If the CPU is continuously busy swapping pages, so much that it cannot respond to user requests, the computer session is likely to crash; such a state is called thrashing. If thrashing occurs repeatedly, then the only solution is to increase the RAM.
        This is the problem: be it Linux or Windows, disc thrashing is a problem that will happen. All you can attempt to do is control what is affected and how badly. Part of the reason Windows users don't wait for a Linux stall to work its way out is that when Windows itself gets into a thrashing problem, it has a very high probability of dying completely. The reality is that the Linux kernel is better at handling a thrashing event than Windows. The problem is that until recently no Linux desktop environment has been giving the Linux kernel any information to make sure the core user interface does not get pulled into a thrashing event.

        The reality is that the Linux desktop interface freezing up and becoming unusable for a period is not the Linux kernel's fault; it is quite impressive that the Linux kernel recovers itself. The causes are:
        1) The user attempting to run far more than the system can in fact handle. (OK, it's normally bad to blame the user, I know this.)
        2) The desktop environments on Linux not providing the kernel with the information it needs to correctly decide what can and cannot be shoved into swap. This information is critical to prevent the core desktop environment from being pulled into a thrashing event and stalling on the user.
        3) The desktop environments not providing controls for the different Linux kernel tuning features around memory.

        Number 1 here is pretty much a given: the user is going to push the system too hard at some point. Numbers 2 and 3, on the other hand, are the desktop environments' problem. Horrible as it sounds, the Linux kernel is not at fault.

        The hard reality is that whether you use Linux or Windows, if you push the system hard, a thrashing event is just part of using it. This is why the Linux kernel cannot look at the hardware on boot, set things, and be done. Any setting the kernel guesses at boot is at some point going to be wrong for the RAM situation, so you will have thrashing; the same goes for Windows.

        So since thrashing with virtual memory/swap is a given, the objective is not to prevent thrashing 100 percent but to manage it so it is as undisruptive to the user as possible. The problem is that managing thrashing needs cooperation and information from user space to get anywhere near right. Windows, with its priority stuff, protects the user's core interface from being pulled into thrashing, which keeps the thrashing out of the user's face.

        It's horrible, right? The Linux kernel is better at handling thrashing without failing, yet it is getting a bad reputation it does not deserve. Instead, users need to be in the desktop environments' ribs for not providing the Linux kernel with the information it needs to do the job properly.



        • #34
          Originally posted by oiaohm View Post

          OK, so the solution is to increase the amount of RAM, or for Linux to make use of GPU RAM in some way that takes advantage of hardware acceleration, if it doesn't already. I'll consider buying new DIMMs.
          A question: some computers allow enabling the IOMMU in UEFI, which can increase the total visible RAM a bit. Is that a benefit or not?
          Last edited by Azrael5; 17 October 2020, 05:22 AM.



          • #35
            Originally posted by Azrael5 View Post
            The GNOME and KDE systemd integration, replacing their old session management to take advantage of the cgroup stuff, should in time, if they set everything up right, stop their complete interfaces stalling out due to lack of memory. Of course this does not mean the applications you want to run will not stall out. Yes, this will make Linux replicate Windows behaviour, with the same problem, so it basically punts the problem down a step. So it might be worth trying GNOME and KDE with the systemd stuff and seeing whether that makes the Linux stall-outs tolerable.
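            A minimal sketch of the kind of thing that session integration can do per application, assuming a systemd user session; systemd's MemorySwapMax= maps onto cgroup v2 memory.swap.max, and "some-app" is a hypothetical application:

            Code:
            # Launch an app in a transient systemd scope with swap forbidden.
            import subprocess

            subprocess.run([
                "systemd-run", "--user", "--scope",
                "-p", "MemorySwapMax=0",   # cgroup v2: memory.swap.max = 0
                "some-app",                # hypothetical application
            ], check=True)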



            Using video-card RAM as swap under Linux has been done quite a bit; on newer cards it is done using the FUSE-based vramfs. It works out to about a third of the speed of system RAM due to the PCIe transfer costs. And since it is still swap, you can still end up in a swap thrash between swap storage and RAM. No matter how you look at it, this problem needs a userspace fix like what KDE and GNOME are doing, and of course we need more Linux desktops to do something with cgroups and the other Linux kernel interfaces to provide the kernel with more information.
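            For the curious, a sketch of the commonly described setup, with the usual loop-device detour because swapon cannot use a FUSE file directly; this assumes vramfs is installed and takes a mount point and size on its command line, and all paths and sizes here are made up:

            Code:
            # Put a swap area on vramfs via a loop device (run as root).
            import os, subprocess

            def run(*cmd) -> str:
                return subprocess.run(cmd, check=True, capture_output=True,
                                      text=True).stdout.strip()

            os.makedirs("/mnt/vram", exist_ok=True)
            run("vramfs", "/mnt/vram", "2G")   # expose 2 GiB of VRAM as a FUSE fs
            run("dd", "if=/dev/zero", "of=/mnt/vram/swap", "bs=1M", "count=1024")
            loop = run("losetup", "-f", "--show", "/mnt/vram/swap")
            run("mkswap", loop)
            run("swapon", loop)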

            The swap-thrash problem exists on every platform that uses virtual memory; the effects when it happens differ based on the management, or lack of management, of memory resources. Yes, BSD desktop solutions also stall out in swap thrash just like Linux, and for the same reason: the desktop environments are not using the features the kernel provides to give memory management more information to make better choices.

            One of the biggest problems here has been "let's make our software cross-platform using POSIX standards". The POSIX standards really don't cover how to correctly inform the kernel what should go to swap and what should not.
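            The hints that do exist for this are Linux-specific rather than POSIX; a minimal sketch calling the Linux-only madvise() advice values (kernel 5.4+) through ctypes:

            Code:
            # Tell the kernel these pages are cold and may be reclaimed first.
            import ctypes, ctypes.util, mmap

            libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)
            libc.madvise.argtypes = [ctypes.c_void_p, ctypes.c_size_t, ctypes.c_int]
            MADV_COLD = 20      # Linux-only: deactivate pages, reclaim them first
            MADV_PAGEOUT = 21   # Linux-only: reclaim (swap out) pages right away

            buf = mmap.mmap(-1, 1 << 20)   # 1 MiB anonymous, page-aligned mapping
            addr = ctypes.addressof(ctypes.c_char.from_buffer(buf))
            if libc.madvise(addr, len(buf), MADV_COLD) != 0:
                raise OSError(ctypes.get_errno(), "madvise failed")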



            • #36
              Originally posted by oiaohm View Post

              Very interesting. It looks like there are many potential improvements that the Linux kernel, and software as well, can get. However, swap in VRAM is most beneficial with a mechanical hard drive and probably less relevant with an NVMe SSD, especially considering non-volatile RAM that can make any kind of operation faster. Probably the new technology will also make the bottleneck problem, and the consequent need for hardware synergy, less important.



              • #37
                Originally posted by Azrael5 View Post
                VRAM on video cards will be faster than NVMe on average; wear levelling and the other things don't come cheap. The vramfs part is not ideal in that you have to go through user space to send and receive items from swap. And vramfs needs cgroup protection from the out-of-memory killer, so the horrible case does not happen where the system runs out of memory, the OOM killer kills vramfs, all the swap is lost, and the system is left in a totally stuffed state.

                Other considerations:
                1) Newer video cards are getting direct-from-storage features, so having your swap on the video card could run into PCIe bandwidth problems.
                2) Hibernation to VRAM would be bad, so VRAM swap has to be shut down before hibernation.
                3) Video cards are built for performance under heavy memory operations, not power efficiency, so compared to using an NVMe for swap you could be burning through a lot more power.

                Yes, just because something is possible does not make it always the best idea.



                • #38
                  Originally posted by oiaohm View Post

                  Probably VRAM is not suitable for swap, so more RAM is the only solution, although developers are improving file-system management and io_uring as well. In my opinion VRAM is useful for managing the data of both applications and the desktop environment so as to free RAM; that becomes possible by making software use hardware acceleration via the GPU, in which case Wayland should improve graphics management directly through the GPU and VRAM. The way developers develop their software becomes relevant.

