EXT4 "Fast Commits" Coming For Big Performance Boost In Ordered Mode
Originally posted by Azrael5 View Post
Very interesting. It looks like there are many potential improvements the Linux kernel, and software in general, can still pick up. However, managing swap in VRAM matters far more with a mechanical hard drive; it is probably less relevant with an NVMe SSD, especially once non-volatile RAM makes every kind of operation faster. New technology will probably make the bottleneck problem, and the hardware synergy it forces, less important too.
Other considerations:
1) Newer video cards are getting direct-from-storage features, so putting your swap on the video card could run into PCIe bandwidth contention.
2) Hibernation with swap in VRAM will end badly, since VRAM does not survive power-off, so the VRAM swap has to be turned off before hibernating (see the sketch after this list).
3) Video cards are built for performance under heavy memory traffic, not for power efficiency, so compared to using an NVMe drive for swap you could be burning through a lot more power.
Yes, just because something is possible does not make it the best idea.
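Point 2 can be handled with a sleep hook. Below is a minimal sketch, assuming the VRAM swap lives at /mnt/vram/swapfile (a made-up path) and relying on systemd-sleep running executables from /usr/lib/systemd/system-sleep/ with a pre/post argument; it is an illustration, not a tested recipe.

#!/usr/bin/env python3
# Sketch of a systemd system-sleep hook: drop a VRAM-backed swap before
# suspend/hibernate (VRAM contents do not survive power-off) and bring it
# back on resume. The swap path is a hypothetical example; in practice it
# may be a loop device backed by a file on the vramfs mount.
import subprocess
import sys

VRAM_SWAP = "/mnt/vram/swapfile"  # assumed location of the VRAM-backed swap

def main() -> int:
    if len(sys.argv) < 2:
        return 0
    phase = sys.argv[1]  # "pre" before sleeping, "post" after resume
    if phase == "pre":
        subprocess.run(["swapoff", VRAM_SWAP], check=False)
    elif phase == "post":
        subprocess.run(["swapon", VRAM_SWAP], check=False)
    return 0

if __name__ == "__main__":
    sys.exit(main())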
Originally posted by oiaohm View Post
GNOME and KDE are integrating systemd to replace their old session management and take advantage of cgroups; in time, if they set everything up right, that should stop the complete interface from stalling out due to lack of memory. Of course it does not mean the applications you actually want to run will not stall. It really just makes Linux replicate Windows behaviour, with the same underlying problem, so it is punting the problem down a step. Still, it might be worth trying GNOME and KDE with the systemd integration to see whether it makes the Linux stall-outs tolerable.
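A piece of that integration can already be imitated by hand today: systemd-run can start an application in its own transient cgroup with memory properties attached. A rough sketch, where the application name and the limit values are just placeholders:

# Rough sketch: launch an application in its own transient systemd scope so
# the kernel gets per-application cgroup memory limits. "some-heavy-app" and
# the limit values are placeholders.
import subprocess

subprocess.run([
    "systemd-run", "--user", "--scope",
    "-p", "MemoryMax=4G",      # hard cap for this app's cgroup
    "-p", "MemoryHigh=3G",     # start reclaiming from it before the hard cap
    "-p", "MemorySwapMax=1G",  # limit how much of it may be pushed to swap
    "some-heavy-app",
], check=True)

This is roughly the kind of per-application limit the session integration is aiming to set automatically.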
Originally posted by Azrael5 View Post
Ok, so the solution is to increase the amount of RAM, or to have Linux make use of GPU RAM in some way, taking advantage of hardware acceleration, if it doesn't do so already. I am considering buying new DIMMs.
A question: some computers allow enabling the IOMMU in UEFI, which can increase the total visible RAM a bit. Is that a benefit or not?
Using video card RAM as swap under Linux has been done quite a bit; on newer cards it is done with the FUSE-based vramfs. It works out to roughly 1/3 of the speed of system RAM because of the PCIe transfer cost. And since it is still swap, you can still end up in swap thrash between swap storage and RAM. However you look at it, this problem needs a userspace fix like the one KDE and GNOME are working on; we need more Linux desktops to use cgroups and the other kernel interfaces that give the kernel more information.
The swap-thrash problem exists on every platform that uses virtual memory; what differs when it happens is how well, or how badly, memory resources are being managed. BSD desktops stall out in swap thrash just like Linux, and for the same reason: the desktop environments are not using the features the kernel provides to give memory management more information to make better choices.
One of the biggest problems here has been the "let's make our software cross-platform using POSIX standards" mindset. The POSIX standards simply don't cover how to tell the kernel what should go to swap and what should not.
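For the curious, setting up a vramfs-backed swap looks roughly like the sketch below. The mount point, size and swap priority are examples, the vramfs command line is written from memory (check its README), and it goes through a loop device because a swap file sitting directly on a FUSE mount generally cannot be activated.

# Rough sketch: expose some VRAM as a FUSE filesystem, back a loop device
# with a file on it, and use that loop device as high-priority swap. Paths,
# sizes and the vramfs invocation are examples only, not a verified recipe.
import os
import subprocess
import time

MOUNT = "/mnt/vram"
SWAP_FILE = os.path.join(MOUNT, "swapfile")

os.makedirs(MOUNT, exist_ok=True)
subprocess.Popen(["vramfs", MOUNT, "4G"])  # FUSE process stays running
time.sleep(2)  # crude wait for the mount to appear

subprocess.run(["dd", "if=/dev/zero", f"of={SWAP_FILE}", "bs=1M", "count=4096"], check=True)
loop = subprocess.run(["losetup", "--find", "--show", SWAP_FILE],
                      check=True, capture_output=True, text=True).stdout.strip()
subprocess.run(["mkswap", loop], check=True)
# Higher priority than disk swap (which defaults to a negative priority), so
# the faster VRAM-backed swap is used first.
subprocess.run(["swapon", "--priority", "10", loop], check=True)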
Originally posted by oiaohm View Post
The problem is that hardware capabilities are far less than half the information needed to make correct choices. It is not possible to make one calculation in the boot phase and be 100 percent right; the calculation needs to be ongoing, and it needs information from userspace as well as the hardware to make correct, or at least correct enough, choices.
Let's say I have two applications that each allocate 100 GiB of memory. One of them will work fine with 128 MiB because it never really uses what it allocated, and the other genuinely needs the full 100 GiB. How, as the kernel, without more information from userspace, do you tell which application is which? That is the start of the problem: if the kernel guesses wrong on something like this, it runs itself into trouble, and with overcommit a mistake like this is really big trouble.
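To make that concrete, here is a small illustration of why an allocation by itself tells the kernel almost nothing; the size is shrunk to 1 GiB so it is harmless to run.

# With overcommit, reserving address space costs almost nothing until pages
# are actually touched; the kernel cannot know in advance how much will be used.
import mmap

SIZE = 1 << 30  # 1 GiB of anonymous, private memory
buf = mmap.mmap(-1, SIZE, flags=mmap.MAP_PRIVATE | mmap.MAP_ANONYMOUS)

# At this point the process "has" 1 GiB, but resident memory has barely moved.
# Only writes force the kernel to find real RAM, one faulted page at a time.
page = mmap.PAGESIZE
for offset in range(0, SIZE // 2, page):  # touch half of the region
    buf[offset] = 1

# Nothing at allocation time distinguished a program that touches none of
# this region from one that touches all of it.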
Next question: how does the kernel know which applications should never be pushed out to swap? It turns out you need memory priority information. One of the reasons graphical Linux stalls so badly is that the kernel has guessed wrong about what memory should go to swap, pushed something like your desktop environment's memory out, and then needed it straight back. Desktop environments have not been using cgroups around their core tasks to tell the kernel's memory system that these processes' memory must never be pushed to swap. There are ways to prioritise memory under Linux, but they have not been used.
https://www.kernel.org/doc/html/late...cgroup-v2.html
The framework for memory priority information exists.
If memory.swap.max in cgroup v2 is set to 0, everything in that cgroup must remain in RAM. That lets you protect important interface pieces from being placed in swap, so the interface cannot stall out waiting on swap. On MS Windows the basic interface never goes to swap because the memory priority it is tagged with forbids it, yet our historic Linux desktop has happily allowed key interface parts to be pushed to swap under memory pressure. So we need memory.swap.max set on particular cgroups to stop stalls being as bad.
memory.oom.group, if set to 1, tells the kernel's OOM killer that when it is out of resources everything in that cgroup can be terminated in one hit, instead of slowly, process by process, so resources are freed faster.
You will also notice there is a stack of options for userspace to say "the processes in this cgroup should not use more than X memory, and if they do, stop them".
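A rough sketch of what a desktop environment, or a curious user, could do with those files. It assumes cgroup v2 is mounted at /sys/fs/cgroup, that you have permission to create a child group there, and the group name is made up:

# Rough sketch: pin a "critical UI" cgroup in RAM using the cgroup v2 files
# described above. Run as root or with a delegated cgroup subtree.
import os

CG = "/sys/fs/cgroup/critical-ui"  # made-up group name

def write(path, value):
    with open(path, "w") as f:
        f.write(str(value))

os.makedirs(CG, exist_ok=True)
write(os.path.join(CG, "memory.swap.max"), 0)         # never push this group to swap
write(os.path.join(CG, "memory.oom.group"), 1)        # on OOM, kill the whole group in one hit
write(os.path.join(CG, "memory.max"), 2 * 1024 ** 3)  # hard ceiling: 2 GiB in bytes
# Move the current process (say, the compositor or shell) into the group.
write(os.path.join(CG, "cgroup.procs"), os.getpid())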
The problem here is that the Linux kernel is configured for performance, which means it runs close to the edge, so a wrong guess under low memory causes a stall. The probability of a wrong guess is very high when userspace is not filling out the cgroup information relating to memory management.
Please note that setting all the cgroup v2 stuff does not mean stalls will never happen, but it moves a swap thrash towards the Windows kind, where the interface stays workable while the disk goes stupid and the application stalls.
https://answers.microsoft.com/en-us/...c-e7d39b979938
That is the problem: be it Linux or Windows, disk thrashing will happen. All you can attempt to do is control what is affected and how badly. Part of the reason Windows users don't wait for a Linux stall to work its way out is that when Windows itself gets into a thrashing problem it has a very high probability of dying completely. The reality is the Linux kernel is better at handling a thrashing event than Windows. The problem is that, until recently, no Linux desktop environment has been giving the kernel any information to make sure the core user interface does not get pulled into the thrashing.
The reality is that the Linux desktop interface freezing up and becoming unusable for a period is not the Linux kernel's fault. It is quite impressive that the kernel recovers at all. The causes are:
1) The user is attempting to run far more than the system can actually handle. (OK, it is normally bad to blame the user, I know.)
2) The desktop environments on Linux are not providing the kernel with the information it needs to decide correctly what can and cannot be shoved into swap. This information is critical to keep the core desktop environment from being pulled into a thrashing event and stalling on the user.
3) The desktop environments are not providing controls for the different kernel tuning features around memory.
Number 1 is pretty much a given: the user is going to push the system too hard at some point. Numbers 2 and 3, on the other hand, are the desktop environment's problem. Horrible as it sounds, the Linux kernel is not at fault.
The hard reality is that on Linux or Windows, if you push the system hard, a thrashing event is just part of using it. This is why the kernel cannot look at the hardware at boot, set everything, and be done. Any setting the kernel guesses at boot will at some point be wrong for the RAM situation, so you will get thrashing; the same is true for Windows.
So, since thrashing with virtual memory/swap is a given, the objective is not to prevent it 100 percent but to manage it so it is as undisruptive to the user as possible. The problem is that managing thrashing needs cooperation and information from userspace to get anywhere near right; Windows, with its priority system, protects the core user interface from being pulled into the thrashing, which keeps the thrashing out of the user's face for longer.
It's horrible, right? The Linux kernel is better at surviving thrashing without failing, yet it gets a bad reputation it does not deserve. Users should instead be digging the desktop environments in the ribs for not providing the kernel with the information it needs to do the job properly.
A question: some computers allow enabling the IOMMU in UEFI, which can increase the total visible RAM a bit. Is that a benefit or not?
Originally posted by Azrael5 View Post
Given the aforementioned limitations, is it possible to develop a kernel able to make calculations that optimise the operating system for the hardware capabilities of a machine? If hardware is added or removed, the kernel gets new input and makes a new calculation during the boot phase, simply comparing a table of the hardware contents with the updated table after the new piece of hardware is installed.
When a process references a virtual memory page that is on disk, because it has been paged out, the referenced page must be paged in, and this may force one or more other pages to be paged out by the page replacement algorithm. If the system is continuously busy swapping pages, so much that it cannot respond to user requests, the session is likely to crash; such a state is called thrashing. If thrashing occurs repeatedly, the only real solution is to increase the RAM.
Originally posted by oiaohm View Post
It's not that simple. I can put a workload on my system that will run it out of RAM as well. Right up until you run out of memory, having the settings the other way gives better overall performance. The heuristics the Linux kernel uses are good right up until you hit a usage case they get wrong.
vm.swappiness and vm.vfs_cache_pressure can be changed while the system is running. The Linux kernel has provided the means to change these settings on the fly from the start, because it has always been known that different workloads need different values feeding into the kernel's arithmetic.
https://www.kernel.org/doc/html/late...nting/psi.html
The PSI (pressure stall information) the kernel provides since 4.20 (yes, released December 2018) exists so that user-mode tools get told when you are getting close to a stall point.
The RAM problem is very much like sneaking up to the edge of a cliff to see more: put a handrail or anything else on the cliff and you see less. Performance with this stuff works the same way. oomd, coming with systemd in the future, will pick up the PSI information; it is the safety rail that says you are getting close to the stall cliff, so start considering adjusting settings and stopping processes you don't really need to run right now.
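Reading that pressure data is trivial. A minimal oomd-style sketch, assuming the usual /proc/pressure/memory layout; the threshold is arbitrary:

# Read memory pressure from PSI and warn when the 10-second "full" average
# crosses a made-up threshold. /proc/pressure/memory looks like:
#   some avg10=0.00 avg60=0.00 avg300=0.00 total=0
#   full avg10=0.00 avg60=0.00 avg300=0.00 total=0
def memory_pressure_avg10(kind="full"):
    with open("/proc/pressure/memory") as f:
        for line in f:
            fields = line.split()
            if fields[0] == kind:
                return float(fields[1].split("=")[1])
    return 0.0

if memory_pressure_avg10() > 10.0:  # arbitrary example threshold
    print("memory pressure climbing - shed work before the stall arrives")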
This really is an "it takes two to tango" problem. The Linux kernel has been providing userspace with controls over this memory behaviour, but on the desktop side nothing in userspace has been picking them up.
The Linux kernel also cannot crystal-ball very far into the future on memory management, but a userspace application taking the driver's seat and using the kernel-provided controls can. Writing logs of what went wrong, then reading them and adjusting for the future, is something a userspace program or the user can do but the kernel itself cannot.
Linux cgroups also give userspace ways to feed the kernel more information to make memory management choices with, if they are used.
The problem goes back to bad resource management from userspace. The Linux equivalent of the NT priority system, cgroups, has not been getting filled out from userspace, so the kernel cannot make choices as good as it could. Tunables like vm.swappiness and vm.vfs_cache_pressure are left at default settings by most distributions, yet they are provided as tunables precisely because the defaults are known not to suit all workloads.
That is the problem: the Linux kernel provides all the resource management tools, and userspace has not been using them. Look at Windows and you find the priority information is always filled out.
Originally posted by Azrael5 View Post
This is an interesting argument, which makes me ask why developers don't add an index that parameterises the values you have mentioned against the amount of RAM a system has, to make the operating system much more adaptive. If the amount of RAM is low one set of values is applied, if it is medium a different set is applied, and so on. A simple arithmetic proportion could solve this.
Originally posted by oiaohm View Post
There is a version of Windows that does the same thing on low-RAM systems: Windows Vista, because its priority system is set wrong out of the box. It is also possible to make XP behave like Linux; I have done it while running an Apache web server with a database on a low-memory Windows XP box, because things got assigned less than ideal priorities.
So your idea that the issue does not happen under XP is partly wrong. It does not happen under XP while everything is configured right, but install the wrong things and XP ends up with a bad configuration, doomed to stall just like Linux.
On my system with 32 GB of RAM, the hard drive I/O caused by swap is basically zero. The issue you are describing is not having enough RAM while attempting to run software that uses more memory than you have.
On low-memory Linux desktop systems, setting the following is recommended.
vm.swappiness=1: this attempts to prevent actively used applications from being shoved into swap, reducing disk thrash. The default value of 60 is good for people like me with large amounts of RAM; setting it to 1 on my system would not be ideal for the usage it gets.
vm.vfs_cache_pressure also needs raising on lower-RAM systems so the filesystem caches give up RAM more readily. Again, on a system like mine with my usage, that is bad for performance.
These two settings also explain why 64-bit XP does not really gain performance from extra RAM like it should: it is still configured for sub-4 GB of RAM.
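Both knobs live under /proc/sys/vm and can be changed at runtime (root required). A small sketch applying the low-RAM tuning described above; the vfs_cache_pressure value is purely illustrative:

# Apply the low-RAM desktop tuning at runtime. Defaults are swappiness=60 and
# vfs_cache_pressure=100; the values below are examples.
def set_sysctl(name, value):
    with open(f"/proc/sys/vm/{name}", "w") as f:
        f.write(str(value))

set_sysctl("swappiness", 1)            # strongly prefer dropping cache over swapping out pages
set_sysctl("vfs_cache_pressure", 200)  # reclaim dentry/inode caches more readily

To make the values persist across reboots, the same settings would go in a file under /etc/sysctl.d/.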
There is also overcommit:
https://www.kernel.org/doc/Documenta...mit-accounting
Overcommit is a feature you can turn off, which will stop Linux digging itself into as deep a hole.
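Switching to strict accounting is just another pair of sysctls. A small sketch, with the ratio value as an example only:

# Switch the kernel to strict overcommit accounting so it refuses allocations
# it could never back, instead of overcommitting and OOM-killing later.
def set_sysctl(name, value):
    with open(f"/proc/sys/vm/{name}", "w") as f:
        f.write(str(value))

set_sysctl("overcommit_memory", 2)  # 0 = heuristic (default), 1 = always overcommit, 2 = strict
set_sysctl("overcommit_ratio", 80)  # commit limit becomes swap + 80% of RAM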
Also, don't run old kernels: there was a long-since-fixed bug where the kernel would stall out on slow storage, and that includes your desktop 5400 RPM hard drives.
All of this can be changed on every existing distribution. One-size-fits-all with no adjustments is simply not possible. The issue is that the Linux kernel's defaults, and most distributions' defaults, presume you have plenty of RAM in your system and optimise for that usage case.
GNOME and KDE are both working on integrating systemd user mode for better session control.
Originally posted by Azrael5 View Post
Yeah, the system stalls, the hard drive runs all the time, the system hangs. This issue doesn't happen in XP. As you state, management work must be done to fix this annoying issue. I think it is possible to fix, and above all that it would be an important improvement.