New Low-Memory-Monitor Project Can Help With Linux's RAM/Responsiveness Problem


  • #31
    I do not know. Maybe it's a matter of system configuration.
    I don't have any anomalies on a very, very old, slow laptop. In this example, I don't use memcg, PSI, or special OOM daemon killers.
    As you can see in the attached video, everything works OK.
    https://drive.google.com/file/d/1Hif...ew?usp=sharing

    I used this for the test:
    https://chromium.googlesource.com/ch...oryTest.tar.gz



    • #32
      Originally posted by birdie
      In many cases a system can run perfectly without SWAP. I've been running without SWAP for over 15 years now. 100% of my servers (over a hundred high-load machines) run without SWAP.
      Apparently I decided to ditch swap about the same time as you did. =)
      I haven't run SWAP on a machine for something like 15 years either.
      I build all kernel releases and all my kernels have SWAP permanently disabled in the configuration.



      • #33
        Originally posted by milkylainen

        Apparently I decided to ditch swap about the same time as you did. =)
        I haven't run SWAP on a machine for something like 15 years either.
        I build all kernel releases and all my kernels have SWAP permanently disabled in the configuration.
        Yep, I also disabled SWAP support in the kernel config quite a long time ago. :-)



        • #34
          It is nice to see this discussion being revived; things were becoming unbearable (I had to sysrq+f about five times today, mostly with Firefox -- one or two profiles in use -- despite zramswap). Of course, what happens in userspace shouldn't be a reason not to fix this in the kernel: for instance, is there a way to specify that this daemon should never be swapped out and takes absolute priority when it comes to memory?
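
          A daemon can actually arrange both of those for itself with existing kernel interfaces: mlockall(2) keeps it resident, and writing -1000 to /proc/self/oom_score_adj exempts it from the OOM killer. A minimal sketch -- whether low-memory-monitor actually does this is an assumption on my part:

          ```c
          /* Minimal sketch: a monitoring daemon pinning itself in RAM and
           * opting out of OOM-killer selection. mlockall(2) and oom_score_adj
           * are real Linux interfaces; needs CAP_IPC_LOCK and root. */
          #include <stdio.h>
          #include <sys/mman.h>

          int main(void)
          {
              /* Lock all current and future pages: never swapped out. */
              if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
                  perror("mlockall");
                  return 1;
              }

              /* -1000 is the minimum oom_score_adj: exempt from the OOM killer. */
              FILE *f = fopen("/proc/self/oom_score_adj", "w");
              if (f != NULL) {
                  fputs("-1000\n", f);
                  fclose(f);
              }

              /* ... the daemon's monitoring loop would run here ... */
              return 0;
          }
          ```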

          I think all of this should probably belong to a namespace (or cgroup): when a user does something stupid with memory, it shouldn't bring down the whole system. Same as with disk and CPU schedulers.
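
          For what it's worth, cgroup v2 can already express the "one user's mistake stays that user's problem" part: each user's processes sit in their own slice under systemd, and capping that slice's memory.max makes a runaway allocation hit its own limit instead of the whole machine. A minimal sketch, assuming a unified hierarchy at /sys/fs/cgroup; the slice path and the 2 GiB figure are illustrative:

          ```c
          /* Minimal sketch: cap one user's memory with cgroup v2 memory.max so
           * a runaway process is reclaimed/OOM-killed inside its own slice,
           * not system-wide. Slice path and 2 GiB cap are illustrative
           * assumptions; run as root. */
          #include <stdio.h>

          int main(void)
          {
              const char *path =
                  "/sys/fs/cgroup/user.slice/user-1000.slice/memory.max";

              FILE *f = fopen(path, "w");
              if (f == NULL) {
                  perror(path);
                  return 1;
              }
              fprintf(f, "%llu\n", 2ULL * 1024 * 1024 * 1024); /* 2 GiB in bytes */
              fclose(f);
              return 0;
          }
          ```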



          • #35
            Originally posted by oiaohm
            ...
            Sure, but again, it is a bad idea to compare a server and a desktop... A desktop memory profile consists much more of regular pages than HP or THP; even considering the transparent nature of THPs and their possible performance gains, many distros out there ship transparent_hugepage=madvise because of the many ways of hitting overhead, among other problems in Linux, fragmentation being just one of them. Let's stop discussing swap and RAM exhaustion on the desktop with server-profiling arguments that can only be proven reliable by an admin/engineer/developer who knows exactly the profile of the job running homogeneously on a server.
            Last edited by RomuloP; 21 August 2019, 07:19 PM.



            • #36
              I'm using a swap partition a bit larger than my installed memory just to be able to use hibernation. Disk space is so cheap nowadays that I won't miss it, and I gain a great feature. But no, I have never even experienced an out-of-memory situation on my computer, so apart from that use I don't really care one way or the other about swap partitions.



              • #37
                Glad to see this issue is finally getting some attention. I don't run into it much anymore; part of the reason my desktop is overprovisioned with 32GB of RAM is this very issue. I used to run into it fairly often several years ago when I was using VirtualBox to run Windows VMs. VirtualBox allocates all the RAM upfront when starting a guest, which could quickly take out the entire host system. It was frustrating because it would take an eternity just to switch to a TTY and kill the hog process. I've never seen anything quite like it on Windows.

                Aside from the route of detecting low memory in order to automatically kill off processes, would it be feasible for the OS to set aside a minimal amount of dedicated resources (whether that's cpu and/or memory) to ensure that the machine remains interactive so the user can decide what to do? I.e. enough to switch to a TTY and run bash/top/kill. Think of it like this: your 486 could easily have run bash, top, and kill, and even a simplistic GUI. Your modern desktop probably has several orders of magnitude more CPU/RAM/resources than said 486. Why not (have the option to) set aside a small fraction of those resources to keep the system interactive?
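
                Something close to this already exists in cgroup v2: memory.min marks an amount of a group's memory that reclaim will leave alone, so a shell plus top/kill parked in such a group stays usable while everything else thrashes. A minimal sketch, assuming cgroup v2 is mounted at /sys/fs/cgroup; the "rescue" group name and the 64 MiB reservation are illustrative:

                ```c
                /* Minimal sketch: a protected "rescue" cgroup whose memory the
                 * kernel will not reclaim, keeping a shell usable under pressure.
                 * memory.min is a real cgroup v2 control; the group name and the
                 * 64 MiB figure are illustrative assumptions. Run as root. */
                #include <stdio.h>
                #include <stdlib.h>
                #include <sys/stat.h>

                static void write_file(const char *path, const char *value)
                {
                    FILE *f = fopen(path, "w");
                    if (f == NULL) {
                        perror(path);
                        exit(1);
                    }
                    fputs(value, f);
                    fclose(f);
                }

                int main(void)
                {
                    /* Create the group; failing with EEXIST on reruns is harmless. */
                    mkdir("/sys/fs/cgroup/rescue", 0755);

                    /* Guarantee 64 MiB (in bytes) that reclaim will leave alone. */
                    write_file("/sys/fs/cgroup/rescue/memory.min", "67108864\n");

                    /* Then, from the shell you want to keep alive:
                     * echo $$ > /sys/fs/cgroup/rescue/cgroup.procs */
                    return 0;
                }
                ```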



                • #38
                  Originally posted by Overlordz
                  Aside from the route of detecting low memory in order to automatically kill off processes, would it be feasible for the OS to set aside a minimal amount of dedicated resources (whether that's cpu and/or memory) to ensure that the machine remains interactive so the user can decide what to do?
                  Increasing vm.min_free_kbytes helps but does not solve it completely.
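
                  For anyone who wants to experiment, it is an ordinary sysctl: sysctl -w vm.min_free_kbytes=131072, or a direct write to /proc/sys, does it. A minimal sketch; the 131072 (128 MiB) value is purely illustrative, not a recommendation:

                  ```c
                  /* Minimal sketch: read and raise vm.min_free_kbytes via /proc/sys.
                   * The new value is illustrative only; writing requires root. */
                  #include <stdio.h>

                  int main(void)
                  {
                      long kbytes = 0;

                      FILE *f = fopen("/proc/sys/vm/min_free_kbytes", "r");
                      if (f != NULL) {
                          if (fscanf(f, "%ld", &kbytes) == 1)
                              printf("current vm.min_free_kbytes = %ld\n", kbytes);
                          fclose(f);
                      }

                      f = fopen("/proc/sys/vm/min_free_kbytes", "w");
                      if (f != NULL) {
                          fputs("131072\n", f); /* keep ~128 MiB free for the kernel */
                          fclose(f);
                      }
                      return 0;
                  }
                  ```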



                  • #39
                    Originally posted by milkylainen
                    Apparently I decided to ditch swap about the same time as you did. =)
                    I haven't run SWAP on a machine for something like 15 years either.
                    I build all kernel releases and all my kernels have SWAP permanently disabled in the configuration.
                    Originally posted by birdie
                    Yep, I also disabled SWAP support in the kernel config quite a long time ago. :-)
                    Neither of you understands what you have traded away by doing it in the kernel config, or at least you are not telling people so they can make an informed choice. Disabling swap in the kernel configuration is very different from disabling it at runtime.

                    The following happens when you disable swap in the kernel config and build the kernel:
                    1) The defragmentation code for particular kernel structures is disabled.
                    2) Processing fragmented structures is slower, so total system performance keeps degrading the longer the system runs.
                    3) If those structures become critically fragmented, the kernel panics and you lose all the work you had open. The code that lets the OOM killer defragment these structures is exactly what you have disabled -- thank you for playing.


                    The following happens when swap is enabled in the kernel build but disabled at runtime and you hit structure fragmentation:
                    1) The system stalls while it processes the OOM kill, which gets complex.
                    2) The OOM killer takes out a selected process to reduce the structure fragmentation -- the exact same method as when swap is full. If this happens you can lose some of your work, but there is a chance it will not be the work you care about.

                    The following happens when swap is enabled in the kernel and some non-full swap is available at runtime:
                    1) A section of the structure is pushed to swap.
                    2) The structure is rebuilt unfragmented, and if the process succeeds the pages sent to swap are evicted again.
                    No application or system terminations here. This is also normally processed while applications are not asking for IO or CPU time, so 99% of the time it is completely invisible performance-wise.

                    Running out of memory is bad either way: adding more swap comes at a performance price that depends on how fast your swap device is, while having no swap means accepting a few other issues.

                    Filling swap completely triggers two different problems: the OOM killer attempting to recover RAM, and the OOM killer attempting to defragment structures. Neither is nice.


                    Originally posted by RomuloP
                    Sure, but again, it is a bad idea to compare a server and a desktop... A desktop memory profile consists much more of regular pages than HP or THP; even considering the transparent nature of THPs and their possible performance gains, many distros out there ship transparent_hugepage=madvise because of the many ways of hitting overhead, among other problems in Linux, fragmentation being just one of them. Let's stop discussing swap and RAM exhaustion on the desktop with server-profiling arguments that can only be proven reliable by an admin/engineer/developer who knows exactly the profile of the job running homogeneously on a server.
                    Structure issues don't stop with hugepages. The problem here is that the trade-offs of the different settings need to be known. Desktops do have applications that allocate insane amounts of virtual memory and thereby fragment the page tables, and some of that defragmentation needs swap as well.

                    The effects of enabling and disabling swap need to be understood to make the correct choice.

                    Something there has not been much research on is the overcommit value.
                    With overcommit_memory set to 2 it is fun watching Chrome and other things barf because they have attempted to allocate 10G of virtual memory on a 4G system.
                    We do have problems with kernel structures in a lot of desktop workloads because application developers have taken the view that they can allocate as much memory as they like and overcommit will give it to them -- so they never have to clean up their kernel structure usage.
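
                    That barfing is easy to reproduce. Under the default heuristic (vm.overcommit_memory=0) a huge allocation usually succeeds because untouched pages cost nothing yet; under strict accounting (vm.overcommit_memory=2 with the default ratio) the same malloc fails up front. A minimal sketch for a 64-bit machine:

                    ```c
                    /* Minimal sketch: the overcommit trade-off in one malloc. With
                     * vm.overcommit_memory=0 this usually succeeds even on a 4G box,
                     * since the 10 GiB is only a promise until pages are touched;
                     * with vm.overcommit_memory=2 it fails immediately. */
                    #include <stdio.h>
                    #include <stdlib.h>

                    int main(void)
                    {
                        size_t ten_gib = 10UL * 1024 * 1024 * 1024;

                        void *p = malloc(ten_gib);
                        if (p == NULL) {
                            perror("malloc of 10 GiB failed"); /* strict-overcommit path */
                            return 1;
                        }
                        puts("kernel promised 10 GiB of virtual memory it may not have");
                        free(p);
                        return 0;
                    }
                    ```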

                    The low-memory performance issue is two problems, not one: it is running out of memory, and it is also that we have a lot of structure allocations for stuff that will never be used, and those structures get fragmented.

                    RomuloP, yes, these problems hit servers with THP more quickly than desktops, but they still hit desktops badly. Leaking memory allocations, device handles, and file handles needs to be taken far more seriously, as these things do have performance effects.

                    Memory issues are a horrible mess of many different problems, so there is no single magic bullet that is going to fix them all. Some in fact require applications to be altered to use memory and system resources more sparingly, freeing the resources they don't actually need.



                    • #40
                      idk... the swap partition has saved my ass so many times: tests with virtual machines, compiling large software with LTO, rendering larger scenes in Blender, ...

                      I don't really know why people disable the swap partition/files. It does not hurt when you don't need it and it helps when you need it. At least it helped me quite a lot. In all other cases you are in trouble either way.

                      But if there is something that can make the kernel behave better when RAM is full... I am fine with it. A user process that just kills all my services and applications ahead of time does not sound like that, though.

