
New Low-Memory-Monitor Project Can Help With Linux's RAM/Responsiveness Problem


  • #41
    Summary of Comment sections of Linux Bad Low Memory Handling Articles:

    Person A: My Linux system with 4GB RAM and 4GB swap becomes unresponsive when the swap starts filling. I disabled swap and my system is more stable now, but it still becomes unresponsive in low-memory situations. I don't have this problem with Windows. This is the one thing Windows is better at.
    Person B: I had none of these problems on my system!
    Person C: Swap solves all my problems on my server with 32GB RAM and 32GB swap; you people are crazy!
    Person D: What are you people talking about? Your system should freeze in low-memory situations!



    • #42
      Originally posted by Overlordz View Post
      Aside from the route of detecting low memory in order to automatically kill off processes, would it be feasible for the OS to set aside a minimal amount of dedicated resources (whether that's CPU and/or memory) to ensure that the machine remains interactive, so the user can decide what to do? I.e. enough to switch to a tty and run bash/top/kill. Think of it like this: your 486 could easily have run bash, top, and kill, and even a simplistic GUI. Your modern desktop probably has several orders of magnitude more CPU/RAM/resources than said 486. Why not (have the option to) set aside a small fraction of those resources to keep the system interactive?
      There's also a program on GitHub (I forget the name at the moment, sorry) that keeps various essential binaries in memory and forbids the system from paging them out, so that no matter what, the system should stay somewhat responsive. I can't remember any more whether I felt it worked or not.

      edit: found it: https://doc.coker.com.au/projects/memlockd/
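      For context, a sketch of how memlockd is driven (the /etc/memlockd.cfg path and the example file list below are taken from its Debian packaging and are assumptions, not something stated in this thread):

```shell
# memlockd mlock()s each file listed in its config into RAM, so that a
# thrashing machine can still exec a shell and kill the offender.
# A minimal /etc/memlockd.cfg would list one path per line:
cfg='/bin/bash
/usr/bin/top
/bin/kill'
printf '%s\n' "$cfg" | wc -l      # count of binaries to pin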
      Last edited by geearf; 22 August 2019, 11:01 AM.



      • #43
        Originally posted by BOYSSSSS View Post
        Summary of Comment sections of Linux Bad Low Memory Handling Articles:

        Person A: My Linux system with 4GB RAM and 4GB swap becomes unresponsive when the swap starts filling. I disabled swap and my system is more stable now, but it still becomes unresponsive in low-memory situations. I don't have this problem with Windows. This is the one thing Windows is better at.
        Person B: I had none of these problems on my system!
        Person C: Swap solves all my problems on my server with 32GB RAM and 32GB swap; you people are crazy!
        Person D: What are you people talking about? Your system should freeze in low-memory situations!
        https://superuser.com/questions/1194...rcommit-memory
        There is one key solid difference between Windows and Linux: Windows never uses overcommit, while Linux systems default to overcommit on.

        1) Linux has overcommit on; Windows has overcommit off by default, with no means to turn it on. Yes, by changing a setting under Linux you can change this behaviour: set vm.overcommit_memory = 2 in /etc/sysctl.conf to turn overcommit off under Linux.
        2) Linux has fixed-size swap by default; Windows has dynamically growing swap by default. Yes, add the swapspace tool or an equivalent and you can replicate Windows-style dynamically growing swap under Linux.
        https://manpages.debian.org/stretch/...ce/swapspace.8
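        As a sketch of what those knobs do together: under strict accounting (vm.overcommit_memory=2) the kernel caps total committed memory at roughly swap plus a percentage of RAM (ignoring hugetlb pages). For Person A's 4GB RAM / 4GB swap box the ceiling works out like this (values are illustrative):

```shell
# With vm.overcommit_memory=2, total committed memory is capped at about:
#   CommitLimit = swap + ram * overcommit_ratio / 100
ram_kb=4194304      # 4 GiB of RAM, in kB
swap_kb=4194304     # 4 GiB of swap, in kB
ratio=100           # vm.overcommit_ratio, as a percentage
echo $(( swap_kb + ram_kb * ratio / 100 ))   # commit limit in kB
```

So with ratio 100 this machine can promise at most about 8 GiB; past that, allocations fail instead of being overcommitted.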

        Some workloads benefit from overcommit; for others it is a path to doom. Notice that Person A has turned off swap but has most likely left overcommit on.

        Person B and Person C are most likely running overcommit-compatible workloads, or turned overcommit off and forgot about it.

        Person D is in fact both right and wrong. Low-memory situations really should not happen, and yes, the system will stall out when you hit one, because the operating system ends up between a rock and a hard place.

        In fact both Windows and Linux behave horribly and stall if you get them into a low-memory event. Yes, on Windows you have to configure your virtual memory to a static size, so that it cannot keep growing until it has used up all disk space, or genuinely run out of disk space, to see Windows behave like Linux.

        The fact that both Windows and Linux freeze up in low-memory situations means that is not the difference. The Windows failure mode of eating your way through the whole disk before freezing up is harder to pull off, but it can have worse effects as well.

        Basically, the differences are not what people think they are, and the methods people are attempting to fix the problem with are the wrong thing.

        If you hit low memory because you have filled all RAM and all swap, you in fact need more swap, or you need applications to allocate less memory. Disabling overcommit reduces the memory applications will allocate.

        Yes, being stuck in RAM, which is faster, when you have run out of memory means the stall is processed faster, making it less noticeable, but it does not mean the stall is not still happening.

        The hard reality here is that people are disabling swap to make Linux behave more like Windows, yet when they run Windows they have many times more swap enabled. So they do the exact opposite and then wonder why Linux is not behaving like Windows. If you actually make the Linux swap system behave like the Windows one, Linux performs a lot like Windows. The disable-swap solution is really pure stupidity if you are attempting to replicate the Windows performance pattern on Linux, as it does the exact opposite of how Windows handles the problem.


        Last edited by oiaohm; 22 August 2019, 11:44 AM.



        • #44
          Originally posted by oiaohm View Post
          The hard reality here is that people are disabling swap to make Linux behave more like Windows, yet when they run Windows they have many times more swap enabled. So they do the exact opposite and then wonder why Linux is not behaving like Windows. If you actually make the Linux swap system behave like the Windows one, Linux performs a lot like Windows. The disable-swap solution is really pure stupidity if you are attempting to replicate the Windows performance pattern on Linux, as it does the exact opposite of how Windows handles the problem.
          When I used Windows (on a new laptop with 4GB RAM) I used a 500-1024MB page file; on Ubuntu I had 4GB of swap. I never had any problem with Windows on that laptop, or when I did (which was rare) my system did not become unresponsive: I could still close apps to free up memory. On Ubuntu I had constant out-of-memory trouble and an unresponsive system. I even wondered whether the HDD had trouble, because the baloo process held my CPU at 90% or so. But I didn't have any trouble with Windows and a 1GB page file.
          My trouble finally went away after I switched to Manjaro (which made the trouble with the baloo process go away) and saved up money for an SSD and 4GB of RAM. Now everything is fine (although I'm worried about the swap partition trashing my SSD; I wonder if I should disable it), although I haven't been using that laptop for some time. The point being: Windows performs better with 1GB of virtual memory than Linux does with 4GB of swap.
          That's why it is so annoying to look at these comments acting like it's OK for your system to become unresponsive even though you have 3GB of free swap, and saying that this is how it should be.
          Finally someone wants to make things better, but people who haven't had trouble with swap fill the comment section with comments like "Swap is great, I didn't have any trouble." It reminds me of the Graveyard Keeper game, where people (including me) were complaining that the event system didn't work and you couldn't advance in the game, but in every discussion there were people saying "It works for me, so the developer shouldn't waste his time fixing it."



          • #45
            Originally posted by BOYSSSSS View Post
            When I used Windows (on a new laptop with 4GB RAM) I used a 500-1024MB page file; on Ubuntu I had 4GB of swap. I never had any problem with Windows on that laptop, or when I did (which was rare) my system did not become unresponsive: I could still close apps to free up memory. On Ubuntu I had constant out-of-memory trouble and an unresponsive system. I even wondered whether the HDD had trouble, because the baloo process held my CPU at 90% or so. But I didn't have any trouble with Windows and a 1GB page file.
            My trouble finally went away after I switched to Manjaro (which made the trouble with the baloo process go away) and saved up money for an SSD and 4GB of RAM. Now everything is fine (although I'm worried about the swap partition trashing my SSD; I wonder if I should disable it), although I haven't been using that laptop for some time. The point being: Windows performs better with 1GB of virtual memory than Linux does with 4GB of swap.
            That's why it is so annoying to look at these comments acting like it's OK for your system to become unresponsive even though you have 3GB of free swap, and saying that this is how it should be.
            Finally someone wants to make things better, but people who haven't had trouble with swap fill the comment section with comments like "Swap is great, I didn't have any trouble." It reminds me of the Graveyard Keeper game, where people (including me) were complaining that the event system didn't work and you couldn't advance in the game, but in every discussion there were people saying "It works for me, so the developer shouldn't waste his time fixing it."
            The problem is, I know where the problem lies: overcommit allows more memory to be allocated than it should.

            Take my system with 4G RAM and 4G swap. If I turn overcommit off with

            sysctl vm.overcommit_ratio=100
            sysctl vm.overcommit_memory=2
            the result is being unable to run Chrome, because it allocates too much memory. When you look closer, with particular extensions it allocates over 10G of virtual memory on Linux.
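            To see why that kills Chrome, compare the request against the strict-accounting ceiling (a sketch with round numbers, assuming overcommit_ratio=100):

```shell
# On a 4G RAM + 4G swap box with vm.overcommit_memory=2 and ratio 100,
# the commit limit is about 8 GiB, so a ~10 GiB virtual allocation is
# refused (malloc returns NULL) instead of being overcommitted.
limit_kb=$(( 4194304 + 4194304 ))   # swap + ram * 100 / 100, in kB
request_kb=10485760                 # ~10 GiB of requested virtual memory
if [ "$request_kb" -gt "$limit_kb" ]; then echo "allocation refused"; fi
```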

            Under Windows, which has no overcommit, it cannot be allocating this much memory.

            You will find that with overcommit_memory=2, Linux behaves like Windows. The problem is that we have applications like Chrome for Linux that don't behave the way they do on Windows, so when they are given memory restrictions they just die.

            vm.overcommit_memory=0, the default, allows applications to allocate more RAM/memory than you have and then attempt to use it.


            So we are not looking at a swap problem. Turning off swap does not really solve it.

            We have programs that cannot correctly tolerate the Linux kernel saying there is no more memory. We have the Linux kernel handing out too many allocations because programs will not accept no-more-memory responses. And we run out of memory when applications attempt to use the memory they have been given.

            This is not a swap-thrash problem. Memory overcommit and the desktop really do not work out well together.



            • #46
              Originally posted by spstarr View Post
              no more daemons, the kernel should do this right.
              I wanted to say this under the previous article but didn't have a chance, so I'll say it now: use the magic key combo Alt+SysRq+F when you run into this problem. Typically Google Chrome would trigger it.
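              For anyone who can still reach a terminal (e.g. over ssh), the same thing is reachable without the keyboard combo; the trigger file and bitmask semantics below are from the kernel's sysrq documentation:

```shell
# Writing 'f' to the trigger file runs the OOM killer immediately, the
# same as Alt+SysRq+F (root only):
#   echo f > /proc/sysrq-trigger
# kernel.sysrq is a bitmask: 1 enables every SysRq function, and the
# value 64 enables process signalling, the group SysRq+F belongs to.
mask=64
if [ $(( mask & 64 )) -ne 0 ]; then echo "SysRq+F permitted"; fi
```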

              In theory a good distro would have sane user limits in place; in practice most don't, or have huge ones in order to better accommodate servers.
              In my experience this is a Linux problem not present in FreeBSD. With FreeBSD, no matter what is happening, you can always switch to a console, log in as root and kill the process. In Linux the same thing can easily take 30 minutes or more, unless you use the magic keys above to force-run the OOM killer, which is what people want to run in the first place when they know it's Chrome eating all the memory again.

              Remember that on a desktop killing a process isn't the end of the world, but on a server it might mean lost production data, so I guess Linux prefers the conservative approach.
              I have experimented with smaller swap files, and while that accelerates the OOM process a little, it still takes far too long to bother. Zram helps, and no swap also works regardless of the doomsayers, but you need more physical RAM. Of course, no swap means no hibernation (you need a little more swap than your physical RAM for that anyway, and it takes far too long for people with lots of RAM, unless you have an SSD, I guess).

              Of course Red Hat will find yet another excuse to have systemd handle it, throw everything at systemd and relegate Linux to a secondary role... Glad I use a distro without it.



              • #47
                Originally posted by oiaohm View Post
                Structure issues don't stop with hugepages. The problem here is that the trade-offs of the different settings need to be known. Desktops do have applications allocating insane amounts of virtual memory and thereby fragmenting the page tables. Some of that defragmentation needs swap as well.
                Fragmentation is only a real problem within pages, rather than across the entire virtual memory space, where the page table can make memory virtually contiguous without swap's help. Also, swap can do nothing for fragmentation of the page table itself; page tables are not swappable. Not to mention the desktop is not the place where users run machines non-stop for half a year, nor the kind of system where memory is dominated by huge databases susceptible to heavy fragmentation. On top of that we have large kernel data structures, like DMA buffers, that mostly cannot be swapped and need to be defragmented, or made less susceptible to fragmentation by design... Desktops and servers simply have hugely different profiles.

                Originally posted by oiaohm View Post
                The effects of enabling and disabling swap need to be understood to make the correct choice.
                The truth is swap has a negligible effect on desktop systems, apart from exacerbating another bug when it is off. The only good argument for swap on the desktop is handling anonymous pages, with really debatable benefits versus disadvantages.

                Originally posted by oiaohm View Post
                Something that there has not been very much research on is the over-commit value.
                overcommit_memory set to 2 is fun: watching Chrome and other things barf because they have attempted to allocate 10G of virtual memory on a 4G system.
                We do have problems with kernel structures in a lot of desktop workloads, because application developers have taken the point of view that they can allocate as much memory as they like and overcommit will give it to them. Yes, so they don't have to clean up their kernel structure usage.
                This has nothing much to do with swap on the desktop, and kernel structures in general have nothing to do with swap; kernel space is not pageable in Linux.

                Originally posted by oiaohm View Post
                The low-memory performance issue is two problems, not one. Yes, it is running out of memory; it is also that we have a lot of structure allocations for stuff that will never be used, and those get fragmented.

                RomuloP, yes, these problems affect servers with THP sooner than desktops, but they are still affecting desktops badly. Leaking memory allocations, device handles and file handles does need to be taken far more seriously, as these things do have performance effects.

                Memory issues are a horrible mess of many different problems, so there is no single magic bullet that is going to fix it all. Some in fact need applications to be altered to use memory and system resources more sparingly, by freeing the resources they don't in fact need.
                Sure, you are right, but that was my point: swap is also not a magic bullet. Even more so on desktops, it is not the solution for memory exhaustion. It can remedy the problem, with some relevant extra penalties in a GUI environment, but that is like picking up nails with a fork. So whether or not some people out there are disabling swap, it has no deep, technically critical relevance on desktops.

                Disabling it is far from being a real problem, and memory exhaustion failing because of the lack of it is not the effect of a wrong setup; it is just a bug showing even worse without a component that is not even duct tape for the problem.
                Last edited by RomuloP; 23 August 2019, 07:08 PM.



                • #48
                  On Windows I had to re-enable swap because some UE4 games claimed they were out of memory despite the machine having tons of memory and 16GB of VRAM.

                  Other than that, I always disable the paging file. 16GB of wasted space, IMO.

                  On Linux I always set the swappiness to 0, so it should only swap when it really needs to.
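                  A small sketch of that knob (the sysctl name is real; the persistence file name under /etc/sysctl.d/ is my own choice):

```shell
# vm.swappiness weights reclaim between dropping page cache and swapping
# anonymous pages; the kernel default is 60, and 0 means "avoid swapping
# for as long as possible", not "swap disabled". To set and persist (root):
#   sysctl -w vm.swappiness=0
#   echo 'vm.swappiness = 0' > /etc/sysctl.d/99-swappiness.conf
# Sanity-check a proposed value against the classic 0-100 range:
v=0
if [ "$v" -ge 0 ] && [ "$v" -le 100 ]; then echo "swappiness $v is valid"; fi
```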



                  • #49
                    Originally posted by Artemis3 View Post
                    In theory a good distro would have sane user limits in place; in practice most don't, or have huge ones in order to better accommodate servers.
                    In my experience this is a Linux problem not present in FreeBSD. With FreeBSD, no matter what is happening, you can always switch to a console, log in as root and kill the process. In Linux the same thing can easily take 30 minutes or more, unless you use the magic keys above to force-run the OOM killer, which is what people want to run in the first place when they know it's Chrome eating all the memory again.
                    https://www.kernel.org/doc/Documenta...mit-accounting
                    Heuristic overcommit handling. Obvious overcommits of address space are refused. Used for a typical system. It ensures a seriously wild allocation fails while allowing overcommit to reduce swap usage. root is allowed to allocate slightly more memory in this mode. This is the default.
                    FreeBSD overcommit is not the same as the Linux one. FreeBSD uses a set of hard rules: once you run past what can be managed by swap and RAM, FreeBSD decides it is time to kill something or stop giving out allocations. Linux with heuristic overcommit can go well past this line. Artemis3, oomd from Facebook would not exist if huge limits worked well for servers. A server stalling out for 30 minutes is not a suitable outcome either. The OOM killer on Linux wakes up when there are not enough resources left for it to do its work.
                    Originally posted by RomuloP View Post
                    not to mention the desktop is not the place where users run machines non-stop for half a year
                    Sorry, some desktop users do end up running unstopped for half a year on laptops: they do some work, hibernate, restore, do some more work, then hibernate again in a never-ending loop. The result is no clean reboot in the middle for quite some time.

                    So yes, fragmentation does affect some desktop users.

                    Originally posted by RomuloP View Post
                    The truth is swap has a negligible effect on desktop systems, apart from exacerbating another bug when it is off. The only good argument for swap on the desktop is handling anonymous pages, with really debatable benefits versus disadvantages.
                    The problem is that the ways people use desktops are not always black and white. Those hibernating require swap, and that hibernation also means they can be running for six months without a clean reboot, with all the security problems that brings.

                    In a lot of ways, with Secure Boot, we have to rethink hibernation anyhow.

                    When you are not running programs that push into absolute out-of-swap-and-RAM, swap allows you to run programs that really do require more RAM than you have, at a price. If you don't in fact get to starvation, swap works OK.
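                    Since hibernation keeps coming back to swap sizing, a sketch of adding swap without repartitioning (the commands are standard util-linux ones, shown as comments because they need root; the "swap at least as big as RAM" rule is the usual guidance for hibernation, not a hard kernel requirement):

```shell
# Add a swap file and enable it (root commands shown as comments):
#   fallocate -l 8G /swapfile
#   chmod 600 /swapfile
#   mkswap /swapfile
#   swapon /swapfile
# Sizing check for a hibernation-capable setup on an 8 GiB machine:
ram_gb=8
swap_gb=8
[ "$swap_gb" -ge "$ram_gb" ] && echo "swap large enough to hibernate"
```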



                    • #50
                      Originally posted by profoundWHALE View Post
                      On Windows I had to re-enable swap because some UE4 games claimed they were out of memory despite the machine having tons of memory and 16GB of VRAM.
                      Because you really were out of memory, by disabling swap under Windows.

                      The closest thing Windows has to overcommit is requesting a swap-space allocation in advance. With no swap there is nothing to request an allocation against. Yes, once all possible swap has been allocated, Windows is going to kill something. (Note I said allocated, not used.)

                      Originally posted by profoundWHALE View Post
                      Other than that, I always disable the paging file. 16GB of wasted space, IMO.
                      The fun part about Windows is that even if you disable the page file, it is not 100% off. If Windows 7-10 gets really desperate for resources, it will create a page file and then delete it after things are sorted out, even if the page file is set to off. So it is not 100 percent disabled when you tell Windows to disable it; it is more "don't use it unless really with your back against the wall".

                      Understanding the behaviour of Windows is not always straightforward, as it does not always do exactly what you think the settings would have it do. Yes, page file off is one of those settings: you set it off, yet Windows will still create a temporary one. Page file/swap completely off simply does not match current versions of Windows.

                      Swappiness 0 is close to the same setting as page file off on Windows. I have to give you credit for at least suggesting a setting close to how it is under Windows: the closest Linux equivalent of page file off on Windows is swappiness 0 with a swap partition/file. Of course, this does not change Linux overcommitting well and truly too much before taking corrective action. Heck, there are a lot of cases where it is purely just overcommitting too much.

