Yes, Linux Does Bad In Low RAM / Memory Pressure Situations On The Desktop


  • #71
    I put 128GB of memory in my desktop and disable swap, because some things are just worth overdoing in the name of preventing issues down the road.

    On a laptop with 8GB of memory, I replace the HDD (if present) with an SSD and create a 4GB swap partition. That usually holds up acceptably.



    • #72
      Originally posted by trek View Post
      To keep the system responsive under memory pressure, I usually run memory-intensive applications (the browser) in a cgroup limited to 90% of the RAM size. When it approaches that limit the browser becomes unresponsive, but the rest of the system does not. I do need to enable the swapaccount=1 boot option, and the BFQ I/O scheduler makes things even better when read-only pages are continuously evicted and re-read from disk.

      There is definitely a bug in the browsers, because they place no limit on the unrealistic resource demands of a website, even though they are effectively complete virtual machines. What would people have said if the JVM didn't have a (configurable) memory limit for the programs it runs?
      Linux distros should make this easier.
      I'm a Mathematica and SciPy user, and I sometimes blow through my system memory (32GiB) with oversized datasets or quick-but-inefficient algorithms.
      I've played with the cgroup trick a few times, but I'm not even sure whether I'm doing it right.

      There should be a one-click (or one-script) solution for putting a limit on an application's memory usage, instead of waiting on the system indefinitely (sometimes the kernel kills the offending process quickly, sometimes it doesn't) or doing a hard reboot (oh gee, modern systems don't even have a reset button any more).
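
      Something close to that one-script solution already exists in systemd: systemd-run can wrap a command in a transient cgroup with a memory cap. A minimal sketch, assuming cgroup v2 and a systemd user session (the 24G cap is just an example for a 32GiB machine, and firefox is a stand-in for any hungry process):

          # Launch the browser in a transient scope capped at 24G of RAM;
          # when the cap is hit, reclaim and the OOM killer act inside the
          # scope instead of taking down the rest of the desktop.
          systemd-run --user --scope -p MemoryMax=24G -p MemorySwapMax=2G firefox

      MemoryHigh= is the softer variant: instead of invoking the OOM killer, it throttles the scope and reclaims its pages aggressively.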



      • #73
        Originally posted by atomsymbol

        I don't understand why you believe it is impossible to (1) establish common rules for how important each class of application is, i.e. how strongly it should be protected from being killed in an OOM situation, and (2) let the user specify additional rules expressing personal preferences for OOM situations.

        For 90% of Linux users, the common rules would be sufficient to keep the system from ever reaching an unusable state.

        One approach would be to introduce a new syscall that each process has to invoke at least once during its lifetime to state its class and its importance for keeping the system usable. A process that never invokes the syscall would be flagged with a warning message. The window manager and the terminal emulator would report themselves as more important than compiler jobs and video players, and more important than test runs during software development: no developer wants to start a test run, go off to make a cup of coffee while it finishes, and come back to a completely unresponsive machine because the test sprang a large unexpected memory leak.

        Using ulimit to limit the memory consumption of a process is a less flexible approach: in the majority of use cases it takes no account of the other applications running on the machine, and, more importantly, it can cause applications to unexpectedly terminate or misbehave when the user mispredicts their peak memory consumption.
        That was literally the point of the IMHO part of my post that you quoted. I don't understand how you can have such an obviously high level of education and yet no reading comprehension.

        IMHO, this is really a problem that should be solved by a daemon the user can configure to kill/halt/suspend-to-disk programs in a specific order, because the kernel can't read my mind to know which task I consider more important. It would also need a blacklist of things never to kill, like the desktop environment itself.
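
        For what it's worth, the kernel already exposes a crude per-process version of that importance knob: /proc/<pid>/oom_score_adj (range -1000 to 1000) biases the OOM killer's choice of victim, and util-linux ships choom(1) as a front end. A sketch of how it could be used (the process names and values here are illustrative, not recommendations):

            # Make the compositor a very unattractive OOM victim...
            choom -p "$(pidof kwin_x11)" -n -900
            # ...and volunteer the risky test run as the first one to die.
            choom -n 500 ./run-memory-hungry-tests.sh

        Daemons along the lines described here also exist: earlyoom, and more recently systemd-oomd, kill configurable victims before the machine starts thrashing.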



        • #74
          Originally posted by Sonadow View Post
          On a laptop with 8GB of memory, I replace the HDD (if present) with an SSD and create a 4GB swap partition. That usually holds up acceptably.
          Wear...



          • #75
            Heh, by the year 2040 desktop machines will probably have 4TB of RAM, so who cares.



            • #76
              I've been using Linux since 1997 and BSD (mostly Free and Open) since 1999. Linux has *always* struggled in low memory situations; I've been reading articles like this since the 90s. It's a long-term weakness. IMO the BSDs do considerably better here. Sure, in benchmarks FreeBSD and Linux are mostly neck and neck, but crank up the memory pressure and BSD still shines. I generally have a decent amount of RAM in my systems and never really run into these problems, so I had half assumed this had long since been rectified. Yet here we are, in the year of our Lord 2019, still discussing "Linux sucks under pressure." Sigh.

              Consider this a plug for BSD ;-P



              • #77
                Originally posted by slavko321 View Post
                What should happen is:
                - swap, if enabled and available
                - call OOM immediately
                That would be your greatest headache, and in a company it could be the thing that triggers an "automatically opened front door" for you. The OOM killer's algorithm tries to kill the biggest memory-consuming processes.

                And your problem may not be caused by those applications at all, or it may be, but by newer processes, so killing them can create severe problems.
                For instance, if the OOM killer crashes big middleware, that leaves behind lots of shared memory segments that are never freed; at the same time it will probably also be killing the business databases that the middleware talks to, because the OOM killer does not discriminate.

                Depending on the business, you could very quickly find yourself shown the way out of the company.
                If the software uses the SysV shared memory model, you will need to track down all the segments by hand and delete them by hand (see the sketch below).
                Later, when bringing the database back up, there will almost certainly be problems with shared memory segments too, and possibly with /var/tmp (depending on the database or middleware, since the software was not shut down cleanly).

                You then repeat those steps for the database, and there can be plenty of other fallout, like the million requests you lost that never reached the database, because you thought the "just kill it" way was the fast path. The business will ask you for them...

                And even if you get them back, you can lose hours of downtime overall.
                On a server, the OOM killer is the last resort: it should not be triggered by you, and you should avoid it being triggered at all.

                On a desktop, depending on what you are doing, it can be the quick solution.
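
                The by-hand cleanup of orphaned SysV segments looks roughly like this (a sketch; the segment id is an example, check the owner and nattch columns of the ipcs output before removing anything):

                    # List SysV shared memory segments with owner and attach count.
                    ipcs -m
                    # Remove a dead segment (nattch == 0, owned by the killed
                    # middleware or database user) by its shmid.
                    ipcrm -m 262145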

                Originally posted by slavko321 View Post
                The disk thrashing behaviour makes NO sense and I really wonder what is being read/written - there is NO swap enabled.
                With that I agree, although even without swap the kernel can still evict clean file-backed pages, program code included, and then has to re-read them from disk on demand, which is exactly the thrashing being observed. There is also the option of recreating, in an emergency, what Windows does: create a swap file to deal with the problem (see the sketch below).
                Also, if your application does not check malloc() against NULL, that is bad practice: the program can then hit a segmentation fault, or keep being re-read from disk as it is relaunched, which is why the disk LED keeps blinking.
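
                A minimal emergency swap file sketch, run as root (the 4G size is an example; btrfs and encrypted setups need extra steps):

                    # Create a 4GiB swap file and enable it on the spot.
                    fallocate -l 4G /swapfile
                    chmod 600 /swapfile
                    mkswap /swapfile
                    swapon /swapfile
                    # Remove it again once the pressure is over.
                    swapoff /swapfile && rm /swapfile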

                For the memory problem, the last forked process, or maybe the dirty cache pages, should be dropped instead, because a lot of the time, like 99% of the time, the system runs out of memory while still holding tons of GBs in cache that could be dropped.
                Amazing... "welcome to linux.."
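
                The kernel does reclaim clean page cache on its own under pressure, but the manual knob for the cache-dropping idea exists (it only discards clean caches; dirty pages have to be written back first):

                    # As root: flush dirty pages to disk, then drop the page
                    # cache, dentries and inodes.
                    sync
                    echo 3 > /proc/sys/vm/drop_caches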
                Last edited by tuxd3v; 06 August 2019, 11:32 PM. Reason: complement..



                • #78
                  He isn't wrong. This falls under the kernel's purview, I think.



                  • #79
                    Originally posted by sarmad View Post
                    I don't understand how people think this is normal. What I would expect from a system running out of memory is for that system to simply show you a dialog box telling you "Out of Memory". Then the user can close apps and free memory and try again. Why is that hard to do? What am I missing?
                    Well,
                    you are missing the biggest companies that contribute to the Linux kernel, Intel and so on.
                    They need to sell hardware.

                    So memory management in Linux has always (or almost always) been a plague; there has been no substantial effort to just fix it.
                    Some features that existed in the 2.4.x series also disappeared, I think in 2.6.14 (if memory doesn't fail me), in relation to swappiness control.

                    There is also the notion upstream that community reports of memory problems are a myth.
                    This is a long war, fought at least since the 2.4 series.



                    • #80
                      Originally posted by latalante View Post
                      Never, but never, disable swap. Without swap, as you can clearly see, the system is not faster. It is much, much slower.
                      Highly accurate. It is better to try a lower swappiness value, like 20 or 10 (see the sketch below).


                      I thought the RAM thing was just my perception. Funny, I installed Linux (Mandrake) for the first time years ago on a 512MB RAM desktop computer, because Windows XP was running too slow (half of the RAM went to the antivirus). Times change.

                      I'm currently having this issue with a new laptop that has only 4GB of RAM. I intend to buy 4GB more (if the US dollar calms down), but for the moment the best I could come up with was to set swappiness to 1 instead of 60. I avoid at all costs hibernating or suspending with more than 2GB in use, unless I want to wait for a 3-minute boot and awful performance in general (after the S3/S4 state, what I do is disable and re-enable swap, so the important stuff doesn't stay accumulated there, which makes everything slow and horrible). I can reproduce the same behavior on my oldest laptop, which has 3GB.
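
                      The two knobs mentioned above, as shell commands (the values are the ones from this post, not recommendations; the sysctl setting is lost on reboot unless also written to /etc/sysctl.d/):

                          # Check and lower how eagerly the kernel swaps (default 60).
                          cat /proc/sys/vm/swappiness
                          sudo sysctl vm.swappiness=1
                          # After resume, cycle swap to pull swapped pages back into RAM
                          # (needs enough free RAM to hold everything swapped out).
                          sudo swapoff -a && sudo swapon -a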

