
Yes, Linux Does Bad In Low RAM / Memory Pressure Situations On The Desktop

  • #71
    Speaking of default behavior, KDE and GNOME should disable the "hibernate" option when free disk space is smaller than total RAM, e.g. 10 GiB of free disk space on a 32 GiB RAM system.



    • #72
      To have a responsive system under memory pressure I usually run memory-intensive applications (the browser) in a cgroup limited to 90% of the RAM size. When it approaches that limit, the browser becomes unresponsive, but the rest of the system does not. However, I need to enable the swapaccount=1 boot option, and the BFQ I/O scheduler makes things even better in case read-only pages are continuously evicted and re-read from disk.

      There is definitely a bug in browsers: they place no limit on the unrealistic resource demands of some websites, even though those sites are effectively complete virtual machines. What would people say if the JVM didn't have a (configurable) memory limit for the programs it runs?
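      A rough sketch of the cgroup setup described above, using the cgroup v2 interface (the paths, the firefox command, and the ~90%-of-16-GiB figure are illustrative; creating cgroups needs root, and on older cgroup v1 kernels swap accounting additionally requires the swapaccount=1 boot parameter):

      ```shell
      # Create a dedicated cgroup for the browser (cgroup v2, needs root).
      mkdir /sys/fs/cgroup/browser

      # Cap memory at roughly 90% of a hypothetical 16 GiB machine (~14.4 GiB),
      # and cap swap so the browser cannot thrash the whole system either.
      echo $((14400 * 1024 * 1024)) > /sys/fs/cgroup/browser/memory.max
      echo $((2 * 1024 * 1024 * 1024)) > /sys/fs/cgroup/browser/memory.swap.max

      # Launch the browser inside the cgroup: move this shell in, then exec.
      echo $$ > /sys/fs/cgroup/browser/cgroup.procs
      exec firefox
      ```

      With this in place, hitting the limit stalls only processes inside the browser cgroup, which matches the behavior described above.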



      • #73
        I put 128 GB of memory in my desktop and disable swap, because some things are just worth overdoing in the name of preventing issues down the road.

        On a laptop with 8GB memory, I replace the HDD (if available) with an SSD and create a 4GB swap partition. That usually holds up acceptably.



        • #74
          Originally posted by trek View Post
          To have a responsive system under memory pressure I usually run memory-intensive applications (the browser) in a cgroup limited to 90% of the RAM size. When it approaches that limit, the browser becomes unresponsive, but the rest of the system does not. However, I need to enable the swapaccount=1 boot option, and the BFQ I/O scheduler makes things even better in case read-only pages are continuously evicted and re-read from disk.

          There is definitely a bug in browsers: they place no limit on the unrealistic resource demands of some websites, even though those sites are effectively complete virtual machines. What would people say if the JVM didn't have a (configurable) memory limit for the programs it runs?
          Linux distros should make this easier.
          I'm a Mathematica and SciPy user, and I sometimes blow through my system memory (32 GiB) with oversized datasets or quick-but-inefficient algorithms.
          I've played with the cgroup trick a few times, but I'm not even sure whether I'm doing it right.

          There should be a one-click (or one-script) way to put a limit on an application's memory usage, instead of waiting on the system indefinitely (sometimes the kernel OOM-kills the offending process quickly, but sometimes it doesn't) or doing a hard reboot (and gee, modern systems don't even have a reset button any more).
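          For what it's worth, on systemd machines something close to a one-liner already exists: systemd-run can wrap a command in a transient cgroup scope with a memory cap, so a runaway computation gets OOM-killed inside its own scope instead of taking the desktop down. The 24G cap and the my_heavy_computation.py script below are illustrative, not from the thread:

          ```shell
          # Run a memory-hungry job under a transient user scope with a hard cap.
          # If it exceeds the cap, only processes inside this scope are OOM-killed.
          systemd-run --user --scope -p MemoryMax=24G -p MemorySwapMax=2G \
              python3 my_heavy_computation.py
          ```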



          • #75
            Originally posted by atomsymbol View Post

            I don't understand why you believe that it is impossible to (1) establish common rules for how important application classes are in terms of their importance not to be killed in case of OOM and (2) enable the user to specify additional rules specifying personal preferences in OOM situations.

            For 90% of Linux users, the common rules would be sufficient to protect them from reaching a system state that is unusable.

            One approach would be to introduce a new syscall that each process has to make at least once during its lifetime to state its class and its importance for keeping the system usable. A process that never invokes the syscall would be flagged with a warning message. The window manager and the terminal emulator would report themselves as more important than compiler jobs and video players, and more important than software testing tasks during development: no developer wants to start a test run, go to another room to make a cup of coffee while it finishes, and return to a completely unresponsive machine because the test sprang a large, unexpected memory leak.

            Using ulimit to limit the memory consumption of a process is a less flexible approach: in the majority of use cases it does not take into account the other applications running on the machine, and, more importantly, ulimit can cause applications to unexpectedly terminate or misbehave when the user mispredicts the application's peak memory consumption.
            That was literally the point of the IMHO part of my post you quoted. I don't understand how you can have such an obviously high level of education with no reading comprehension.

            IMHO, this is really a problem that should be solved by a daemon that the user can configure to kill/halt/suspend-to-disk programs in a specific order, because the kernel can't read my mind to know which task I consider more important. It would also need a blacklist of things to never kill, like the desktop environment itself.
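            Short of a full daemon, Linux already exposes per-process priority hints for the OOM killer via /proc/&lt;pid&gt;/oom_score_adj (-1000 effectively protects a process, +1000 makes it the preferred victim). A minimal sketch of tagging a process as expendable, assuming a Linux system:

            ```python
            def set_oom_score_adj(adj, pid="self"):
                """Set the OOM-killer score adjustment (-1000..1000) for a process.
                Higher values make the kernel prefer this process as an OOM victim;
                an unprivileged process may raise its own value but not lower it."""
                with open(f"/proc/{pid}/oom_score_adj", "w") as f:
                    f.write(str(adj))

            def get_oom_score_adj(pid="self"):
                with open(f"/proc/{pid}/oom_score_adj") as f:
                    return int(f.read())

            # Mark this process (e.g. a big test run) as a preferred OOM victim,
            # so the window manager and terminal survive memory pressure.
            set_oom_score_adj(500)
            ```

            A daemon like the one described above could apply such rules from a config file; earlyoom and systemd-oomd are existing userspace takes on the same idea.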



            • #76
              Originally posted by Sonadow View Post
              On a laptop with 8GB memory, I replace the HDD (if available) with an SSD and create a 4GB swap partition. That usually holds up acceptably.
              Wear...



              • #77
                Heh, by the year 2040 desktop machines will probably have 4 TB of RAM, so who cares.



                • #78
                  I've been using Linux since 1997 and BSD (mostly Free and Open) since 1999. Linux has *always* struggled in low memory situations; I've been reading articles like this since the 90s. It's a long-term weakness. IMO the BSDs do considerably better here. Sure, in benchmarks FreeBSD and Linux are mostly neck and neck, but crank up the pressure and BSD still shines. I generally have a decent amount of RAM in my systems and never really run into these problems, so I half assumed this had long since been rectified. Yet here we are, in the year of our Lord 2019, still discussing "Linux sucks under pressure." Sigh.

                  Consider this a plug for BSD ;-P



                  • #79
                    Originally posted by slavko321 View Post
                    What should happen is:
                    - swap, if enabled and available
                    - call OOM immediately
                    That would be your greatest headache, and in a company it could be the thing that gets the front door opened for you automatically. The OOM killer's algorithm tries to kill the processes consuming the most memory, and your problem may not be related to those applications at all, or it may be, but only for newer processes. So by killing them you can cause severe problems.

                    For instance, if the OOM killer crashes big middleware, it will leave lots of shared memory segments behind without freeing them; at the same time, it will probably also kill the business databases that the middleware depends on, because the OOM killer behaves like a special-forces death squad.

                    Depending on the business, you could very quickly find yourself shown out of the company. If you are using the SysV shared memory model, you will need to track down all the orphaned segments by hand and delete them by hand. Later, when bringing the database back up, there will almost certainly be shared memory problems again, and possibly problems with /var/tmp as well (depending on the database or middleware, since the software was not shut down correctly).

                    You will repeat those steps for the database too, and there can be plenty of other fallout, like the million requests you lost that never made it into the database because you thought the "kill it" way was the fast path. The business will ask you for them...

                    And even if you get them back, you could lose hours to downtime in total. On a server, the OOM killer is the last resort: you should not trigger it yourself, and you should avoid it being triggered at all.

                    On a desktop, depending on what you are doing, it could be the quick solution.

                    Originally posted by slavko321 View Post
                    The disk trashing behaviour makes NO sense and I really wonder what is being read/written - there is NO swap enabled.
                    With that I agree, but there should be an emergency option to recreate what happens on Windows: creating a swap file on the fly to deal with the problem. Also, if your application does not check malloc() against NULL, that is bad practice; the program can then hit a segmentation fault, or keep hammering the disk trying to launch that process again, which is why the disk LED keeps blinking.

                    As for the memory problem: the most recently forked process, or perhaps the dirty cache pages, should be dropped instead, because a lot of the time, like 99% of it, the system runs out of memory while still holding tons of GBs in cache that could be dropped in these cases. Amazing... "welcome to linux.."
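                    The by-hand cleanup of orphaned SysV shared memory segments mentioned above usually comes down to ipcs/ipcrm. The awk filter below is one common illustrative pattern, assuming the default Linux ipcs column layout; review the listing before deleting anything:

                    ```shell
                    # List SysV shared memory segments, including ones left behind
                    # by killed processes (nattch shows how many processes are attached).
                    ipcs -m

                    # Remove every segment with zero attached processes.
                    # Column 2 is shmid, column 6 is nattch; NR > 3 skips the header.
                    ipcs -m | awk 'NR > 3 && $6 == 0 { print $2 }' | while read shmid; do
                        ipcrm -m "$shmid"
                    done
                    ```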
                    Last edited by tuxd3v; 08-06-2019, 11:32 PM. Reason: complement..



                    • #80
                      He isn't wrong. This falls under the kernel's purview, I think.

