
Yes, Linux Does Bad In Low RAM / Memory Pressure Situations On The Desktop


  • #51
    I recall iOS had a signal it could send to apps requesting they minimize memory usage; it would be nice if `kill` supported that. But yes, the OOM killer could be tuned better.
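    Linux doesn't define such a "trim memory" signal, but an application and a system daemon could agree on a real-time signal by convention. A minimal sketch, assuming a made-up convention of SIGRTMIN+1 for "please shrink your footprint" (the signal choice and the malloc_trim cleanup are illustrative, not an existing protocol):

    ```c
    #include <malloc.h>   /* malloc_trim() is a glibc extension */
    #include <signal.h>
    #include <string.h>

    /* Hypothetical convention: SIGRTMIN+1 means "please shrink your footprint". */
    static volatile sig_atomic_t trim_requested = 0;

    static void on_trim_request(int sig) {
        (void)sig;
        trim_requested = 1;          /* async-signal-safe: just set a flag */
    }

    /* Install the handler; returns 0 on success, -1 on failure. */
    int install_trim_handler(void) {
        struct sigaction sa;
        memset(&sa, 0, sizeof sa);
        sa.sa_handler = on_trim_request;
        sigemptyset(&sa.sa_mask);
        return sigaction(SIGRTMIN + 1, &sa, NULL);
    }

    /* Call from the main loop: drop application caches, then ask glibc
       to hand free heap pages back to the kernel. */
    void trim_if_requested(void) {
        if (trim_requested) {
            trim_requested = 0;
            /* ...application-specific: evict caches, free object pools... */
            malloc_trim(0);
        }
    }
    ```

    The handler only sets a flag; the actual freeing happens in the main loop, since malloc_trim() is not async-signal-safe.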


    • #52
      Originally posted by skeevy420 View Post
      I consider it user error and not a deficiency of the OS. The user pushing something too hard isn't necessarily a computer problem, for that matter. No matter how many safeguards are in place, it doesn't help if a user pushes something beyond its limits. In this case they removed a safeguard and pushed it beyond its limit.

      How is the kernel supposed to know whether it needs to halt Plasma, Firefox, Mplayer, LibreOffice, KSP, Kate, Yakuake, or makepkg to keep the system usable? All it can do is juggle stuff with what little resources it has available.

      IMHO, this is really a problem that should be solved by a daemon that a user can configure to kill/halt/suspend-to-disk programs in a specific order, because the kernel can't read my mind to know what I consider the more important task. It would also need a blacklist of things never to kill, like the actual desktop environment.
      I don't understand why you believe it is impossible to (1) establish common rules ranking application classes by how important it is that they not be killed in an OOM situation, and (2) let the user add rules expressing personal preferences for OOM situations.

      For 90% of Linux users, the common rules alone would be sufficient to keep the system from reaching an unusable state.

      One approach would be to introduce a new syscall that each process must invoke at least once during its lifetime to declare its class and its importance for keeping the system usable; a process that never invokes it would trigger a warning. The window manager and the terminal emulator would report themselves as more important than compiler jobs and video players, and also more important than test runs during software development: no developer wants to kick off a test, go to a different room to make a cup of coffee while it finishes, and return to a completely unresponsive machine because the test sprang a large, unexpected memory leak.
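      Linux actually already exposes a per-process importance knob that the OOM killer consults: `/proc/<pid>/oom_score_adj`, ranging from -1000 (exempt from the OOM killer) to +1000 (preferred victim). A window manager could set itself strongly negative (lowering the value requires CAP_SYS_RESOURCE), while a batch compile job sets itself positive. A minimal sketch:

      ```c
      #include <stdio.h>

      /* Write an OOM-killer priority for the current process.
         adj ranges from -1000 (never kill) to +1000 (kill first).
         Returns 0 on success, -1 on failure (e.g. insufficient
         privilege to lower the value). */
      int set_self_oom_score_adj(int adj) {
          FILE *f = fopen("/proc/self/oom_score_adj", "w");
          if (!f) return -1;
          int rc = (fprintf(f, "%d\n", adj) < 0) ? -1 : 0;
          if (fclose(f) != 0) rc = -1;
          return rc;
      }
      ```

      Raising the value (making yourself a more attractive victim) is unprivileged, so a build tool could do this on its own behalf without any daemon involved.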

      Using ulimit to cap a process's memory consumption is a less flexible approach: in the majority of use cases it doesn't take the other applications running on the machine into account, and more importantly, ulimit can make applications terminate or misbehave unexpectedly when the user mispredicts the application's peak memory consumption.
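      For reference, `ulimit -v` is just a shell front end for setrlimit(RLIMIT_AS), which caps virtual address space rather than actual RAM use — which is exactly why a mispredicted cap makes malloc() start failing at awkward moments. A sketch of setting the same limit programmatically:

      ```c
      #include <sys/resource.h>

      /* Cap this process's virtual address space; this is the same limit
         that `ulimit -v` sets (note: bytes here, KiB in the shell).
         Once the cap is reached, further malloc() calls return NULL
         instead of overcommitting. Returns 0 on success, -1 on failure. */
      int cap_address_space(rlim_t bytes) {
          struct rlimit rl;
          if (getrlimit(RLIMIT_AS, &rl) != 0) return -1;
          rl.rlim_cur = bytes;          /* soft limit only; hard limit kept */
          return setrlimit(RLIMIT_AS, &rl);
      }
      ```

      The limit is inherited across fork/exec, so a wrapper process can impose it on an untrusted child without patching the child.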


      • #53
        I guess this is why Linux was never an alternative back when XP died. Linux needed way more resources, even with LXDE.


        • #54
          Originally posted by Saverios View Post
          Okay, I'll bite. What should happen when you have no more memory and no swap file and an application asks for more memory? Isn't it up to the application to handle memory unavailability gracefully?
          It is actually impossible for applications on Linux to handle out-of-memory situations gracefully.
          malloc() will basically always succeed, even in low-memory conditions. It can fail, but that is extremely rare.
          When your app actually tries to use the memory it just allocated, the kernel goes into a frenzy because there is not enough memory.
          It then has no option but to swap, or to let the OOM killer throw passengers off the plane...

          This overcommit can be switched off, but that is generally a bad idea.
          Last time I checked, overcommit accounting worked on the size of a process's address space, not its RSS (probably an artifact from the days when sbrk was used to allocate memory).
          So processes that use address space layout randomization, or that map memory at arbitrary addresses, will most likely break when overcommit is off, or will at least mislead the kernel about their actual memory usage.
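          The policy being described is the vm.overcommit_memory sysctl: 0 is the default heuristic, 1 always overcommits, and 2 enforces strict accounting against swap plus a fraction of RAM. A small sketch, assuming a Linux /proc, that reads the current mode:

          ```c
          #include <stdio.h>

          /* Read the current overcommit policy from /proc:
             0 = heuristic (default), 1 = always overcommit,
             2 = strict accounting. Returns -1 if /proc is unavailable. */
          int overcommit_mode(void) {
              FILE *f = fopen("/proc/sys/vm/overcommit_memory", "r");
              if (!f) return -1;
              int mode = -1;
              if (fscanf(f, "%d", &mode) != 1) mode = -1;
              fclose(f);
              return mode;
          }
          ```

          Under mode 2, the point made above applies: the commit limit is charged per address space reservation, so a process that maps a lot but touches little gets penalized.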

          I wish people would stop writing slow and bloated software, but today programmer convenience trumps everything else...
          This is what you get in a world with automatic memory management.
          Last edited by Raka555; 06 August 2019, 05:35 PM.


          • #55
            Though, what I'd REALLY like to see is something that sends SIGSTOP to a process whenever:
            A. it is consuming the majority of the total used RAM, and
            B. combined RAM and swap usage has exceeded 95%.
            That way it's a win-win: the system remains stable and usable, and you still get a chance to recover and/or SIGCONT the process. A single file in /etc could control this behavior (for example, to raise the limit to 99%, use SIGKILL instead of SIGSTOP, or ignore swap percentages).
            I think [email protected] and elatllat are on to something. There should be a signal that tells an application to release some of its memory when the system is running low. The application knows what it needs to keep in memory and what it can discard.
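            A watchdog daemon along these lines could poll /proc/meminfo and freeze, rather than kill, the biggest consumer. A minimal sketch — victim selection and the polling loop are omitted, and the names here are illustrative:

            ```c
            #include <signal.h>
            #include <stdio.h>
            #include <string.h>
            #include <sys/types.h>
            #include <unistd.h>

            /* Parse MemAvailable (in kB) out of /proc/meminfo-style text.
               Returns -1 if the field is missing. */
            long mem_available_kb(const char *meminfo_text) {
                const char *p = strstr(meminfo_text, "MemAvailable:");
                long kb;
                if (p && sscanf(p, "MemAvailable: %ld", &kb) == 1) return kb;
                return -1;
            }

            /* Watchdog step: if available memory falls below the threshold,
               freeze the offending process instead of killing it; the user
               can later resume it with kill(pid, SIGCONT). */
            int freeze_if_low(pid_t pid, long avail_kb, long threshold_kb) {
                if (avail_kb >= 0 && avail_kb < threshold_kb)
                    return kill(pid, SIGSTOP);   /* stoppable, recoverable */
                return 0;
            }
            ```

            SIGSTOP can't be caught or ignored, which is what makes this safer than asking the process to cooperate; the trade-off is that a frozen process holds its memory until someone resumes or kills it.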


            • #56
              Originally posted by Saverios View Post
              Okay, I'll bite. What should happen when you have no more memory and no swap file and an application asks for more memory? Isn't it up to the application to handle memory unavailability gracefully?
              But the kernel needs to report that information back when you try to reserve more memory. In that situation your application should handle it, but it first needs to know that there is a problem.

              Also, in that situation, all other applications should keep working, and of course the mouse and so on.


              • #57
                Firefox extensions can be the devil indeed. I ran into one like that myself; it consumed resources per tab, and I had a lot of tabs.

                But worse are web apps such as Discord and (especially) Twitter, which soak up huge amounts of session store in constantly changing subframes. If you have a non-trivial number of tabs, I'd wager a large share of Firefox's disk writes are for the session store.


                • #58
                  Originally posted by tildearrow View Post
                  Sadly, this happened to me even with swap on.
                  The Linux kernel doesn't recover very well once it starts swapping... I would even go further and say it never recovers completely without external help. Sometimes you can fix it, sometimes you can't.
                  Too much bloatware, too much memory consumption.

                  When you look at Windows XP and remember running it in 96 MB of RAM, you start to question a lot of things...


                  • #59
                    Originally posted by edoantonioco View Post
                    I guess this is why Linux was never an alternative back when XP died. Linux needed way more resources, even with LXDE.
                    [[email protected] ~]>  free -m
                                  total        used        free      shared  buff/cache   available
                    Mem:            935         188         386          28         361         589
                    Swap:          4095           0        4095
                    Linux 5.2 (32-bit), agetty, haveged, dbus, dhcpcd, wpa_supplicant, udevd, Xorg, urxvtd, notion, tmux, mksh, bash, vim, chromium 76.0.3809.87 with open phoronix forum.

                    Windows XP with the latest Google Chrome it supports (version 48) running uses about 470 MB of RAM (more than 2x the used memory above).
                    Terrible crap; eyes hurt just from looking at the fonts, and responsiveness is better left unsaid.


                    • #60
                      Originally posted by dimko View Post
                      He switches off swap, the FIRST LINE OF DEFENCE against low-memory situations, and THEN complains shit doesn't work.
                      A desktop that boots into less than 400 MB or so, with 4 GB of RAM, should run every traditional tool without any problem..