
Facebook Developing "OOMD" For Out-of-Memory User-Space Linux Daemon

  • #21
    A memory pressure API has long been missing from Linux. The browsers all have APIs that can hook into such a signal and free memory used for caches and JIT code. I believe most other platforms, including Android, have a memory pressure API, so this is one place where Linux is behind.
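For what it's worth, recent kernels (4.20+, with CONFIG_PSI enabled) do expose a pressure signal via the PSI (Pressure Stall Information) files under /proc/pressure/, which is the interface oomd builds on. A rough sketch of reading it, with a parser tested against the documented format (the sample values are made up):

```python
import os

def parse_psi(text):
    """Parse /proc/pressure/memory content into {"some"/"full": {field: float}}."""
    result = {}
    for line in text.strip().splitlines():
        kind, _, rest = line.partition(" ")           # "some" or "full"
        fields = dict(kv.split("=") for kv in rest.split())
        result[kind] = {k: float(v) for k, v in fields.items()}
    return result

# Example content in the format documented for /proc/pressure/memory
# (the numbers here are illustrative, not from a real system).
SAMPLE = """some avg10=1.53 avg60=0.87 avg300=0.12 total=58372
full avg10=0.40 avg60=0.21 avg300=0.03 total=21098"""

pressure = parse_psi(SAMPLE)
print(pressure["some"]["avg10"])   # 1.53

# On a real system with PSI enabled, the same parser works on the live file:
if os.path.exists("/proc/pressure/memory"):
    with open("/proc/pressure/memory") as f:
        live = parse_psi(f.read())
```

A daemon polling the "some" avg10 value and reacting above a threshold is essentially the core loop of what oomd does.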

    Comment


    • #22
      Originally posted by Mathias View Post
      OOM is the worst thing on Linux. With my poor 8GB Ram it is quite easy to hit that limit.

      IMO Swapping is the worst thing to do in an OOM scenario. It never happened to me that swapping freed enough memory in a sane amount of time for the system to become responsive again.
      Actually, if the system halted, showed a dialog with the top 10 RAM-consuming processes, and let me choose what to kill, that'd be ideal for my workstation scenario. Right now what I do in an OOM scenario is use the Magic SysRq OOM killer (Alt+PrintScreen+F) to kill a process; 70% of the time that's the correct one, 30% of the time it's Firefox, which at least gives me a few seconds to kill the rogue process.
      Yeah, that is another big problem on Linux. The memory is never preemptively unswapped, even if the machine has lots of free memory and lots of free CPU time; it just sits there continuing to suck. And unswapping, even when you force it by disabling swap for a while, is stupidly slow, like ~10x slower than loading data from disk to memory should be. I always wanted to debug those two issues, but never found a good place to start, and I fear it might be one of the "policy" issues in the kernel project.
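The "dialog with the top 10 RAM-consuming processes" idea above is easy to prototype from /proc, since per-process memory is exposed in the VmRSS and VmSwap fields of /proc/&lt;pid&gt;/status (documented in proc(5)). A rough sketch, not oomd's actual mechanism:

```python
import os, re

def status_kb(text, field):
    """Extract a 'VmRSS:  1234 kB'-style field (in kB) from /proc/<pid>/status text."""
    m = re.search(rf"^{field}:\s+(\d+)\s+kB", text, re.MULTILINE)
    return int(m.group(1)) if m else 0   # kernel threads have no Vm* fields

def top_consumers(n=10):
    """Return the n largest (RSS + swap) processes as (kB, name, pid) tuples."""
    procs = []
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            with open(f"/proc/{pid}/status") as f:
                text = f.read()
        except OSError:
            continue   # process exited or is inaccessible
        name = text.splitlines()[0].split(":", 1)[1].strip()
        procs.append((status_kb(text, "VmRSS") + status_kb(text, "VmSwap"), name, pid))
    return sorted(procs, reverse=True)[:n]

# Demonstrate the parser on a fabricated status snippet:
SAMPLE_STATUS = "Name:\tfirefox\nVmRSS:\t  204800 kB\nVmSwap:\t    1024 kB\n"
print(status_kb(SAMPLE_STATUS, "VmRSS"))   # 204800

if os.path.isdir("/proc"):
    for kb, name, pid in top_consumers(3):
        print(f"{kb:>10} kB  {name} (pid {pid})")
```

A dialog on top of this would just need a UI and `os.kill()`; the hard part the commenter identifies, keeping the system responsive enough to show it during an OOM storm, is exactly what it doesn't solve.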

      Comment


      • #23
        Never thought I'd say this, but I agree with Facebook. Just the other day I hit this exact issue: a game I was playing used too much memory, and instead of just the game being closed, the whole system froze up and more or less crashed.

        Never mind inadequate, the thing isn't even functional.
        Last edited by rabcor; 22 October 2018, 05:45 PM.

        Comment


        • #24
          Originally posted by bug77 View Post
          My hopes are with Amazon to develop an OOMD that will order (and install) RAM for you when it detects a low-memory scenario
          That is more or less how Turing machines are supposed to have (potentially) infinite memory. If the machine reaches one end of the tape, then an operator will come with a new reel and glue the end to the start of the new tape.

          Comment


          • #25
            The real truth is most distros have their memory configuration tuned for low latency, not for memory pressure. Linux works amazingly well in memory-constrained scenarios; it's just that no Linux distro is really set up for that scenario out of the box. If they were, you would end up with a desktop environment that behaves like Windows: constantly swapping and thrashing storage, with huge latencies resulting in many seconds of lag in everything. Anyway, if you are constantly dealing with memory pressure, Linux has all the tools to deal with it; a web search will turn up the relevant knobs.
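The "tools" the post alludes to are mostly plain sysctl files under /proc/sys/vm/. A minimal sketch of inspecting and (as root) rewriting two of the common ones, vm.swappiness and vm.vfs_cache_pressure; the values shown are examples of a "prefer dropping caches over swapping" configuration, not recommendations:

```python
# Each sysctl is just a file; reading needs no privileges, writing needs root.
KNOBS = {
    "/proc/sys/vm/swappiness": "10",          # example: swap less aggressively
    "/proc/sys/vm/vfs_cache_pressure": "50",  # example: keep dentry/inode caches longer
}

def render(knobs):
    """Format the intended settings, sysctl.conf style."""
    return [f"{path} = {value}" for path, value in knobs.items()]

def apply(knobs):
    """Actually write the values (requires root)."""
    for path, value in knobs.items():
        with open(path, "w") as f:
            f.write(value)

# Dry run: print what would be written instead of writing it.
for line in render(KNOBS):
    print(line)
```

The same settings can be made persistent by dropping `vm.swappiness = 10` style lines into a file under /etc/sysctl.d/.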

            Comment


            • #26
              Originally posted by Solid State Brain View Post

              For what it's worth, I found that increasing vm.min_free_kbytes helps noticeably with system responsiveness during out-of-memory conditions, at least under desktop usage scenarios. With the default setting (64 MB in most distributions) the system hangs for minutes, as you noticed too, but after increasing it to, say, 384-512 MB (not a problem with 32 GB of RAM), swapping no longer appears to lock up the computer. The default may simply be too low for modern desktop usage.

              I am not claiming this is a definitive or particularly elegant solution, but it works for me. I researched it some time ago when I needed the PC to remain responsive during complex 3D rendering tasks; Linux would hang, whereas under similar conditions Windows remained usable.

              Some more information: the torvalds/linux kernel source tree on GitHub.
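The quoted suggestion translates directly into a sysctl value, since vm.min_free_kbytes takes kilobytes. A trivial sketch of the conversion, with the corresponding commands in comments:

```python
def min_free_kbytes(megabytes):
    """Convert a MB target for vm.min_free_kbytes into the kB value the sysctl takes."""
    return megabytes * 1024

print(min_free_kbytes(512))   # 524288

# One-off (root):    sysctl vm.min_free_kbytes=524288
# Persistent:        echo "vm.min_free_kbytes = 524288" > /etc/sysctl.d/99-minfree.conf
```

Note that min_free_kbytes sets the kernel's emergency reserve for atomic allocations and reclaim watermarks; raising it trades a little usable RAM for reclaim headroom, which is presumably why it helps in the scenario described.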

              Nice approach. Mine has been to run heavy/nasty applications in their own cgroup and restrict their maximum amount of memory. This helps when the application either genuinely needs too much memory or is just leaking left and right.

              Your knob complements that, improving overall system stability by one more step, so thank you for mentioning it.
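The cgroup approach described above can be sketched against the cgroup v2 interface, where memory.max is the hard limit and memory.high a soft throttling limit. The group name "heavy" is made up for illustration, and creating the group or writing the files requires root (on systemd systems, `systemd-run --scope -p MemoryMax=4G &lt;cmd&gt;` achieves the same thing):

```python
import os

CG = "/sys/fs/cgroup/heavy"   # hypothetical cgroup for heavy/nasty applications

def gib(n):
    """Gibibytes to bytes, for the memory.* files (which take bytes)."""
    return n * 2**30

def limit_cgroup(cg, max_bytes, high_bytes):
    """Create the cgroup and set its hard and soft memory limits (root only)."""
    os.makedirs(cg, exist_ok=True)
    with open(os.path.join(cg, "memory.max"), "w") as f:
        f.write(str(max_bytes))    # hard limit: OOM kills happen inside the group
    with open(os.path.join(cg, "memory.high"), "w") as f:
        f.write(str(high_bytes))   # soft limit: throttle and reclaim before OOM

def add_pid(cg, pid):
    """Move a process into the cgroup (root only)."""
    with open(os.path.join(cg, "cgroup.procs"), "w") as f:
        f.write(str(pid))

# Example (commented out, as it needs root and a cgroup v2 mount):
# limit_cgroup(CG, gib(4), gib(3))   # 4 GiB hard, 3 GiB soft
# add_pid(CG, some_pid)
print(gib(4))   # 4294967296
```

The nice property of this setup is the one the commenter describes: a leaking application hits its own limit and gets killed inside its cgroup instead of dragging the whole system into global OOM.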

              Comment


              • #27
                Originally posted by chithanh View Post
                That is more or less how Turing machines are supposed to have (potentially) infinite memory. If the machine reaches one end of the tape, then an operator will come with a new reel and glue the end to the start of the new tape.
                That's all great and fine until you need to seek back to the beginning of the reel.

                Comment


                • #28
                  Originally posted by nanonyme View Post
                  That's all great and fine until you need to seek back to the beginning of the reel.
                  The Turing machine is an implementable theoretical construct, but a theoretical construct nonetheless. When used as intended, it has no seek time.

                  Same way that modern computing hardware and compilers apply optimizations to the theoretical models they're based on.

                  Comment
