Clear Linux Set To Begin Offering EarlyOOM For Better Dealing With Memory Pressure

  • tomas
    replied
    Originally posted by stormcrow View Post

    He's pointing out that there shouldn't be a workaround. This is a problem in the Linux kernel itself, and it should be addressed at the level of the problem (kernel space) and not in user space with yet another daemon. The fact that systemd is going to integrate a competing system bugs me as well, since this is the wrong place to be addressing the problem, even as a workaround. Workarounds have particularly LONG half-lives.
    Did you read my follow-up post?
    In order for something to be labeled a "workaround" there must be some notion of what a "proper solution" would be and what the "root cause" of the "problem" is, at least on a conceptual level. What is your perception of what the "problem" is, and what would a "proper" solution to it look like? How can the "problem" be solved by the kernel? From my viewpoint this is about user space allocating too much of something that is a finite resource, i.e. memory. The solution is for user space to start releasing memory it does not need (caches etc.), and hopefully that will be enough for the system to continue functioning. But if user space nevertheless keeps requesting more and more memory, the only option left will eventually be to start killing processes, preferably "the offending ones" if that can be determined, and hopefully that will be enough for the system to continue functioning.

    Finally, if this problem were easy to solve, don't you think it would already have been solved by now? I mean, it's not like other operating systems such as Windows or macOS handle this significantly better, do they?
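
    The behavior described above (release what can be released, and kill only when memory actually runs low) is essentially what an earlyoom-style daemon checks in user space. Below is a minimal sketch of that check; the 10% thresholds and the exact rule are illustrative assumptions, not earlyoom's verbatim logic.

```python
# Minimal sketch of an earlyoom-style memory-pressure check:
# parse /proc/meminfo and decide whether to act.
# Thresholds are illustrative assumptions, not earlyoom's configuration.

def parse_meminfo(text):
    """Parse /proc/meminfo-style 'Key:  value kB' lines into a dict of kB values."""
    info = {}
    for line in text.splitlines():
        key, _, rest = line.partition(":")
        fields = rest.split()
        if fields:
            info[key] = int(fields[0])  # value in kB
    return info

def should_kill(info, mem_min_percent=10, swap_min_percent=10):
    """Act only when BOTH available RAM and free swap fall below their thresholds."""
    mem_avail = 100 * info["MemAvailable"] / info["MemTotal"]
    swap_total = info.get("SwapTotal", 0)
    # With no swap configured, only the RAM threshold matters.
    swap_free = 100 * info.get("SwapFree", 0) / swap_total if swap_total else 0
    return mem_avail <= mem_min_percent and (swap_total == 0 or swap_free <= swap_min_percent)

sample = """MemTotal:       16384000 kB
MemAvailable:    1200000 kB
SwapTotal:       8192000 kB
SwapFree:         400000 kB"""

info = parse_meminfo(sample)
print(should_kill(info))  # RAM ~7.3% available, swap ~4.9% free -> True
```

    A real daemon would run this check in a loop and, when it fires, pick a victim process to kill; the sketch only shows the trigger condition.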

    Leave a comment:


  • stormcrow
    replied
    Originally posted by tomas View Post

    And what is the solution? Why do you see this as a workaround?

    He's pointing out that there shouldn't be a workaround. This is a problem in the Linux kernel itself, and it should be addressed at the level of the problem (kernel space) and not in user space with yet another daemon. The fact that systemd is going to integrate a competing system bugs me as well, since this is the wrong place to be addressing the problem, even as a workaround. Workarounds have particularly LONG half-lives.

    Leave a comment:


  • vb_linux
    replied
    Originally posted by birdie View Post

    That's funny and sad simultaneously. Before the earlyoom proposal no one in Fedora gave an f about this issue, and no one worked on including FB's oomd in systemd. Now, when we do have a working solution without any ifs, Lennart starts opposing it: "If it's not from me, it's 'bad'".

    And what's wrong with a 100ms interval in earlyoom? There are systems and situations where this interval is 100% warranted and anything bigger than that will make the system unresponsive before earlyoom has enough time to react.
    Lennart is not developing it; FB is. They do provide the reasoning for why they think it is better, and Lennart is paraphrasing them:

    "then also determine what to kill taking the swap use into account and little else (which it apparently does not). This doesn't make any sense to have though if there is no swap."

    "Don't bother with the OOM score the kernel calculates for processes, it doesn't take the swap use into account. That said, do take the configurable OOM score *adjustment* into account, so that processes which set that are respected, i.e. journald, udevd, and such. (or in other words, ignore /proc/$PID/oom_score, but respect /proc/$PID/oom_score_adj)."

    "they also will do the systemd work necessary. time frame: half a year, maybe one year, but no guarantees."
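
    The victim-selection rule being paraphrased (rank by actual memory footprint including swap, ignore the kernel's computed /proc/$PID/oom_score, but respect the explicit oom_score_adj) could be sketched roughly as follows. The badness formula and the process data are made-up assumptions for illustration, not oomd's actual heuristic.

```python
# Hypothetical sketch of the victim-selection rule paraphrased above:
# rank candidates by memory use (RSS + swap), ignore the kernel's
# computed oom_score, but respect the oom_score_adj set by processes
# such as journald/udevd. All process data below is made up.

OOM_SCORE_ADJ_MIN = -1000  # a process at this value must never be chosen

def pick_victim(procs):
    """procs: list of dicts with pid, rss_kb, swap_kb, oom_score_adj."""
    candidates = [p for p in procs if p["oom_score_adj"] > OOM_SCORE_ADJ_MIN]
    def badness(p):
        # Badness = actual memory footprint, nudged by the explicit adjustment.
        return p["rss_kb"] + p["swap_kb"] + p["oom_score_adj"] * 10
    return max(candidates, key=badness) if candidates else None

procs = [
    {"pid": 101, "rss_kb": 900_000,   "swap_kb": 2_000_000, "oom_score_adj": 0},     # heavy swapper
    {"pid": 102, "rss_kb": 2_500_000, "swap_kb": 0,         "oom_score_adj": 0},     # big but unswapped
    {"pid": 103, "rss_kb": 3_000_000, "swap_kb": 500_000,   "oom_score_adj": -1000}, # protected, journald-like
]
print(pick_victim(procs)["pid"])  # 101: largest RSS+swap among unprotected processes
```

    Note how the largest process overall (pid 103) is skipped because its oom_score_adj marks it as protected, which is exactly the distinction between oom_score and oom_score_adj in the quote.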

    Leave a comment:


  • tildearrow
    replied
    Originally posted by tomas View Post

    I'm afraid you do not seem to understand what "this problem" is.
    All systems can run out of memory when applications request more and more of it.
    Either the system is simply overloaded with too many applications requesting memory, or a buggy application has gone wild requesting more and more of it. So the problem cannot easily be "fixed" in the kernel as if it were some known flaw. It's simply a situation that can occur, and that a system must somehow handle. The OOM killer in the kernel is one attempt to handle such a situation. This new EarlyOOM is another.
    The problem is not running out of memory.
    The problem is the computer freezing for like 10 minutes BEFORE we run out of memory and the OOM killer kicks in.
    It should kick in immediately, with little to no freezing.

    Leave a comment:


  • kokoko3k
    replied
    Originally posted by tomas View Post

    I'm afraid you do not seem to understand what "this problem" is.
    All systems can run out of memory when applications request more and more of it.
    Either the system is simply overloaded with too many applications requesting memory, or a buggy application has gone wild requesting more and more of it. So the problem cannot easily be "fixed" in the kernel as if it were some known flaw. It's simply a situation that can occur, and that a system must somehow handle. The OOM killer in the kernel is one attempt to handle such a situation. This new EarlyOOM is another.
    True, but it would be nice for the kernel to be able to trigger killing processes when just RAM (not RAM + swap) is full and no caches/buffers can be freed.
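
    In user space, a RAM-only trigger of the kind suggested here is easy to express: /proc/meminfo's MemAvailable already accounts for reclaimable caches and buffers, so "RAM full and nothing freeable" roughly corresponds to MemAvailable being low, regardless of free swap. A sketch, with the 2% threshold as an illustrative assumption:

```python
# Sketch of a RAM-only trigger: act as soon as RAM is effectively
# exhausted, ignoring how much swap is still free. MemAvailable from
# /proc/meminfo already excludes memory that the kernel could reclaim
# from caches/buffers. The 2% threshold is an illustrative assumption.

def ram_exhausted(mem_total_kb, mem_available_kb, min_percent=2):
    return 100 * mem_available_kb / mem_total_kb <= min_percent

# 16 GB machine with only ~150 MB available: trigger even if swap is empty.
print(ram_exhausted(16_384_000, 150_000))  # True
```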

    Leave a comment:


  • tomas
    replied
    Originally posted by tildearrow View Post

    The solution is fixing the root cause of this problem in the kernel.

    I see this as a workaround since it does not solve the actual issue but is only a prevention measure. It feels like patching a broken window with tape instead of replacing the window.
    I'm afraid you do not seem to understand what "this problem" is.
    All systems can run out of memory when applications request more and more of it.
    Either the system is simply overloaded with too many applications requesting memory, or a buggy application has gone wild requesting more and more of it. So the problem cannot easily be "fixed" in the kernel as if it were some known flaw. It's simply a situation that can occur, and that a system must somehow handle. The OOM killer in the kernel is one attempt to handle such a situation. This new EarlyOOM is another.

    Leave a comment:


  • tildearrow
    replied
    Originally posted by tomas View Post

    And what is the solution? Why do you see this as a workaround?
    The solution is fixing the root cause of this problem in the kernel.

    I see this as a workaround since it does not solve the actual issue but is only a prevention measure. It feels like patching a broken window with tape instead of replacing the window.
    Last edited by tildearrow; 08 January 2020, 11:04 AM.

    Leave a comment:


  • kokoko3k
    replied
    Originally posted by birdie View Post

    That's funny and sad simultaneously. Before the earlyoom proposal no one in Fedora gave an f about this issue, and no one worked on including FB's oomd in systemd. Now, when we do have a working solution without any ifs, Lennart starts opposing it: "If it's not from me, it's 'bad'".

    And what's wrong with a 100ms interval in earlyoom? There are systems and situations where this interval is 100% warranted and anything bigger than that will make the system unresponsive before earlyoom has enough time to react.
    No need to hurry, since it lives in memory that is locked/not swappable.

    Leave a comment:


  • Guest
    Guest replied
    Originally posted by grigi View Post
    Making apps actually honour a "low-memory" signal would be extremely useful, e.g. to force a GC run on Java apps when memory starts being pressured, or to tell the browser to unload uncompressed images, defragment its heap, etc...
    Android has been doing this since v1.0. Honestly, there's no need to reinvent the wheel, and we could just use Android on mobile devices, but without the closed source Google services, and with hardware that has proper open source driver support.
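
    The "low-memory signal" idea could look roughly like this on Linux. SIGUSR1 as the delivery mechanism is purely an assumption for illustration (desktop Linux has no standard convention for this), unlike Android, where ComponentCallbacks2.onTrimMemory() is a real callback apps implement.

```python
# Hypothetical sketch of an app honouring a "low-memory" notification,
# in the spirit of Android's onTrimMemory(). Using SIGUSR1 as the
# signal is an assumption; no such convention exists on desktop Linux.
import os
import signal

image_cache = {}  # expensive-to-recompute but reloadable data

def on_low_memory(signum, frame):
    # Drop everything that can be rebuilt later, like a browser
    # unloading decoded images under memory pressure.
    image_cache.clear()

signal.signal(signal.SIGUSR1, on_low_memory)

image_cache["banner.png"] = b"\x00" * 1024  # pretend decoded pixels
os.kill(os.getpid(), signal.SIGUSR1)        # a monitor daemon would send this
print(len(image_cache))  # 0: the cache was released on notification
```

    The missing piece on Linux is agreement on who sends the notification and when; the mechanism itself (a handler that sheds reloadable state) is trivial, as the sketch shows.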

    Leave a comment:


  • Guest
    Guest replied
    Originally posted by tildearrow View Post
    Why are they doing this? Why not try to fix the problem instead? (or besides doing this?)

    I feel bad how everybody is going for the workaround and not the solution :<
    Sometimes, large amounts of memory are an unavoidable requirement for some kinds of work. Users don't necessarily know that when they start such programs or work. For power users, such a thing would be unnecessary and most likely annoying, so as long as it can be disabled, it's fine.

    Leave a comment:
