Yes, Linux Does Bad In Low RAM / Memory Pressure Situations On The Desktop


  • #61
    Originally posted by pomac View Post
    The interesting part is that they are looking into ways to dump more memory - i.e. things that shouldn't be prioritized over actual apps.

    However, Linux is nicer than Solaris and most (if not all) BSDs in this case - they will all crash =)
    Bullshit. BSD (FreeBSD at least) may kill one or a few memory-hogging programs and the system may become sluggish, but running out of RAM does not make it crash. Been there, tried that.
    Leave it to an apologist to try and find some angle, though...



    • #62
      Wait, what? If swap is disabled, what is the system doing when the browser exceeds 4 GB? It seems like the process should die with an out-of-memory error. Or is it still dropping and re-reading code pages for processes from their binaries on disk?

      Some processes should never be booted out of memory (the GUI, consoles, shells), and ps/kill should always be accessible.
      Last edited by xorbe; 06 August 2019, 06:48 PM.



      • #63
        Originally posted by down1 View Post

        In the time it takes you to read the message, a program may have taken the remaining memory. It may now be impossible to gracefully exit any program, as a graceful exit may itself require more memory, and you can't load the task manager because that requires... memory.
        Well, in that case the user can make a judgment call and pick one program for ungraceful termination. That's far better than the current situation of ungracefully terminating all programs via a hard reset.



        • #64
          Originally posted by Raka555 View Post


          This overcommit can be switched off, but that is generally a bad idea.
          Last time I checked, overcommit accounting worked on the size of the address space of the process and not on the RSS size (probably an artifact from a time when sbrk was used to allocate memory).
          So all processes that use address space randomization or map memory at arbitrary addresses will most likely break when overcommit is off, or at least fool the kernel about the actual memory usage.
          I'm not sure what you're talking about. Overcommit accounting is based on the number of pages the process has mapped, excluding file-backed shared or read-only mappings, which are backed by the file and hence never occupy swap space. It doesn't matter where in the virtual address space a mapping lives; only its size matters. This should be obvious, since otherwise basically no 64-bit process could ever run with overcommit turned off, and the setting wouldn't be very useful (the stack is typically mapped very far above text and data).
          RSS is the amount of physical memory the process is actually using at the moment. It wouldn't even make sense to base "overcommit" on that. Overcommit being off means the system refuses to allocate virtual memory if doing so would exceed RAM+swap, assuming every virtual page had to be fully backed by an actual page. Normally a mapping takes no space until the process actually writes to it -- so if a portion of the mapping is left untouched, never being written, it consumes no RAM or swap, and overcommit allows such mappings to exceed the available space.
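
          A minimal sketch of that last point, assuming a Linux box with the default heuristic overcommit (vm.overcommit_memory=0); the 16 GiB figure is illustrative and assumes RAM+swap is at least that large, since mode 0 still rejects a single mapping bigger than roughly RAM+swap:

          Code:
          /* Demonstrates why overcommit accounting tracks mapped virtual
           * pages rather than RSS: a huge anonymous mapping succeeds and
           * consumes no RAM or swap until its pages are actually written.
           * Build: cc -o overcommit_demo overcommit_demo.c */
          #include <stdio.h>
          #include <string.h>
          #include <sys/mman.h>

          int main(void)
          {
              size_t len = (size_t)16 << 30;  /* 16 GiB of virtual address space */

              /* Reserve anonymous memory; no physical pages exist yet. */
              char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
              if (p == MAP_FAILED) {
                  /* With vm.overcommit_memory=2 this fails up front unless
                   * 16 GiB of commit charge is actually available. */
                  perror("mmap");
                  return 1;
              }
              printf("mapped %zu GiB; VmRSS is still near zero\n", len >> 30);

              /* Touching pages is what consumes RAM: each first write
               * faults in a physical page, raising RSS by ~1 GiB here. */
              memset(p, 1, (size_t)1 << 30);
              printf("wrote 1 GiB; compare VmRSS in /proc/self/status\n");
              getchar();  /* pause so RSS can be inspected from another shell */

              munmap(p, len);
              return 0;
          }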



          • #65
            I honestly don't see how this issue is so hard to fix. Give the OOM killer reserved memory (0.1% or 8 MB or something) to work out of, examine the system, and kill any programs that aren't needed. That, or choose an entire application to yeet to a drive like Windows does. I think the main issue right now is that you can't really do that well because nobody has written the code for it yet. For example, I don't think the DRM subsystem has a way of restoring an application's GPU state, so fixing that portion might be difficult, and not standard across hardware. But it's not impossible. Just kinda fucked up to have to code. No application should have to worry about the system's own issues.

            It's just something we have to decide how to do and then actually add. For the big companies it's probably easier to throw more hardware at the problem, so nobody has given a shit about it.



            • #66
              I have seen Linux completely lock up in low-memory/memory-exhaustion situations. I mean no GUI response whatsoever, and no response over the network either. There are indeed very severe problems with memory exhaustion, even on machines with 4 GB of RAM, including lockups and unresponsive systems. Often it's completely unrecoverable. Totally unacceptable and quite awful. Why can't they get this right?



              • #67
                Originally posted by aht0 View Post

                Bullshit. BSD (FreeBSD at least) may kill one or a few memory-hogging programs and the system may become sluggish, but running out of RAM does not make it crash. Been there, tried that.
                Leave it to an apologist to try and find some angle, though...
                They don't know what they are talking about. A FreeBSD desktop does NOT suffer this problem. I was recently running 4 GB of RAM on a desktop, and what would happen is the OOM killer would just kill random programs. You can also tell it which programs you never want killed, so you can kill the query script that's killing a box but not the database, for instance. The sluggish stuff is a Linux-only problem. Maybe they can look at how BSD does it for guidance on how to do it right. Lol
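
                For reference, Linux does have a per-process equivalent of that "never kill this" protection: /proc/<pid>/oom_score_adj. A minimal sketch (the helper name and usage example are mine, not from any tool in the thread):

                Code:
                /* Bias or exempt a process from the Linux OOM killer by
                 * writing to /proc/<pid>/oom_score_adj: -1000 means never
                 * kill, +1000 means kill first. Lowering the score below
                 * its current value requires root (CAP_SYS_RESOURCE). */
                #include <stdio.h>
                #include <stdlib.h>
                #include <sys/types.h>

                static int set_oom_score_adj(pid_t pid, int adj)
                {
                    char path[64];
                    snprintf(path, sizeof path, "/proc/%d/oom_score_adj", (int)pid);

                    FILE *f = fopen(path, "w");
                    if (!f)
                        return -1;
                    fprintf(f, "%d\n", adj);
                    return fclose(f);
                }

                int main(int argc, char **argv)
                {
                    if (argc != 3) {
                        fprintf(stderr, "usage: %s <pid> <adj>\n", argv[0]);
                        return 1;
                    }
                    /* e.g. "./oomadj $(pidof postgres) -1000" to protect a database */
                    if (set_oom_score_adj((pid_t)atoi(argv[1]), atoi(argv[2])) != 0) {
                        perror("set_oom_score_adj");
                        return 1;
                    }
                    return 0;
                }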



                • #68
                  The problem is solved with https://github.com/rfjakob/earlyoom
                  The in-kernel OOM killer is way too slow, but that one works well; I haven't had this problem in years.
                  It usually kills the biggest Chromium tab.



                  • #69
                    Speaking of default behavior, KDE/GNOME should disable the "hibernate" option if you have less free disk space than your total RAM, e.g. 10 GiB of free disk space on a 32 GiB RAM system.
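
                    A minimal sketch of that check, assuming the hibernation image lands on the root filesystem (a real desktop would check the volume actually holding the swap file or partition):

                    Code:
                    /* Hide the hibernate option when free disk space is smaller
                     * than total RAM, since a full hibernation image can
                     * approach total RAM in size. */
                    #include <stdbool.h>
                    #include <stdio.h>
                    #include <sys/statvfs.h>
                    #include <sys/sysinfo.h>

                    static bool hibernate_fits(const char *path)
                    {
                        struct sysinfo si;
                        struct statvfs fs;

                        if (sysinfo(&si) != 0 || statvfs(path, &fs) != 0)
                            return false;  /* on error, hide the option */

                        unsigned long long ram  = (unsigned long long)si.totalram * si.mem_unit;
                        unsigned long long disk = (unsigned long long)fs.f_bavail * fs.f_frsize;
                        return disk >= ram;
                    }

                    int main(void)
                    {
                        printf("hibernate option: %s\n",
                               hibernate_fits("/") ? "show" : "hide");
                        return 0;
                    }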



                    • #70
                      To keep a responsive system under memory pressure, I usually run memory-intensive applications (the browser) in a cgroup limited to 90% of RAM. When the browser approaches that limit it becomes unresponsive, but the rest of the system does not. This requires the swapaccount=1 boot option, and the BFQ I/O scheduler makes things even better in the case where read-only pages are continuously evicted and re-read from disk.

                      There is definitely a bug in the browsers, because they put no limit on the unrealistic resource requirements of some websites, even when those are effectively complete virtual machines. What would people say if the JVM didn't have a (configurable) memory limit for the programs it runs?
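
                      A minimal sketch of that setup, assuming cgroup v1 (as swapaccount=1 implies) with the conventional /sys/fs/cgroup/memory mount; the group name "browser" is mine:

                      Code:
                      /* Create a memory cgroup capped at 90% of RAM, move the
                       * current process into it, then exec a shell; a browser
                       * launched from that shell inherits the cap. Run as root. */
                      #include <stdio.h>
                      #include <sys/stat.h>
                      #include <sys/sysinfo.h>
                      #include <unistd.h>

                      static int write_file(const char *path, const char *val)
                      {
                          FILE *f = fopen(path, "w");
                          if (!f)
                              return -1;
                          fputs(val, f);
                          return fclose(f);
                      }

                      int main(void)
                      {
                          struct sysinfo si;
                          char buf[64];

                          if (sysinfo(&si) != 0)
                              return 1;
                          unsigned long long cap =
                              (unsigned long long)si.totalram * si.mem_unit / 10 * 9;  /* 90% of RAM */

                          mkdir("/sys/fs/cgroup/memory/browser", 0755);
                          snprintf(buf, sizeof buf, "%llu", cap);
                          /* Set the RAM cap first; with swapaccount=1 the memsw
                           * file then also caps RAM+swap at the same value. */
                          if (write_file("/sys/fs/cgroup/memory/browser/memory.limit_in_bytes", buf) ||
                              write_file("/sys/fs/cgroup/memory/browser/memory.memsw.limit_in_bytes", buf))
                              return 1;

                          /* Move this process into the group; children inherit it. */
                          snprintf(buf, sizeof buf, "%d", (int)getpid());
                          if (write_file("/sys/fs/cgroup/memory/browser/cgroup.procs", buf))
                              return 1;

                          execlp("bash", "bash", (char *)NULL);  /* launch the browser from here */
                          return 1;
                      }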

