Fedora 32 Looking At Using EarlyOOM By Default To Better Deal With Low Memory Situations


  • #41
    Originally posted by F.Ultra View Post
    Well you can always disable memory overcommit completely to make malloc() return NULL on OOM.
    Have you tried running a kernel with overcommit off?
    I tried that already back in 2012: https://ewontfix.com/3/
    You start getting memory allocation failures long before you reach the actual RAM limit of your machine. There is a reason why overcommit exists.
    Overcommit is not bad. What is bad is our inability to deal with it in our apps.
    Originally posted by F.Ultra View Post
    I would however say that there would not be a whole lot that most userspace applications could do anyway in that case to relieve the situation.
    1) Languages with garbage collection could run the GC and give some memory back to the OS.
    2) Free some memory and see if it helps.
    3) Programs can try to shut down as cleanly as they can with the memory they have.
    4) Those on an allocation spree might be made aware that they are the culprit.
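
    For 2) and 3) concretely: with vm.overcommit_memory=2 a failed malloc() really does come back as NULL, so a program can at least try to shed a cache and retry before giving up. A minimal C sketch of that pattern (the "cache" is hypothetical and malloc_trim() is glibc-specific):

    Code:
    #include <malloc.h>   /* malloc_trim() - glibc */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical application cache we are willing to sacrifice under pressure. */
    static void *cache = NULL;

    static void drop_cache(void)
    {
        free(cache);
        cache = NULL;
        malloc_trim(0);   /* ask glibc to hand freed heap pages back to the OS */
    }

    /* Allocate; on failure drop the cache and retry once before bailing out. */
    static void *xmalloc(size_t n)
    {
        void *p = malloc(n);
        if (p == NULL) {
            fprintf(stderr, "malloc(%zu) failed, dropping cache and retrying\n", n);
            drop_cache();
            p = malloc(n);
        }
        if (p == NULL) {
            /* last resort: shut down as cleanly as the remaining memory allows */
            fprintf(stderr, "out of memory, exiting cleanly\n");
            exit(EXIT_FAILURE);
        }
        return p;
    }

    int main(void)
    {
        cache = malloc(64 * 1024 * 1024);   /* pretend this is a useful cache */
        char *buf = xmalloc(1024);
        strcpy(buf, "allocated despite the pressure");
        puts(buf);
        free(buf);
        free(cache);
        return 0;
    }
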
    Last edited by Raka555; 03 January 2020, 10:47 PM.



    • #42
      Originally posted by polarathene View Post
      nohang I think sends SIGTERM, and if the process doesn't behave under some conditions it will follow up with SIGKILL. Is an OOM specific signal useful there to behave differently? (free up some memory to avoid being killed/terminated if possible I guess?)
      At the moment my program is unaware of memory pressure on the OS; if it were aware, I would try to free memory to help.
      If every running program releases memory back to the OS, the crisis might be over...
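
      Not claiming this is exactly what nohang does, but the usual pattern for reacting to that polite SIGTERM before the follow-up SIGKILL looks roughly like this in C: the handler only sets a flag (free() is not async-signal-safe), and the main loop sheds whatever it can; drop_caches() here is a hypothetical stand-in for program-specific cleanup:

      Code:
      #include <signal.h>
      #include <stdio.h>
      #include <unistd.h>

      static volatile sig_atomic_t got_sigterm = 0;

      /* Only set a flag here: malloc()/free() are not async-signal-safe. */
      static void on_sigterm(int sig)
      {
          (void)sig;
          got_sigterm = 1;
      }

      /* Hypothetical: release whatever caches/pools the program can live without. */
      static void drop_caches(void)
      {
          fprintf(stderr, "memory pressure: dropping internal caches\n");
      }

      int main(void)
      {
          struct sigaction sa = { 0 };
          sa.sa_handler = on_sigterm;
          sigaction(SIGTERM, &sa, NULL);

          for (;;) {
              if (got_sigterm) {
                  got_sigterm = 0;
                  drop_caches();   /* try to help before a SIGKILL arrives */
                  /* if that isn't enough, save state and exit cleanly instead */
              }
              /* ... normal work ... */
              sleep(1);
          }
      }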



      • #43
        Originally posted by F.Ultra View Post
        People keep claiming that Windows "does this right"; is that a recent change?
        In 2017, at work on Windows 10, I wrote a naive program in Rust that read binary data into memory for processing, and it filled up 128GB of RAM in a short time frame. I think at one point Windows properly terminated it, but another time it somehow caused Windows to end the session, closing all my open programs and losing any unsaved state.

        Rewrote the program to use a buffer and process the data as a stream (writing the processed version back to disk and appending to it); it didn't exceed 500MB.
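
        The original was Rust, but the buffered rewrite is the same idea in any language. A rough C sketch (file names and the per-chunk transform are made up) that streams through a fixed 1 MiB buffer instead of slurping the whole file into RAM:

        Code:
        #include <stdio.h>
        #include <stdlib.h>

        #define CHUNK (1 << 20)   /* 1 MiB working buffer instead of the whole file */

        /* Hypothetical per-chunk processing: here just an in-place transform. */
        static void process_chunk(unsigned char *buf, size_t n)
        {
            for (size_t i = 0; i < n; i++)
                buf[i] ^= 0xFF;
        }

        int main(void)
        {
            FILE *in = fopen("input.bin", "rb");
            FILE *out = fopen("output.bin", "ab");   /* append the processed data */
            unsigned char *buf = malloc(CHUNK);
            size_t n;

            if (!in || !out || !buf) {
                perror("setup");
                return EXIT_FAILURE;
            }

            /* Memory use stays around 1 MiB no matter how large input.bin is. */
            while ((n = fread(buf, 1, CHUNK, in)) > 0) {
                process_chunk(buf, n);
                fwrite(buf, 1, n, out);
            }

            free(buf);
            fclose(in);
            fclose(out);
            return 0;
        }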

        Windows (at least 10; I'm pretty sure I had bad experiences on 7/8 and earlier) probably still does the best job for desktop users by default, compared to my experiences with macOS and Linux.



        • #44
          Originally posted by Raka555 View Post
          If every running program releases memory back to the OS, the crisis might be over...
          Chrome does something about this apparently. It will gobble up memory when available, but when under pressure it seems to kill processes/tabs (or something else does), and I think I've seen it free memory from processes it manages without killing them too.

          Personally, if a badly behaving process with a high rate of memory allocation can be detected, perhaps it could be targeted/suspended, or some sort of throttling could be introduced (how much memory allocation speed do applications in general need to be responsive?).

          Windows has that separate instance from the main OS for privilege escalation, I think? Would it be unreasonable to have something like that, where the main OS could be suspended before it consumes all resources, freeing up CPU and disk I/O while reserving some memory for a minimal, focused OS that can graphically inform a (desktop) user about the issue and how to proceed? Perhaps that's not really viable/practical, halting the majority of processes/the system to intervene; it's sort of like running your OS from a VM/container.
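
          For what it's worth, something close to the throttling idea already exists on Linux in cgroup v2: memory.high is a soft ceiling above which the kernel throttles a group and reclaims its memory aggressively instead of OOM-killing it. A rough sketch of putting the current process under such a limit (the group name is made up; assumes cgroup v2 mounted at /sys/fs/cgroup, the memory controller enabled for child groups, and enough privilege to write there):

          Code:
          #include <stdio.h>
          #include <stdlib.h>
          #include <sys/stat.h>
          #include <unistd.h>

          #define CG "/sys/fs/cgroup/throttle-demo"   /* hypothetical group name */

          static void write_file(const char *path, const char *val)
          {
              FILE *f = fopen(path, "w");
              if (!f) { perror(path); exit(EXIT_FAILURE); }
              fprintf(f, "%s", val);
              fclose(f);
          }

          int main(void)
          {
              char pid[32];

              mkdir(CG, 0755);                         /* create the cgroup       */
              write_file(CG "/memory.high", "512M");   /* soft throttle threshold */

              snprintf(pid, sizeof pid, "%d", getpid());
              write_file(CG "/cgroup.procs", pid);     /* move ourselves into it  */

              /* Allocations past ~512M now get throttled and reclaimed rather
               * than triggering the OOM killer for this process. */
              return 0;
          }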



          • #45
            Originally posted by polarathene View Post

            Chrome does something about this apparently. It will gobble up memory when available, but when under pressure it seems to kill processes/tabs (or something else does), and I think I've seen it free memory from processes it manages without killing them too.

            Personally, if a badly behaving process with a high rate of memory allocation can be detected, perhaps it could be targeted/suspended, or some sort of throttling could be introduced (how much memory allocation speed do applications in general need to be responsive?).

            Windows has that separate instance from the main OS for privilege escalation, I think? Would it be unreasonable to have something like that, where the main OS could be suspended before it consumes all resources, freeing up CPU and disk I/O while reserving some memory for a minimal, focused OS that can graphically inform a (desktop) user about the issue and how to proceed? Perhaps that's not really viable/practical, halting the majority of processes/the system to intervene; it's sort of like running your OS from a VM/container.
            I am looking at it from a developer perspective.
            As things stand today, I am powerless over what is happening with memory management.
            I can't write a snazzy allocator for my language because the kernel isn't coming to the party. I need feedback from the kernel.
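
            Half of the party is already there, for what it's worth: an allocator can hand physical pages back to the kernel with madvise(MADV_DONTNEED) on its arena, it just gets no standard signal telling it when doing so would actually help. A tiny illustration (the 64 MiB arena size is arbitrary):

            Code:
            #include <stdio.h>
            #include <string.h>
            #include <sys/mman.h>

            #define ARENA (64UL << 20)   /* 64 MiB arena, as a toy allocator might reserve */

            int main(void)
            {
                unsigned char *arena = mmap(NULL, ARENA, PROT_READ | PROT_WRITE,
                                            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
                if (arena == MAP_FAILED) { perror("mmap"); return 1; }

                memset(arena, 0xAB, ARENA);   /* touch it: pages are now resident */

                /* Give the physical pages back to the kernel while keeping the
                 * mapping; the next access sees freshly zeroed pages. */
                if (madvise(arena, ARENA, MADV_DONTNEED) != 0)
                    perror("madvise");

                printf("first byte after MADV_DONTNEED: %d\n", arena[0]);   /* 0 */

                munmap(arena, ARENA);
                return 0;
            }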



            • #46
              Originally posted by polarathene View Post

              Chrome does something about this apparently. It will gobble up memory when available, but when under pressure it seems to kill processes/tabs (or something else does), and I think I've seen it free memory from processes it manages without killing them too.
              Chrome will also be guessing at the actual memory pressure. I would like the kernel to inform apps like Chrome exactly what the situation is.
              At the moment all the kernel does is kill Chrome when it thinks Chrome is at fault.
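
              Since kernel 5.2 the kernel actually can tell an application this, via PSI triggers: open /proc/pressure/memory, write a threshold, and poll() wakes you when memory stalls exceed it. Roughly the example from the kernel's PSI documentation (needs CONFIG_PSI; registering a trigger may require extra privileges depending on the kernel):

              Code:
              #include <fcntl.h>
              #include <poll.h>
              #include <stdio.h>
              #include <string.h>
              #include <unistd.h>

              int main(void)
              {
                  /* Wake up when tasks stall on memory >150ms within any 1s window. */
                  const char trig[] = "some 150000 1000000";
                  struct pollfd fds;

                  fds.fd = open("/proc/pressure/memory", O_RDWR | O_NONBLOCK);
                  if (fds.fd < 0) { perror("/proc/pressure/memory"); return 1; }
                  if (write(fds.fd, trig, strlen(trig) + 1) < 0) { perror("trigger"); return 1; }
                  fds.events = POLLPRI;

                  for (;;) {
                      if (poll(&fds, 1, -1) < 0) { perror("poll"); return 1; }
                      if (fds.revents & POLLERR) { fprintf(stderr, "trigger gone\n"); return 1; }
                      if (fds.revents & POLLPRI)
                          printf("memory pressure event: time to free something\n");
                  }
              }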



              • #47
                Considering how light a footprint Linux has on RAM compared to... certain other operating systems... I'm surprised this is even a problem.

                My free -h with more open tabs than God in Firefox, while editing a high-resolution image in GIMP and playing my favorite music in the Spotify client:

                Code:
                $ free -h
                              total        used        free      shared  buff/cache   available
                Mem:            62G        3.0G         57G        270M        2.0G         58G
                Swap:            0B          0B          0B
                So, 3GB used, of which 2GB are cache/buffers; in reality, about 1GB used. That should cover most non-specialty, non-VM desktop loads right there.

                Just size your RAM appropriately, and you don't have to worry about OOM, or even having a swap partition!

                Linux would be better off if it spent more time worrying about supporting all new hardware from major vendors on launch day, and less about how to get shit to run well on 15-year-old RAM-constrained systems.



                • #48
                  Originally posted by set135
                  I am curious as to what people are doing when they encounter problems in this area.
                  In 2016, I had a work machine where suspending/hibernating was unreliable for some reason, so if I had work with state that I didn't want to lose, I'd just leave it running; this seemed to be a problem over weekends with Firefox (lots of tabs). I'd have enough free memory on Friday, but on Monday it'd be rather unresponsive. I think this was compounded by potential issues at the time in the Linux kernel, GNOME, the hardware (it might have been too new, as it was a recently purchased/built DIY machine from a local shop), and LightDM (I remember it sometimes having no keyboard/mouse response, a glitchy-looking cursor, a cursor after unlock with the GNOME DE that required toggling a TTY to restore, etc.). Firefox, though, must have had some memory-eating bug (I think an extension was at fault, or it was related to the session/profile saves FF did periodically; on my home PC I noticed this could shoot memory up 10-20GB for 200-ish tabs? Can't recall the specific number. I know I've had over 1k tabs open at one point, and a reboot would restore the session using memory only for the active tabs of each window, but the session save spiked memory usage again).

                  I'm a developer, and sometimes run software that eats a bit of RAM. 2016 wasn't a good year; I had those problems across 3 different systems with varying hardware. From 2017 onwards I changed to Manjaro with KDE, and while I've encountered kernel panics, bringing my system to a halt is rarer, though it does happen. Sometimes I can recover if I wait long enough or can bring up a TTY, but sometimes I'll just have to reboot (the system is running like a snail but completely unresponsive to any USB or network input).

                  SysRq is disabled by default on my installs, and I've just never got around to enabling it. My home PC has 32GB of RAM, which, when nearing full, is mostly consumed by Chrome, although QOwnNotes sometimes manages to eat up to 4GB of RAM (probably a memory leak of some sort), and Dolphin also has some weird memory leak that gets a similar 2-4GB allocated over time (days/weeks).

                  Right now, 14GB is in use. QOwnNotes has 3.2GB allocated and is the largest single-process allocation; krunner is next at 500MB (I have only been invoking it for some quick calculations or currency conversions, no clue why it's that high). Two Chrome processes are up next with ~500MB each, plasmashell at 340MB, and further down are Dolphin and kwin_x11 with ~130MB each. Some text editors (kate, code-oss) use 50-100MB per process (VSCode probably runs several Electron processes, each eating up more RAM), and a bunch of Chrome processes eat up the bulk of the remaining RAM. ksysguard reports a total of 504 processes.

                  2.2GB of the 4GB of swap is used and uptime is 40 days; RAM use got close to 30GB at some point, and I guess it just hasn't seen any point in moving the swapped data back into RAM.

                  I do have another application I remember using quite a bit of RAM for processing; it does photogrammetry work, and the more photos, the more RAM it would use. I can't recall if it ever caused an OOM issue or not; I think it did in 2018. A previous job that used it on a Windows machine utilized 128GB of RAM and paged 400GB or so to an NVMe Samsung 960 Pro 2TB, iirc. That took about 1-2 weeks to process several thousand high-res images into 3D data, using an AMD Threadripper 1950X, 2 Titan Xp GPUs and a 1080 Ti.

                  That would remain fairly responsive so long as the CPU cores/threads were pinned, so anything else on the system could have some cores/threads to use. It's good that it didn't trigger an OOM (I'm pretty sure it was making 90%+ use of the 128GB of RAM, and using some of the NVMe disk as the equivalent of swap), but we did lose an almost-finished processing job to Windows restarting over the weekend to install forced updates, and another to a driver bug in a 10-gigabit Ethernet card that blue-screened the system.



                  • #49
                    Originally posted by mattlach View Post
                    My free -h with more open tabs than God in Firefox
                    How many tabs is that exactly, to only use 1-3GB of memory? I've used both Firefox and Chrome, and I do remember Firefox using less memory before it adopted e10s for multi-process similar to Chrome. I stopped using FF due to various issues (at one point an extension potentially caused a 10-20GB memory increase, and around 5 minutes of unresponsive/laggy system due to high CPU and I/O, iirc).

                    My current system has 33 Chrome windows open, 627 tabs. According to the smem command shared here, it's using 7.2GB of RAM across 199 processes. The most I've had at one point, I think, is 3k tabs, but they weren't all loaded (session restore).

                    Originally posted by mattlach View Post
                    Linux would be better off if it spent more time worrying about supporting all new hardware from major vendors on launch day, and less about how to get shit to run well on 15-year-old RAM-constrained systems.
                    There are a lot of Linux web servers running with limited RAM. I have one VPS atm with 2GB of RAM; Keycloak (IAM) uses around 400-600MB, and an instance of Discourse via Docker, after letting it set itself up and logging into the admin control panel page, was using something like 1.6GB of RAM? I had to add swap to get that to work, and if I want anything else I'll probably need to look into zram or pay some more money for a 4GB instance just for that. The server didn't have any OOM problems though; it killed the container(s) to keep itself functional, iirc.

                    Hardware support is largely dependent upon the vendors submitting working code though, isn't it? AMD notably had problems with the 2200G and 2400G for at least 6 months, and other problems in the 18 months after that. Intel Skylake took over a year before its quirks were resolved in a kernel release.



                    • #50
                      Low memory is not something I have experienced (plenty of RAM in current laptops), but I wish they worked more on a solution for Linux DEs freezing under heavy disk I/O.

