Yes, Linux Does Bad In Low RAM / Memory Pressure Situations On The Desktop

  • #21
    This has been an issue with the Linux desktop for the past several years. It used to be that you could run Linux on a quarter gig of RAM with little risk of problems for most average tasks.

    The problem is the browsers. Firefox or Chrome, take your pick, have all but given up on trying to keep resource usage under control. I use Firefox regularly and it can eat up 4 gigs or so with almost no effort.

    To be fair, I don't think the problem is really Firefox's codebase; it's the extensions and websites that Firefox is running. I have had a Firefox extension that, whenever it was active, would eventually consume all memory and lock the system up. And websites don't concern themselves with making sure your system has enough resources to run 250 other tabs at the same time. I know Firefox can be tuned to reduce the number of pages it caches in memory and so on, but I think there need to be better, more restrictive default settings and more confinement of extensions.

    With that said, I would expect Linux not to let the system lock up as hard as it does. If a browser is consuming 7.5 out of 8 GB and hasn't touched 7 of those GB in a long time, those pages should have been slowly trickled out to swap. Or, at the very least, the browser alone should have its memory swapped out all at once, so only the browser slows to a crawl; the rest of the system should stay usable.



    • #22
      Originally posted by Awesomeness:
      You'd expect to not be able to close currently running applications? How would you expect to free up memory then?
      It obviously has to close something to free up memory, but in the case of a web browser, the browser will start eating memory again as soon as it is freed, which defeats the purpose; the only real fix is for the system to realize it should kill the browser itself.

      This isn't just a kernel problem. I don't have a swap partition at all, and this isn't a problem for me because I manage my system appropriately and run things within its limits.



      • #23
        Originally posted by anarki2:
        But, but, Windows is a resource hog and everyone should just switch to Ubuntu and breathe new life into their Pentium 1's laying around!
        Alright troll, back under bridge.

        He switches off swap, the FIRST LINE OF DEFENCE against low memory situations, and THEN complains that shit doesn't work.
        Well done...
        Last edited by tildearrow; 08-06-2019, 03:30 PM.



        • #24
          Originally posted by Saverios:
          Okay, I'll bite. What should happen when you have no more memory and no swap file and an application asks for more memory? Isn't it up to the application to handle memory unavailability gracefully?
          While you can tune it, Linux by default does lazy, overcommitted memory allocation. It will happily hand you your malloc'ed pointer and only tries to sort shit out on the actual page faults...
          If that fails, it will probably shoot itself to bits, or shoot down some program it decides is the offender.
          In reality, with the normal overcommit behavior, applications have more or less zero chance of resolving a memory pressure situation unless they probe the OS for more information.

          On the other hand, not overcommitting memory results in big fat crashes, as applications are usually worthless at handling memory pressure gracefully.

          Out of memory is a shitty situation whichever way you look at it.
          The big question is whether the default heuristic overcommit should reach a "this is bull" conclusion and kill the offending program instead of fragging itself to bits.
          Last edited by milkylainen; 08-06-2019, 03:37 PM.
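          To make the lazy allocation concrete, here is a minimal C sketch (mine, not from the post; the chunk size and count are arbitrary). Under the default heuristic overcommit each untouched malloc() normally succeeds, because no physical pages are committed until the memory is written, which is exactly where the pressure and the OOM killer show up; with vm.overcommit_memory=2 the loop starts failing once the commit limit is hit.

          #include <stdio.h>
          #include <stdlib.h>

          int main(void)
          {
              const size_t chunk = (size_t)1 << 30;   /* 1 GiB per allocation */
              enum { COUNT = 64 };                    /* 64 GiB of address space */
              void *blocks[COUNT];
              int got = 0;

              for (int i = 0; i < COUNT; i++) {
                  blocks[i] = malloc(chunk);
                  if (blocks[i] == NULL) {
                      printf("allocation %d refused: overcommit said no\n", i);
                      break;
                  }
                  got++;
                  printf("allocation %d succeeded (untouched, so no RAM used yet)\n", i);
              }

              /* Writing to these blocks is what would actually fault pages in and
               * create memory pressure; skipped here so the demo stays harmless. */
              for (int i = 0; i < got; i++)
                  free(blocks[i]);

              return 0;
          }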



          • #25
            Originally posted by skerit:

            That's optimistic. Anyway, what's a few more months? It's been like this for as long as I remember.
            OS X is king of memory performance with small RAM footprints. It was designed for the desktop first.



            • #26
              Originally posted by dimko:

              Alright troll, back under bridge.

              He switches off swap, the FIRST LINE OF DEFENCE against low memory situations, and THEN complains that shit doesn't work.
              Well done...
              Sadly, this happened to me even with swap on.



              • #27
                Originally posted by Saverios:
                Okay, I'll bite. What should happen when you have no more memory and no swap file and an application asks for more memory? Isn't it up to the application to handle memory unavailability gracefully?
                The system should start by messaging processes and clearing up cached pages. At least, none of the multi-user systems I have seen so far do this, which means that using something like mmap to map huge files partially into memory becomes pointless: the pages never get unmapped again even if you read the data only once. The only workaround is to manually close the mmap and open it again at an offset. If the system cleared cached pages, it could automatically unload the least-recently-used ones, making more space for other stuff. I'm not 100% sure whether this is the Linux kernel messing up or sysadmins misconfiguring their systems, but it is bloody annoying if you deal with bigger-than-RAM data files.
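                For what it's worth, here is a minimal sketch of the windowed-mmap workaround described above (mine, not from the post), with posix_fadvise(POSIX_FADV_DONTNEED) added as one way to hint that ranges you are done with can leave the page cache; the file name and the 256 MiB window size are made-up examples.

                #include <fcntl.h>
                #include <stdint.h>
                #include <stdio.h>
                #include <sys/mman.h>
                #include <sys/stat.h>
                #include <unistd.h>

                int main(void)
                {
                    const char  *path   = "bigfile.bin";      /* placeholder path */
                    const size_t window = (size_t)256 << 20;  /* 256 MiB per window */

                    int fd = open(path, O_RDONLY);
                    if (fd < 0) { perror("open"); return 1; }

                    struct stat st;
                    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

                    uint64_t sum = 0;
                    for (off_t off = 0; off < st.st_size; off += window) {
                        size_t len = (size_t)(st.st_size - off) < window
                                         ? (size_t)(st.st_size - off) : window;

                        unsigned char *p = mmap(NULL, len, PROT_READ, MAP_PRIVATE, fd, off);
                        if (p == MAP_FAILED) { perror("mmap"); return 1; }

                        for (size_t i = 0; i < len; i++)      /* read the data once */
                            sum += p[i];

                        munmap(p, len);
                        /* Hint that the page cache for this range can be dropped, so
                         * old windows do not pile up and add to memory pressure. */
                        posix_fadvise(fd, off, len, POSIX_FADV_DONTNEED);
                    }

                    printf("checksum: %llu\n", (unsigned long long)sum);
                    close(fd);
                    return 0;
                }

                Whether the kernel honors the hint is up to it, but it at least keeps already-read windows from lingering in the cache.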



                • #28
                  I find OpenBSD does great here. The default ulimit of 512M is pretty strict, so resource hogs like Firefox or Iridium simply crash and release all their memory back to the OS. Everything else keeps running correctly and smoothly.

                  Is there really a problem here? If a hypertext viewer uses more than half a gig of RAM, it is broken and should be stopped.

                  In all honesty, Linux has never been fantastic with memory. In the RHEL 4 / Fedora Core 4 days, it was quite common for a laptop to have only 512M of RAM, and Fedora Core 4 ran terribly on it. Then Fedora Core 5's installer didn't even work with 512M; just uncompressing the package data made it run out of memory and abort.

                  The actual trick was to avoid the GUI for installing or day-to-day use (GNOME 2 was bloated, GNOME 3 is... unacceptable), and that trick doesn't seem to have changed since then. Wayland has reduced the elegance of window managers, but I don't think the "work light" approach will ever change.
                  Last edited by kpedersen; 08-06-2019, 03:36 PM.



                  • #29
                    Originally posted by dimko:
                    He switches off swap, the FIRST LINE OF DEFENCE against low memory situations, and THEN complains that shit doesn't work.
                    Well done...
                    Umm, no. He is complaining that the situation gets out of hand.
                    It's a technical analysis of the behavior, not a complaint that he cannot enable swap.

                    The kernel stalls, and he is complaining about those stalls once there is no more memory.
                    There should be an out-of-memory kill instead of a pressure stall.

                    To me this seems like a behavioral "bug" or a configuration issue.
                    Maybe what he is looking for is to turn off Linux's overcommit behavior:

                    echo 2 > /proc/sys/vm/overcommit_memory

                    It will probably make the system a bit slower, though, and a bit heavier on memory, as Linux will account for every allocation up front instead of lazily overcommitting them.
                    There are several other tunables. I wonder whether the default mode-0 heuristic should better detect runaway situations instead of stalling to bits.
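                    As a reference for what mode 2 actually enforces (a small sketch of mine, not from the post): the kernel derives CommitLimit from swap plus vm.overcommit_ratio percent of RAM (or vm.overcommit_kbytes), and new allocations start failing once Committed_AS would exceed it. This just prints the two figures from /proc/meminfo.

                    #include <stdio.h>
                    #include <string.h>

                    int main(void)
                    {
                        FILE *f = fopen("/proc/meminfo", "r");
                        if (f == NULL) { perror("fopen"); return 1; }

                        char line[256];
                        while (fgets(line, sizeof line, f) != NULL) {
                            /* CommitLimit is the ceiling mode 2 enforces;
                             * Committed_AS is how much is already promised. */
                            if (strncmp(line, "CommitLimit:", 12) == 0 ||
                                strncmp(line, "Committed_AS:", 13) == 0)
                                fputs(line, stdout);
                        }

                        fclose(f);
                        return 0;
                    }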



                    • #30
                      Originally posted by kpedersen:
                      I find OpenBSD does great here. The default ulimit of 512M is pretty strict, so resource hogs like Firefox or Iridium simply crash and release all their memory back to the OS. Everything else keeps running correctly and smoothly.

                      Is there really a problem here? If a hypertext viewer uses more than half a gig of RAM, it is broken and should be stopped.
                      ulimits are tunables; Linux could do exactly the same. There is no problem with setting resource limits for programs and process groups.
                      You could also stop overcommitting memory. The question at hand is really why the kernel pressure-stalls on heuristic overcommit instead of killing applications as you suggest.

                      It seems the heuristic overcommit never reaches the "this is impossible" scenario and starts killing things off. Instead, it frags itself to bits.
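                      As a concrete example of such per-process limits (a sketch of mine, not from the post; OpenBSD's default is a data-size ulimit, approximated here with RLIMIT_AS on Linux, and the 512 MiB / 1 GiB figures are just illustrative), a capped process simply gets a failed allocation instead of dragging the whole machine into memory pressure:

                      #include <stdio.h>
                      #include <stdlib.h>
                      #include <sys/resource.h>

                      int main(void)
                      {
                          struct rlimit rl = {
                              .rlim_cur = (rlim_t)512 * 1024 * 1024,  /* soft limit: 512 MiB */
                              .rlim_max = (rlim_t)512 * 1024 * 1024,  /* hard limit: 512 MiB */
                          };
                          if (setrlimit(RLIMIT_AS, &rl) != 0) {       /* cap our address space */
                              perror("setrlimit");
                              return 1;
                          }

                          void *p = malloc((size_t)1 << 30);          /* try to grab 1 GiB */
                          if (p == NULL) {
                              printf("1 GiB malloc refused: the limit contained the hog\n");
                          } else {
                              printf("malloc unexpectedly succeeded\n");
                              free(p);
                          }
                          return 0;
                      }

                      The same cap can be applied externally with the shell's ulimit -v or a cgroup memory limit, no code changes needed.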

