Yes, Linux Does Bad In Low RAM / Memory Pressure Situations On The Desktop


  • #21
    Originally posted by Awesomeness View Post
    You'd expect to not be able to close currently running applications? How would you expect to free up memory then?
    It obviously has to close something to free up memory, but in the case of a web browser it will start eating memory again as soon as it's freed, which defeats the purpose until the system realizes it should kill the browser.

    This isn't just a kernel problem. I don't have a swap partition at all, and it isn't a problem for me because I manage my system appropriately and run stuff within its limits.



    • #22
      Originally posted by anarki2 View Post
      But, but, Windows is a resource hog and everyone should just switch to Ubuntu and breathe new life into their Pentium 1s lying around!
      Alright troll, back under the bridge.

      He switches off swap, the FIRST LINE OF DEFENCE against low-memory situations, and THEN complains shit doesn't work.
      Well done...
      Last edited by tildearrow; 06 August 2019, 03:30 PM.



      • #23
        Originally posted by Saverios View Post
        Okay, I'll bite. What should happen when you have no more memory and no swap file and an application asks for more memory? Isn't it up to the application to handle memory unavailability gracefully?
        While you can tune it, Linux by default does lazy, overcommitted memory allocation. It will happily hand you your malloc'd pointer while it tries to sort things out on the actual page faults...
        If that fails, it will probably shoot itself to bits, or shoot whatever program it thinks is the offender.
        In reality, with normal overcommit behavior, applications have more or less zero chance of resolving a memory pressure situation unless they probe the OS for more information.

        On the other hand, not overcommitting memory will result in big fat crashes, as applications are usually worthless at handling memory pressure situations gracefully.

        Out of memory is a shitty situation whichever way you look at it.
        The big question is whether the default heuristic overcommit should land in a "this is bull" conclusion and kill the offending program instead of fragging itself to bits.
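
        To make the lazy part concrete, here is a minimal C sketch (not from the thread; the sizes are illustrative) of what heuristic overcommit looks like from an application's point of view:

        #include <stdio.h>
        #include <stdlib.h>

        #define CHUNK  ((size_t)1024 * 1024 * 1024)   /* 1 GiB per allocation */
        #define CHUNKS 64                             /* 64 GiB total, illustrative */

        int main(void)
        {
            char *blocks[CHUNKS];

            /* Under the default heuristic overcommit (vm.overcommit_memory = 0)
             * these allocations normally all succeed: the kernel only promises
             * the memory, no physical pages are committed yet. */
            for (int i = 0; i < CHUNKS; i++) {
                blocks[i] = malloc(CHUNK);
                if (!blocks[i]) {
                    printf("allocation %d refused up front\n", i);
                    return 1;
                }
            }
            puts("64 GiB 'allocated' without owning 64 GiB of RAM");

            /* Touching the pages forces the kernel to make good on the promise;
             * on a low-memory machine this is where thrashing or the OOM killer
             * shows up, long after malloc() reported success. */
            for (int i = 0; i < CHUNKS; i++)
                for (size_t off = 0; off < CHUNK; off += 4096)
                    blocks[i][off] = 1;

            for (int i = 0; i < CHUNKS; i++)
                free(blocks[i]);
            return 0;
        }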
        Last edited by milkylainen; 06 August 2019, 03:37 PM.



        • #24
          Originally posted by skerit View Post

          That's optimistic. Anyway, what's a few more months? It's been like this for as long as I can remember.
          OS X is king when it comes to memory performance with small RAM footprints. It was designed for the desktop first.



          • #25
            Originally posted by dimko View Post

            Alright troll, back under the bridge.

            He switches off swap, the FIRST LINE OF DEFENCE against low-memory situations, and THEN complains shit doesn't work.
            Well done...
            Sadly, this happened to me even with swap on.



            • #26
              Originally posted by Saverios View Post
              Okay, I'll bite. What should happen when you have no more memory and no swap file and an application asks for more memory? Isn't it up to the application to handle memory unavailability gracefully?
              The system should start by messaging processes and clearing out cached pages. At least, none of the multi-user systems I have seen so far do this, which means that using something like mmap to map huge files partially into memory becomes pointless, because the pages never get unmapped again even if you read the data only once. The only workaround is to manually close the mapping and open it again at an offset. If the system cleared cached pages, it could automatically evict the least-used ones and make room for other stuff. I'm not 100% sure whether this is the Linux kernel messing up or sysadmins misconfiguring their systems, but it is bloody annoying if you deal with bigger-than-RAM data files.
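
              For what it's worth, an application can hand that hint to the kernel itself; a minimal sketch (the file path is hypothetical) that reads a large file once through mmap() and then asks the kernel to drop the cached pages:

              #define _XOPEN_SOURCE 600   /* for posix_fadvise */
              #include <fcntl.h>
              #include <stdio.h>
              #include <sys/mman.h>
              #include <sys/stat.h>
              #include <unistd.h>

              int main(void)
              {
                  int fd = open("/data/huge.bin", O_RDONLY);   /* hypothetical path */
                  if (fd < 0) { perror("open"); return 1; }

                  struct stat st;
                  if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

                  unsigned char *map = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
                  if (map == MAP_FAILED) { perror("mmap"); return 1; }

                  /* Hint that the access pattern is one sequential pass. */
                  madvise(map, st.st_size, MADV_SEQUENTIAL);

                  unsigned long sum = 0;
                  for (off_t i = 0; i < st.st_size; i++)
                      sum += map[i];

                  munmap(map, st.st_size);

                  /* Ask the kernel to drop the now-useless cached pages for this
                   * file instead of letting them crowd out everything else. */
                  posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED);

                  printf("checksum: %lu\n", sum);
                  close(fd);
                  return 0;
              }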



              • #27
                I find OpenBSD does great here. The default ulimit of 512M is pretty strict so resource hogs like Firefox or Iridium simply crash and free all the memory back to the OS. Everything else keeps running correctly and smoothly.

                Is there a problem here? If a hypertext viewer uses more than half a gig of RAM, it is broken and should be stopped.

                In all honesty, Linux has never been fantastic with memory. In the RHEL4 / Fedora Core 4 days, it was quite common for a laptop to only have 512M of RAM; Fedora Core 4 ran terribly. Then Fedora Core 5's installer didn't even work on 512M, just uncompressing the package data caused it to run out and abort.

                The actual trick was to avoid the GUI for installation and day-to-day use (Gnome 2 was bloated, Gnome 3 is... unacceptable), and that trick seems not to have changed since then. Wayland has reduced the elegance of window managers, but I don't think "working light" will ever change.
                Last edited by kpedersen; 06 August 2019, 03:36 PM.



                • #28
                  Originally posted by dimko View Post
                  He switches off swap, FIRST LINE OF DEFENCE against low memory situation and THEN complains shit doesn't work.
                  Well done...
                  Umm, no. He is complaining that the situation gets out of hand.
                  It's a technical analysis of the situation, not a complaint that he cannot enable swap.

                  The kernel stalls, and he is complaining about those stalls when there is no memory left.
                  There should be an out-of-memory kill instead of a pressure stall.

                  To me this seems like a behavior "bug" or a configuration issue.
                  Maybe what he is looking for is to turn off Linux's overcommit behavior:

                  echo 2 > /proc/sys/vm/overcommit_memory

                  It will probably make the system a bit slower, though, and a bit heavier on memory, since Linux will then account for every allocation up front instead of lazily overcommitting.
                  There are several other tunables. I wonder whether the default mode-0 behavior should be better at detecting runaway situations instead of stalling itself to bits.
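
                  For reference, the budget that strict accounting enforces is visible in /proc/meminfo; a small sketch (assuming mode 2 has been set as above) that prints the commit limit and what has been committed so far:

                  /* With vm.overcommit_memory = 2 the kernel refuses any allocation
                   * that would push Committed_AS past CommitLimit, where CommitLimit
                   * is swap plus a configurable share of RAM (vm.overcommit_ratio).
                   * This just prints both figures so the remaining budget is visible. */
                  #include <stdio.h>
                  #include <string.h>

                  int main(void)
                  {
                      FILE *f = fopen("/proc/meminfo", "r");
                      if (!f) { perror("fopen"); return 1; }

                      char line[256];
                      while (fgets(line, sizeof line, f)) {
                          if (strncmp(line, "CommitLimit:", 12) == 0 ||
                              strncmp(line, "Committed_AS:", 13) == 0)
                              fputs(line, stdout);
                      }

                      fclose(f);
                      return 0;
                  }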



                  • #29
                    Originally posted by kpedersen View Post
                    I find OpenBSD does great here. The default ulimit of 512M is pretty strict so resource hogs like Firefox or Iridium simply crash and free all the memory back to the OS. Everything else keeps running correctly and smoothly.

                    Is there a problem here? If a hypertext viewer uses more than half a gig of RAM, it is broken and should be stopped
                    ulimits are tunables; Linux could do exactly the same. There is no problem with setting resource limits for programs and process groups.
                    You could also stop overcommitting memory. The question at hand is probably why the kernel pressure-stalls on heuristic overcommit instead of killing the applications, as you suggest.

                    It seems the heuristic overcommit never lands in the "this is impossible" scenario and starts killing things off. Instead it frags itself to bits.
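
                    For illustration, the same kind of per-process cap can be set on Linux with the shell's ulimit -v or programmatically; a minimal sketch using setrlimit(), reusing the 512M figure from the post above:

                    /* Sketch: cap this process's address space at 512 MiB, roughly what
                     * the OpenBSD default above does. Once the cap is hit, further
                     * allocations fail with NULL/ENOMEM in this process instead of the
                     * whole system grinding to a halt. */
                    #include <stdio.h>
                    #include <stdlib.h>
                    #include <sys/resource.h>

                    int main(void)
                    {
                        struct rlimit lim = {
                            .rlim_cur = 512UL * 1024 * 1024,   /* soft limit: 512 MiB */
                            .rlim_max = 512UL * 1024 * 1024,   /* hard limit: 512 MiB */
                        };
                        if (setrlimit(RLIMIT_AS, &lim) != 0) { perror("setrlimit"); return 1; }

                        /* Anything past the cap is refused up front. */
                        void *p = malloc((size_t)1024 * 1024 * 1024);   /* 1 GiB, over the cap */
                        if (p == NULL)
                            puts("1 GiB allocation refused under the 512 MiB cap");
                        else
                            free(p);
                        return 0;
                    }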



                    • #30
                      Originally posted by slavko321 View Post

                      The problem is that it doesn't work correctly. With no swap, when running out of RAM the system THRASHES the hard drive with IO for NO apparent reason, in what is obviously a bug.

                      What should happen is:
                      - swap, if enabled and available
                      - call OOM immediately

                      The disk thrashing behaviour makes NO sense and I really wonder what is being read/written - there is NO swap enabled.
                      Executables and shared libraries are demand-paged into memory, and those pages can be evicted (dropped and later re-read from disk) even with no swap.

                      I think that’s part of the reason this is being considered a bug. The kernel is dumping those pages and likely immediately reading them back in when execution continues.
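
                      As an aside (not something suggested in the thread), a process that really cannot afford to have its mapped code and data evicted and re-read like that can pin itself in RAM; a minimal sketch using mlockall(), assuming the process has CAP_IPC_LOCK or a large enough RLIMIT_MEMLOCK:

                      /* Sketch: lock all current and future mappings of this process in
                       * RAM so its executable and library pages cannot be evicted and
                       * faulted back in from disk under memory pressure. Locked pages are
                       * unreclaimable, so this should be used sparingly. */
                      #include <stdio.h>
                      #include <sys/mman.h>

                      int main(void)
                      {
                          if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
                              perror("mlockall");
                              return 1;
                          }
                          puts("all mappings locked in RAM");
                          /* ... the real work of the program would go here ... */
                          return 0;
                      }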
                      Last edited by nivedita; 06 August 2019, 03:42 PM.

