Yes, Linux Does Bad In Low RAM / Memory Pressure Situations On The Desktop


  • Originally posted by gamerk2 View Post

    Speaking as a SW Engineer, we pretty much assume the OS will allocate a chunk of memory when we request it. I've never seen a program actually handle the case where it doesn't get a block of RAM, since it's basically assumed the OS will crash at that point. It's the job of the OS to find a memory block, and there's really nothing developers can do if one can't be provided.
    As a scientific programmer I have seen it happen in situations where artificial memory limits are imposed by batch systems. In that case C++'s operator new throws std::bad_alloc, and C's malloc() returns a null pointer. Of course no one is going to write error handling for every string allocation, but for certain large, predictable allocations it is possible.
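    Both failure modes described above can be sketched in a few lines. This is a minimal illustration of how code *could* handle an allocation failure, not what any particular program does; the function names are made up for the example:

```cpp
#include <cstddef>
#include <cstdlib>
#include <new>

// C-style allocation: malloc() reports failure by returning NULL.
bool try_malloc(std::size_t bytes) {
    void *p = std::malloc(bytes);
    if (p == nullptr)
        return false;   // allocation failed; the caller can degrade gracefully
    std::free(p);
    return true;
}

// C++-style allocation: operator new reports failure by throwing
// std::bad_alloc (unless the nothrow variant is used).
bool try_new(std::size_t count) {
    try {
        char *p = new char[count];
        delete[] p;
        return true;
    } catch (const std::bad_alloc &) {
        return false;   // handle OOM for this one critical allocation
    }
}
```

    Note that with Linux's default overcommit, malloc() may hand back a pointer and the process only dies later when the pages are touched, which is exactly why these error paths rarely fire in practice.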

    Comment


    • Create the file /etc/sysctl.d/local.conf with:
      vm.overcommit_memory=2
      vm.overcommit_ratio=100
      Reboot, and Linux memory management works fine without swap. Be sure to disable CONFIG_NUMA and HMM_MIRROR, and use TRANSPARENT_HUGEPAGE_MADVISE, to fix the current mess in the 5.x kernels made by VMware, Red Hat, NVIDIA, Oracle and Facebook. 8-16 GB of RAM is enough for a gaming computer to run this way.
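      For what it's worth, a reboot isn't strictly required. Assuming a standard systemd/sysctl setup, the settings can be applied and verified immediately (a sketch, run as root):

```shell
# Load all files under /etc/sysctl.d/, including the new local.conf
sysctl --system

# Verify the strict-overcommit settings took effect
cat /proc/sys/vm/overcommit_memory   # expect: 2
cat /proc/sys/vm/overcommit_ratio    # expect: 100
```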


      Last edited by debiangamer; 19 August 2019, 05:48 AM.

      Comment


      • Originally posted by debiangamer View Post
        Create the file /etc/sysctl.d/local.conf :
        vm.overcommit_memory=2
        vm.overcommit_ratio=100
        Reboot, and Linux memory management works fine without swap. Be sure to disable HMM_MIRROR and use TRANSPARENT_HUGEPAGE_MADVISE to fix the current mess in the 5.x kernels made by VMware, Red Hat and NVIDIA. 8-16 GB of RAM is enough for a gaming computer to run this way.

        The return of debianxfce!!!

        Comment


        • Originally posted by loganj View Post
          Well, if Windows is running out of RAM it will start to close applications. Of course you'll be asked to close some applications, but if you don't respond then Windows will make the choice for you.
          But if you have swap on (I forgot the name of it) then you'll be safe, though you'll still have a lot of HDD writes.
          Windows starts using "swap" when it runs low on actual memory. If you have manually disabled its virtual memory and a program then runs out of RAM, that program gets closed with an error message; Windows itself stays up nicely. This is most often seen with resource-hungry video games.

          Comment


          • Originally posted by skeevy420 View Post

            I consider it user error and not a deficiency of the OS. The user pushing something too hard isn't necessarily a computer problem, for that matter. No matter how many safeguards are in place, they don't help if a user pushes something beyond its limits. In this case they removed a safeguard and pushed the system beyond its limit.

            How is the kernel supposed to know whether it needs to halt Plasma, Firefox, Mplayer, LibreOffice, KSP, Kate, Yakuake, makepkg or whatever else to keep the system usable? All it can do is juggle things with what little resources it has available.

            IMHO, this is really a problem that should be solved by a daemon that a user can configure to kill/halt/suspend-to-disk programs in a specific order, because the kernel can't read my mind to know what I consider the more important task. It would also need a blacklist of things to never kill, like the actual desktop environment.
            Maybe the kernel could be told about pages which contain irreplaceable user data, and dump those to a core file in a predictable location for recovery.

            Comment


            • Originally posted by tildearrow View Post

              The return of debianxfce!!!
              Well fsck me running.

              Comment


              • Originally posted by microcode View Post

                Maybe the kernel could be told about pages which contain irreplaceable user data, and dump those to a core file in a predictable location for recovery.
                What about automatic recovery?

                Comment


                • This doesn't make any sense. If you run out of memory and don't have a swap file, I'm surprised the system runs at all. I sincerely don't understand the complaint.

                  For goodness' sake, just create a swap file and you'll be fine. Of course it won't run as fast as it would with more memory, but I know from many years of experience with low-memory systems in the old days that the performance is more than acceptable.
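                  For reference, a typical Linux swap-file setup looks like the following. The size is illustrative; run as root, and note that on btrfs or with older kernels fallocate may need to be replaced with the dd variant shown in the comment:

```shell
fallocate -l 4G /swapfile   # reserve 4 GiB (or: dd if=/dev/zero of=/swapfile bs=1M count=4096)
chmod 600 /swapfile         # swap files must not be world-readable
mkswap /swapfile            # write the swap signature
swapon /swapfile            # enable it immediately
echo '/swapfile none swap defaults 0 0' >> /etc/fstab   # persist across reboots
```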

                  Comment


                  • Originally posted by andreano View Post
                    Has nobody mentioned earlyoom yet?

                    Edit: My phone browser's page search is broken.
                    A few have. Based on my experience, it is the perfect solution to the original complaint. If you hit OOM on a desktop, a user-space solution is smarter, and that's what earlyoom is. The best hope inside the kernel is the memory-pressure code, and even its authors expect a user-space solution to work best. There is a long and rich history of good technical analysis of the OOM killer by very smart people, and to my mind it's a bit silly to race in with "why don't you try this" solutions before reading that history.

                    Meanwhile, if you actually have a problem, try earlyoom. It's mature and it seems to work well. Any future upstream solution is still going to need a user-space component; we can already see the limits of a pure kernel solution. Being pragmatic, why wait when you can have a working solution now, if you are a desktop user running into low memory? Set up a VM, install earlyoom (I compiled it from git, which is trivial, but it's also packaged) and try to kill your VM. It's fun.
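                    To reproduce the experiment described above, something along these lines should work. The package name and the -m/-s flags (minimum available memory and swap percentages) reflect earlyoom at the time of writing; check earlyoom --help for your version:

```shell
# Install from your distro's repos where packaged, e.g. on Debian/Ubuntu:
apt install earlyoom

# Or build from source (trivial):
git clone https://github.com/rfjakob/earlyoom.git
cd earlyoom && make

# Run in the foreground, killing the largest process when less than
# 10% of memory and 10% of swap remain available:
./earlyoom -m 10 -s 10
```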

                    Comment


                    • Originally posted by timofonic View Post
                      What about automatic recovery?
                      That's basically just swap, though I guess you could make a case that it's better than swap. It'd be cool to deschedule low-priority processes and dump them to disk to be rescheduled later. You could think of it as really aggressive scheduling. ;-)
                      I think it would also be good to have some application-specific mechanism, though: the critical data in a LibreOffice instance are large but compress exceedingly well, yet there's also lots of crap in that process's memory that doesn't really need to be saved. LibreOffice already has a lot of recovery code; what's a bit more?
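                      For what it's worth, the "deschedule now, resume later" half of this idea can already be approximated with job-control signals: a SIGSTOP'd process uses no CPU and its pages become prime candidates for swap-out, while SIGCONT brings it back. A hedged sketch (the dump-to-disk part is the piece that doesn't exist):

```cpp
#include <csignal>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

// Freeze a child process so it consumes no CPU, thaw it, then reap it.
// Returns true if the child was actually observed in the stopped state.
bool freeze_thaw_demo() {
    pid_t pid = fork();
    if (pid < 0)
        return false;              // fork failed
    if (pid == 0) {                // child: a stand-in "low-priority process"
        pause();                   // sleep until a signal arrives
        _exit(0);
    }
    kill(pid, SIGSTOP);            // "deschedule": the kernel stops the process
    int status = 0;
    waitpid(pid, &status, WUNTRACED);   // wait until the child is stopped
    bool stopped = WIFSTOPPED(status);
    kill(pid, SIGCONT);            // "reschedule" it later
    kill(pid, SIGTERM);            // and finally let it terminate
    waitpid(pid, &status, 0);      // reap the child
    return stopped;
}
```

                      This is exactly what tools can build on: stop a process under memory pressure, let the existing swap machinery page it out, and SIGCONT it when pressure subsides.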

                      Comment
