Yes, Linux Does Bad In Low RAM / Memory Pressure Situations On The Desktop


  • Originally posted by elatllat View Post
    [...]
    Maybe you should try before you recommend.
    Of course I tried.
    It works very well for me. Have a look.
    https://drive.google.com/file/d/1Tov...ew?usp=sharing (I'm not sure if Google's servers have processed the video yet; you can always download it.)

    Maybe it's time to update your Linux to a much newer (more modern) version.

    Comment


    • Originally posted by latalante View Post
      Of course I tried.
      It works very well for me. Have a look.
      https://drive.google.com/file/d/1Tov...ew?usp=sharing (I'm not sure if Google's servers have processed the video yet; you can always download it.)

      Maybe it's time to update your Linux to a much newer (more modern) version.
      One of the many differences between Ubuntu and Arch must be to blame.

      "cgroups-v2 first appeared in Linux kernel 4.5" [src]

      I only use rolling distributions occasionally in VMs; otherwise I stick to LTSs:

      Code:
      > uname -r
      4.15.0-55-generic
      
      > lsb_release -r
      Release:    18.04
      
      > systemctl --version | head -n 1
      systemd 237
      
      > mount | grep -P "cgroup2|memory"
      cgroup on /sys/fs/cgroup/unified type cgroup2 (rw,nosuid,nodev,noexec,relatime,nsdelegate)
      cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
      I wonder if Ubuntu is using a strange mix of v1 and v2...

      Comment


      • Originally posted by elatllat View Post

        One of the many differences between Ubuntu and Arch must be to blame.

        "cgroups-v2 first appeared in Linux kernel 4.5" [src]
        What I have shown works under Arch Linux.
        The kernel is booted with the parameter:
        systemd.unified_cgroup_hierarchy=1
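        For anyone who wants to check which hierarchy their own system actually ended up on, a couple of quick checks (nothing here is Arch-specific; the paths are the standard systemd mount points):

        Code:
        # "cgroup2fs" means the pure v2 (unified) hierarchy,
        # "tmpfs" means v1 or systemd's hybrid layout
        > stat -fc %T /sys/fs/cgroup/

        # only present on a v2 root; lists the controllers available there
        > cat /sys/fs/cgroup/cgroup.controllers

        # shows whether the unified_cgroup_hierarchy parameter was actually used this boot
        > cat /proc/cmdline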

        Comment


        • Originally posted by skeevy420 View Post
          When the kernel is in an OOM situation and is also being told that everything is equal, that can make it more difficult for it to decide what to kill and what not to kill.
          Is it? A naive approach that solves probably 99% of the problems would be to just kill the process consuming the most memory. A slightly less naive but also more complicated approach could be to measure the speed at which a process grows and kill the one that grew the most in the last second. That should be pretty accurate at killing off "leaks".
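          A minimal sketch of that naive approach, using nothing but standard procps tools (it just picks the process with the largest resident set and ignores growth rate, oom_score_adj, cgroups and everything else the real OOM killer weighs):

          Code:
          #!/bin/sh
          # find the PID with the largest RSS and kill it
          victim=$(ps -eo pid=,rss=,comm= --sort=-rss | head -n 1)
          echo "Killing: $victim"
          kill -9 $(echo "$victim" | awk '{print $1}')
          The "grew the most in the last second" variant would sample RSS twice, one second apart, and kill the process with the largest delta instead of the largest absolute size.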

          Comment


          • Originally posted by aht0 View Post

            Bullshit, BSD (FreeBSD at least) may kill one or a few memory-hogging programs and the system may become sluggish, but running out of RAM does not make it crash. Been there, tried that.
            Leave it to an apologist to try and find some angle tho..
            Ok, so it was added somewhere after 2015 then; quite recent, I'd say. Either way, back in the day they said that OOM killing was wrong and crashing was the right thing to do.

            It would have been a better post if you had said when it was added instead of being a smarty-pants about it.

            Comment


            • Originally posted by skeevy420 View Post
              I also think that a big part of the problem is how damn near every process runs with 0 and there really isn't any good solution for that; like putting all programs into their own groups and using group policies, aliasing programs to append a nice/renice function to them, and other similar fustercluck solutions.
              Yeah, Linux distros are still in the server mindset of not giving priorities or niceness to processes that should really gtfo and stop lagging the system, but in this case that is mostly tangential.

              The OOM killer already takes RAM usage into consideration, so even if all processes have the same priority the bigger one will get axed first.
              This answer explains the actions taken by the kernel when an OOM situation is encountered based on the value of sysctl vm.overcommit_memory. When overcommit_memory is set to 0 or 1, overcommit is [...]
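              For reference, the knobs being discussed can be inspected directly; the meanings in the comments are the documented ones, not recommendations:

              Code:
              # 0 = heuristic overcommit (default), 1 = always overcommit,
              # 2 = refuse allocations beyond swap + overcommit_ratio% of RAM
              > sysctl vm.overcommit_memory

              # per-process badness score the OOM killer ranks victims by
              > cat /proc/self/oom_score

              # user-tunable bias, -1000 (never kill) .. +1000 (kill first)
              > cat /proc/self/oom_score_adj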

              Comment


              • Originally posted by pomac View Post

                Ok, so it was added somewhere after 2015 then; quite recent, I'd say. Either way, back in the day they said that OOM killing was wrong and crashing was the right thing to do.

                It would have been a better post if you had said when it was added instead of being a smarty-pants about it.
                It might have been not "FreeBSD's problem" but a fault in its ZFS driver. And "smarty-pants"..? Then please don't put up idiotic comments, starting with putting all BSDs into one kettle. Each is a distinct operating system with ITS OWN kernel, unlike Linux distros, which all use a single kernel and differ only in what's put "around it". So it's likely that they all behave differently in any given scenario, because the forks happened decades ago and a lot has changed in each in the meantime. You just can't generalize like that, or it makes you look like a fanatical idiot. And I went "smarty-pants" based on that: such assumptions simply don't deserve any other response than one on the same level.

                Comment


                • Originally posted by aht0 View Post
                  It might have been not "FreeBSD's problem" but a fault in its ZFS driver.
                  Most likely the latter; FreeBSD has an OOM killer just like Linux, and their approach to "not enough RAM" is the same.

                  Comment


                  • The stalling problem has been going on for at least 8 years; I've noticed it for at least that long. It happens with non-ZFS systems as well.

                    I think it has something to do with the paging system, memory allocation, or perhaps I/O scheduling or process scheduling getting caught in some sort of lockup. If you can manage to get some control, killing a big process usually fixes the problem, but I doubt it is just a problem of processes not being killed; rather, the memory situation seems to trigger some other bug that causes thrashing and lockup.

                    The LED lights up solid, as mentioned, and the system becomes unresponsive and will remain so for hours.

                    This is not acceptable and should also be categorized as a security vulnerability: it is a DENIAL OF SERVICE vulnerability, since someone can bring down a system by deliberately causing this kind of memory pressure. This is a very serious problem, and the Linux kernel developers clearly do not care that Linux can be brought down this way.

                    A way to improve Linux would be to allow the default OOM behaviour to be augmented by letting an external process be notified of an OOM condition and decide which processes should be killed to free up memory. This could be configured to ask the user, or to use a list of processes to kill and/or a list of ones to be left alone. The monitor could use another API to know when the OOM system is satisfied that enough memory has been freed. Another feature would be memory priority: X and console programs would have high memory priority and get preferential access to being paged into memory so they can remain responsive, while other processes may be suspended while memory is freed. On a desktop machine killing Firefox usually suffices because it is the big memory consumer. X and other critical processes have to be left alone so there will still be a running machine (see the sketch below for one way this could look today).
                    Last edited by Neraxa; 11 August 2019, 02:10 PM.
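                    Userspace OOM handlers along these lines already exist (earlyoom, and more recently oomd); purely as a rough sketch of the idea, assuming a kernel with PSI support (4.20+, /proc/pressure/memory), polling instead of a real notification API, and a threshold and victim policy picked only for illustration:

                    Code:
                    #!/bin/sh
                    # toy monitor: watch memory pressure-stall info and kill the
                    # largest process when the system stays heavily stalled
                    THRESHOLD=40   # percent of time fully stalled, 10s average

                    while sleep 1; do
                        full=$(awk '/^full/ { sub("avg10=", "", $2); print int($2) }' /proc/pressure/memory)
                        if [ "$full" -ge "$THRESHOLD" ]; then
                            victim=$(ps -eo pid=,rss=,comm= --sort=-rss | head -n 1)
                            echo "memory pressure ${full}%, killing: $victim"
                            kill -9 $(echo "$victim" | awk '{print $1}')
                        fi
                    done
                    A real implementation would also honour a protected list (X and other critical processes) instead of blindly killing the biggest process, exactly as described above.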

                    Comment


                    • Many have suggested this is caused by disabling swap; however, I have seen it with gigabytes of swap enabled and most of the swap space unused. What brings it on is getting within 200 MB or so of running out of RAM. It's usually Firefox, and killing Firefox unlocks the system (it can take hours to actually get that done, considering the system is in a virtually locked-up state). The OOM killer is obviously not getting rid of Firefox itself, and it would not, since there are still gigabytes of swap free. So it kind of looks like the OOM killer is not even involved here, since there is plenty of swap space available. It looks like it could be a problem involving I/O scheduling, allocation, process scheduling or something; checking the kernel log afterwards (see below) would at least show whether the OOM killer ever fired.
                      Last edited by Neraxa; 11 August 2019, 08:10 PM.
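                      One way to confirm whether the kernel's OOM killer was involved at all during such a stall is to check the kernel log afterwards; when it does fire, it logs which task was killed:

                      Code:
                      # any OOM-killer activity shows up in the kernel log
                      > dmesg | grep -i -E "oom|out of memory"

                      # or, on a systemd machine, for the previous boot
                      > journalctl -k -b -1 | grep -i oom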

                      Comment
