"le9" Strives To Make Linux Very Usable On Systems With Small Amounts Of RAM


  • #31
    Originally posted by tornado99 View Post
    i.e. have the XanMod team set the sysctl knobs to sensible values?
Yes, they set vm.clean_low_kbytes to the equivalent of 512 MiB by default. This is a good value for 8 GB of RAM.
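For context, here is a sketch of what pinning that default looks like as a sysctl fragment. The knob name and its kbytes unit follow the le9 patchset's documentation as I understand it; treat both as assumptions to verify against your patched kernel, since the knob does not exist in mainline kernels.

```shell
# Hypothetical /etc/sysctl.d/99-le9.conf fragment (le9-patched kernels only).
# Below this amount of clean file-backed memory, reclaim starts protecting
# the page cache to avoid thrashing. 524288 KiB = 512 MiB.
vm.clean_low_kbytes = 524288
```

Drop the file in place and reload with `sudo sysctl --system`.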

    Comment


    • #32
      Originally posted by HyperDrive View Post
      I fail to see how this hack is better than setting swappiness to 200 (which biases reclaim heavily towards swap instead of page cache eviction), page-cluster to 0 (no read-ahead), and using zram with zstd compression.

      Put down the computer and back away. No. Really. You don't know what you're doing if you're doing either of these things.

I'm not going to go into how all of that is just plain wrong. Instead, I'll link to an article by one of the kernel's memory and cgroup engineers. Most people don't have a clue what virtual memory is about; the confusion goes back to the bad old days when virtual memory/swap space was first proposed. In particular, RAM as swap is particularly stupid.

      https://chrisdown.name/2018/01/02/in...e-of-swap.html

      Comment


      • #33
        I volunteer my 512MB laptop as a test subject.

        Comment


        • #34
          Originally posted by stormcrow View Post
          In particular, RAM as swap is particularly stupid.
He's using RAM as compressed swap (zram), which is one of the best ways to speed up the system. This is enabled in Fedora by default, for example.
Tuning swappiness to 200 also has its effect, and this is stated in the article you've linked.
Sorry, but it seems it's you who doesn't understand the topic. Reread the article you linked.
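For readers following along, a minimal sketch of the setup being described. The device name, size, and swap priority here are illustrative choices, not values from the thread; `zramctl` comes from util-linux, and these commands require root.

```shell
# Create a zstd-compressed swap device backed by RAM and bias reclaim toward it.
sudo modprobe zram num_devices=1
sudo zramctl /dev/zram0 --algorithm zstd --size 4G
sudo mkswap /dev/zram0
sudo swapon --priority 100 /dev/zram0

# Kernels >= 5.8 accept vm.swappiness up to 200; values above 100 tell
# reclaim that swapping (cheap with zram) is preferable to dropping page cache.
sudo sysctl vm.swappiness=200
sudo sysctl vm.page-cluster=0   # no swap read-ahead; sensible for zram
```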

          Comment


          • #35
            This doesn't look like a hack to me. It seems to be a fairly simple and effective change that actually eliminates the root cause of these stalls. I hope it makes mainline.

If I remember correctly, the old BSDs also did something like that: they originally had a fixed-size buffer cache allocation!

            Comment


            • #36
              Since when was low latency better than reaping a process?

              According to the readme on GitHub "Losing one of many tabs is a better behaviour for the user than an unresponsive system."

I disagree. This essentially means that it is better to lose data than to have to wait for it.

              http://www.dirtcellar.net

              Comment


              • #37
                Originally posted by waxhead View Post
                Since when was low latency better than reaping a process?

                According to the readme on GitHub "Losing one of many tabs is a better behaviour for the user than an unresponsive system."

I disagree. This essentially means that it is better to lose data than to have to wait for it.
                You are not going to have much use for that tab anyway when each mouse move takes a few hours (if it ever comes back).

                Comment


                • #38
                  Originally posted by F.Ultra View Post

                  You are not going to have much use for that tab anyway when each mouse move takes a few hours (if it ever comes back).
Really? It all depends on the content of that tab/program. Not everything happens inside a browser, you know; sometimes you need to save things as well.

                  http://www.dirtcellar.net

                  Comment


                  • #39
                    >37 tabs

1141 tabs in Brave right now, 16 GB of RAM :P

Still, I'll be interested to see whether this patchset helps with the 15-minute hangs I get when I occasionally step over the 130 MB line by accident

                    Comment


                    • #40
                      Originally posted by waxhead View Post
                      Since when was low latency better than reaping a process?

                      According to the readme on GitHub "Losing one of many tabs is a better behaviour for the user than an unresponsive system."

I disagree. This essentially means that it is better to lose data than to have to wait for it.
Funny enough, that's exactly how I have earlyoom configured. If the system hits 90% RAM utilization (just below where things started to thrash), it goes in assuming it's some jerk website running something like React or Vue gobbling up more than its fair share of RAM, and kills the relevant content process so I can restart it if/when I need it again, before it has leaked its way too far up.

                      Code:
                      --prefer '(^|/)firefox .*-contentproc( |$)'
                      Last edited by ssokolow; 14 July 2021, 11:33 PM.
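The `--prefer` pattern above is an extended regex; assuming earlyoom matches it against the process's full command line (check your earlyoom version's docs), here is a quick way to sanity-check it. The sample command-line strings are made up for illustration.

```shell
# Sanity-check the earlyoom --prefer regex against sample command lines.
pattern='(^|/)firefox .*-contentproc( |$)'

matches() { printf '%s' "$1" | grep -Eq "$pattern"; }

matches '/usr/lib/firefox/firefox -contentproc -childID 7' && echo "content process: preferred"
matches '/usr/lib/firefox/firefox --new-window' || echo "main process: not preferred"
```

The `(^|/)` anchor keeps the pattern from matching e.g. a `not-firefox` binary, and `( |$)` ensures `-contentproc` is a whole argument.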

                      Comment
