Latest Slab Cgroup Memory Controller Patches Saving ~1GB RAM Per Host On Facebook Servers


  • #21
    Originally posted by mikus
    A note on browsers: my use case tends to be a bit weird. Under Chrome I usually open 5-6 different profiles for different gsuite|o365|other uses, each with its own plugins, settings, everything. Same with Firefox using the profile manager, though never both at once. I do this because I log into at least that many different company, org, and personal profiles every day, and it tends to drive memory usage up over uptime, which I measure in months. When I can killall -9 firefox|chrome|chromium|brave and instantly see some 40 GB of memory drop, I do consider it a memory leak, though it seems mostly related to the multi-profile use. It's consistent between browsers; virtual memory and used memory are getting harder to tell apart.
    Have you tried viewing about:performance in Firefox? It should show you the memory use per tab and add-on.



    • #22
      Originally posted by tuxd3v
      Why?
      That guy bought 64 GB of RAM exactly because he thought it would be great, but the kernel just keeps using more and more of it as cache.
      There is no knob to limit cache usage in Linux; there are some knobs spread around, but none of them works like a barrier (as in the 2.4 kernel series).
      There are knobs to control cache usage in Linux, but reducing cache usage would be idiotic, because cache doesn't "use" memory: when memory is needed, the cache is dropped. Without cache you would have to read from the hard drive again, which is slow.
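
      For what it's worth, the knobs in question can be poked like this (a minimal sketch; the values are illustrative, and vm.vfs_cache_pressure only biases dentry/inode reclaim rather than acting as a hard cap):

          # MemAvailable already counts reclaimable cache as usable memory
          grep -E 'MemFree|MemAvailable|^Cached' /proc/meminfo

          # Bias reclaim toward dropping dentry/inode caches sooner (default is 100)
          sudo sysctl vm.vfs_cache_pressure=200

          # One-shot: drop clean page cache plus dentries and inodes (for testing only)
          echo 3 | sudo tee /proc/sys/vm/drop_caches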



      • #23
        I've been using Linux on the desktop for 15+ years and never noticed any kind of memory issue. Maybe it's just because I don't run any resource gauges on my desktop.



        • #24
          Originally posted by mikus
          My laptop has 64 GB of RAM, and at times exhausting memory is still a problem.
          My laptop has 16 GiB and it's perfectly fine for a lot of apps. Typical run: KDE Plasma, Krusader, 2(3) browsers, 2-4 IDEs, Kate with a lot of files, Konsole with a lot of tabs, KTorrent, Telegram, Kontact (full PIM with a MariaDB server), MySQL Workbench, periodically Steam.

          Right now, with fewer apps (Krusader, Falkon, Kate, Konsole, KTorrent) — less than 2 GiB.

          I'm not using Electron apps, though, or Firefox on a constant basis.

          Originally posted by mikus
          I don't even know how folks use a PC with less than 16 GB of RAM with Linux these days.
          I used a laptop with 4 GiB not so long ago. It wasn't nice, but it was acceptable.

          Originally posted by mikus
          The curious thing is how windoze machines still typically have around 8 GB, and people find this acceptable, so what is Linux doing wrong?
          And Win10 on the very same laptop, without ANY useful apps running, uses almost 6 GiB. With some apps (game store clients, no games running) — more than 7 GiB.
          Last edited by mykolak; 22 June 2020, 08:42 AM.



          • #25
            Originally posted by pal666
            There are knobs to control cache usage in Linux, but reducing cache usage would be idiotic, because cache doesn't "use" memory: when memory is needed, the cache is dropped. Without cache you would have to read from the hard drive again, which is slow.
            The dropping of cache only happens periodically, via kernel threads. If an application requests 1 GB of RAM and you have, let's say, 15 GB, you will probably end up in swap, exactly because you allocate memory from the free pool.
            Yes, from time to time some cache is freed, but if you run an application that needs that memory right now, you are done, exactly because there is no kernel knob that works like a barrier, unless you go with cgroups, like Facebook did.
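
            For the record, the cgroup route mentioned above looks roughly like this with cgroup v2 (a sketch; the group name "webapp" and the limits are made up for illustration):

                # Create a cgroup and give it a hard memory ceiling (page cache included)
                sudo mkdir /sys/fs/cgroup/webapp
                echo 4G | sudo tee /sys/fs/cgroup/webapp/memory.max

                # memory.high starts throttling and reclaiming before the hard limit
                echo 3G | sudo tee /sys/fs/cgroup/webapp/memory.high

                # Move the current shell (and its children) into the group
                echo $$ | sudo tee /sys/fs/cgroup/webapp/cgroup.procs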



            • #26
              Originally posted by tuxd3v
              The dropping of cache only happens periodically, via kernel threads. If an application requests 1 GB of RAM and you have, let's say, 15 GB, you will probably end up in swap, exactly because you allocate memory from the free pool.
              Yes, from time to time some cache is freed, but if you run an application that needs that memory right now, you are done.
              Who told you that?



              • #27
                Originally posted by tuxd3v
                What is going wrong?
                Everything!

                Toolkits like GTK3/Qt4-5, and so on.

                Also, Linux doesn't provide a knob for limiting the amount of RAM that can be used as cache,
                so it will always be a pain, because the majority of RAM will be cache.
                Kernel 2.4 had a knob for that, but it was deleted in the 2.6 branch. I think we still miss it today.
                Unless you want to power down unused memory -- and you'd have to evacuate an entire bank to do that -- what's the point of leaving memory unused? If no app or kernel thread needs that memory, just unmap it but keep it around; if something does then use it, it will incur a soft page fault, which is a lot cheaper than actually pulling it in from SSD, much less disk. If an app wants more memory, it can get it very quickly.
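
                You can watch the difference between the two kinds of fault with GNU time (a quick sketch; substitute any command for ls, and the counts shown are illustrative):

                    # Soft/minor faults wire up a page without I/O; major faults hit storage
                    /usr/bin/time -v ls /usr > /dev/null

                    # The report includes lines like:
                    #   Major (requiring I/O) page faults: 0
                    #   Minor (reclaiming a frame) page faults: 412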



                • #28
                  Originally posted by mikus
                  I'm usually watching htop all day, every day, under kde|cinnamon|mate, so this is really nothing new. Usually I notice my desktop lagging and freaking out, and htop shows my memory pegged at some 60 GB, all used. I'll start killing things with some scripts: Firefox first (oh, there went 20-40 GB of memory), then LibreOffice (another 5 GB), then the stupid Electron apps (Slack, Signal, Telegram, Teams [yeah, like that]) for another 5 GB of RAM.

                  Even with my most basic apps, I don't typically see my system using less than 3.5 GB of RAM. I've not found a really good visual memory-usage explorer that shows the difference between virtual and actual memory, so it's always difficult to tell. I always know when the system starts freaking out visually, however.

                  Virtual vs. real seems a fine line. When I look, virtual doesn't seem bad, but once memory is generally consumed I notice the system freaking out. That's some 15 years watching different systems, the first decade or so on Ubuntu, the last several on Arch. Things just sort of run away. PulseAudio's pavucontrol for years used to grab a good 30-40 GB of RAM before I realized it had gone crazy, another dumb app doing dumb shit. Luckily I could afford the RAM to watch it be dumb and not care for a bit, as it's a useful thing, but it took a while before someone made it not crap the bed.

                  A note on browsers: my use case tends to be a bit weird. Under Chrome I usually open 5-6 different profiles for different gsuite|o365|other uses, each with its own plugins, settings, everything. Same with Firefox using the profile manager, though never both at once. I do this because I log into at least that many different company, org, and personal profiles every day, and it tends to drive memory usage up over uptime, which I measure in months. When I can killall -9 firefox|chrome|chromium|brave and instantly see some 40 GB of memory drop, I do consider it a memory leak, though it seems mostly related to the multi-profile use. It's consistent between browsers; virtual memory and used memory are getting harder to tell apart.

                  LibreOffice gets a bit crazy too: in htop I'll see its virtual memory grabbing some 32 GB, but it really tends to use around 5-6 GB for me. killall again shows me just what it was using, but I have to wonder how much of that virtual allocation it *really* uses, how much things tread on each other, and what that costs in CPU and memory along the way.

                  I'm in heavy business 24/7 as a network/security consultant, and I ask a lot of my desktop/laptop systems: usually a few VMs, browser profiles as above, very complex spreadsheets in LibreOffice, and the aforementioned crap. Same story since going full-time on Ubuntu circa 2005. Throw in YouTube, Steam, and other random stuff and it gets busy, but I've never consistently needed as much RAM as I do today. I do things like run 3-6 displays at a time off various GPUs, and I've seen about every flaw in Linux and every distro/DE along the way over the past few decades, so I have no idea what weirdness I'm hitting.
                  What specific number(s) are you monitoring in htop? There are a lot of different numbers indicating memory usage; they don't all matter.

                  My personal laptop has 32 GB; my work laptop, 64. I frequently have hundreds of tabs open (in Waterfox). Yes, it appears to consume a lot of memory under those conditions, but I haven't seen it get anywhere close to 32 GB; it becomes unusably slow first (while the computer as a whole responds just fine). On my work laptop I've run 4-5 non-trivial VMs in addition to my session before it starts to bog down.

                  If the amount of available memory as shown by 'free' increases dramatically after you shut down your browser, then something's going wrong with said browser. It isn't likely a Linux problem.
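
                  A concrete way to run that check (a sketch; watch the 'available' column, which already counts reclaimable cache):

                      free -h            # note 'available', not 'free'
                      killall firefox    # or whichever browser is suspect
                      sleep 5            # give the processes a moment to exit
                      free -h            # a jump of tens of GB in 'available' points at the browser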



                  • #29
                    Originally posted by rlkrlk
                    Unless you want to power down unused memory -- and you'd have to evacuate an entire bank to do that -- what's the point of leaving memory unused? If no app or kernel thread needs that memory...
                    The thing here is that eventually some app will need big swaths of memory, and it allocates from the free pool, so having big swaths of memory available will prevent the machine from swapping (taking the memory-fragmentation issue out of the equation here).

                    On some servers we operate, we need to drop caches at regular intervals
                    so that the machine doesn't go into swap due to memory being wasted on cache.

                    Because sometimes, in a lot of workloads, you do need big swaths of RAM in the free pool for the application to use (the desktop scenario doesn't apply here; I am speaking about servers, and again taking memory fragmentation, which is also a big problem, out of the equation). Using cgroups is also a possibility; Facebook is doing that, but it costs CPU and may increase latency.
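
                    For reference, the periodic cache dropping described here is typically wired up through the drop_caches interface (a sketch; the hourly schedule and file name are hypothetical, and note the kernel would reclaim these caches on demand anyway):

                        # /etc/cron.d/drop-caches (hypothetical)
                        # Write back dirty pages first, then drop page cache, dentries and inodes
                        0 * * * * root sync; echo 3 > /proc/sys/vm/drop_caches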



                    • #30
                      Originally posted by tuxd3v
                      The thing here is that eventually some app will need big swaths of memory, and it allocates from the free pool, so having big swaths of memory available will prevent the machine from swapping (taking the memory-fragmentation issue out of the equation here).

                      On some servers we operate, we need to drop caches at regular intervals
                      so that the machine doesn't go into swap due to memory being wasted on cache.

                      Because sometimes, in a lot of workloads, you do need big swaths of RAM in the free pool for the application to use (the desktop scenario doesn't apply here; I am speaking about servers, and again taking memory fragmentation, which is also a big problem, out of the equation). Using cgroups is also a possibility; Facebook is doing that, but it costs CPU and may increase latency.
                      That memory in cache _is_ available to use in that case. If some other process needs more memory, and there's not enough unused memory, pages will be taken from the cache; that data's already safely on mass storage one way or another (in a file, or paged out) so it can be taken without any additional paging.

