Latest Slab Cgroup Memory Controller Patches Saving ~1GB RAM Per Host On Facebook Servers


  • tuxd3v
    replied
    Originally posted by rlkrlk
    That memory in cache _is_ available to use in that case.
    No, it's not.
    It takes a long time to become available, and reclaim does a poor job of it: it doesn't return the desired amounts to the free pool when you need them, because of many factors, the most basic being locks already held on that page cache by other processes, and so on..

    You are only looking at what happens after the request, and some time later, not at the moment the request for more memory is actually made..
    You can see the system gradually entering swap (I see this every day, and it annoys me not to have a knob for it like in the 2.4 series..), not because there isn't enough memory, but simply because tons of it are cached. If you drop caches you are in a better position to avoid swapping, but even then, dropping caches is another problem that can take a long time, depending on how much page cache is in use by processes at that moment..

    So using RAM for cache can cut both ways. I am not against its use;
    I am against unlimited amounts of cache making systems enter swap, which seems ridiculous when you have tons of memory. It just ends up badly distributed, or badly limited..

    This problem usually shows up with big databases, and by big I mean around 15-30TB or more (at least on those I observe it a lot). Machines with lots of data will, at certain times of the month, run special queries retrieving statistical data for that day, that month, or that year (the horror case). Yes, you have the memory for them and more; it's just that the kernel does a poor job by not having a "physical barrier" for page cache..

    A few extra milliseconds of IO don't bother me; what bothers me is when the machine almost freezes..
    And needs my constant assistance, all for lack of a simple knob..

    If you have a big IO problem,
    you shouldn't solve it with page cache. Invest instead some millions in first-class storage and first-class networking, then reserve a reasonable amount of RAM for page cache according to the workloads you expect (for that server.. so it's always server-specific), and keep the rest in the free pool, because big queries will need that RAM. It is a waste of processing power, processing time, and human resources to have to assign a sysadmin to a server whenever certain queries run, just because the page cache system is broken.. and it's very frustrating for the sysadmin to watch tons of page cache while the machine goes down the toilet into swap, on a box with lots of RAM (of course I am excluding memory fragmentation, because I know the problem I am describing is not related to it.. fragmentation aggravates the problem but is not the cause here..).

    I hope you can now see that I am not talking about your laptop workload, where the amount of page cache stays small because you simply don't do large amounts of IO searching for data on disk....
    So in those cases you would not see this problem, but remember Linux is heavily used in the datacenter, and I hope someone solves it.
    A simple way would be to bring back the page cache limit that existed in kernel 2.4, so we can tune the system for the workload it will have..
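
    For reference, the closest thing today to the per-workload "physical barrier" asked for above is a cgroup v2 memory limit: page cache is charged to the cgroup whose task instantiated it, so capping the group caps its cache plus anonymous memory together (the same memory controller the article's slab patches improve). A minimal sketch, assuming cgroup v2 is mounted at /sys/fs/cgroup, the memory controller is enabled for the parent group, the script runs as root, and the group name "db-reporting" and 8 GiB budget are purely illustrative:

    Code:
    #!/usr/bin/env python3
    # Sketch: confine a job to a cgroup v2 group so its page cache plus
    # anonymous memory cannot exceed a chosen budget. Assumes cgroup v2 at
    # /sys/fs/cgroup, the memory controller enabled for the parent, and root.
    # The group name and the 8 GiB figure are illustrative, not prescriptive.
    import os
    import subprocess

    CGROUP = "/sys/fs/cgroup/db-reporting"
    LIMIT_BYTES = 8 * 1024 ** 3          # hard cap (memory.max)

    os.makedirs(CGROUP, exist_ok=True)

    # memory.high throttles and reclaims above a soft limit;
    # memory.max is the hard wall, felt only inside this group.
    with open(os.path.join(CGROUP, "memory.high"), "w") as f:
        f.write(str(LIMIT_BYTES * 3 // 4))
    with open(os.path.join(CGROUP, "memory.max"), "w") as f:
        f.write(str(LIMIT_BYTES))

    # Move this process into the group; children inherit it, so the
    # query's page cache is billed against the group's limit.
    with open(os.path.join(CGROUP, "cgroup.procs"), "w") as f:
        f.write(str(os.getpid()))

    subprocess.run(["sh", "-c", "echo run the reporting job here"])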



  • rlkrlk
    replied
    Originally posted by tuxd3v
    The thing here is that eventually some app will need a big swath of memory, and it allocates it from the free pool, so having a large swath of memory available prevents the machine from swapping (leaving the memory fragmentation issue out of the equation here..).

    On some servers we operate, we need to drop caches at regular intervals,
    so that the machine doesn't enter swap due to memory being wasted on cache..

    Because sometimes, and in a lot of workloads, you do need big swaths of RAM in the free pool for those applications (the desktop scenario doesn't apply here; I am speaking about servers, and again leaving out memory fragmentation, which is also a big problem..). Using cgroups is also a possibility.. Facebook is doing that, but it costs CPU.. and maybe increases latency
    That memory in cache _is_ available to use in that case. If some other process needs more memory, and there's not enough unused memory, pages will be taken from the cache; that data's already safely on mass storage one way or another (in a file, or paged out) so it can be taken without any additional paging.
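
    A rough way to see this claim in practice is to watch /proc/meminfo while a process allocates: under pressure the kernel reclaims clean page cache before it touches swap. The sketch below is only a demonstration under assumptions (plenty of cache present; the 2 GiB allocation is arbitrary); exact numbers vary by machine and moment:

    Code:
    #!/usr/bin/env python3
    # Demonstration sketch: allocate and touch ~2 GiB of anonymous memory,
    # then compare /proc/meminfo before and after. On a machine with plenty
    # of page cache, Cached shrinks while SwapFree stays put. The size is
    # illustrative and the outcome depends on current memory conditions.
    def snapshot():
        vals = {}
        with open("/proc/meminfo") as f:
            for line in f:
                key, rest = line.split(":", 1)
                vals[key] = int(rest.split()[0])  # values in kiB
        return vals

    before = snapshot()

    block = bytearray(2 * 1024 ** 3)
    for i in range(0, len(block), 4096):
        block[i] = 1                      # touch every page so it is resident

    after = snapshot()
    for key in ("MemFree", "Cached", "SwapFree"):
        print(f"{key}: {before[key]} kiB -> {after[key]} kiB")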



  • tuxd3v
    replied
    Originally posted by rlkrlk
    Unless you want to power down unused memory -- and you'd have to evacuate an entire bank to do that -- what's the point of leaving memory unused? If no app or kernel thread needs that memory...
    The thing here is that eventually some app will need a big swath of memory, and it allocates it from the free pool, so having a large swath of memory available prevents the machine from swapping (leaving the memory fragmentation issue out of the equation here..).

    On some servers we operate, we need to drop caches at regular intervals,
    so that the machine doesn't enter swap due to memory being wasted on cache..

    Because sometimes, and in a lot of workloads, you do need big swaths of RAM in the free pool for those applications (the desktop scenario doesn't apply here; I am speaking about servers, and again leaving out memory fragmentation, which is also a big problem..). Using cgroups is also a possibility.. Facebook is doing that, but it costs CPU.. and maybe increases latency
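
    The periodic cache dropping described above is normally done by writing to /proc/sys/vm/drop_caches (1 = page cache, 2 = reclaimable dentries and inodes, 3 = both), after syncing so that dirty pages become clean and freeable. A minimal sketch, assuming root privileges; the one-hour interval is purely illustrative:

    Code:
    #!/usr/bin/env python3
    # Minimal sketch of the "drop caches at regular intervals" workaround.
    # Writing to /proc/sys/vm/drop_caches frees clean page cache (1),
    # reclaimable dentries/inodes (2), or both (3). Requires root; the
    # interval below is illustrative only.
    import os
    import time

    INTERVAL_SECONDS = 3600

    def drop_caches(mode: int = 3) -> None:
        os.sync()  # flush dirty pages first so more of the cache is clean
        with open("/proc/sys/vm/drop_caches", "w") as f:
            f.write(str(mode))

    if __name__ == "__main__":
        while True:
            drop_caches()
            time.sleep(INTERVAL_SECONDS)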



  • rlkrlk
    replied
    Originally posted by mikus
    I'm usually watching htop all day every day, under kde|cinnamon|mate as this is really nothing new. Usually I notice my desktop lagging, freaking out, and notice htop showing my memory pegged at some 60gb, all used. I'll start killing things using some scripts, firefox first - oh, there went 20-40gb of memory, kill libreoffice, there went another 5gb, kill the stupid electron apps (slack, signal, telegram, teams [yeah, like that]), there went another 5gb of ram.

    Even with just my most basic apps, I don't typically see my system using less than 3.5gb of ram. I've not found a really good visual memory usage explorer that shows the difference between virtual and actual memory, so it's always difficult to tell. When the system starts visually freaking out, though, I always know.

    Virtual vs. real seems a fine line. When I look, virtual doesn't seem bad, but once memory is generally consumed I notice the system freaking out. That's some 15 years watching different systems, the first decade or so ubuntu, the last several arch. Things just sort of run away. Pulseaudio's pavucontrol for years used to grab a good 30-40gb of ram before I realized it had gone crazy, another dumb app doing dumb shit. Luckily I could afford the ram to watch it be dumb, or not care for a bit, as it's a useful thing, but it took a while before someone made it not crap the bed.

    A note on browsers. My use case tends to be a bit weird. Under Chrome, I usually open 5-6 different profiles for different gsuite|o365|other uses, each with their own plugins, settings, everything. Same with firefox using profile manager, never both at once though. I do this because I log into at least that many different company, org, and personal profiles every day, and this tends to drive the memory usage over uptime, which I tend to measure in months. When I can killall -9 firefox|chrome|chromium|brave and see some 40gb of memory drop instantly, I do consider it a memory leak, but it seems mostly related to the multi-profile use. This is consistent between browsers; virtual memory and used memory are getting harder to tell apart.

    Libreoffice gets a bit crazy too; in htop I'll see its virtual memory grab some 32gb, but it really tends to use around 5-6gb for me. Killall again shows me just what it was using, but I have to wonder how much of that virtual allocation it *really* uses, how much things tread on each other, and what cpu/memory that costs in the process.

    I'm heavy business 24/7 as a network/security consultant, and I ask a lot of my desktop/laptop systems, usually running a few vm's, browser profiles as above, very complex spreadsheets in libreoffice, and the aforementioned crap. Same since starting with ubuntu full-time circa 2005. Throw in youtube, steam, and other random stuff and it gets busy, but I've never needed so much ram so consistently as I do today. I do things like run 3-6 displays at a time off various gpu's, and I've seen about every flaw in linux and every distro/DE along the way for the past few decades, so no idea what weirdness I'm hitting.
    What specific number(s) are you monitoring in htop? There are a lot of different numbers indicating memory usage; they don't all matter.

    My personal laptop has 32 GB; my work laptop, 64. I frequently have hundreds of tabs open (in waterfox). Yes, it appears to consume a lot of memory under those conditions, but I haven't seen it get up toward 32GB or anywhere close; the browser becomes unusably slow (while the computer as a whole responds just fine). On my work laptop I've run 4-5 non-trivial VMs in addition to my session before it starts to bog down.

    If the amount of available memory as shown by 'free' increases dramatically after you shut down your browser, then something's going wrong with said browser. It isn't likely a Linux problem.
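
    On the question of which numbers to watch: /proc/meminfo (which free and htop read) distinguishes MemFree from MemAvailable, and MemAvailable already counts reclaimable page cache and slab toward what new allocations can get without swapping. A small read-only sketch; the field names are the standard /proc/meminfo keys, and the GiB formatting is just for readability:

    Code:
    #!/usr/bin/env python3
    # Small sketch: show how much "used" memory is actually reclaimable cache.
    # MemAvailable estimates what could be handed to new allocations without
    # swapping; MemFree alone badly understates that on a busy box.
    def meminfo() -> dict:
        fields = {}
        with open("/proc/meminfo") as f:
            for line in f:
                key, value = line.split(":", 1)
                fields[key] = int(value.split()[0])   # values are in kiB
        return fields

    m = meminfo()
    gib = 1024 ** 2                                   # kiB per GiB
    print(f"MemTotal:      {m['MemTotal'] / gib:6.1f} GiB")
    print(f"MemFree:       {m['MemFree'] / gib:6.1f} GiB  (truly idle)")
    print(f"MemAvailable:  {m['MemAvailable'] / gib:6.1f} GiB  (free + reclaimable)")
    print(f"Cache-like:    {(m['Cached'] + m['Buffers'] + m['SReclaimable']) / gib:6.1f} GiB")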



  • rlkrlk
    replied
    Originally posted by tuxd3v
    What is going wrong?
    Everything!!

    Toolkits like gtk3/qt4-5, and so on..

    Also, Linux doesn't provide a knob for limiting the amount of RAM that can be used as cache..
    So it will always be a pain, because the majority of RAM will be cache..
    In kernel 2.4 there was a knob for that, but it was removed in the 2.6 branch.. I think we still miss it today..
    Unless you want to power down unused memory -- and you'd have to evacuate an entire bank to do that -- what's the point of leaving memory unused? If no app or kernel thread needs that memory, just unmap it but keep it around; if something does use it later, it incurs a soft page fault, which is a lot cheaper than actually pulling it in from SSD, much less disk. If an app wants more memory, it can get it very quickly.



  • pal666
    replied
    Originally posted by tuxd3v
    The drop of cache only happens between the kernel's periodic reclaim passes. If you have an application that requests 1GB of RAM and you have, let's say, 15GB of RAM, you will probably end up in swap, exactly because you allocate memory from the free pool..
    From time to time, yes, some cache is freed, but if you run an application that needs that memory now... you are done..
    who told you that?



  • tuxd3v
    replied
    Originally posted by pal666
    there are knobs to control cache usage in linux. but reducing cache usage would be idiotic, because cache doesn't "use" memory. when memory is needed, cache is dropped. without cache you will have to read from the hard drive again, which is slow
    The drop of cache only happens between the kernel's periodic reclaim passes. If you have an application that requests 1GB of RAM and you have, let's say, 15GB of RAM, you will probably end up in swap, exactly because you allocate memory from the free pool..
    From time to time, yes, some cache is freed, but if you run an application that needs that memory now... you are done.. exactly because there is no kernel knob that works like a barrier, unless you go cgroups, like Facebook did..



  • mykolak
    replied
    Originally posted by mikus
    My laptop has 64gb of ram, and at times exhausting memory is still a problem.
    My laptop has 16GiB and it's perfectly fine for a lot of apps. A typical run: KDE Plasma, Krusader, 2(3) browsers, 2-4 IDEs, Kate with a lot of files, Konsole with a lot of tabs, KTorrent, Telegram, Kontact (full PIM with a MariaDB server), MySQL Workbench, and periodically Steam.

    Right now, with fewer apps (Krusader, Falkon, Kate, Konsole, KTorrent) — less than 2GiB.

    I'm not using Electron apps, though, or Firefox on a constant basis.

    Originally posted by mikus
    I don't even know how folks use a pc with less than 16gb of ram with linux these days.
    I used a laptop with 4GiB not so long ago. It wasn't nice, but it was acceptable.

    Originally posted by mikus
    Curious thing is how windoze machines are still typically around 8gb, and people find this acceptable, so what is linux doing wrong?
    And Win10 on the very same laptop, without ANY useful apps running, uses almost 6GiB. With some apps (game store clients, no games running) — more than 7GiB.
    Last edited by mykolak; 22 June 2020, 08:42 AM.



  • pmorph
    replied
    I've been using Linux on the desktop for 15+ years and never noticed any kind of memory issue. Maybe it's just because I don't run any resource gauges on my desktop.



  • pal666
    replied
    Originally posted by tuxd3v
    why?
    That guy bought 64 GB of RAM exactly because he thought it would be great.. but the kernel is using cache like a b*ch (suck*ng up more and more..).
    There is no knob to limit cache usage in Linux; there are some knobs spread around, but none works like a barrier (like in the kernel 2.4 series)..
    there are knobs to control cache usage in linux. but reducing cache usage would be idiotic, because cache doesn't "use" memory. when memory is needed, cache is dropped. without cache you will have to read from the hard drive again, which is slow
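
    For reference, the knobs in question live under /proc/sys/vm -- things like vm.vfs_cache_pressure, vm.swappiness, vm.dirty_ratio, vm.dirty_background_ratio, and vm.min_free_kbytes. None of them is a hard ceiling on page cache size; they only bias reclaim and writeback. A minimal read-only sketch that dumps current values; the selection of names is illustrative:

    Code:
    #!/usr/bin/env python3
    # Minimal sketch: print a few vm.* sysctls that bias page cache and
    # reclaim behaviour. None of these is a hard cap on cache; they tune
    # reclaim aggressiveness (vfs_cache_pressure, swappiness), writeback
    # thresholds (dirty_*), and the reserved free pool (min_free_kbytes).
    KNOBS = [
        "vfs_cache_pressure",
        "swappiness",
        "dirty_ratio",
        "dirty_background_ratio",
        "min_free_kbytes",
    ]

    for knob in KNOBS:
        with open(f"/proc/sys/vm/{knob}") as f:
            print(f"vm.{knob} = {f.read().strip()}")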

