Latest Slab Cgroup Memory Controller Patches Saving ~1GB RAM Per Host On Facebook Servers
-
Originally posted by tuxd3v: why?
That guy put in 64 GB of RAM exactly because he thought it would be plenty, but the kernel keeps sucking up more and more of it as cache.
There is no knob to limit cache usage in Linux. There are some knobs spread around, but none of them works like a barrier (the way the kernel 2.4 series had).
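(For reference, the scattered knobs in question are mostly sysctl tunables under /proc/sys/vm; a read-only sketch of the two most cache-relevant ones, which bias reclaim but cap nothing:)

```shell
#!/bin/sh
# The cache-related knobs that do exist are sysctl tunables; they bias
# reclaim but none acts as a hard barrier on page cache growth.
#   vfs_cache_pressure - how eagerly dentry/inode caches are reclaimed
#   swappiness         - preference for swapping vs. dropping cache
for knob in vfs_cache_pressure swappiness; do
    printf 'vm.%s = %s\n' "$knob" "$(cat "/proc/sys/vm/$knob")"
done
# To change one (as root): sysctl -w vm.vfs_cache_pressure=200
```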
-
Originally posted by mikus: My laptop has 64 GB of RAM, and at times exhausting memory is still a problem.
Right now, with fewer apps open (Krusader, Falkon, Kate, Konsole, KTorrent), I'm using less than 2 GiB.
I'm not using Electron apps, though, or Firefox on a constant basis.
Originally posted by mikus: I don't even know how folks use a PC with less than 16 GB of RAM with Linux these days.
Originally posted by mikus: The curious thing is how windoze machines still typically have around 8 GB, and people find this acceptable, so what is Linux doing wrong?
Last edited by mykolak; 22 June 2020, 08:42 AM.
-
Originally posted by pal666: There are knobs to control cache usage in Linux, but reducing cache usage would be idiotic, because cache doesn't "use" memory. When memory is needed, cache is dropped. Without cache you would have to read from the hard drive again, which is slow.
Yes, from time to time some cache is freed, but if you run an application that needs that memory right now, you are done, exactly because there is no kernel knob that works like a barrier, unless you go with cgroups, like Facebook did.
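(One caveat on "needs that memory now": since kernel 3.14, /proc/meminfo reports MemAvailable, an estimate of what a new application could allocate without swapping, with reclaimable cache already counted in. A quick way to see how it differs from plain MemFree:)

```shell
#!/bin/sh
# MemFree counts only completely idle pages; MemAvailable estimates what
# a new allocation could get once reclaimable cache is dropped, so it is
# the better indicator of whether an application is about to hit swap.
awk '/^(MemTotal|MemFree|MemAvailable):/ {printf "%-13s %7.2f GiB\n", $1, $2 / 1048576}' /proc/meminfo
```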
-
Originally posted by tuxd3v: The dropping of cache only happens between polling kernel threads. If you have an application that requests 1 GB of RAM, and you have, let's say, 15 GB of RAM, you will probably end up in swap, exactly because you allocate memory from the free pool.
Yes, from time to time some cache is freed, but if you run an application that needs that memory right now, you are done.
-
Originally posted by tuxd3v: What is going wrong?
Everything!
Toolkits like GTK3/Qt4-5, and so on.
Also, Linux doesn't provide a knob for limiting the amount of RAM that can be used as cache.
So it will always be a pain, because the majority of RAM will be cache.
In kernel 2.4 there was a knob for that, but it was deleted in the 2.6 branch. I think we still miss it today.
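(The nearest present-day relatives of that old knob are the reclaim watermarks, though they set a floor on free memory rather than a ceiling on cache. A read-only sketch:)

```shell
#!/bin/sh
# vm.min_free_kbytes: memory the kernel strives to keep free at all
# times; raising it (sysctl -w, as root) makes reclaim start earlier.
# vm.watermark_scale_factor: spacing of the min/low/high watermarks,
# i.e. how soon kswapd begins reclaiming cache in the background.
for knob in min_free_kbytes watermark_scale_factor; do
    printf 'vm.%s = %s\n' "$knob" "$(cat "/proc/sys/vm/$knob")"
done
```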
-
Originally posted by mikus: I'm usually watching htop all day, every day, under KDE/Cinnamon/MATE, as this is really nothing new. Usually I notice my desktop lagging and freaking out, and htop shows my memory pegged at some 60 GB, all used. I'll start killing things using some scripts: Firefox first (there went 20-40 GB of memory), kill LibreOffice (there went another 5 GB), kill the stupid Electron apps (Slack, Signal, Telegram, Teams [yeah, like that]), there went another 5 GB of RAM.
I don't typically see my system using less than 3.5 GB of RAM even with just my most basic apps. I've not found a really good visual memory usage explorer that shows the difference between virtual and actual memory, so it's always difficult to tell. When the system starts freaking out visually, though, I always know.
Virtual vs. real seems a fine line. When I notice, virtual doesn't seem bad, but once memory is generally consumed, the system freaks out. I've spent some 15 years watching different systems: the first decade or so on Ubuntu, the last several on Arch. Things just sort of run away. PulseAudio's pavucontrol for years used to grab a good 30-40 GB of RAM before I realized it had gone crazy, another dumb app doing dumb shit. Luckily I could afford the RAM to watch it be dumb, or not care for a bit, as it's a useful thing, but it took a while before someone made it not crap the bed.
A note on browsers: my use case tends to be a bit weird. Under Chrome, I usually open 5-6 different profiles for different gsuite/o365/other uses, each with their own plugins, settings, everything. Same with Firefox using the profile manager, never both at once though. I do this because I log into that many different company, org, and personal profiles every day at least, and this tends to drive memory usage up over uptime, which I measure in months. When I can killall -9 firefox|chrome|chromium|brave and see some 40 GB of memory drop instantly, I do consider it a memory leak, but it seems mostly related to the multi-profile use. This is consistent between browsers; virtual memory and used memory are getting harder to tell apart.
LibreOffice gets a bit crazy too: in htop I'll see it grabbing some 32 GB of virtual memory, but it really tends to use around 5-6 GB for me. Killall again shows me just what it's using, but I have to wonder how much of that virtual allocation it *really* uses, how much things tread on each other, and what it costs in CPU/memory in the process.
I'm in heavy business 24/7 as a network/security consultant and ask a lot of my desktop/laptop systems, usually running a few VMs, browser profiles as above, very complex spreadsheets in LibreOffice, and the aforementioned crap. Same since starting with Ubuntu full-time circa 2005. Throw in YouTube, Steam, and other random crap and it gets busy, but I've never consistently needed as much RAM as I do today. I do things like run 3-6 displays at a time off various GPUs, and have seen about every flaw in Linux and every distro/DE along the way for the past few decades, so no idea what weirdness I hit.
My personal laptop has 32 GB; my work laptop, 64. I frequently have hundreds of tabs open (in Waterfox). Yes, it appears to consume a lot of memory under those conditions, but I haven't seen it get up toward 32 GB or anywhere close; it becomes unusably slow first (while the computer as a whole responds just fine). On my work laptop I've run 4-5 non-trivial VMs in addition to my session before it starts to bog down.
If the amount of available memory as shown by 'free' increases dramatically after you shut down your browser, then something's going wrong with said browser. It isn't likely a Linux problem.
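(On telling virtual from used memory per process, which this thread keeps circling around: the split is VSZ, the address space a process has mapped, versus RSS, the pages actually resident in RAM. With procps ps, the biggest real consumers can be listed directly:)

```shell
#!/bin/sh
# VSZ is everything the process has mapped (often huge and harmless);
# RSS is what actually sits in physical RAM right now. A browser with a
# giant VSZ but modest RSS is mostly reserving address space, not RAM.
ps -eo pid,vsz,rss,comm --sort=-rss | head -n 11
```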
-
Originally posted by rlkrlk: Unless you want to power down unused memory -- and you'd have to evacuate an entire bank to do that -- what's the point of leaving memory unused? If no app or kernel thread needs that memory...
On some servers we operate, we need to drop caches at regular intervals so that the machine doesn't go into swap due to memory being wasted on cache.
Because in a lot of workloads you do need big swaths of RAM in the free pool, to be used by those applications. The desktop scenario doesn't apply here; I am speaking about servers (and leaving memory fragmentation, which is also a big problem, out of the equation). Using cgroups is also a possibility; Facebook is doing that, but it consumes CPU and may increase latency.
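(For completeness, a sketch of the cgroup route mentioned here, which is the mechanism the slab controller patches in the article build on. This assumes cgroup v2 mounted at /sys/fs/cgroup and root privileges; the group name "capped" and the limits are made up for illustration. memory.high throttles and reclaims the group, page cache included, once it crosses the threshold:)

```shell
#!/bin/sh
# Run as root on a cgroup v2 system. Creates a group whose total memory
# footprint -- page cache included -- is kept in check by reclaim.
mkdir -p /sys/fs/cgroup/capped
echo 4G > /sys/fs/cgroup/capped/memory.high   # soft ceiling: throttle + reclaim
echo 6G > /sys/fs/cgroup/capped/memory.max    # hard ceiling: OOM-kill beyond it
echo $$ > /sys/fs/cgroup/capped/cgroup.procs  # move this shell into the group
# Everything started from this shell now inherits the limits, so its
# cache can no longer push the rest of the machine into swap.
```

Compared to periodically writing to /proc/sys/vm/drop_caches, which throws away hot cache system-wide, this reclaims only from the capped workload; the CPU and latency cost mentioned above comes from the per-group accounting and reclaim.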