For enterprise applications, *cough*Oracle*cough*, and systems where downtime is unacceptable, we keep swap turned on, with a small partition. Remember, swap is a memory-paging mechanism: the least-accessed memory pages are written out to the slower device. (When I got started with *nix, whole processes were swapped out.) In any case, these systems had 128 GB or 256 GB of RAM, so the old adage of provisioning twice as much swap as RAM was a complete non-starter. Swap was left on purely to buy time for someone to log in over SSH and intervene. Our centralized monitoring team alarmed on *any* swap usage, and a high-priority ticket was then generated. My home lab workstation/server has 40 cores, 80 threads, and 256 GB of RAM. I *still* keep a 2 GB swap partition. It never gets used, but it's there just in case.
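That "alarm on *any* swap usage" policy is easy to sketch in a few lines of shell. This is my own minimal illustration, not the actual monitoring check; the 0 kB threshold and the message format are invented:

Code:
#!/bin/sh
# Sketch of an "alarm on ANY swap usage" check. /proc/meminfo
# reports values in kB; used swap is total minus free.
swap_total=$(awk '/^SwapTotal:/ {print $2}' /proc/meminfo)
swap_free=$(awk '/^SwapFree:/ {print $2}' /proc/meminfo)
swap_used=$((swap_total - swap_free))

if [ "$swap_used" -gt 0 ]; then
    echo "ALERT: ${swap_used} kB of swap in use"   # would open a ticket
else
    echo "OK: no swap in use"
fi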
Fedora Developers Discuss Ways To Improve Linux Interactivity In Low-Memory Situations
-
Originally posted by andyprough:
So hilarious. I knew the moment I read the headline that this was going to revolve around them dealing with memory problems in Gnome and systemd.
Frankly, we need to get away from the idea that swap is required. Modern systems should be completely functional without paging to disk if the memory is there. And low memory scenarios should always be responsive enough for the user to immediately kill the greedy app they just started.
Everyone complains about Red Hat's motivations and approaches, but the reality is they're always willing to tackle hard problems instead of fobbing the responsibility off onto the users. It's why so many people support them despite the knee-jerk forum kvetching about systemd. Red Hat sees a topical issue involving bad system behavior that's existed for years or decades, and they'll talk about the best way to solve it. No one else wants to actually do the work or rock the boat, so they pass the buck. People want old problems to be fixed, not ignored.
The biggest winners in software tend to be people that ship early and ship often. It's not surprising that it's a winning strategy in FOSS, too.
-
Originally posted by Ray_o:
I am not sure if the problem being discussed is similar to mine. When my system gets low on memory, the kernel won't free the cache; instead it swaps out some of the currently open programs, like Visual Studio Code, which leaves them unresponsive for a while when I switch back to them. Does anyone deal with the same issue or know how to fix it?

Code:
sudo sysctl vm.swappiness=1
Last edited by birdie; 13 August 2019, 01:51 PM.
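For anyone trying that suggestion: `sysctl` set on the command line only lasts until reboot. A sketch of making it persistent (the file name `99-swappiness.conf` is arbitrary; any `.conf` file under /etc/sysctl.d/ works):

Code:
# Persist the setting across reboots via a sysctl config fragment.
echo 'vm.swappiness = 1' | sudo tee /etc/sysctl.d/99-swappiness.conf
sudo sysctl --system   # reload all sysctl configuration files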
-
Originally posted by Terrablit:
Everyone complains about Red Hat's motivations and approaches, but the reality is they're always willing to tackle hard problems instead of fobbing the responsibility off onto the users.
-
Originally posted by andyprough:
I'm running the antiX distro with no systemd, no pulseaudio, no Gnome, no flatpak, etc. Best responsiveness I've ever seen in a distro. It's actually quite easy to live without many of RedHat's complicated solutions to problems.
lol
-
Why is this always about "low-RAM systems"? You can run out of memory with any amount of RAM; there is no unlimited supply.
Especially when developing, I manage from time to time to lock up my system because I have five IDEs open and a build job running (especially since one utterly buggy build job leaks memory from time to time). Usually all is fine, but the moment I do something stupid and run out of memory, all I can do is hard-reset the machine and reboot, potentially losing work. I would rather it just killed one of the memory hogs and thereby minimized my loss. Or froze some processes and let me choose which one to slaughter... I mean, that's what I have the desktop for, right? So the OS can "communicate" with me.
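"Kill one of the memory hogs" is roughly what the kernel's `oom_score_adj` knob is for: you can mark the leaky build job as the OOM killer's preferred victim. A hedged sketch; the PID here is a placeholder (the script tags itself for demonstration), and in practice you'd use the build job's PID:

Code:
#!/bin/sh
# oom_score_adj ranges from -1000 (never kill) to 1000 (kill first).
# Raising your own process's score needs no special privileges.
VICTIM_PID=$$   # placeholder: in practice, the PID of the leaky build job
echo 500 > /proc/${VICTIM_PID}/oom_score_adj
cat /proc/${VICTIM_PID}/oom_score_adj

Under memory pressure the kernel should then reap that process first, instead of, say, the desktop session.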
-
I don't know how common it is for people to run out of system memory, but the only cases where it completely halted my system were caused by shell scripts accidentally falling into infinite recursion. And I must say that at least that case looked pretty much as ugly under Windows as it did under Linux (unless they've improved something in Windows since then; I don't know, it's been a few years).
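One way to blunt that failure mode is to run risky scripts in a shell with resource limits, so a runaway recursion dies with an error instead of exhausting the machine. A sketch; the specific limit values are illustrative, not recommendations:

Code:
#!/bin/sh
# Cap resources for this shell and its children. An accidentally
# self-recursing script then fails fast instead of eating all RAM.
ulimit -v 1048576   # virtual memory cap: ~1 GiB (value in kB)
ulimit -u 512       # max user processes: tames runaway forking
ulimit -v           # show the cap now in effect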
-
Originally posted by aksdb:
Why is this always about "low-RAM systems"? You can run out of memory with any amount of RAM; there is no unlimited supply. Especially when developing, I manage from time to time to lock up my system because I have five IDEs open and a build job running (especially since one utterly buggy build job leaks memory from time to time). Usually all is fine, but the moment I do something stupid and run out of memory, all I can do is hard-reset the machine and reboot, potentially losing work. I would rather it just killed one of the memory hogs and thereby minimized my loss. Or froze some processes and let me choose which one to slaughter... I mean, that's what I have the desktop for, right? So the OS can "communicate" with me.