Linux Mint 21 Is Going To Avoid systemd-oomd


  • Nozo
    replied
    Welp, "systemd just works" some say.



  • sinepgib
    replied
    Originally posted by pWe00Iri3e7Z9lHOX2Qx View Post
    Those things I listed aren't "features". They are big building blocks that have been adopted by every Linux distro that matters. Again, Red Hat drives the bulk of these changes across the Linux ecosystem. Canonical essentially has a graveyard of failure when it tries to do the same. Unity, Upstart, Mir, Ubuntu Touch, the list goes on. Snaps aren't bad in concept, except we've ended up in a fractured market where Snap has strong support among closed source commercial software, and Flatpak has strong support on the OSS side. Just like with other examples, we the humble consumers of all this work, would have been better off with one solution that combined the best aspects of both.
    Upstart was quite successful. systemd was better and won out over it, but calling Upstart a failure when it actually got mainstream adoption at the time, including from RHEL, is a bit unfair. What killed it was Canonical's obsession with putting all of their projects behind CLAs; that's why Lennart decided to write systemd rather than send his fixes upstream. On everything else I agree: even downstream distros went out of their way to avoid Canonical's stuff. And save for Ubuntu Touch (which I never bothered learning much about), everything that failed had a common trend: these projects were incompatible for the sake of being incompatible. Of course they would fail. You can get away with breaking compatibility if you bring something more useful to the table, but why would you expect someone to pay for a costly migration to your unequal-rights project (hey, CLAs!) if all you did was put a "Canonical Inside" stamp on it as your main feature?



  • pWe00Iri3e7Z9lHOX2Qx
    replied
    Originally posted by Mahboi View Post
    Ridiculous. You're citing features as defining growth. If this were true, all we'd need to do is just implement every feature we can think of and Linux would be the best platform ever made.
    Those things I listed aren't "features". They are big building blocks that have been adopted by every Linux distro that matters. Again, Red Hat drives the bulk of these changes across the Linux ecosystem. Canonical essentially has a graveyard of failure when it tries to do the same. Unity, Upstart, Mir, Ubuntu Touch, the list goes on. Snaps aren't bad in concept, except we've ended up in a fractured market where Snap has strong support among closed source commercial software, and Flatpak has strong support on the OSS side. Just like with other examples, we the humble consumers of all this work, would have been better off with one solution that combined the best aspects of both.

    Originally posted by Mahboi View Post
    Canonical did the one thing that the entire Linux community never learned to do in the past 30 years: streamline and standardise instead of scattering into a million options. Much as the MUH FREEDUMS mentality infects Linux, human resources is still a thing. If you have 10 million devs working on 10 million projects, none of these projects will succeed, compared to 10 projects worked on by 100 people. Ubuntu made decisions for themselves and pulled the linux community around its choices. Whether good or bad, they actually drove the community somewhere.
    You have given zero current concrete examples of Canonical driving the broader Linux world forward. Many years ago, they were more focused on the desktop experience than most Linux distros, but that time has passed (with good reason in a for-profit company as there's no money in it). Whether it is Mint or Manjaro or MX, just staying within the "M"s, Canonical isn't really doing "desktop Linux" better than the rest these days. I have no hate boner for Canonical like many on these forums do. I just don't see them moving the ecosystem forward.

    Originally posted by Mahboi View Post
    Valve is working probably as hard on breaking Windows' hold on gaming as any entity, company or other, ever did. Even Torvalds ended up saying in an interview that if an .exe/.msi equivalent ever starts existing on Linux, it will probably be thanks to Valve creating the standard for them someday.
    I have no disagreement about Valve. They have single-handedly (obviously building on other projects like Wine) done more for Linux gaming than any other company, it isn't even close.



  • Mahboi
    replied
    Originally posted by pWe00Iri3e7Z9lHOX2Qx View Post

    What is your rationale for this statement?
    • GNOME? Oh wait, that was Red Hat.
    • The init system (plus a bunch of other stuff) they made, systemd? That's Red Hat too.
    • Wayland? Red Hat.
    • Pipewire? Red Hat again.
    • Polkit? Red Hat.
    • [...]
    Outside of Snap and being the only major distro supporting ZFS out of the box, Canonical seems to be doing fuck all to actually move the Linux desktop forward. Red Hat is the driving force across the ecosystem, and I say this as someone who is typing this from Tumbleweed and wishes there was more diversity in that push.



    As others have already mentioned, the launcher being 64 bit is essentially meaningless when a huge swath of the actual game library is still 32 bit.
    Ridiculous. You're citing features as defining growth. If this were true, all we'd need to do is just implement every feature we can think of and Linux would be the best platform ever made.
    Canonical did the one thing that the entire Linux community never learned to do in the past 30 years: streamline and standardise instead of scattering into a million options. Much as the MUH FREEDUMS mentality infects Linux, human resources is still a thing. If you have 10 million devs working on 10 million projects, none of these projects will succeed, compared to 10 projects worked on by 100 people each. Ubuntu made decisions for itself and pulled the Linux community around its choices. Whether good or bad, they actually drove the community somewhere.

    Valve is working probably as hard on breaking Windows' hold on gaming as any entity, company or other, ever did. Even Torvalds ended up saying in an interview that if an .exe/.msi equivalent ever starts existing on Linux, it will probably be thanks to Valve creating the standard for them someday.

    Any long-running project needs, and will forever need, to be driven by a determined force, usually a company with your typical top-down authority running it. This is as true for Linux as for anything else. I personally stay wary of some of the more unsavoury decisions from Canonical or Valve, but I do recognise that their input has been largely more positive than negative. Same for Red Hat in the enterprise space.



  • Mahboi
    replied
    Originally posted by unis_torvalds View Post

    Mint always increments versions by one, not two. But because they only base on Ubuntu LTS, it's the Ubuntu versions that jump by twos because their LTSes release biennially.
    Thus Mint 19 was based on Ubuntu 18 LTS, and Mint 18 was based on Ubuntu 16 LTS.
    Yes it's confusing, but only if you're constantly thinking in terms of the upstream. If you're just looking at Mint, it's fairly straightforward: Mint 21 succeeds Mint 20, which succeeded Mint 19, which followed Mint 18, and so on. Couldn't be simpler actually. It was just a big coincidence that Mint 20 came out in 2020, the same year as Ubuntu 20.04 LTS, not an intentional alignment of version numbers.
    And my point is that they're wrong?

    "Couldn't be simpler," you say. Well, it could: just drop versioning altogether and call it "Mint", and that's that. Simplistic versioning doesn't help anyone understand anything; it's just confusing, and this has nothing to do with Ubuntu as an upstream. It's a general point about software: good versioning should convey all the relevant information in one read. Ubuntu's date-based scheme does that best. Mint's should too.
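    As an aside, the mapping the quoted post describes (Mint bumps by one, its Ubuntu LTS base by two) is regular enough to write down as a one-line formula; a minimal sketch, valid for the releases mentioned in the thread (Mint 18 through 21):

    ```python
    def ubuntu_lts_base(mint_version: int) -> str:
        """Map a Linux Mint major version to the Ubuntu LTS release it is based on.

        Mint increments by one per release but only tracks the biennial Ubuntu
        LTS releases, so the base version jumps by two: Mint 18 -> 16.04,
        19 -> 18.04, 20 -> 20.04, 21 -> 22.04.
        """
        return f"{2 * mint_version - 20}.04"

    for mint in (18, 19, 20, 21):
        print(f"Mint {mint} -> Ubuntu {ubuntu_lts_base(mint)} LTS")
    ```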



  • Mahboi
    replied
    Originally posted by user1 View Post

    Remember, it's not just the Steam client itself that is 32-bit. There are still tons of native 32-bit ports like TF2. I know that on macOS the Steam client has been 64-bit for quite a while, but that's because since Catalina macOS completely removed 32-bit compatibility, so all 32-bit Mac ports no longer work and Steam was of course forced to switch to a 64-bit client. On Linux, however, there is probably no point for Valve to switch to a 64-bit client because 32-bit games need 32-bit dependencies anyway. Even if you play a 32-bit Windows game on Proton, you'll still need 32-bit dependencies like i386 Mesa drivers.
    A sad state of affairs...



  • Danny3
    replied
    If only Linux Mint developers would stop using Ubuntu and use Debian as their main base and then have a KDE Plasma edition.
    That would be the perfect distro!



  • NotMine999
    replied
    Originally posted by Ermine View Post

    1) Software devs just need to produce software that is as filesystem-layout agnostic as possible. Ideally, distro customization should happen at the distro level. This seems hard, but it is worth it. Having a variety of distros enables a) a diverse set of layouts suitable to different use cases; b) experimental layouts (like GoboLinux's) which aim to improve package management.
    In reference to what I emphasized in bold...most definitely an idealistic thought, especially in the Linux World where "user freedom trumps everything".

    Having once been a software developer myself, way back when, I can tell you this: you can try to reach those goals but you will never consistently achieve them. Perhaps the closest an OS & desktop world has come to that was MSDOS and the very early days of Windows. As Windows evolved... more & more shortcuts, undocumented files, special hooks, and what-not came into existence. If you have used Windows continuously since the 1990s you will remember the days of "DLL H311" (rewritten so as not to be obviously offensive). M$ pretty much fixed that with their development environments and distributable runtime files. Nowadays a Windows executable can be installed wherever the installer allows it and runtimes can be installed if needed, but it took years to get to that point.
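    Layout-agnostic behaviour on Linux is usually achieved by resolving paths through environment-variable conventions rather than hard-coding them; a minimal sketch following the XDG Base Directory convention (the application name "myapp" is hypothetical):

    ```python
    import os
    from pathlib import Path

    def config_dir(app: str) -> Path:
        """Resolve a per-user config directory without hard-coding a layout.

        Honors $XDG_CONFIG_HOME if the distro or user sets it; otherwise
        falls back to the conventional ~/.config. The app never needs to
        know where a given distro actually keeps user configuration.
        """
        base = os.environ.get("XDG_CONFIG_HOME") or str(Path.home() / ".config")
        return Path(base) / app

    print(config_dir("myapp"))
    ```

    The same pattern extends to `XDG_DATA_HOME` and `XDG_CACHE_HOME`, which is one way a distro can relocate everything without patching each application.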

    Originally posted by Ermine View Post
    But yes, systemd enforced more or less the same layout on everyone subscribed.

    On your experience of distro hopping: that was hard, but you had the reason to switch distro, didn't you?
    I'm not sure about SystemDeath enforcing any filesystem layouts. Even SystemDeath's own file locations have changed over the lifetime of that software, perhaps toward a more standard layout, but still changed. The standardization of the unit files & available variables is helpful, and so is the separation of much of the Linux internals into separately configurable and/or controllable bits. But standardized filesystem layouts? Yeah, not sure about that. Is SystemDeath truly that omnipotent?

    Yes, I have a few reasons for changing distros:
    • Redhat way back when was "free". You did not need a license to download updates from their repos, but that changed when Redhat introduced licensing and the licensing cost for me was undesirable.
    • Arch was next because "it worked" for my needs; the "rolling release" approach was attractive. But then its various package upgrades started to cause "breakage" that required hours to fix... and that work is multiplied across multiple servers. Arch had begun to publicly document their "breakage" and what users had to do to fix it. That was a sign to me, back then, of an immaturity in the distro. Redhat (keeping my 1 "free" license... for a while) & Debian got that packaging stuff right on my "test" platforms, so why not Arch?
    • Gentoo followed Arch as an experiment to see if more performance could be squeezed out by tuning packages to just the features they needed, within the limits of the distro's packaging. More performance and narrowed "attack surface" could be obtained, but at the cost of code compiling & package tuning. Hours could be spent tuning a package, recompiling, making the tuning choices work among the other packages that were being used. The time spent was interesting & educational, but Gentoo really needed a solution for packaging precompiled customized binaries. The last straw was when Gentoo "broke" Samba (a big thing in my shop) due to what "upstream Samba" was doing to improve security...and then Gentoo went "radio silent" regarding how to recover the old behavior. Sorry, but "drop dead changes" without any warnings or workarounds is completely unacceptable. The lack of response from Gentoo, not even a "F U", was quite telling to me.
    • Debian was and still is the current choice on both servers & desktops. Package coverage meets my needs. Debian defaults to using SystemDeath, and that's fine for desktop usage so long as it doesn't "break the user's experience". Devuan is comparable to Debian, but without the SystemDeath stuff; the prime choice on servers and 2nd choice on desktops (long term testing shows it works fine on the desktop in my use cases). Both work fine for my uses. Both have a "stable" channel that changes to "oldstable" when bumped by the next "stable" release. Debian has a "testing" channel but Devuan's "testing" seems a bit dodgy. Release upgrades "just work"; there is no mysterious package or feature "breakage". Migrating from Debian to the comparable Devuan version works; I have not tried the reverse, "Devuan-->>Debian".
    As for other comments in this forum about SystemDeath supervising processes better than SysVInit, I can't say that I have ever encountered problems like that across my fleet, even in a Debian "testing" (not "sid"..."cutting edge"..."unstable") release. Perhaps I simply use packages that are "better behaved" than the stuff that others are using. SysVInit meets my needs & uses on my servers. That's my choice based on my needs & use cases....YMMV.

    Originally posted by Ermine View Post
    2) Systemd on desktop: it is unnecessary on the desktop. MX Linux with sysvinit works just fine. As for boot times, it manages to compare to that of systemd distros. Also, systemd has a number of features (e.g. container management) which makes more sense on servers.
    I have tested Devuan on the desktop. It seems to work fine for my needs. Even on my Intel Celeron J3455 processor "test" platforms Devuan gets to the desktop about as fast as Debian, or even a few seconds faster on a "tuned" desktop where I am only loading exactly what I need in a LXDE desktop environment.

    Originally posted by Ermine View Post
    3) If you still want to switch sysvinit to something more modern: supervision suites can restart your daemons if they crash, thus improving your server's fault tolerance. They also offer other features, such as better and more reliable log management. You can get the good of systemd without the bad. I'm talking about the daemontools family of suites: daemontools itself, runit, and primarily s6.
    Devuan offers OpenRC and possibly RunIt; I have not explored any of those options in Devuan. I tried OpenRC on Gentoo and it seemed to work fine. I have no experience with RunIt. Others might be more "experimental" than me. SysVInit meets my needs. YMMV. Linux is all about choice, no?
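    For anyone curious what those daemontools-style suites look like in practice, a runit service is just a directory containing an executable `run` script; a minimal sketch of such a service definition (the daemon name is hypothetical):

    ```sh
    #!/bin/sh
    # /etc/sv/mydaemon/run -- hypothetical runit service definition.
    # runsv executes this script, supervises the resulting process, and
    # re-runs the script if the process ever exits -- the restart-on-crash
    # behaviour plain sysvinit scripts don't provide.
    exec 2>&1
    exec mydaemon --foreground
    ```

    The `exec` at the end matters: the daemon replaces the shell, so the supervisor tracks the daemon's PID directly rather than a wrapper's.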
    Last edited by NotMine999; 03 July 2022, 11:04 PM.



  • timrichardson
    replied
    systemd-oomd will probably be good at some point in the next 12 months. However, MGLRU is also on the horizon. I've been doing stress testing in a 4GB Ubuntu VM with the default swap (which is too small). systemd-oomd killing has effectively been disabled with the latest Ubuntu changes (which may still be in proposed), so now we are back to 'livelock': stalled sessions.

    The only thresholds that were ever triggered were the memory or swap usage measures; memory pressure has never triggered anything (and now swap-based killing has been turned off, so I don't see kills at all). I don't know why memory pressure is not working. I can get the CPU load average to be > 70, that is, my VM is swapping and doing nothing else, and still memory pressure gets to about 14%, where the systemd manpage suggests that a threshold of 40% would be an aggressive choice for interactive cgroups. Memory stall happens much earlier, at least when a browser is causing the memory pressure. If memory pressure thresholds need to be completely different for different types of applications, it is going to be very, very hard to get right.
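    For reference, the thresholds being discussed live in systemd-oomd's configuration; a sketch of `/etc/systemd/oomd.conf` with illustrative values (not a recommendation; per-unit overrides also exist via `ManagedOOMMemoryPressure=` and friends in unit files):

    ```ini
    # /etc/systemd/oomd.conf -- illustrative values only
    [OOM]
    # Swap-based kills: act when more than this fraction of swap is in use.
    SwapUsedLimit=90%

    # PSI-based kills: act when the monitored cgroup's memory pressure stays
    # above this limit for the duration below. 40% is the "aggressive"
    # figure mentioned in the manpage.
    DefaultMemoryPressureLimit=40%
    DefaultMemoryPressureDurationSec=20s
    ```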

    Something is not right about that KPI.

    Meanwhile the kernel OOM killer looks on in indifference; it does nothing, which we are used to (hence systemd-oomd).

    I installed xanmod 5.18, which includes MGLRU (and I also set up zswap, based on commentary that it is more effective with MGLRU), and disabled systemd-oomd. Amazing difference. The kernel OOM killer actually works now. It gets involved quite quickly (typically before CPU load is > 20). I'm triggering high memory use by loading 100 tabs into Chromium with trackthis.link, and the OOM killer kills tabs.
    earlyoom also does this, but the kernel killer kills far fewer tabs. So far it has never killed the session. This is just my testing, but it looks to me like MGLRU plus the kernel killer is a good choice. I really hope MGLRU gets the green light for the mainline Linux kernel soon, although I'm sure the delay is for a good reason.
    Last edited by timrichardson; 03 July 2022, 07:01 PM.



  • sinepgib
    replied
    Originally posted by NotMine999 View Post
    I think the distros are the problem.

    In the early days of Linux the distros made a reasonable effort to follow common standards on filesystem layouts (and long before SystemDeath came along), then scripts from the original developer seemed to need few changes and SysVInit worked just fine.

    Then a number of different distros decided to go off into separate "camps" on their filesystem layouts (generally following their upstream primary distro designs) with each "camp" doing something different. That schism caused trouble for many original developer scripts and many of those developers moved over to using SystemDeath features because SystemDeath was common to most of those distros; it reduced the "distro customization & support hassle".

    I personally cannot remember how many scripts I had to edit in fleet-wide migrations from Redhat to Arch, then Arch to Gentoo, and finally Gentoo to Debian. Each of those distros had their own ideas on where certain files were stored, and these were mostly config files for the apps that might have file location pointers stored within them.
    That is a big part of the problem, but not the only one. It is what systemd as an umbrella project attempts to fix, not what systemd as an init (or the several other inits that sysvinit lovers love to ignore) fixes.

    Originally posted by NotMine999 View Post
    SystemDeath makes sense on the desktop where lots of different stuff has to start, much of it dependent upon other stuff, and all of that stuff needs to "fly in formation together" for the desktop world to work.

    On a server where I do not have a GUI (CLI or WebUI only), the "linearity" of the SysVInit design (using Devuan) works fine for my uses. Besides, who uses a GUI on their servers? Linux converts that were formerly Apple & Windows users, that's who, and it's because that is how they were taught; been there and unlearned all of that.
    It makes absolute sense on a server, for the simple reason that sysvinit does not supervise processes correctly. That is what I mean by sysvinit being broken: it's absolutely fragile and fault intolerant. An init can't be fault intolerant, especially on a server where you should avoid downtime as much as possible. After all, that's what people criticize about systemd's complexity. systemd is still better than sysvinit in that regard, but significantly worse than, say, s6, which manages to be both simple and able to keep track of services in a non-brain-dead way.
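    The supervision gap being described is visible in how little it takes to express restart-on-crash in a systemd unit; a minimal sketch (the service name and binary path are hypothetical):

    ```ini
    # /etc/systemd/system/mydaemon.service -- hypothetical service
    [Unit]
    Description=Example supervised daemon

    [Service]
    ExecStart=/usr/local/bin/mydaemon --foreground
    # The part sysvinit scripts lack: if the process crashes,
    # systemd restarts it automatically after a short delay.
    Restart=on-failure
    RestartSec=2

    [Install]
    WantedBy=multi-user.target
    ```

    Under sysvinit, getting the same behaviour means bolting on an external tool or hand-rolling a respawn loop; under systemd (or s6/runit), it's the default mode of operation.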

