SysVinit 3.11 Released With An "Important Feature" At Long Last


  • oiaohm
    replied
    Originally posted by ahrs View Post
    Lennart has already said he doesn't want this anyway. He thinks Musl should do what Glibc does.
    This is the wrong point of view and misses what the problem is.

    Now let's say someone did fork systemd and made the code work with both Musl and Glibc; that fork might be able to compete for market share against systemd.

    Originally posted by ahrs View Post
    It's easy to support both with an #ifdef mess or #include <musl-compat.h>, etc but when upstream doesn't want it, from the perspective of the patch maintainer it's easier to maintain by ripping out all of the Glibc stuff.
    If it were easy to use #ifdef, that would be something.

    Code:
    #ifdef __GLIBC__
    That detects __GLIBC__, but what is the Musl equivalent? Officially there is no __MUSL__.

    ahrs, like it or not, musl is not easy to support.

    Today, I would like to discuss a project that I care very deeply about: the musl libc. One of the most controversial and long-standing debates in the musl community is that musl does not define a p…


    Yes, the musl upstream developers' idea is that you build a test program and then inspect the resulting binary to find out which libc you are dealing with.

    The problem here is not systemd alone; it is also musl. If an ifdef existed along the lines of "this is not __GLIBC__, this is __MUSL__", that would be one thing. Instead, for musl you have to build a test binary and basically reverse-engineer it to work out that you are dealing with musl at all, and which version of musl, which is a major pain in the butt.

    The fun part is that for musl, klibc and many other non-glibc libcs there is no standard predefined macro that tells you which libc you are dealing with.
    #ifndef __GLIBC__ is really not good enough.

    It is always claimed that supporting musl is easy, but the reality is that it is not. This is made worse by all the different musl build options.
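
    To make the problem concrete, here is a minimal sketch (illustrative only; __MUSL__ is hypothetical, being exactly the macro musl refuses to define):

    Code:
    #include <stdio.h>

    int main(void)
    {
    #if defined(__GLIBC__)
        /* Glibc advertises itself with version macros, so this case is easy. */
        printf("glibc %d.%d\n", __GLIBC__, __GLIBC_MINOR__);
    #elif defined(__MUSL__)
        /* This branch can never fire: musl deliberately defines no such macro. */
        printf("musl\n");
    #else
        /* Could be musl, klibc, uclibc-ng, bionic... there is no way to tell
           at compile time, let alone which version. */
        printf("unknown non-glibc libc\n");
    #endif
        return 0;
    }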





  • anda_skoa
    replied
    Originally posted by skeevy420 View Post
    Yeah, if you're on Gentoo or LFS
    Yes, and in the context of embedded (which I was mostly trying to address in my earlier comments) most likely Yocto.

    Originally posted by skeevy420 View Post
    To me, your comment highlights how the size and functionality issues that people seem to have are more related to differences in how distributions package and handle dependencies than they are necessarily inherent to systemd itself.
    Indeed.

    More of an issue when one uses a "standard" distribution than when using a custom build.

    Most of the embedded projects I've been involved with would do the latter and only select the parts of the systemd ecosystem that they actually wanted.


    Depending on your distribution you can run into dependency hell by not using systemd or by trying to replace programs it depends on with something else. Not every distribution is that flexible or even wants to be. I wonder how many people really have an issue with Ubuntu or Arch and not necessarily systemd?

    Originally posted by skeevy420 View Post
    Anyways, all that overlap and blurring seems to lead to a lot of miscommunication for everyone.
    That is definitely true.

    It is sometimes difficult to see if people mean "systemd the init process" or "systemd the project".

    People who are easily confused don't even know that there is a difference and think that systemd is one huge process instead of small processes working together.

    Also agree with your earlier point that common packaging does not make it easier to see the modularity.

    Originally posted by skeevy420 View Post
    You know, I just can't find any more fucks to give about init systems so I'm gonna coin the new words "sebulate" and "sebulated" to use instead of the phrase "separate but related". Pronounced "seb-you-late".
    Hehe, I like it




  • Akiko
    replied
    Originally posted by tobias View Post
    So as I said above: "Keep PID1 minimal" is a design decision made in 1980 or so that makes no sense in today's system architecture. It is disturbing how people still run to the defense of those design decisions anyway.
    This is not entirely true. The worst enemy in development is complexity. You can see it everywhere. Current CPUs (required mitigations in kernels), current graphics hardware (about 10 million of the 35 million lines of code in the kernel are for ATI/AMD graphics cards), UEFI (rootkits which take up residence in your mainboard flash and even run at Ring -1 or -2), browsers, the SAP software stack clusterfuck, compilers (gcc/llvm's love of doing dead-code elimination on not-so-dead code), cloud software stacks (remember how Google deleted the 125-billion-dollar Australian pension fund?), left-pad again (it is just the perfect example of everything going wrong in modern software development), F-35 fighter jets... it is an endless story...



  • tobias
    replied
    Originally posted by TheMightyBuzzard View Post
    Way too many cultist userspace programmers have made their code dependent on one specific init system when it has absolutely no reason to be. That should never have happened with anything except system utilities that deal directly with the systemd init system and no other. That's not systemd's fault but it absolutely is the fault of its cultists.
    You provide something that helps developers do their task and devs will depend on your stuff. Do nothing useful and people do not depend on your stuff... There is a lot of useful stuff in the systemd umbrella project, so there are lots of reasons to depend on parts of systemd.

    Those useful parts reuse functionality other parts of systemd provide. IMHO that is totally reasonable and a good thing. Who wants several copies of code that all do the same thing, each slightly different from the others? So we have a layered architecture: user space code that depends on plumbing layer code, which depends on more low-level plumbing, all the way down to systemd-PID1, which finally depends on the kernel.

    Having this plumbing layer with all the services it provides makes it much easier for other devs working on the plumbing layer to provide more functionality. So the systemd-based plumbing layer will continue to grow. Which will make more user space components rely on a systemd plumbing layer, which will make it harder to not use systemd.

    But what else should user space devs do than run with systemd? There is no alternative plumbing layer. Nobody has bothered to create one; the anti-systemd crowd declared all of it useless and tried to get rid of it... till they realized that does not work, as upstream really needs that functionality. At this point they could have decided to provide alternatives, but instead they extracted systemd code, forked or at least semi-forked it, and now they ship outdated and broken versions of systemd themselves. Just look at elogind and all the rest... so we are bound to systemd now, for better or worse, simply because there is no alternative.
    Last edited by tobias; 23 October 2024, 04:07 AM.



  • ahrs
    replied
    Originally posted by tobias View Post

    So if PID1 and PID2 "may not crash" anyway and you need restart functionality in PID2 because PID3 may crash, why insist that PID1 be minimal and all the actual code go into PID2? You win nothing by doing that; just make the overall system simpler by merging PID1 and PID2. Voila, you are pretty close to systemd.

    So as I said above: "Keep PID1 minimal" is a design decision made in 1980 or so that makes no sense in today's system architecture. It is disturbing how people still run to the defense of those design decisions anyway.
    It is a design decision, yes. Why does it make no sense today? Because Systemd does it that way?

    I can't think of any obvious downside to using multiple independent processes that are easier to inspect, modify and use in isolation. Systemd does this itself for a lot of things (systemd-networkd and journald don't run in PID 1; why not? Why are they following a design decision from the 1980s?).

    The only difference is the pstree output:

    Systemd (PID 1: supervising) -> PID 2 (program)
    Init (PID 1) -> supervise-daemon (PID 2: supervising) -> PID 3 (program)

    It ultimately does not matter which approach you take as long as we both agree that both supervisors are correct and do the right thing.

    There is an argument that stuffing more code into PID1 leads to more bugs (that's where the 1980s decision of writing separate units of code that work well together comes from), but without any concrete studies on this it's hard to say either way.
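
    For illustration, the core of a supervise-daemon style PID 2 is just a restart loop (a minimal sketch, assuming POSIX; /usr/bin/mydaemon is a made-up path):

    Code:
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    /* Minimal sketch of a PID 2 style supervisor: start the service,
       wait for it, restart it whenever it dies. */
    int main(void)
    {
        for (;;) {
            pid_t pid = fork();
            if (pid < 0)
                return 1;                   /* fork failed, give up */
            if (pid == 0) {
                execl("/usr/bin/mydaemon", "mydaemon", (char *)NULL);
                _exit(127);                 /* exec failed */
            }
            int status;
            waitpid(pid, &status, 0);       /* a real supervisor must also
                                               handle EINTR and signals */
            fprintf(stderr, "service died, restarting\n");
            sleep(1);                       /* crude restart throttle */
        }
    }

    Whether that loop lives in PID 1 or PID 2 does not change a single line of it; the question is only what else crashes along with it.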



  • intelfx
    replied
    Originally posted by Akiko View Post
    Oh come on, this is just not true. Depending on your configuration you run 3 to 10 systemd daemons. This is absolutely nothing for a desktop system, but something you feel on an embedded system. On my workstation, journald, udevd, logind and the user-specific part already eat about 33 MiB of RAM. For an embedded system this is absolute overkill. If you log into a build of a yocto-tiny system you can see that your whole running system eats about 10-20 MiB of RAM (depending on the Yocto release; there is quite a difference between sumo, kirkstone and the upcoming styhead).
    On an embedded system you don't need either logind or the "user specific part". Any embedded integrator worth their salt knows that and knows how to disable both (guess what, they are optional!).

    If you're _that_ tight on RAM, you don't need journald either — guess what, it's optional too (although _slightly_ harder to disable, and you are going to lose noticeable functionality by doing so, but then again, if you're that tight on RAM, you better be prepared to lose functionality).

    The only thing you _really_ require is udev — and if you're in the "removing udev" territory, then it's just not a target use case for systemd. Roll your own minimal init as you've done before and be done with it.

    Originally posted by Akiko View Post
    Okay, so let me share some of my experience as a developer who worked at one of the biggest Linux distributors, building professional distributions for (at that time) unique systems. In my office back then I was surrounded by an AlphaServer DS20 and DS20E and an HP C3750, and about 2 years ago I killed my HP C8000. I was surrounded by AMD Sledgehammer engineering samples and UltraSPARC systems. I had remote access to SGI Altix systems with 512 and 1024 CPUs (Itanium) to investigate and fix bugs. We had a bug where the customer reported this: an SGI Altix with 1024 CPUs and 1024 GiB of RAM runs an HPC application over weeks, highly tuned to run with exactly 1024 threads, but sometimes one CPU runs two threads and another runs nothing, and the HPC runtime increased by about 25% because of this ... please fix this scheduling issue, because CPU time is expensive.

    To make it short: on an HPC system doing serious work, running for weeks or months, you really want predictable runtime behavior. Having daemons on the system which start to do some work, like deleting or rotating journals, and eat into your IO or CPU time can become a nightmare. This is the reason why running systemd on an HPC system may be a bad idea. And yes, I know this is an extreme example. But I want to demonstrate that I do not throw bullshit around. I tell this because I encountered these issues. I really try to see the good and the bad.
    If you're such a big boy doing serious work on serious HPCs, then you really ought to know how to stop unneeded dæmons, isolate your CPUs and pin your threads. And you probably should know that having a bunch of processes that mostly do nothing is not supposed to impact scheduling of CPU hogs in any way — or if it does, then you're in trouble anyway, because guess what, kthreads exist and most of them are unbound.

    I'm a shitty hobbyist admin and I know all that. If you don't, then I call your expertise into question.

    (BTW, most of the time, "unpredictable behavior" is shorthand for "I have no clue how to predict behavior". This is one of them times.)
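
    For what it's worth, the pinning part is a few lines (a minimal sketch, assuming Linux with glibc's sched_setaffinity; core 3 is an arbitrary choice):

    Code:
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    /* Pin the calling process to CPU core 3 so the scheduler can never
       migrate it; HPC runtimes do the equivalent per worker thread. */
    int main(void)
    {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(3, &set);
        if (sched_setaffinity(0, sizeof(set), &set) != 0) {
            perror("sched_setaffinity");
            return 1;
        }
        printf("pinned to core 3\n");
        return 0;
    }

    Combine that with isolcpus= (or cpuset cgroups) so nothing else is scheduled on those cores, and the background daemons never touch them in the first place.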
    Last edited by intelfx; 23 October 2024, 04:04 AM.



  • tobias
    replied
    Originally posted by ahrs View Post

    PID2 in this scenario would be your process supervisor, like supervise-daemon, Runit, etc. PID3 is the thing that will crash. All of the same "must never segfault" requirements apply to your process supervisor, and if you're not using one, then "PID1 reboots the system when PID2 stops. So PID2 crashing effectively reboots the system. That is not much different from PID1 crashing itself" is probably the correct thing to do.
    So if PID1 and PID2 "may not crash" anyway and you need restart functionality in PID2 because PID3 may crash, why insist that PID1 be minimal and all the actual code go into PID2? You win nothing by doing that; just make the overall system simpler by merging PID1 and PID2. Voila, you are pretty close to systemd.

    So as I said above: "Keep PID1 minimal" is a design decision made in 1980 or so that makes no sense in today's system architecture. It is disturbing how people still run to the defense of those design decisions anyway.



  • Akiko
    replied
    Originally posted by tobias View Post
    Sure. But is that design decision still relevant today?
    For the common user? Nope, definitely not. But Linux is one of the rare pieces of software that basically runs everywhere, and promotes this, too. So for niche systems it is relevant. And it is not helpful if a lot of the software you need to build a working distribution migrates away from "runs everywhere" because of a single piece of software that tries to be everything. You end up with a kernel that runs everywhere but cannot get a working userspace anymore. I really try to take all scenarios into account. And the "runs everywhere" approach of Linux is still being pushed forward. Just look at the RT extension, which is becoming mainstream now. Now you can do funny realtime stuff on "bloated*" micro-controllers. (*ARMv7/ARMv8-based micro-controllers able to run "normal" kernels)

    Originally posted by Weasel View Post
    With a million daemons running in the background?

    systemd needs to be exorcised. Waste of CPU power and memory.
    Oh come on, this is just not true. Depending on your configuration you run 3 to 10 systemd daemons. This is absolutely nothing for a desktop system, but something you feel on an embedded system. On my workstation, journald, udevd, logind and the user-specific part already eat about 33 MiB of RAM. For an embedded system this is absolute overkill. If you log into a build of a yocto-tiny system you can see that your whole running system eats about 10-20 MiB of RAM (depending on the Yocto release; there is quite a difference between sumo, kirkstone and the upcoming styhead).

    Originally posted by F.Ultra View Post
    Comments like the above is why it is so hard to take anti-systemd people seriously.
    Wow, that got "interesting" quite fast... I am NOT anti-systemd. I just do not jump onto hypes and take everything without a critical look at it. Let me give you some examples:
    I have been a C++ developer for decades now, even teaching C++20 and C++23. I love it. But man, do I hate some aspects of C++, like default constructors and operators or even implicit conversions. I do Rust programming and yes, I see the benefits, and some are really good, but man do I hate the over-complex syntax (async Rust), the runtime bloat (from the perspective of embedded) and the over-reliance on online hosted crates. Now that the .io top-level domain may be going away, I hope some more people see why this is not a good idea. I mean, after cpan, npm and pypi became good malware/adware providers, I thought people would understand this. Nope. After left-pad fucked the whole world? Nope... Okay, back to topic. I love coding in Zig. You know, having no runtime (just attach to kernel APIs) is great, and the build system being part of the language itself is great, but man do I hate it having no constructor/destructor mechanism to do RAII. defer is nice, but still not enough. See? I try to use my brain and to understand the pros and cons.

    Originally posted by F.Ultra View Post
    I have no experience with embedded, but for servers/hpc, systemd is a godsend.
    Okay, so let me share some of my experience as a developer who worked at one of the biggest Linux distributors, building professional distributions for (at that time) unique systems. In my office back then I was surrounded by an AlphaServer DS20 and DS20E and an HP C3750, and about 2 years ago I killed my HP C8000. I was surrounded by AMD Sledgehammer engineering samples and UltraSPARC systems. I had remote access to SGI Altix systems with 512 and 1024 CPUs (Itanium) to investigate and fix bugs. We had a bug where the customer reported this: an SGI Altix with 1024 CPUs and 1024 GiB of RAM runs an HPC application over weeks, highly tuned to run with exactly 1024 threads, but sometimes one CPU runs two threads and another runs nothing, and the HPC runtime increased by about 25% because of this ... please fix this scheduling issue, because CPU time is expensive.

    To make it short: on an HPC system doing serious work, running for weeks or months, you really want predictable runtime behavior. Having daemons on the system which start to do some work, like deleting or rotating journals, and eat into your IO or CPU time can become a nightmare. This is the reason why running systemd on an HPC system may be a bad idea. And yes, I know this is an extreme example. But I want to demonstrate that I do not throw bullshit around. I tell this because I encountered these issues. I really try to see the good and the bad.
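
    If one does keep journald on such a machine, its housekeeping can at least be bounded so it never turns into a surprise IO burst mid-run (a sketch of /etc/systemd/journald.conf; the sizes are arbitrary examples):

    Code:
    # /etc/systemd/journald.conf -- bound journald's background work
    [Journal]
    SystemMaxUse=64M         # cap total disk usage of persistent logs
    SystemMaxFileSize=8M     # rotate in small increments, not big bursts
    RateLimitIntervalSec=30s
    RateLimitBurst=1000      # throttle chatty services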



  • ahrs
    replied
    Originally posted by oiaohm View Post
    Of course, the above musl patch set has the problem that it takes out glibc stuff and puts in musl stuff, effectively breaking the build of this patched version of systemd on glibc, because it now uses musl-specific features.
    Lennart has already said he doesn't want this anyway. He thinks Musl should do what Glibc does.

    It's easy to support both with an #ifdef mess or #include <musl-compat.h>, etc but when upstream doesn't want it, from the perspective of the patch maintainer it's easier to maintain by ripping out all of the Glibc stuff.



  • ahrs
    replied
    Originally posted by tobias View Post

    Sure. But is that design decision still relevant today?



    So we have a few useless lines of C code and run those as PID1 and run the interesting stuff (e.g. service management) as PID2. How does that make the overall system simpler?

    I care that my system as a whole runs fine. If the service management crashes I will need to reboot. Which PID service management has is not important.

    But let's assume we have a simple PID1 that starts service management and the service management crashes. What now?

    1. PID1 reboots the system when PID2 stops. So PID2 crashing effectively reboots the system. That is not much different from PID1 crashing itself
    2. PID1 does nothing. The system is dead now and I need to turn it off.
    3. PID1 restarts service management... that means PID1 is not minimal anymore though. We are on a slippery slope towards moving the service management into PID1 :-)
    PID2 in this scenario would be your process supervisor, like supervise-daemon, Runit, etc. PID3 is the thing that will crash. All of the same "must never segfault" requirements apply to your process supervisor, and if you're not using one, then "PID1 reboots the system when PID2 stops. So PID2 crashing effectively reboots the system. That is not much different from PID1 crashing itself" is probably the correct thing to do.
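
    To see how little a deliberately minimal PID 1 actually contains, here is a sketch (assuming Linux and glibc's reboot(2) wrapper; /sbin/supervise-daemon is a placeholder path):

    Code:
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <sys/reboot.h>

    /* Sketch of a minimal PID 1: spawn the supervisor as PID 2, reap
       orphans forever, and reboot if the supervisor itself ever exits
       (option 1 from the list above). */
    int main(void)
    {
        pid_t super = fork();
        if (super < 0)
            return 1;                   /* cannot even start: give up */
        if (super == 0) {
            execl("/sbin/supervise-daemon", "supervise-daemon", (char *)NULL);
            _exit(127);                 /* exec failed */
        }
        for (;;) {
            int status;
            pid_t dead = wait(&status); /* reaps every orphan, not just PID 2 */
            if (dead == super)
                reboot(RB_AUTOBOOT);    /* PID 2 died: option 1, restart */
        }
    }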

