
SysVinit 3.11 Released With An "Important Feature" At Long Last


  • oiaohm
    replied
    Originally posted by Weasel View Post
    On init or shutdown. So who cares?

    It's not continuously when it's up and running.
Now you have just described one of the startup/shutdown script problems. Does a sysvinit script have to end? Most of them do, but technically they do not, and if one does not you have a resource leak. This used to stop machines from starting at all; since parallel startup the system still boots, but the leak remains.

There is an advantage to systemd's INI-based service files here: they are not Turing-complete, so you don't get those leaks.

The major advantage of systemd is resource-cost predictability, due to not using scripts.

Weasel, it's really easy to say "who cares" when you have not watched the various embedded developers giving presentations on why they run custom cut-down builds of systemd. Startup cost on their restricted hardware is important: users don't like waiting around for embedded systems to start. Shutdown cost is important too.

Here is the other thing: if startup and shutdown force you to fit X amount of RAM, and X is greater than the Y amount you need while the system is up and running, then startup and shutdown are making the device more expensive than it needs to be. Welcome to embedded costing: init and shutdown resource usage is important. The fact that systemd is lighter here means embedded developers can use cheaper hardware, even a slower CPU, because there is only a limited window in which users are happy to wait for a device to start.

Weasel, like it or not, a lot of people and parties do care about init cost, because it has effects that cost money. And to be real, a lot of embedded developers have their hardware never shut down cleanly; the power is just cut.

This is part of why people are choosing systemd: it is better for particular use cases.

The continuous cost when up and running is only one metric; the init cost is another very important one. The ideal, which can almost never be achieved, is that init never uses any more resources than continuous operation does, and that init is fast so the user is not left waiting around. Systemd is closer to that ideal than sysvinit is.

The varlink work going on in systemd will move D-Bus off the mandatory-features list for good. This is the point: systemd is progressively reducing the parts you need to have continuously running to have a systemd system.

Sysvinit is not fixing its init/shutdown cost problem. Yes, Weasel, people defending sysvinit like you are just as bad as pro-systemd people who are not truthful about its limitations. "Who cares about init/shutdown cost" is what you get from sysvinit backers who don't want to admit that sysvinit is defective here, that this has cost effects for embedded developers, and that it is leading more embedded developers to use systemd.

Systemd provides a lot of advantages for the continuous cost you are paying, and its developers are working on reducing that continuous cost because they do admit it is a problem.

Weasel, please don't do the "who cares" line on this. When comparing the resource cost of init solutions you have to compare both init cost and continuous cost. Like it or not, sysvinit's init cost is very high; script-based service management systems have all had high init costs.

As I said, sysvinit does not have zero cost either, so don't use "who cares" as an argument to ignore the costs on one side or the other. I could just as well dismiss systemd's continuous cost with a "who cares, look at the extra features its service management provides"; if your "who cares" argument is valid, Weasel, so is that one.

Weasel, in a debate remember this: if you are writing or saying "who cares", there is a 90%+ chance that the person on the other side does in fact care, and the debate is about to go nowhere because you have just dismissed out of hand something you should not have.

Systemd and sysvinit both have their costs, and those costs matter to where each is or is not suitable. Neither is a perfect all-rounder at this stage, but systemd is closer to being one than sysvinit...


    Leave a comment:


  • Weasel
    replied
    Originally posted by oiaohm View Post
Using busybox or sysvinit scripts also has a non-zero cost. Really, systemd people have not self-owned themselves as much as you think.
    On init or shutdown. So who cares?

    It's not continuously when it's up and running.

    Leave a comment:


  • oiaohm
    replied
    Originally posted by Akiko View Post
    Oh come on, this is just not true. Depending on your configuration you run 3 to 10 systemd daemons. This is absolutely nothing for a desktop system, but something you feel on an embedded system. In my workstation journald, udevd, logind and the user specific part already eats about 33 MiB of RAM. For an embedded system this is absolute overkill. If you login into a build of a yocto-tiny system you can see that your whole running system eats about 10-20 MiB RAM (depending on the Yocto release, there is quite a difference between sumo, kirkstone and the upcoming styhead).​
There is a problem with your numbers here. journald is optional: some embedded developers have found you can remove it completely and the system still runs.

    To make it short: On HPC system doing serious work, running for weeks or months, you really want a predictable runtime behavior. Having daemons running on the system, which start to do some work like deleting or swapping out journals and eating into your IO or CPU time can become a nightmare.​
Again, this is not knowing systemd and journald.

    This guide contains steps on how to configure a logging solution that might deliver better performance than the logging solution in the default configuration.


Let's say you don't remove journald, but you set its storage mode to none and configure no forwarding; this effectively sends logging to /dev/null. Yes, this massively reduces the RAM journald uses, but it also means you have no logging. Why keep journald even with storage mode set to none? Because you might be forwarding to syslog and the like: journald can work as a logging system, but it can also work as a log-message validator.
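
A minimal drop-in sketch of that setup: `Storage=` and `ForwardToSyslog=` are documented journald.conf options, though the drop-in path and values here are just illustrative.

```ini
# /etc/systemd/journald.conf.d/forward-only.conf (illustrative path)
[Journal]
Storage=none          # keep no journal in RAM or on disk
ForwardToSyslog=yes   # still pass validated messages on to a syslog daemon
```

With this in place journald retains almost nothing itself and acts purely as the validating relay described above.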

With storage mode none and forwarding to syslog, what journald is doing is making sure no application can send messages claiming to be some other process: the cgroup/service the message actually came from is recorded by journald, preventing spoofing games. Think of running multiple instances of the same task as different services on an HPC system: with journald, your logged messages will record exactly which service each message came from, whereas a directly connected syslog (which you can also run under systemd) lacks this protection.

Basically, journald as a validator, or something like rsyslog validating the client process, is recommended so you have more useful logging.

The worst behaviors of journald only appear when journald is doing the storage, and journald does not have to do storage.

HPC configuration of systemd is different from desktop configuration, just as desktop configuration of systemd is different from embedded.

    Leave a comment:


  • oiaohm
    replied
    Originally posted by Weasel View Post
Couldn't care less if you think it's trivial or not. The fact is, it's more than zero, so it's waste. And systemd is the one they said was the opposite.
Using busybox or sysvinit scripts also has a non-zero cost. Really, systemd people have not self-owned themselves as much as you think.

    Leave a comment:


  • Weasel
    replied
    Originally posted by Akiko View Post
    Oh come on, this is just not true. Depending on your configuration you run 3 to 10 systemd daemons. This is absolutely nothing for a desktop system, but something you feel on an embedded system. In my workstation journald, udevd, logind and the user specific part already eats about 33 MiB of RAM. For an embedded system this is absolute overkill. If you login into a build of a yocto-tiny system you can see that your whole running system eats about 10-20 MiB RAM (depending on the Yocto release, there is quite a difference between sumo, kirkstone and the upcoming styhead).​
    I missed the part where the argument wasn't about efficiency and waste?

    I didn't bring it up, systemd shills did. So I'm stating facts like they're self-owning themselves.

Couldn't care less if you think it's trivial or not. The fact is, it's more than zero, so it's waste. And systemd is the one they said was the opposite.

    Leave a comment:


  • F.Ultra
    replied
    Originally posted by Akiko View Post
    Wow, that got "interesting" quite fast... I am NOT anti-systemd. I just do not jump into hypes and take everything without a critical look into it. Let me get you some examples:
    I am a C++ developer for decades now, even teaching c++20 and C++23. I love it. But man, do I hate some aspects of C++, like default constructors and operators or even implicit conversions. I do Rust programming and yes, I see the benefits, and some are really good, but man do I hate the over-complex syntax (async Rust) and the runtime bloat (from the perspective of embedded) and the over-reliance of online hosted crates. Now after the .io top level domain may be going away I hope some more people see why this is not a good idea. I mean after cpan, npm, pypi are now good malware/adware providers I thought people understand this, nope. After left-pad fucked the whole world, nope... Okay, back to topic. I love coding using Zig. You know, having no runtime (just attach to kernel APIs) is great, the buildsystem being part of the language itself is great, but man do I hate it having no constructor/destructor mechanism to do RAII. defer is nice, but still is not enough. See? I try to use my brain and try to understand the pros/cons.
    You sound like a reasonable person so I appreciate that, the reason for me adding "anti systemd" was your attitude that systemd must do this dumb thing when it in fact didn't, aka you showed prejudice here that was unwarranted and _that_ is usually a huge sign of anti-systemd trolls, combine that with your needless remark about Poettering and you might perhaps see why I came at it the way I did.

    Originally posted by Akiko View Post
    Okay, so let me tell some of my experience as a developer who worked at one of the biggest Linux distributors building professional distributions for (at that time) unique systems. In my office back then I was surrounded by a AlphaSever DS20 and DS20E, HP C3750 and about 2 years ago I killed my HP C8000. I was surrounded by AMD Sledgehammer engineering samples and UltraSparce systems. I had remote access to SGI Altix systems with 512 and 1024 CPUs (Itanium) to investigate and fix bugs. We had a bug where the customer reported this: SGI Altix 1024 CPUs, 1024 GiB RAM, runs HPC application over weeks, highly tuned to run with exactly 1024 threads, but sometimes a CPU runs two threads and another runs nothing, the HPC runtime increased about 25% because of this ... please fix this scheduling issue, because CPU time is expensive.

    To make it short: On HPC system doing serious work, running for weeks or months, you really want a predictable runtime behavior. Having daemons running on the system, which start to do some work like deleting or swapping out journals and eating into your IO or CPU time can become a nightmare. This is the reason why running systemd on a HPC system may be a bad idea. And yes, I know this is an extreme example. But I want to demonstrate you, that I do not throw bullshit around. I tell this, because I encountered these issues. I really try to see the good and the bad.
Yes, but that is no different from how a normal SysV setup would work, with syslog daemons running and whatnot. So of course, just as you would reconfigure such a system for HPC, you would do the same if it was running under systemd, e.g. by removing all those daemons (they are not necessary). This is why we have core isolation, cgroups, and priority levels.

    My own reference here is working in the financial industry with latency requirements measured in nanoseconds.
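
For illustration, the per-service confinement alluded to above can be expressed directly in a unit file; `CPUAffinity=`, `Nice=` and `IOSchedulingClass=` are documented systemd.exec directives, while the service name and values are hypothetical.

```ini
# /etc/systemd/system/noisy-daemon.service.d/hpc.conf (hypothetical unit)
[Service]
CPUAffinity=0 1          # keep the daemon off the isolated compute cores
Nice=19                  # lowest CPU scheduling priority
IOSchedulingClass=idle   # touch the disk only when nothing else wants it
```
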
    Last edited by F.Ultra; 23 October 2024, 12:02 PM.

    Leave a comment:


  • oiaohm
    replied
    Originally posted by tobias View Post
    So we have PID1, PID2 and PID3 now, with PID2 and PID3 sharing some code to restart processes.
    There is a problem here.
    Systemd man says for --system switch: Run a system instance, even if PID != 1 but currently the check for PID = 1 is still done even if this switch is used. Used distribution Debian 10 Expected beh...

A process holding the PID 1 value can do things with process control that processes without it cannot.

There is a loophole, but it turns into turtles all the way down.

Your PID 1 could start PID 2 in a PID namespace with cgroups, so that PID 2 is PID 1 for all the processes in the cgroup tree it sits in.

It does not matter whether you get the PID 1 value via a PID namespace or by being the host PID 1: having it gives you the process-control power. This is linked back to how cgroups were implemented.

And if you terminate the PID 1 of a PID namespace, be it PID 2 or PID 1000 on the host, every process in that namespace dies instantly.
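
The claim that a namespace's first process really is PID 1 inside it can be checked with util-linux `unshare`; a sketch that assumes unprivileged user namespaces are enabled, which locked-down kernels and containers may refuse.

```shell
# Create user+PID namespaces; the forked child becomes PID 1 inside them.
# $$ is the shell's own PID, as seen from within the namespace.
unshare --map-root-user --pid --fork --mount-proc sh -c 'echo "pid inside: $$"'
# Killing that inner PID 1 takes the whole namespace down with it.
```
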

By the way, the Linux kernel runs a PID 0 process, and it is the one that gets really upset when it cannot find a PID 1 to do tasks for it. This is also why the PID 1 started in the initrd can be replaced by a different PID 1 later in the boot process.

sysvinit did not have proper service management. Solaris, with its zones, was designed with a PID 2 that allowed the higher process control that SMF (the Solaris systemd-like thing) runs with.

Remember, PID 1 is the one that cleans up dead/zombie processes, which can be holding file handles and the like. This is one of the issues: you don't want to attempt to restart a process while a zombie from its prior run still has to be cleaned up, because the new process can turn into another zombie in turn, repeating until you run out of process slots and the system stops.
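
The zombie mechanics described above are easy to demonstrate; a small Linux-only Python sketch reading `/proc`:

```python
import os
import time

# Fork a child that exits immediately. Until someone wait()s on it,
# the kernel keeps its process-table entry around as a zombie so the
# exit status (and anything tied to the entry) can still be collected.
pid = os.fork()
if pid == 0:
    os._exit(0)          # child: die right away

time.sleep(0.2)          # give the child time to exit

# The field after the command name in /proc/<pid>/stat is the process
# state; 'Z' means zombie.
with open("/proc/%d/stat" % pid) as f:
    state = f.read().rsplit(")", 1)[1].split()[0]
print("state before reaping:", state)

os.waitpid(pid, 0)       # reap it, as PID 1 does for orphaned children
print("entry still in /proc:", os.path.exists("/proc/%d" % pid))
```

A real init has to do this continuously for every orphan on the system, which is why reaping cannot simply be split off into a helper process without kernel support.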

This is not as simple as just splitting what PID 1 does across more than one process, because being able to do that would require kernel changes.

The interesting point is that the PID namespace case shows it is possible to have more than one PID 1; we just cannot define a backup PID 1 in host mode. Say we could start another copy of systemd as, for example, PID 2 and tell the kernel to switch to it and keep working if PID 1 dies for any reason: that would fix most of the reliability issue, but it would require kernel changes. Remember, booting already switches from the PID 1 in the initrd to the PID 1 from the root directory.

    Leave a comment:


  • tobias
    replied
    Originally posted by ahrs View Post

    A service manager, a la Systemd, does, yes, but a process supervisor does not. A process supervisor simply runs the same task repeatedly. Service management is delegated to the init system. It's a very different way of doing things even though the outcome is the same.
So we have PID1, PID2 and PID3 now, with PID2 and PID3 sharing some code to restart processes. That gets more and more complex. In the end the code providing the functionality you want needs to be *somewhere*. We do agree that the functionality provided is similar. Let's assume providing this functionality requires a certain amount of complexity. That is the minimum amount of complexity we have in the system, independent of how we implement that functionality. Any practical implementation will be at least as complex as that, and in practice more complex, as the implementation itself adds something on top. We just disagree about that "on-top complexity".

    I understand your position to be that we need small and simple units of code to review, separated by strong process boundaries. My position is that those process boundaries themselves add complexity and can be avoided unless there is also a security boundary between those bits of code.

    To me systemd does enough using SW design to keep bits of functionality separated from other bits of functionality, so that I can review small bits of code at a time. That way they avoid the process separation "on top complexity", but they of course add SW design "on top complexity".

    I doubt we will agree on what is less "on top complexity" overall. I think our individual backgrounds play into this too strongly.

    Leave a comment:


  • ahrs
    replied
    Originally posted by tobias View Post

    A service supervisor can by definition manage services on the system. That makes it a security critical task, independent of what user it runs as.
    A service manager, a la Systemd, does, yes, but a process supervisor does not. A process supervisor simply runs the same task repeatedly. Service management is delegated to the init system. It's a very different way of doing things even though the outcome is the same.

    Leave a comment:


  • intelfx
    replied
    Originally posted by Akiko View Post
    You don't really read the thread, don't you? I already explained because of systemd becoming a standard that other software dropped init scripts (you now have to rewrite) and even introduced a dependency to systemd (getting this removed is even harder). You need to do more customization.
    Oh, I did read the thread. And I exceeded my daily facepalm quota while doing so.

    If you're at a "remove udev" level of customization, then the requirement to write a bunch of init scripts (won't be many, for obvious reasons) for your custom init won't even be a blip on the amount of work you already need to do.

    Originally posted by Akiko View Post
    I clearly stated that I had a customer who did serious HPC work. Why do you twist my words?
    Because it's a distinction without difference. If you worked for a customer who did serious HPC work, then you still oughta know all of this.

    Originally posted by Akiko View Post
    Yeah, I see your problem. You got the "I have a hammer and everything looks like a nail" problem, well, in that case "I'm an admin and now I can fix everything by configuration/administration". See, I gave you an example and you should have looked up what an Altix system is, when it was used, what kernels where used at that time by professional distributions (we talk about certifications here which take months and a change of single software would nullify the certification), and then look up what was possible in these kernel versions. I know that today you have a lot more features you can use.
    No sir, I have an "I'm a generalist" problem. Which means that I have knowledge and know how to apply it at (almost) every level of the technology stack, simultaneously. And it irks me when people who clearly have less of that knowledge talk and opine as if they had more.

And I know enough about that brand to realize instantly that it was a meaningless word-salad example, simply because it hadn't contained _any_ of the tech we're discussing here now, and therefore I didn't need to waste my time looking any deeper. Which you have just proven, thanks.

    Originally posted by Akiko View Post
    Do you want to fight and get personal or do you actually want to learn something?
    I always want to learn something. Your posts here simply don't contain anything worth learning.

    Originally posted by Akiko View Post
    If the later, just dig into the rabbit-hole "what the introduction of CPU caches, out of order execution and speculative execution means for predictable behavior".
Dude. I taught a university-level course on parallel/concurrent programming, with an addendum on microarchitectural effects in this context. And I taught a course on Linux architecture. And I can confidently say that you've just thrown out a word salad that's totally, absolutely, incontrovertibly irrelevant to the scale of effects we are talking about.

    That is to say, given a process that wakes up as infrequently as your typical systemd daemon (of the variety that actually will be present on a HPC cluster, if it was designed by someone other than a complete inept noob), its average effects on microarchitectural state will be exactly nil.

    And you still haven't said anything about unbound kthreads, which suggests you don't know anything about them.

    I'll give you a hint, though. You know what will have greater microarchitectural effects? The goddamn timer tick. Unless the CPU is running full tickless, that is, (which it actually should, if you've really got a HPC cluster of the scale and sensitivity you're talking about), in which case there simply won't be any other processes scheduled on that CPU, by definition, because full tickless CPUs are (must be) non-schedulable.

    Originally posted by Akiko View Post
    And I will ignore you now until you show some decent human behavior.
Oh by all means, go ahead! My responses in this thread are not for you. They are to combat dis-/misinformation for everyone else.

    Leave a comment:
