Systemd Continues Getting Bigger, Almost At 550k Lines Of Code

  • Originally posted by Vim_User View Post
    What do you do with your binary log if the logfile itself becomes corrupted? With text-based logging you may still be able to partially retrieve information from it, but how do you do that if journalctl simply answers with "log file corrupted" or something similar? How superior is it in that case?
    You restore the log files from a backup? I'd rather take a logging system that tells me that my logs are corrupted and gives them structure than one where it's just text and you can't even tell whether your logs can be trusted.

    If you really must have your logs in textual format, it's pretty easy to configure journald to output its logs to text files, or, e.g., use this: http://www.freedesktop.org/wiki/Software/systemd/json/ The binary format is not a problem.
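As a sketch of that: the journalctl output modes below are real, but they need a live journal, so they are shown as comments; the sample JSON entry is made up for illustration.

```shell
# Real journalctl output modes (run these on a live system):
#   journalctl -o short > /tmp/log.txt    # classic syslog-like text
#   journalctl -o json  > /tmp/log.json   # one JSON object per entry
# A made-up sample of what one JSON entry looks like, and pulling a field out of it:
sample='{"MESSAGE":"Started Session 1 of user root","PRIORITY":"6"}'
msg=$(printf '%s\n' "$sample" | grep -o '"MESSAGE":"[^"]*"')
echo "$msg"
```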

    Comment


    • Originally posted by Vim_User View Post
      If that is a fact you can show us the numbers that confirm that fact, can't you. Otherwise it is nothing but a wild guess.
      It is a guesstimate, but many things point in that direction: the total lack of any development of even critical components for non-systemd platforms is a good indicator that systemd opponents are a tiny minority.

      But it is simply a fact that the majority (as in the big and important distros with many users) of all Linux distros are switching to systemd. Even on Gentoo and Slackware people are busy implementing systemd; it is just a matter of time before they too become de facto systemd platforms.


      Originally posted by Vim_User View Post
      No, they did not. They were overruled by their leader after a tie in the votes. If it weren't for that leader the discussion would still be going on. Don't make it sound like that was a clear decision.
      Yes they did. The technical committee did make the decision that systemd is the new default init system for Debian Linux. The decision was made exactly in accordance with the democratic process outlined in their rules. It is a clear decision: there is no doubt that systemd is the new default init system for Debian Linux.


      Originally posted by Vim_User View Post
      Huh? Logging is always initiated by the init system, what else should start it? And if you start for example syslog-ng with systemd, how is it not controlled by it?
      You should have read on; the point is that it should be initiated and controlled by the init system in such a way that there is logging info from all processes from the moment the system is bootstrapped until the very last microsecond before it shuts down. This is something systemd can do because of its design, but which script-based init systems that rely on syslogd can't.


      Originally posted by Vim_User View Post
      Sorry, but that only shows that you have no clue at all how sysvinit works, for example on Debian. It also shows that you are guilty of the same thing you accuse systemd opponents of: not knowing what you are talking about when it comes to init systems other than systemd.
      Don't believe me? Well, how about a Debian/Hurd developer:

      "During gsoc last year I had to patch our procfs to finally be able to safely shut down Debian/Hurd systems using sysvinit. The problem was, that sysvinit at certain runlevel transitions (like shutting down, or I guess, switching to single user mode), sysvinit assumes that it is okay to stop and kill (almost) all processes on the system (that's what killall5 does). This might be okay on monolithic systems, but on (multiserver) microkernel systems like the Hurd, where your root filesystem and your network driver and stack are running as userspace processes, it is clearly not."



      The fact is that even on Linux sysvinit systems, it requires hack upon hack to make sysvinit work. This is probably why the sysvinit code base is so bloated despite the fact that it is only capable of doing simple things (and not even doing them correctly).

      Originally posted by Vim_User View Post
      What do you do with your binary log if the logfile itself becomes corrupted? With text-based logging you may still be able to partially retrieve information from it, but how do you do that if journalctl simply answers with "log file corrupted" or something similar? How superior is it in that case?
      The same thing as when a text file is corrupted: read as much as you can, which journalctl actually does very well. The systemd journal is designed exactly so that if a process mangles and corrupts an entry, it doesn't affect the rest (it is append-based). But unlike normal syslog files, there is actually a default log consistency check, so you can even know that corruption has happened. The journal file is a huge improvement in every way over the old text dumps. The power of the indexed journal and journalctl even makes it much simpler to use standard Linux text tools such as "grep" to find what you need.
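A sketch of that workflow: the first two commands below are real journalctl filters (shown as comments since they need a live journal), and the pipeline shows that plain grep still works on the journal's text output. The sample log line and unit name are made up.

```shell
# Indexed queries (real flags; need a live journal to run):
#   journalctl -b -p err                      # this boot, priority "err" or worse
#   journalctl -u sshd.service --since today  # one unit's messages
# Standard text tools keep working on the text output, e.g.:
sample='May 24 11:34:01 host sshd[123]: Accepted publickey for user'
hits=$(printf '%s\n' "$sample" | grep -c 'sshd')
echo "matches: $hits"
```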

      All this information, and much more, is readily available on the systemd site:

      "The systemd journal stores log data in a binary format with several features:

      Fully indexed by all fields
      Can store binary data, up to 2^64-1 in size
      Seekable
      Primarily append-based, hence robust to corruption
      Support for in-line compression
      Support for in-line Forward Secure Sealing
      "

      Comment


      • Originally posted by Vim_User View Post
        What do you do with your binary log if the logfile itself becomes corrupted? With text-based logging you may still be able to partially retrieve information from it, but how do you do that if journalctl simply answers with "log file corrupted" or something similar? How superior is it in that case?
        I just noticed this verify option for journalctl and decided to give it a go. It's showing corruption. What might cause that, and what, if anything, should I do about it? Should I investigate further?


        Corruption applies on a line-by-line basis. If one line is corrupt, you jump to the next line and try again. journalctl will output the first full line it can actually get a handle on. And since the journal files are rotated on a user-defined basis, you can ease corruption by having them rotate frequently, say every 10 MB or so. That way even an unclean shutdown should only affect the most recent log. And if you are experiencing LOTS of corrupted logs, I'd be looking at filesystem bugs or a failing drive.
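journalctl --verify is the real integrity check being discussed here. Since it needs a live journal, here is the same idea in miniature, using a checksum over a throwaway file (the /tmp path is just an example):

```shell
# journalctl --verify    # real command: prints a PASS/FAIL result per journal file
# Miniature version of the idea: detect a mangled tail via a stored checksum.
printf 'log line one\n' > /tmp/demo.log
sum_before=$(cksum < /tmp/demo.log)
printf 'CORRUPT' >> /tmp/demo.log     # simulate corruption of the newest data
sum_after=$(cksum < /tmp/demo.log)
[ "$sum_before" != "$sum_after" ] && echo "corruption detected"
```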

        EDIT: Also, check out: http://www.reddit.com/r/linux/commen...nd_corruption/

        And remember, journalctl doesn't have to REPLACE syslog. You can use the journal 99% of the time and have it set up to also forward all logs to syslog if you're really worried about corrupted binary logs; that way you have fast binary logs but visible text log backups.
        Last edited by Ericg; 24 May 2014, 11:34 AM.
        All opinions are my own not those of my employer if you know who they are.

        Comment


        • Originally posted by interested View Post
          It is a guesstimate, but many things point in that direction: the total lack of any development of even critical components for non-systemd platforms is a good indicator that systemd opponents are a tiny minority.

          But it is simply a fact that the majority (as in the big and important distros with many users) of all Linux distros are switching to systemd. Even on Gentoo and Slackware people are busy implementing systemd; it is just a matter of time before they too become de facto systemd platforms.
          Your claim was about users, not developers. For your claim about systemd and Slackware: None of the few Slackware developers work on that. There are a few Slackware users who ported it to Slackware for experimenting with it, nothing more.
          Yes they did. The technical committee did make the decision that systemd is the new default init system for Debian Linux. The decision was made exactly in accordance with the democratic process outlined in their rules. It is a clear decision: there is no doubt that systemd is the new default init system for Debian Linux.
          What I meant with clear is: don't make it sound like they all voted for systemd. They didn't; they were split exactly in half, and systemd was only chosen because the head of the technical committee is a systemd proponent. If he were an Upstart proponent, the new init system would be Upstart, simple as that.
          You should have read on; the point is that it should be initiated and controlled by the init system in such a way that there is logging info from all processes from the moment the system is bootstrapped until the very last microsecond before it shuts down. This is something systemd can do because of its design, but which script-based init systems that rely on syslogd can't.
          That didn't answer the question. Why could systemd with syslog{d,-ng,whatever} not provide early logging? Why could the logging daemon not be controlled by systemd?
          Don't believe me? Well, how about a Debian/Hurd developer:

          "During gsoc last year I had to patch our procfs to finally be able to safely shut down Debian/Hurd systems using sysvinit. The problem was, that sysvinit at certain runlevel transitions (like shutting down, or I guess, switching to single user mode), sysvinit assumes that it is okay to stop and kill (almost) all processes on the system (that's what killall5 does). This might be okay on monolithic systems, but on (multiserver) microkernel systems like the Hurd, where your root filesystem and your network driver and stack are running as userspace processes, it is clearly not."



          The fact is that even on Linux sysvinit systems, it requires hack upon hack to make sysvinit work. This is probably why the sysvinit code base is so bloated despite the fact that it is only capable of doing simple things (and not even doing them correctly).
          He clearly states that this is a problem with microkernel systems, but it is still not true that sysvinit on Debian will randomly shut down services. You actually may want to have a look at how that works on Debian.
          The same thing as when a text file is corrupted: read as much as you can, which journalctl actually does very well. The systemd journal is designed exactly so that if a process mangles and corrupts an entry, it doesn't affect the rest (it is append-based). But unlike normal syslog files, there is actually a default log consistency check, so you can even know that corruption has happened. The journal file is a huge improvement in every way over the old text dumps. The power of the indexed journal and journalctl even makes it much simpler to use standard Linux text tools such as "grep" to find what you need.

          All this information, and much more, is readily available on the systemd site:

          "The systemd journal stores log data in a binary format with several features:

          Fully indexed by all fields
          Can store binary data, up to 2^64-1 in size
          Seekable
          Primarily append-based, hence robust to corruption
          Support for in-line compression
          Support for in-line Forward Secure Sealing
          "
          http://www.freedesktop.org/wiki/Soft...journal-files/
          OK, I stand corrected on that.

          Comment


          • Originally posted by Vim_User View Post
            Your claim was about users, not developers. For your claim about systemd and Slackware: None of the few Slackware developers work on that. There are a few Slackware users who ported it to Slackware for experimenting with it, nothing more.
            I would call users who repackage software to suit a particular distro and who provide patches and tests developers, even though they aren't employed by the distro. They do that because they like Slackware and have a hunch that Slackware will switch to systemd sometime in the future.


            Originally posted by Vim_User View Post
            What I meant with clear is: don't make it sound like they all voted for systemd. They didn't; they were split exactly in half, and systemd was only chosen because the head of the technical committee is a systemd proponent. If he were an Upstart proponent, the new init system would be Upstart, simple as that.
            But those are the Debian rules. The decision was clear at that time, even though some voted against it. Since then, Upstart has reached EOL and is on life support only. Several of those who voted for Upstart would now prefer systemd over staying with sysvinit.


            Originally posted by Vim_User View Post
            That didn't answer the question. Why could systemd with syslog{d,-ng,whatever} not provide early logging? Why could the logging daemon not be controlled by systemd?
            You can't get early logging from syslogd directly; you have to make another intermediary logging system to do that. This is "journald". But once they had already made a daemon capable of receiving and reading early log info, it was only a tiny step to enable that logging daemon to write the info into a logfile. By doing that they had a historic chance of remedying some old and well-known shortcomings of Linux logging.

            Sure, you can run with two logging daemons, with journald sending everything to syslogd. That way you can have early logging and text logs; it is only a one-line setting in the config file to make it so. But having the indexed journal is such a massive improvement that it doesn't make a lot of sense to do.
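The one-line setting referred to is real: ForwardToSyslog= in journald.conf. A minimal sketch of the relevant fragment (the /etc path is only named in the comment; nothing here touches a live system):

```shell
# Goes under the [Journal] section of /etc/systemd/journald.conf:
conf_line='ForwardToSyslog=yes'
# With this set, journald keeps its indexed journal AND hands every entry
# to a classic syslog daemon, so you get both binary and text logs.
printf '[Journal]\n%s\n' "$conf_line"
```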



            Originally posted by Vim_User View Post
            He clearly states that this is a problem with microkernel systems, but it is still not true that sysvinit on Debian will randomly shut down services. You actually may want to have a look at how that works on Debian.
            Well, Hurd (as a distro) is actually a Debian umbrella project these days. They use the Debian repos and packages and so on. So the comment very much pertains to how Debian sysvinit works.

            But if you are still not convinced, how about this snippet from the actual GNU/sysvinit source code:
            "1/*
            2 * kilall5.c Kill all processes except processes that have the
            3 * same session id, so that the shell that called us
            4 * won't be killed. Typically used in shutdown scripts."




            There are several reasons why sysvinit issues a general process genocide order, among them that it doesn't track processes very well and that it depends on the daemons to shut down nicely when ordered to, something which they don't always do.

            Killing processes correctly is one of the many things systemd does and sysvinit doesn't.

            Comment


            • Originally posted by Ericg View Post
              If you're gonna prove him wrong curaga, you have to actually USE his claim. Redo your experiment with systemd, post THAT pic. I've never tried killing pid1 systemd, but it could be set up to run differently / have a catch in the kernel to NOT cause a crash.
              This has been discussed in many systemd threads before - systemd tries to re-exec itself if it crashes, but re-exec is not 100% reliable. The kernel is the same, there is no "if systemd" hack there.

              Which in the context of the original question (why would service/cgroup manager in pid 2 be better) does not change things. Pid 2 failing to re-exec does not kill the world; pid 1 failing to re-exec does.

              Comment


              • Originally posted by Ericg View Post
                If you're gonna prove him wrong curaga, you have to actually USE his claim. Redo your experiment with systemd, post THAT pic. I've never tried killing pid1 systemd, but it could be set up to run differently / have a catch in the kernel to NOT cause a crash.
                SIGKILL cannot be handled by userspace; there is only one way it goes, and anything else is a kernel/hardware bug.

                SIGTERM can be handled; I didn't read which one was used.

                Then again, SIGSEGV can be handled, but it is a sign of a greater problem and should be used only to exit as cleanly as possible (in critical systems, that is).
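A quick demonstration of the difference, as a sketch assuming bash is available: SIGTERM reaches a handler, SIGKILL never does, and the exit statuses reflect that.

```shell
# A child that traps SIGTERM and exits 42 from its handler:
bash -c 'trap "exit 42" TERM; sleep 5 >/dev/null 2>&1 & wait' &
pid=$!
sleep 1
kill -TERM "$pid"
wait "$pid"; term_status=$?     # 42: our handler ran

# A child that receives SIGKILL; no userspace handler can ever run:
bash -c 'sleep 30' &
pid=$!
sleep 1
kill -KILL "$pid"
wait "$pid"; kill_status=$?     # 137 = 128 + 9 (SIGKILL)
echo "TERM: $term_status, KILL: $kill_status"
```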
                Last edited by gens; 24 May 2014, 03:09 PM.

                Comment


                • Originally posted by Chousuke View Post
                  Point 1 is an empty argument unless you can show how the systemd suite of services (unlike what people seem to think, systemd is far from a monolith) is needlessly more complicated than the collection of daemons and shell script hacks needed to provide the same feature set. All sysvinit has going for it is the fact that it has been used for a long time.

                  As for 2, the mention of shellscripts being easy to debug is enough for me to dismiss the rest for now.
                  *sigh* You're not reading what I even wrote. I never was supporting sysvinit; I CLEARLY said BSD rc init. Since you're so incapable of distinguishing the two, let's go over how BSD rc init ACTUALLY runs.

                  There is no such thing as runlevels in BSD init. On startup and shutdown, rc reads the contents of /etc/rc.conf and the appropriate startup or shutdown script. Then it checks for services that are configured to start on boot via rc.conf, goes into the rc.d directories, and searches for those. It then starts or stops these services as per the command to power on or off from root. During shutdown, after it ensures all processes called from rc.conf are stopped, it kills any processes not needed for shutdown. Compare this to the inconsistent directories where sysvinit scripts are stored; sysvinit is a truly chaotic mess. But BSD's rc init works fine! And guess what? It could be ported to Linux easily. Combine it with a good process supervisor for event-based startup and monitoring and you're good to go.
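The flow described above can be sketched in a few lines of shell; the service names are made up and /tmp paths stand in for /etc/rc.conf and /etc/rc.d (real rc does much more).

```shell
# rc.conf is plain shell: variables saying which services to start.
printf 'sshd_enable="YES"\nntpd_enable="NO"\n' > /tmp/rc.conf
# rc sources it, then starts each *_enable="YES" service from its rc.d script:
. /tmp/rc.conf
for svc in sshd ntpd; do
  eval "on=\$${svc}_enable"
  if [ "$on" = "YES" ]; then
    echo "starting $svc"     # real rc would run /etc/rc.d/$svc start
  fi
done
```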

                  Also, my arguments 1 and 3 were similar, but 3 was a sort of catch-all conclusion to sum myself up and tie everything together. So no, 1 =/= 3. In addition, all shell scripts are is automation, and I find that way easier to debug. Then again, I'm not your average user.

                  For those who say leaving init alone and having the process supervisor in a different PID is a bad thing, you all again ignore the principle of attack surface. It's not the size of the process that makes a difference; it's what it is doing. The less PID 1 is doing, the better. Because if, let's say, the supervisor is in PID 2 and it dies, init can still restart it. If systemd locks up or dies, you have to reboot to control services. It's the same in military theory: if I have a convoy of tanks and I spread them out over a wide area, then air strikes from the opposing force require more resources. systemd is rather closely grouped by comparison, and that's just not what fault tolerance is about.

                  I also left Linux for other reasons, but this, and how it's becoming less unique and more like OSX or Windows in terms of design, is pushing it, plus the amount of developing I am doing and seeing the poor choices GNU/Linux has made and how it is harming innovation. It really boils down to Linux going from a decent, well-designed UNIX clone into a war machine of the FSF and companies like Red Hat and Canonical. They're pushing it as a drop-in Windows replacement, which it was never designed to be. I use BSD and IRIX mostly these days because I align with their philosophies (I'm aware IRIX is outdated, but hey, it still works fine and was ahead of its time.)

                  Comment


                  • Originally posted by TeamBlackFox View Post
                    For those who say leaving init alone and having the process supervisor in a different PID is a bad thing, you all again ignore the principle of attack surface. It's not the size of the process that makes a difference; it's what it is doing. The less PID 1 is doing, the better. Because if, let's say, the supervisor is in PID 2 and it dies, init can still restart it.
                    This reads to me as if attack surface is about things dying. But if you're going to attack something, it's not to kill it, but to use it to your advantage - gain privileged information, turn the machine into a spam sending zombie, or similar. In this case, it doesn't matter whether you exploit PID1, PID2 or PID3485. Killing the machine (killing PID1) is actually counter-productive from this perspective, because a killed machine can't send spam.

                    So this "omg, PID1 attack surface is the worstestes thing ever!!!" is a red herring. Minimizing the attack surface of any process not PID1 is a lot more important (because, like I said above, a killed machine is useless to someone with malicious intent). Primarily, processes most in need of protection are those facing the network - like web and mail servers. Or on end-user machines, the web browser. Cos I don't know about you, but if my machine crashes because PID1 was killed, big freakin' deal. But my web browser's vulnerability being exploited to start a keylogger which records and sends my critical passwords to a malicious entity (something that probably doesn't even require root)... now *that* is serious.

                    Comment


                    • Originally posted by Gusar View Post
                      This reads to me as if attack surface is about things dying. But if you're going to attack something, it's not to kill it, but to use it to your advantage - gain privileged information, turn the machine into a spam sending zombie, or similar. In this case, it doesn't matter whether you exploit PID1, PID2 or PID3485. Killing the machine (killing PID1) is actually counter-productive from this perspective, because a killed machine can't send spam.

                      So this "omg, PID1 attack surface is the worstestes thing ever!!!" is a red herring. Minimizing the attack surface of any process not PID1 is a lot more important (because, like I said above, a killed machine is useless to someone with malicious intent). Primarily, processes most in need of protection are those facing the network - like web and mail servers. Or on end-user machines, the web browser. Cos I don't know about you, but if my machine crashes because PID1 was killed, big freakin' deal. But my web browser's vulnerability being exploited to start a keylogger which records and sends my critical passwords to a malicious entity (something that probably doesn't even require root)... now *that* is serious.
                      This is funny to me. It tells me you take "hack or attacker" to mean hacking to send spam or to become part of a botnet. You've honestly never been hit with a denial of service attack from a competitor. I have. I helped with a server for an anime-related site; we had a preexisting competitor who did not like that we were starting up a site that put his market share at risk. So he decided to denial-of-service attack us. His intent wasn't to hack our machine, but to hurt us so that we could not recover. An attack could also come from the site itself. This was before I set up the web content on a separate partition with nosuid, noexec and nosymlink parameters set. If someone gained control of the blind user, they could use an exploit in the process manager to get root and therefore have control over the entire tree. They could delete data, harm the OS, etc. That's what I meant in terms of attack surface.

                      Comment
