Systemd Continued Commanding Linux Systems In 2015


  • #71
    Originally posted by cjcox View Post

    What system do you use to manage your deployments? I have to use Puppet and it's very painful. Puppet (for example) assumes that all service startups go through systemctl, which messes up legacy things done through init.d instead (in Puppet these will fail). Of course, we can blame Puppet (heaven knows we can't blame systemd without being crucified)... In the Red Hat case the easiest way to deal with most (but not all) of the service resource problems is by telling all of Puppet to handle service resources the old way. Why? Because the old way, in this transition period, handles both cases. The other cases in your Puppet modules you have to edit case by case.

    Ok... I know, you systemd guys are thinking "we are right" (can they think any other way?)... and therefore it's a Puppet problem. I'm just pointing out that systemd (to use systemd speak) "has exposed many problems in other people's software". And we, the sysadmins and devops of the world, have to pay the price. I know that all systemd proponents have covered their ears at this point... I fully understand it's my problem to contend with, but to say "it isn't because of systemd" is just incorrect. It's just life, and life got a whole lot more painful during this transition.

    I am glad you haven't seen the lockup on shutdown... if you ever do see it, will you be honest enough to post? Just because I'm not afraid to point out systemd's failings doesn't make me your enemy. Or does it?

    Again, these are full lockups, so they are difficult to diagnose (especially with systemd... sorry, but it must be said).

    This is one of the few mildly valid complaints here. But allow me to throw an actual argument at it instead of calling you stupid or an imbecile, as everyone else has seemingly begun to do (or finished doing).

    I've never heard of or used Puppet, so I don't know or understand how it works. What I can take from simple logic is that the fault here does indeed lie with the software called Puppet. I can claim this because there is other software that integrates well with systemd without explicit support from systemd. In particular, a lot of the previous service management layers used in various distros were made to work with systemd to help with the move.

    Now, let's say systemd completely fucked Puppet up: Puppet couldn't keep up with the transition, it's a heaping pile of dog crap now that doesn't work well, there wasn't enough time/money to maintain the service manager, whatever... but it worked fine with the old SysV. I understand that changes, especially to something so close to the core of any server administrator's toolchain, can have drastic effects. There was *bound* to be someone negatively affected by it. This logic, however, is why I'm stuck with Netscape 5.1 email clients and Windows 98 on my work computers. Change needed to happen, or still needs to. I'm not saying it should happen all the time; quite the contrary, since important changes can cost people a lot of money and time in ways that may not seem worth it. SysV has been around for a long time, though, and its flaws have been pointed out continuously. There was plenty of warning; systemd didn't come from nowhere. The better question is: what can systemd do, from both the software and the developer perspective, to help alleviate those transitional pains?

    While your complaint might be valid, your solution (and that of many others on this forum) is to just have systemd not exist. You give no constructive feedback other than saying the old solution was fine, while hundreds of people are giving quite logical reasoning as to why it wasn't fine and why the new system is better on various fronts. Simply comparing what SysV did with what systemd does, beyond something silly like "It worked", would be a hundred times more useful than any post currently in this thread.

    PS. A full system lockup is likely not caused by something like systemd, or SysV for that matter. A lockup is more than likely a driver issue that hangs the machine before it can even reach the panic function. I've seen these several times while messing with my input driver, and virtually anything can cause one from kernel space: a corrupt heap/stack, a null dereference, an overflow/underflow... It's hard to reconstruct the circumstances under which the lockup occurred, and I find the best way, if you're running in a VM, is a virtual serial I/O port, which can at least give you partial output of a panic message (often cut off before the end).

    If systemd were the cause, you'd see it quite clearly in the log. The kernel wouldn't lock up because of systemd; systemd doesn't run in kernel space, so that's almost impossible. That's not to say various other symptoms couldn't occur, just that in a general scenario a lockup wouldn't be one of them.
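    As a concrete illustration of the virtual-serial-port approach, here is a minimal sketch for a QEMU/KVM guest. The disk image name, memory size, and log file name are placeholders, not taken from the thread:

```shell
# Sketch: capture kernel panic output over a virtual serial port.
# Assumes a QEMU/KVM guest; "disk.img" is a placeholder image name.
#
# Inside the guest, add these parameters to the kernel command line
# (e.g. GRUB_CMDLINE_LINUX in /etc/default/grub) so the kernel logs
# to the first serial port as well as the normal console:
guest_cmdline="console=ttyS0,115200 console=tty0"

# On the host, point the emulated serial port at a file; a partial
# panic message often survives there even when the screen freezes:
host_cmd="qemu-system-x86_64 -m 2G disk.img -serial file:guest-console.log"

echo "guest kernel args: $guest_cmdline"
echo "host command:      $host_cmd"
```

The commands are echoed rather than executed, since running a VM is outside the scope of a sketch; after a lockup, the tail of guest-console.log on the host is where the truncated panic message would land.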



    • #72
      Originally posted by Candy View Post
      Don't get me wrong. I am no system administrator.

      I simply use the desktop edition of Fedora (and previously other distributions targeted at the same audience). The log files I deal with are basically the ones on my own desktop installation. They had always been around ~10 MB (and often far less than that, say 4-5 MB), considering that I often delete their contents after a while or simply restore a backup to keep the system clean.

      Therefore loading a file of that size in an editor was quite a common and quick task. Can it be done differently? Yes, clearly. Of course things might look different if you need to deal with servers and hundreds of user accounts that keep spitting out huge amounts of log content. So we may end up talking about different use cases here.

      I do see that journald may be an interesting thing for servers and for administrators who need to wade through a lot of content. But for the ordinary workstation installation (aka desktop installation), journald (which easily generates a couple of hundred megabytes of binary blobs) is plain overkill.

      I'm not getting you wrong, and neither am I dissing your opinion.

      Although I might again sound like I am: given that I've been on Linux since the days when 10 MB of logs would have covered all of your drive space, I found your comment a bit funny. All of these things grow along with available space, although as a percentage of it they are way smaller now; 100 MB is nothing on any machine built after 2005.

      Now, a few constructive remarks. Even if you are a desktop user, Fedora covers that with a really nice log viewer app installed by default, and even more so with Cockpit (which is socket-activated, so you really don't need to worry about it straining your computer).

      Next, journald offers a lot of nice log filtering, like showing only the last boot, or a from/to time range.
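      For reference, those filters look like this on the command line. A sketch, guarded so it also runs on machines without journald:

```shell
# A few of the journald filters mentioned above:
#   journalctl -b                                # current boot only
#   journalctl -b -1                             # previous boot
#   journalctl --since "09:00" --until "10:00"   # time range
if command -v journalctl >/dev/null 2>&1; then
  sample=$(journalctl -b --no-pager 2>/dev/null | tail -n 3)
else
  sample="journalctl not installed on this machine"
fi
[ -n "$sample" ] || sample="(journal not readable by this user)"
echo "$sample"
```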

      Also, here is one more tip: about the only time loading a text file of that size is slow, as you describe, is when line wrapping is enabled in the editor. Disable line wrapping and the file will load almost instantly.



      • #73
        I am always getting a little headache when there is a discussion about "journald", because I think most arguments miss a point.

        The main argument against journald seems to be "it uses a binary format"... which is quite funny, because everything uses binary formats. Yes, text/ASCII is a binary format too, just a very simple one understood by a lot of tools.

        The argument should really be about the indexed structure of the journald format. What happens if a few bytes get corrupted in this format? Do we lose some indexing? Do we lose everything after the flipped byte if it hits the right spot? Is there a "just give me the stored text, forget about the index" option when outputting the stuff?

        I don't have an answer to this, but I think it would help the discussion to look into this.

        If we just lose the index features, what the hell are we talking about... it would mean we merely lose something we never had with ASCII logfiles anyway.

        If we lose everything after the flipped byte, maybe we can look into a logfile format that supports an index AND can recover after a byte-flip on the next line of the log. A guarantee that "in case of corruption, you will be able to get all uncorrupted logfile lines" would calm down a lot of people.

        Has someone done some experiments to look into this?



        • #74
          Originally posted by Henning View Post
          I am always getting a little headache when there is a discussion about "journald", because I think most arguments miss a point.

          The main argument against journald seems to be "it uses a binary format"... which is quite funny, because everything uses binary formats. Yes, text/ASCII is a binary format too, just a very simple one understood by a lot of tools.

          The argument should really be about the indexed structure of the journald format. What happens if a few bytes get corrupted in this format? Do we lose some indexing? Do we lose everything after the flipped byte if it hits the right spot? Is there a "just give me the stored text, forget about the index" option when outputting the stuff?

          I don't have an answer to this, but I think it would help the discussion to look into this.

          If we just lose the index features, what the hell are we talking about... it would mean we merely lose something we never had with ASCII logfiles anyway.

          If we lose everything after the flipped byte, maybe we can look into a logfile format that supports an index AND can recover after a byte-flip on the next line of the log. A guarantee that "in case of corruption, you will be able to get all uncorrupted logfile lines" would calm down a lot of people.

          Has someone done some experiments to look into this?

          I have neither experiments nor the time to do them, but I can say what I've read about the journal's general handling of corruption:
          If journald detects corruption, it just rotates and starts a new file that is clean. When you run journalctl, the current log files and the rotated log files are merged into one stream, and for the corrupted files the journal reads as much from them as possible. Since corrupted files (like all log files) are handled read-only after rotation, no information that is still recoverable is lost, so more might be read from them if the parsing of corrupted logs is improved in the future. That does seem like the most sensible policy to me.
          A major question, of course, is how well the reading works in relation to how corrupted the file is. But I do not know the answer to that one.
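          One experiment anyone can run is journalctl's built-in consistency check. A sketch, guarded so it stays harmless on systems without journald:

```shell
# Ask journald to check its own files for corruption; --verify walks
# every journal file and reports PASS or the offset of the first error.
if command -v journalctl >/dev/null 2>&1; then
  verify_out=$(journalctl --verify 2>&1 | head -n 5)
else
  verify_out="journalctl not installed on this machine"
fi
[ -n "$verify_out" ] || verify_out="(no output)"
echo "$verify_out"
```

Deliberately flipping bytes in a copy of a journal file and re-running the check (and `journalctl --file=` on the damaged copy) would be one way to answer the question above empirically.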



          • #75
            Originally posted by Candy View Post
            Sorry, I must be mistaken. I seriously thought this was a forum where people could speak their minds.

            I said some of that stuff is clearly bullshit, and you basically replied that you're entitled to bullshitting. Thanks for agreeing with me that you're another systemd crazy making up "facts".



            • #76
              Originally posted by computerquip View Post
              I've never heard of or used puppet.
              Puppet is a very common, normal way to configure servers. If it doesn't handle systemd well (another poster mentioned it does have systemd support, it's just that in some cases that support has bugs), then inform the Puppet developers so they can fix it.

              That's pretty much the case with any software: any change made can lead to bugs, and if you don't report them, they aren't always picked up and fixed.



              • #77
                Actually, the whole point of systemd is giving admins the ability to control system behaviour in universal and logical ways. There is no need for half-assed approaches where half the services start here, half start there, and some are elsewhere. When I encounter some program on a traditional system, it can take a heck of a lot of effort just to get an idea of how it was started at all and why it appears.

                It can be started from cron (no, it wouldn't be listed by the SysV init crap tools in that case). It can be started from e.g. rc.local. It can be started by SysV init. It can be started by some other program, or maybe a script. And in a classic *nix-like system there is virtually no way to track where this shit came from; it is so fucking cool to check a dozen places it could potentially come from, using plenty of totally different tools and configs. Systemd is a major departure from this smouldering wreck: it uses cgroups, so we can track origins and hierarchy. With plain PPID tracking, a service that calls fork() can easily escape: you see a program, you see PPID=1, so it has been reparented to init, and... not even the slightest idea about anything else. Great system management, eh? With cgroups, a modern feature of the Linux kernel, it no longer matters how many fork()s happen or who is formally the parent of a process.
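                A quick illustration of that tracking, as a sketch: systemd-cgls shows the whole service hierarchy on systemd machines, while /proc/self/cgroup works on any Linux with cgroups enabled:

```shell
# Show the service/cgroup hierarchy where systemd is available:
if command -v systemd-cgls >/dev/null 2>&1; then
  systemd-cgls --no-pager | head -n 10
fi
# Every process records its cgroup here, no matter how many times it
# fork()ed or whether it was reparented to PID 1:
own_cgroup=$(cat /proc/self/cgroup 2>/dev/null || echo "cgroups unavailable")
[ -n "$own_cgroup" ] || own_cgroup="(empty cgroup file)"
echo "$own_cgroup"
```

`systemctl status <pid>` does the reverse lookup: given any PID, it names the unit whose cgroup the process lives in.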

                So what? The right way is either to ask upstream for a proper systemd unit, to create one yourself, or just to ditch your abandonware if nobody is going to support it and you're unable to support it yourself. Otherwise you're boarding the FAILBOAT and doomed to a lot of unpleasant experiences. Nobody is going to support ancient crap indefinitely unless you're either OK with doing it yourself or prepared to pay big bucks, and even big bucks do not guarantee infinite support... unless you're OK with an unsigned long long amount of bucks.
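                Creating one yourself is less work than it sounds. A minimal sketch; the daemon name "legacyd" and its path are hypothetical, and a real unit would go in /etc/systemd/system/ followed by `systemctl daemon-reload`:

```shell
# Write a minimal unit file; a temp dir keeps this sketch side-effect free.
unit_dir=$(mktemp -d)
cat > "$unit_dir/legacyd.service" <<'EOF'
[Unit]
Description=Example legacy daemon (hypothetical)
After=network.target

[Service]
ExecStart=/usr/local/sbin/legacyd --foreground
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
# Show the section headers we just wrote:
grep '^\[' "$unit_dir/legacyd.service"
```

That is the whole file: no PID-file juggling, no daemonization boilerplate, and the service is tracked and restartable like any other unit.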
                Last edited by SystemCrasher; 01-01-2016, 09:02 AM.

