Systemd 219 Released With A Huge Amount Of New Features

  • #61
    Semi-OT: Does anyone use the systemd automount feature? Currently I use autofs for my network shares. The problem is with my laptop: when I'm not at home it still tries to connect to them, which can only fail, and the file browser is unresponsive until the timeout, after which autofs tries again.
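    (For reference, the systemd way would be an automount entry in /etc/fstab; the server, share and mount point below are placeholders, and the timeout options are per systemd.mount(5) — check that your systemd version supports them:)

        # mount on first access, fail fast if the server is unreachable,
        # and unmount again after 60s of inactivity
        //server/share  /mnt/share  cifs  noauto,x-systemd.automount,x-systemd.mount-timeout=10,x-systemd.idle-timeout=60  0  0

    With a short mount timeout the mount attempt fails quickly when away from home instead of hanging the file browser.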



    • #62
      I'm not a fan of systemd in general, for various reasons I don't want to repeat from my previous posts. But from a purely practical point of view the abomination would be barely tolerable if not for the hard dependency on journald. Seriously, journald needs to be killed with fire, and the whole half-baked idea of using binary logs on a *NIX system must burn with it.



      • #63
        Long live OpenRC

        My Debian machines work fine with OpenRC and non-binary logs. I couldn't live with binary logs.



        • #64
          Originally posted by malkavian:
          I couldn't live with binary logs.
          Why not?

          From an end user's point of view there are only pluses to journald and its binary logs.

          It's just that instead of grep you use journalctl, and you end up with a lot more detailed information and much better search capabilities.
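          For example (the unit name is just an illustration):

              # only errors from the current boot
              journalctl -b -p err
              # follow a single unit's log, tail -f style
              journalctl -u sshd.service -f
              # everything from the last hour
              journalctl --since "1 hour ago"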



          • #65
            Originally posted by SystemCrasher:
            And good luck to verify correctness of plaintext logs where it could be hard to spot some kinds of damage or malicious activity at all.
            Normal plain-text logs are okay, but it doesn't seem so hard to include a kind of rolling CRC in "plaintext" logs.
            Advantage: you can still open your logfile with vi, even though it isn't really a plaintext logfile anymore.
            Disadvantage: if you edit your logfile with vi, you'll "corrupt" it.
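            A minimal sketch of that idea (purely hypothetical, not an existing tool), chaining each line's checksum into the next with plain cksum:

                #!/bin/sh
                # append a rolling CRC to each log line; each checksum covers
                # the previous checksum plus the current message
                prev=0
                while IFS= read -r msg; do
                    crc=$(printf '%s %s' "$prev" "$msg" | cksum | cut -d' ' -f1)
                    printf '%s [crc:%s]\n' "$msg" "$crc"
                    prev=$crc
                done < plain.log > checked.log

            Editing any earlier line then invalidates every checksum after it, which is exactly the tamper evidence wanted, while the file stays grep-able.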



            • #66
              I'd say the way journald handles corrupted logs is actually pretty smart. It has the advantages of an fsck-like tool, without the risk of an fsck run corrupting things further.

              When corruption is detected, the file is protected from further damage because journald starts writing new log entries to a fresh file. The old "corrupted" file is kept around read-only in the log directory, just like old rotated log files. The advantages of fsck-like behaviour are then implemented in the journal parsing/reading logic: anything an fsck-like tool could recover from the corrupted file is automatically read from it whenever I use journalctl. So while keeping "corrupted" files on disk sounds useless, in the journald case it's quite the opposite: maybe 99.9% of the relevant data in that file can still be read. Deleting "corrupted" journal logs would be the wrong thing to do.

              Then, after some time, my log directory is full of old and not-quite-that-old logs, and the oldest file gets deleted, corrupted or not. Until then, journalctl reads as many entries as possible from that file whenever I query the logs.

              Pretty cool, and I see no use for an explicit fsck feature; it would bring no further advantage. Even worse, fscking a corrupted file could destroy partially damaged information that might otherwise be (partially) salvaged later, after updating to a newer journald version with improved parsing of corrupted files.
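              (Both of these are standard journalctl options; the machine-id path below is a placeholder:)

                  # check the integrity of all journal files
                  journalctl --verify
                  # read one specific, possibly corrupted, rotated-away file
                  journalctl --file /var/log/journal/MACHINE-ID/system.journal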



              • #67
                It should not be impossible to get journald to write plain log files, given that it can already forward to syslog. Having a standardized central logging service is a good idea regardless of the output format. Having every daemon write to log files directly is highly fragile, and depends on the implementation detail that small appends are atomic.
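                (The forwarding part is just a switch in /etc/systemd/journald.conf:)

                    [Journal]
                    # hand every message to a traditional syslog daemon as well
                    ForwardToSyslog=yes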



                • #68
                  Originally posted by danwood76:
                  Why not?

                  From an end user's point of view there are only pluses to journald and its binary logs.

                  It's just that instead of grep you use journalctl, and you end up with a lot more detailed information and much better search capabilities.
                  I love grep, awk and the other utilities that let me work with the logs however I want. I like the DIY philosophy, and if I use advanced utilities I want to know what I'm doing and how.
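                  For what it's worth, the two approaches compose: journalctl writes plain text to stdout, so the usual toolbox still applies. A contrived example (the awk field assumes the default short output format):

                      # count journal messages per reporting process, classic-pipeline style
                      journalctl -b -o short --no-pager | awk '{print $5}' | sort | uniq -c | sort -rn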



                  • #69
                    Originally posted by duby229:
                    Yeah, it should work on anything SATA. Blocks on an SSD are different from sectors on a hard drive, though. It's highly unlikely to get bad blocks on an SSD until they start write-wearing. On Windows, SSDs tend to start write-wearing near the beginning of the drive. I haven't experienced any wearing on Linux yet, so I'm not sure what it would look like there.
                    Ever heard of wear leveling? The OS has no influence at all on where the SSD actually writes its data.



                    • #70
                      @ihatemichael:

                      Did you end up figuring out whether your drive was bad? Did it turn out that "journalctl --verify" simply alerted you to a condition you likely wouldn't have discovered had you used e.g. syslog-ng?

                      In other words: Was the invective really necessary/justified?


                      Also, I kind of liked this quip:

                      "I guess I can learn something if I don't let my stubborness get in the way."

                      Though I can obviously only speak for myself, my guess would be that it is true for quite a lot of us. In any case, I should probably write that quote down and place it prominently, say, in front of my keyboard, just below my monitor...

