Systemd 219 Released With A Huge Amount Of New Features
-
Originally posted by Ericg View Post
No... you don't KNOW whether or not rsyslog/syslog-ng were giving you corrupted logs, because there wasn't a check FOR corrupted logs. You would've never known; at least with journald you get told, "Hey, for whatever reason, maybe my fault, maybe the disk's fault, part of this log is lost."
But I've never experienced corrupted logs with rsyslog/syslog-ng, and I've been using Linux for more than 15 years.
On the contrary, with journald I'm experiencing corrupted logs on a daily basis, and I have to remove them manually from /var/log/journal just because they annoy me.
I tend to run `journalctl --verify` just to check whether there are any corrupted logs, and most of the time a few entries marked FAIL show up.
And before someone says that I'm powering off my machine the wrong way: no, I'm doing `systemctl poweroff` or just `poweroff`.
Don't get me wrong, I like systemd, but corrupted logs and no way to fix them is unacceptable.
Last edited by ihatemichael; 16 February 2015, 09:17 PM.
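For anyone wanting to reproduce the check described above, this is roughly the workflow. A minimal sketch: `--vacuum-time` appeared in systemd 218, so whether it is available depends on your version; the two-week retention is just an example value.

```shell
# Check every journal file for corruption; damaged files are reported
# with FAIL, intact ones with PASS.
journalctl --verify || echo "at least one journal file failed verification"

# Rather than deleting files from /var/log/journal by hand, systemd 218+
# can prune archived journals by age (typically needs root):
journalctl --vacuum-time=2weeks || echo "vacuum failed (needs root / systemd >= 218)"
```

Note that vacuuming only removes *archived* journal files; it never touches the currently active one.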
-
Whoa, a really welcome set of changes!
It looks like Lennart has finally decided to obsolete stuff like Docker and properly integrate with system features like the ones btrfs can provide. Appears to be a very promising and powerful combo.
So soon it will be possible to get decent VM/container management by default in most Linux distros. Good thinking!
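For context, the feature being referred to is systemd 219's new image import service (importd), exposed through machinectl. A minimal sketch of the workflow; the image URL and machine name below are placeholders, not real resources:

```shell
# Download a raw disk image over the network with systemd 219's importd
# (placeholder URL; --verify=no skips signature checking, for illustration only):
machinectl pull-raw --verify=no http://example.com/images/fedora-21.raw.xz fedora-21 \
  || echo "machinectl unavailable or download failed"

# Boot the imported image as a container:
systemd-nspawn -M fedora-21 -b || echo "container boot failed (needs root)"
```

In real use you would leave GPG verification enabled rather than passing `--verify=no`.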
-
Originally posted by ihatemichael View Post
I think you might be right about that, sorry.
But I've never experienced corrupted logs with rsyslog/syslog-ng, and I've been using Linux for more than 15 years.
On the contrary, with journald I'm experiencing corrupted logs on a daily basis, and I have to remove them manually from /var/log/journal just because they annoy me.
I tend to run `journalctl --verify` just to check whether there are any corrupted logs, and most of the time a few entries marked FAIL show up.
And before someone says that I'm powering off my machine the wrong way: no, I'm doing `systemctl poweroff` or just `poweroff`.
Don't get me wrong, I like systemd, but corrupted logs and no way to fix them is unacceptable.
*Before someone jumps down my throat: it very well could be a bug in journald, and the bug report's answer of "If we fsck it, we might fsck it up further" is a valid fear. Whether it's the RIGHT answer or not is a different question, but the fear itself is valid. Look at the early days of ext* fsck and btrfs fsck: they usually did more damage than they fixed.
All opinions are my own, not those of my employer, if you know who they are.
-
Originally posted by ihatemichael View Post
I think you might be right about that, sorry.
But I've never experienced corrupted logs with rsyslog/syslog-ng, and I've been using Linux for more than 15 years.
On the contrary, with journald I'm experiencing corrupted logs on a daily basis, and I have to remove them manually from /var/log/journal just because they annoy me.
I tend to run `journalctl --verify` just to check whether there are any corrupted logs, and most of the time a few entries marked FAIL show up.
And before someone says that I'm powering off my machine the wrong way: no, I'm doing `systemctl poweroff` or just `poweroff`.
Don't get me wrong, I like systemd, but corrupted logs and no way to fix them is unacceptable.
You should get a new hard drive then :P
I have Arch running on a USB flash drive with Btrfs and have never had corrupted logs.
-
The journal logs so much more information than syslog that even with the occasional corrupted log segment you will still have more useful information to work with than if you had used syslog.
Joking aside, of course if logs get corrupted easily, something needs to be done.
-
Originally posted by ihatemichael View Post
Don't get me wrong, I like systemd, but corrupted logs and no way to fix them is unacceptable.
First, a filesystem should not corrupt files just because you've shut the machine down; most of the time that holds. So maybe you're running on faulty hardware? If that's not the case, you've possibly found some nasty bug in a filesystem, and then it would be reasonable to file it in the bug tracking system.
Then, there should be a tool to repair broken logs, one that tries to salvage as many records as it can, something similar to fsck. On a side note: good luck recovering, say, damaged gzipped text logs. And good luck verifying the correctness of plaintext logs, where it can be hard to spot some kinds of damage, or malicious activity, at all.
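Worth noting on the verification half of this: journald already ships a tamper-evidence mechanism, Forward Secure Sealing (FSS), which can detect later modification of sealed logs, though it cannot repair them. A sketch; the verify key shown is a made-up placeholder, and real use requires root plus persistent journals:

```shell
# Generate a sealing key once (needs root and /var/log/journal to exist);
# journald then periodically "seals" the log so tampering becomes detectable.
journalctl --setup-keys 2>/dev/null \
  || echo "setup-keys needs root and persistent journals"

# Verification takes the key printed by --setup-keys (placeholder shown):
journalctl --verify --verify-key=aaaaaa-bbbbbb-cccccc-dddddd/eeeeee-ffffff 2>/dev/null \
  || echo "verification failed or no sealed journals present"
```

This is detection only: a sealed-then-damaged log still has to be set aside, exactly as with ordinary corruption.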
-
Originally posted by alaviss View Post
You should get a new hard drive then :P
I have Arch running on a USB flash drive with Btrfs and have never had corrupted logs.
I have no way to check the integrity of the hard drive, as I can't do SMART over USB.
Last edited by ihatemichael; 16 February 2015, 09:51 PM.
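As an aside, SMART over USB is often still possible: smartmontools can talk to many drives behind USB bridges via SCSI/ATA Translation (SAT). A sketch; `/dev/sdb` is an example device name, and whether it works depends on the particular USB bridge chip:

```shell
# Explicitly request the SAT pass-through for a USB-attached drive
# (typically needs root; /dev/sdb is an example device):
smartctl -d sat -a /dev/sdb || echo "bridge not supported or smartctl missing"

# Recent smartmontools versions auto-detect many USB bridges, so the
# plain invocation is worth trying first:
smartctl -a /dev/sdb || echo "SMART query failed"
```

If neither form works, smartmontools also offers bridge-specific device types (e.g. `-d usbjmicron`) for chips that need them.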
-
Originally posted by SystemCrasher View Post
Then, there should be a tool to repair broken logs, one that tries to salvage as many records as it can, something similar to fsck. On a side note: good luck recovering, say, damaged gzipped text logs. And good luck verifying the correctness of plaintext logs, where it can be hard to spot some kinds of damage, or malicious activity, at all.
As for the fsck-style utility: they are afraid that attempts to repair parts of the log could accidentally corrupt it further. Instead they opt to set the damaged log aside and essentially make it read-only, preventing any further corruption. Basically, at least in theory, once a log is detected as corrupt it will never get worse. What you have is what you have, which is typically a lot, and a fresh log is started. If an already-flagged log DOES get worse, you probably have a dying hard drive.