10-Way Linux File-System Comparison On Linux 3.10


  • Lizbeth
    replied
    Seems like xfs is the all-around winner



  • mazenmardini
    replied
    What is the ultimate filesystem?
    Last edited by mazenmardini; 12 August 2013, 11:35 PM.



  • justinzane
    replied
    Efficient for Semi-Static Files, Numbers?

    Originally posted by jwilliams View Post
    Incorrect. relatime only updates the access time if it is earlier than the last mtime/ctime. For example, with relatime, if a file is modified and then read, there will be a write to update the access time. If the file is read again, there will be no more writes to update the access time (until the file is modified again).
    My understanding is that this behaviour of `relatime`, as you describe it, is of great utility for files that are rarely updated but frequently read -- files like the contents of /usr, /bin, and /etc, as well as image archives, music archives, etc. Again, as I understand it, `relatime` is pretty much useless for frequently updated files like those in /var. Google turns up a ton of articles referencing "relatime write reduction", but seemingly none with actual test/benchmark data on how much reduction is typical in various environments. Though this is now quite tangential to the original topic, you wouldn't happen to know of anywhere that has data on the effects of <none> vs `relatime` vs `noatime`, would you?



  • jwilliams
    replied
    Originally posted by Artemis3 View Post
    Any time you read ANYTHING, a write must occur. Relatime delays the writes so they occur more efficiently, but they still occur.
    Incorrect. relatime only updates the access time if it is earlier than the last mtime/ctime. For example, with relatime, if a file is modified and then read, there will be a write to update the access time. If the file is read again, there will be no more writes to update the access time (until the file is modified again).
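    The rule described above can be sketched as a toy model. The `Inode` type and helper name here are made up for illustration (the real logic lives in the kernel's atime handling), and this sketch also includes the once-a-day refresh that relatime applies in addition to the mtime/ctime check:

    ```python
    from dataclasses import dataclass

    @dataclass
    class Inode:
        atime: float  # last access time
        mtime: float  # last modification time
        ctime: float  # last inode change time

    DAY = 24 * 60 * 60

    def relatime_needs_update(inode: Inode, now: float) -> bool:
        """Return True if a read at `now` must write a new atime under relatime."""
        if inode.atime <= inode.mtime or inode.atime <= inode.ctime:
            return True   # atime is not newer than mtime/ctime, so update it
        if now - inode.atime >= DAY:
            return True   # relatime also refreshes an atime more than a day old
        return False

    # File modified at t=100, read at t=110: one write to update atime.
    ino = Inode(atime=0, mtime=100, ctime=100)
    assert relatime_needs_update(ino, 110)
    ino.atime = 110
    # Read again at t=120: no further write until the file is modified again.
    assert not relatime_needs_update(ino, 120)
    ```

    So with relatime, repeated reads of an unmodified file cost at most one atime write per day, rather than one per read.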



  • justinzane
    replied
    Thanks, but...

    Originally posted by Artemis3 View Post
    Think about it.

    Any time you read ANYTHING, a write must occur. Relatime delays the writes so they occur more efficiently, but they still occur.

    This is why I always use noatime. Almost nothing (mutt?) needs to know when a file was last read, and the performance loss is not negligible, not to mention the added wear on flash media.
    Thanks, you are probably right; but that is thoroughly beside my point. I am **not** suggesting that my options are optimal or the most commonly used. I'm just suggesting that it is a good idea to benchmark whatever options **are** optimal/most common. Determining that seems to be something that Michael does regularly anyway.



  • Artemis3
    replied
    Use noatime instead of relatime

    Think about it.

    Any time you read ANYTHING, a write must occur. Relatime delays the writes so they occur more efficiently, but they still occur.

    This is why I always use noatime. Almost nothing (mutt?) needs to know when a file was last read, and the performance loss is not negligible, not to mention the added wear on flash media.
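    For anyone wanting to try this, noatime goes in the options field of /etc/fstab. A minimal sketch -- the UUID, mount point, and filesystem here are placeholders, not a recommendation for any particular setup:

    ```
    # /etc/fstab -- example entry; UUID and mount point are placeholders
    UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /home  ext4  defaults,noatime  0  2
    ```

    It can also be applied to a running system without a reboot via `mount -o remount,noatime /home`.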



  • qlat4
    replied
    Great that JFS appears in a test

    Over the years I've become pretty fond of JFS, which has never let me down, so it's great to see it included in this test. I wish Phoronix would include it in the other filesystem tests run from time to time. It may not be the latest thing, but it's solid, and it always appears right up there in comparison tests like this. It's a pity that Red Hat (and Fedora) and SuSE (and openSUSE) make it difficult or impossible to install from scratch onto JFS, but at least Debian has retained it as an option.

    I find JFS great on KVM guests, especially with the noop scheduler.

    While XFS is good too, especially for larger files, I was responsible for systems during the dreaded days when XFS could corrupt files if a filesystem wasn't shut down cleanly. That episode is now forgotten, but a little sense of mistrust remains long after the issue was resolved. I also have horrible memories of piecing together an EXT4 system from the lost+found folder, and I once lost an entire reiser3 filesystem on a system running on another continent!

    JFS has been great on lightweight systems too. I run one server at an off-grid location, where power consumption is a significant practical issue, not just an ideal. JFS is known to be frugal in its processor demands, and it squeezes good capacity from small disks too.

    Any chance of Phoronix repeating that seminal 2007 file system comparison test done on Debian?



  • jwilliams
    replied
    Originally posted by Vim_User View Post
    Most (all?) modern SSDs do compression themselves in hardware.
    Incorrect. The only common consumer SSDs that do compression are those with a SandForce controller.



  • Vim_User
    replied
    Most (all?) modern SSDs do compression themselves in hardware. Adding software compression on top should therefore not only decrease performance, but also provide no advantage in space used. I would disable it on SSDs.



  • justinzane
    replied
    Confused...

    Originally posted by jwilliams View Post
    The default options are generally default for a reason -- they are safe and give reasonably good performance under a wide variety of workloads and environments.

    You mention the discard option. That actually hurts performance with some SSDs, since some SSDs do not behave well when given a large list to TRIM. That is probably why it is not default.

    Using compression is a very bad choice with many of the benchmarks phoronix runs, since many of the benchmarks are writing streams of zeros, which compress exceedingly well, unlike more realistic data.
    I'm basing my assertion that some options are preferable both on experience with my own systems doing real tasks and on Phoronix's mount option comparisons. Since the inter-filesystem and intra-filesystem benchmarks seem to run mostly the same tests, you seem to be implying that Michael's intra-filesystem benches -- the mount option comparisons -- are pretty worthless. Now, I've used just about every tool in the PTS disk suite at some point, and I've written a few hackish benchmarks of my own for specific purposes. I know that each individual test has design biases and that even recording and replaying the disk activity of an end-user system is only reflective of that user.

    However, one of the values of PTS, to me, is that it provides -- and Michael runs -- a variety of different benchmarks so that more general insight can be gained. Even granting that benchmarks are biased, it still seems that the options shown to be most effective will also be the ones most commonly used. And, as I said, there are obviously differences between cheap flash, rotational media, and modern SSDs. Since it seems like almost all FS benchmarks on Phoronix are done with either magnetic disks or modern SSDs, that would give two "optimal" sets of mount options: those with SSD optimization and those without.

    <gone to supper>

