Where are the overall performance charts? I've been asking for this sort of chart since pts was in beta. These articles are useful for individual metrics but useless for the average reader who is just trying to choose the best overall.
If you take the time to tabulate and average out relative performance, you will see that NILFS2 was the best overall for an HDD, but reading this article won't tell you that. Is this so much to ask from pts?
Sounds like a bug report is in order for them? Btrfs has no problem adapting to an SSD by default.
I don't have the ability to mkfs a btrfs filesystem unless I go in search of the utilities and compile them myself, but I did have a quick read of the btrfs source from 2.6.37, and there are several mount options there relating to SSDs: ssd, ssd_spread, nossd and discard. These all seem to be set independently, and Documentation/filesystems/btrfs.txt says that discard is not the default. Running `mount` would tell you which options were actually in effect, though, and I would expect it to report both ssd and discard if both were active - ssd does not seem to imply or set discard. I also looked at Documentation/filesystems/nilfs2.txt, and it likewise says that nodiscard is the default.
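If anyone wants to check what's actually in effect on their own box, something like this should do it (the device and mount point here are placeholders - substitute your own):

```shell
# Mount a btrfs volume with the SSD allocator and TRIM both enabled
# explicitly; per Documentation/filesystems/btrfs.txt, "ssd" does not
# imply "discard", so they have to be requested separately.
mount -t btrfs -o ssd,discard /dev/sdb1 /mnt/test

# Confirm which options are really active - both "ssd" and "discard"
# should show up in the options column if they took effect.
grep /mnt/test /proc/mounts
```

This is just an admin sketch, not a benchmark recipe; it needs root and a real btrfs volume.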
Does JFS support barriers? Are they turned on?
If not, you can disregard the results.
JFS does not have an option for turning on barriers, and as near as I can tell from searching the code for WRITE_FLUSH or blkdev_issue_flush(), it does not use barriers at all.
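For the record, the search is easy to reproduce against a kernel tree (the path assumes you have a 2.6.37 source tree unpacked locally):

```shell
# From the top of a kernel source tree: look for any flush/barrier
# machinery in the JFS code. On 2.6.37 these greps come back empty
# for fs/jfs/, unlike ext3/ext4 or btrfs.
grep -rn "WRITE_FLUSH" fs/jfs/
grep -rn "blkdev_issue_flush" fs/jfs/
```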
That's my main problem with the Phoronix benchmarks. They don't compare apples to apples; instead they use the "default options", which aren't the same across file systems. The fact that it is apparently running with the garbage collector off for nilfs2 will also give very misleading results. File systems that use copy-on-write also have a tendency to fragment their free space very badly --- but that's something that doesn't show up if you just do a fresh mkfs of the file system between each benchmark run and don't try to simulate any kind of file system aging before starting the test.
As with all benchmarks, you need to take them with a huge grain of salt.
The problem with that is that with barriers off, you can lose data --- especially if the system is very busy and you crash while the disk is thrashing. Disks can sometimes delay writing blocks for whole seconds; if you're sending lots of disk traffic to the end of the disk (i.e., a very high block number), then update a critical file system metadata block at the beginning of the disk (i.e., a very low block number), and then go back to sending lots of writes to the end of the disk, the hard drive can decide to avoid seeking to the beginning of the disk, keep that metadata block in its volatile on-board RAM cache, and focus on handling the writes at the end of the disk. If power drops, the critical file system metadata update could be lost forever. This is why barriers are important.
Given that very few people are using reiserfs and JFS from distributions, and those file systems are effectively unmaintained, no one has bothered to fix them so they use barriers either by default, or at all. JFS doesn't have the ability to use barriers at all. In the case of reiserfs, Chris Mason submitted a patch 4 years ago to turn on barriers by default, but Hans Reiser vetoed it. Apparently, to Hans, winning the benchmark demolition derby was more important than his users' data. (It's a sad fact that sometimes the desire to win benchmark competitions will cause developers to cheat, sometimes at the expense of their users.)
In the case of ext3, it's actually an interesting story. Both Red Hat and SuSE turn on barriers by default in their Enterprise kernels. SuSE, to its credit, did this earlier than Red Hat. We tried to get the default changed in ext3, but it was overruled by Andrew Morton, on the grounds that it would represent a big performance loss, and he didn't think the corruption happened all that often --- despite the fact that Chris Mason had developed a python program that would reliably corrupt an ext3 file system if you ran it and then pulled the power plug. I suppose we should try again to have the default changed, now that Jan Kara is the official ext3 maintainer, and he works at SuSE.
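For what it's worth, nobody has to wait for the upstream default to change; ext3 barriers can be turned on per mount today (the device and mount point below are just examples):

```shell
# Remount an already-mounted ext3 filesystem with barriers enabled.
mount -o remount,barrier=1 /dev/sda1 /

# Or make it permanent via /etc/fstab:
# /dev/sda1  /  ext3  defaults,barrier=1  1 1
```

There is a performance cost, which is exactly what the argument with Andrew was about, but it's the safe configuration.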
So when you say, "it's the default as it comes from the distribution", that can be a bit misleading. The enterprise distros change the defaults, for safety's sake. I'm not sure, but I wouldn't be surprised if SuSE forced reiserfs to use barriers by default when shipped in their Enterprise product, given that they were going to support it and paying customers generally get upset when their file systems get fragged. (Also because Chris Mason worked at SuSE when he submitted the reiserfs barrier patch that got vetoed by Hans.) So just because it is one way as shipped by a community distribution, or from upstream, doesn't necessarily mean that is the way enterprise distros will ship things when paying customers have real money on the line.
If people are just simply reading your reports for entertainment's sake, that's one thing. But if you don't warn them about the dangers of the default options, and they make choices to use a file system based on your report --- should you feel any concern? I suppose that's between you and your conscience and/or your sense of journalistic integrity....