Large HDD/SSD Linux 2.6.38 File-System Comparison
Phoronix: Large HDD/SSD Linux 2.6.38 File-System Comparison
Here are the results from our largest Linux file-system comparison to date. Using the soon-to-be-released Linux 2.6.38 kernel, on a SATA hard drive and solid-state drive, we benchmarked seven file-systems on each drive with the latest kernel code as of this past weekend. The tested file-systems include EXT3, EXT4, Btrfs, XFS, JFS, ReiserFS, and NILFS2.
BTRFS sucks for most relevant benchmarks - SQL
Why does Btrfs suck so much on the SQL benchmark that matters most to desktop users (i.e. it can be directly traced to the performance of Firefox's website database)?
Ext3 has barrier=0 as default? Really? Seems strange.
Isn't that a distro-specific thing, though, default mount options?
I'm not sure if the inclusion of a plain HDD was per my request, but either way, thanks. The results were similar on some tests, but also very different on some others, so I think it's a good idea to keep doing it.
(I would assume the ones where the differences were greatest are the most seek-heavy workloads, which also seem like the ones it's most important to optimize in the mechanical HDD case.)
I get confused about all these different benchmarks. Is there a description of all the profiles somewhere? The names of the benchmarks are certainly not very descriptive. For example, for me the main factors in IO performance are:
1) Random write/read of small files (this makes up 90% of a desktop user's operations).
2) Sequential write/read speed (you are copying/moving files)
3) Parallel sequential write/read speed (you are copying two big files at once -- ideally the combined speed should be the same as (2), but in reality it is often much lower)
Which benchmarks do I need to peruse to find out how the filesystems do in the described workloads?
Thanks! (character limit...)
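For what it's worth, the three workloads described above can be expressed as a fio job file. This is only a sketch, not one of the profiles used in the article; the directory, sizes, and job names are all made up for illustration:

```ini
; Hypothetical fio job file modelling the three workloads above.
; 'directory' and sizes are placeholder values.
[global]
directory=/mnt/test
size=1g
runtime=60
time_based

; Workload 1: random reads/writes with a small block size
[random-small]
rw=randrw
bs=4k
stonewall

; Workload 2: a single sequential read/write stream
[sequential]
rw=rw
bs=1m
stonewall

; Workload 3: two sequential streams at once, to see how far
; the combined throughput falls below workload 2
[parallel-sequential]
rw=rw
bs=1m
numjobs=2
stonewall
```

Running this with `fio jobfile.ini` on each filesystem would give roughly the three numbers described, which is easier to interpret than the named benchmark profiles.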
ext3 always was, and always will be, optimized for benchmarks with default settings.
These default settings are completely idiotic. They put the user's data at risk. But that does not count. For the ext3 devs it is more important to post good numbers when somebody runs a standard test without leveling the field via mount options.
Now, turn on barriers for everybody and see ext3 die a slow death.
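To level that field, barriers can be forced on for ext3 via a mount option. A sketch of what that looks like in /etc/fstab (the device name and mount point are hypothetical):

```shell
# /etc/fstab sketch: force write barriers on ext3
# (barrier=0 is the historical ext3 default; ext4 defaults to barriers on)
/dev/sda1  /  ext3  defaults,barrier=1  0  1
```

The same effect can be had for an already-mounted filesystem with `mount -o remount,barrier=1 /`.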
Thanks for finally including JFS. It's surprisingly fast on a SSD.
On CentOS, according to the ext4 documentation, the mount option 'discard' is off by default; it should probably be turned on for SSD testing so the filesystem can take advantage of TRIM support.
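For reference, enabling online TRIM on ext4 is a one-option change in /etc/fstab (device and mount point here are hypothetical):

```shell
# /etc/fstab sketch: enable online TRIM (discard) on an ext4 SSD partition
/dev/sdb1  /mnt/ssd  ext4  defaults,discard  0  2
```

Whether continuous discard actually helps depends on the drive; an alternative is leaving it off and running a periodic batch trim instead.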