Testing Out The SSD Mode In Btrfs
Phoronix: Testing Out The SSD Mode In Btrfs
One month ago we provided benchmarks of the Btrfs file-system and found that while it contained many features to make it a next-generation Linux file-system, its disk performance was rather displeasing. We had found the EXT4 file-system ran faster in a number of the tests and even EXT3 and XFS had their own advantages. Besides offering features like snapshots and online defragmentation, Btrfs has a mode that is optimized for solid-state drives. Will the Btrfs SSD mode cause this new Oracle-sponsored file-system to be the best for non-rotating media? We have benchmarks in this article, but the results may not be what one would expect.
Anyone who is interested in Btrfs and Oracle's commitment to Linux should watch
LF Collaboration Summit 2009: Chris Mason, Oracle
Let's just hope that doesn't change if Oracle buys Sun.
Last edited by Louise; 05-29-2009 at 10:53 PM.
Reason: "does" should have been "doesn't".
Isn't the point of SSD mode less wear levelling, i.e. hurting the drive less? Maybe that should be tested, too.
The only proper way to test that is to buy a couple dozen, and have them continuously writing for months... Bit expensive, and by the time you're done, btrfs will have been updated, so you can start again...
Anyways, Btrfs is still very new and not speed-optimized at all. And the SSD mode is geared more towards SSDs with long write latencies.
Furthermore, the TRIM function of the Vertex is not yet functional in Linux (AFAIK), and I suspect Phoronix ran the tests one after another without bothering to "reset" the drive... That would make the results pretty much worthless...
How about going in one level deeper? As in matching filesystem clusters to the physical flash cells. As with RAID stripes, there can be a performance penalty if a cluster is divided between two stripes/cells.
How much of an impact does this make? How do different chosen cluster sizes affect different tasks at hand?
This would make an interesting read, since making sure the filesystem matches the underlying physical storage is non-trivial on Linux, with weird and poorly documented offset behaviour when it comes to partitions and filesystems.
Would GPT help instead of having to battle with the ancient DOS scheme? Is the newer Windows way of making the first 4MB (or so) of a disk off-limits for partitions actually a cheap and effective way to get around this?
So far I've seen no mention of this on Phoronix. Also, as SSDs become fast, it would be nice to see how many old-fashioned disks you have to put in RAID 0 or RAID 5/6 to match the speed. Any optimization work would also have to take this aspect into account.
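For illustration, here's a minimal sketch of the alignment check in plain shell arithmetic. The numbers are assumptions, not measured values: sector 63 is the classic DOS/fdisk partition start, and 512 KiB is one plausible erase-block size.

```shell
# Check whether a partition's starting offset falls on an erase-block
# boundary. Assumed values: start sector 63 (old fdisk/DOS default),
# 512-byte sectors, 512 KiB erase blocks.
start_sector=63
sector_bytes=512
erase_block_bytes=$((512 * 1024))

offset=$((start_sector * sector_bytes))
remainder=$((offset % erase_block_bytes))
if [ "$remainder" -eq 0 ]; then
    echo "aligned"
else
    echo "misaligned by $remainder bytes"
fi
```

With the newer convention of starting the first partition at sector 2048 (a 1 MiB offset), the remainder comes out to 0, which is exactly why reserving that gap at the front of the disk sidesteps the problem for any power-of-two erase-block size up to 1 MiB.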
Ext4 = featureless 1990's technology filesystem (an upgrade from the 1980's style ext3)
Btrfs = relatively contemporary filesystem capable of handling enterprise needs.
Shoot... DOS/FAT might be faster in some benchmarks... I really think comparing something that is half-baked and has 4 times the features to something "old and mature" (feature-wise) is a mistake.
It's almost like somebody is trying to make last minute sales of Vista^H^H^H^H^Hext4 before btrfs comes out.
I think Phoronix has missed the point of SSD mode
In general these "articles" showing pages of graphs are getting a little boring
Same with the tests of different kernels from Ubuntu PPA repositories; the best way to test different versions of a kernel is to use vanilla kernel.org ones with as many settings as possible kept the same, not Ubuntu ones.
Plus, using git it would be possible to find the exact commits behind kernel performance regressions and raise bugs, which is a lot more productive than "rc7 sucks compared to rc6".
In fact I'd quite happily do this, as it's probably more enjoyable than reading these 10 page graph fests that don't really tell us very much
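The bisect idea is worth spelling out. The sketch below is a stand-in, not a real kernel run: a throwaway git repo takes the place of the kernel tree, and a number in `perf.txt` takes the place of a benchmark score, but `git bisect run` works exactly the same way against kernel.org history with a real benchmark script that exits non-zero when performance has regressed.

```shell
# Build a toy history: commits 1-4 are "fast" (score 100), 5-8 are "slow".
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
for i in 1 2 3 4 5 6 7 8; do
    if [ "$i" -le 4 ]; then echo 100 > perf.txt; else echo 50 > perf.txt; fi
    git add perf.txt
    git commit -q --allow-empty -m "commit $i"
done

# A stand-in benchmark: exit 0 (good) while the score is still 100.
check=$(mktemp)
cat > "$check" <<'EOF'
#!/bin/sh
[ "$(cat perf.txt)" -ge 100 ]
EOF
chmod +x "$check"

# Bad = HEAD, good = the root commit; bisect runs the check at each step
# and reports the first bad commit (here, "commit 5").
git bisect start HEAD "$(git rev-list --max-parents=0 HEAD)"
git bisect run "$check"
```

The payoff is that the log ends with the exact first bad commit, which is a ready-made bug report instead of "rc7 sucks compared to rc6".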
Personally I don't think we'll see the true performance of SSDs until they become commonplace enough that the entire communication chain is optimized for them...
*BIOS - reporting cylinders/sectors/tracks isn't really applicable; rather, sectors and block layout are what's important. The OS can probably get around this anyway.
*File system - any system that attempts contiguous layout and, failing that, tries to keep data in adjacent sectors of the same track or on adjacent tracks could end up scattering data all over the place on an SSD.
*SSD firmware - I have to assume they are currently optimizing the firmware to assume the file system is trying to keep data on adjacent cylinders/tracks. If this is true, then a file system optimized for SSDs could in fact create more fragmented data, as it's not acting as the firmware would expect.
*I/O scheduler - CFQ reorders commands when it thinks they are near each other in cylinder/track to reduce head movement; this is likely suboptimal for an SSD, which gains nothing from writing to "close" blocks.
Of course, this entire signal-processing chain then needs to interact with the wear-leveling algorithm, which again adds a layer of indirection when it comes to the final data location.
Ultimately I think we'll see this chain start to look much more like a memory allocator, with the final layout being determined only by the SSD.
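To make the scheduler point concrete, here's a toy sketch (the request list is made-up sector numbers): an elevator-style scheduler sorts pending requests by sector to shorten head travel, while FIFO dispatch, which is all an SSD needs, keeps arrival order.

```shell
# Six pending requests, identified by starting sector (arbitrary numbers).
requests="731 12 455 88 690 3"

# noop/FIFO-style dispatch: arrival order, fine for an SSD.
fifo=$(echo $requests | xargs)

# elevator-style dispatch: sorted by sector to minimize head movement --
# wasted effort when there is no head.
elevator=$(echo $requests | tr ' ' '\n' | sort -n | xargs)

echo "fifo:     $fifo"
echo "elevator: $elevator"

# On kernels of this era the scheduler is switchable per device, e.g.:
#   echo noop > /sys/block/sda/queue/scheduler
```

The elevator line comes out as `3 12 88 455 690 731`; on a rotating disk that ordering saves seeks, but on an SSD both dispatch orders cost the same, so the sorting work (and the latency it can add) buys nothing.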
From what I've read in this month's edition of Linux Magazine in Brazil, it states that Oracle is buying Sun, unless I misread. But my worries are: what will happen to (Open)Solaris and its technologies? What will happen to MySQL and OpenOffice? Will Oracle maintain everything as Sun did, or will Oracle kill those open source projects?
EDIT: Sorry for being off topic!
mount -o ssd updates in 2.6.30-rc
Thanks for running these, just a note that the mount -o ssd option has changed quite a bit during 2.6.30-rc, so updating the kernel should give different results.
At least on my SSD, it does improve writeback speeds quite a bit.
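For anyone wanting to try it, the option goes on the mount line; the device and mount point below are placeholders for your own setup.

```shell
# /dev/sdb1 and /mnt/ssd are placeholders for your device and mount point.
mount -t btrfs -o ssd /dev/sdb1 /mnt/ssd

# Or persistently, via an /etc/fstab entry:
# /dev/sdb1  /mnt/ssd  btrfs  ssd  0  0
```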