File-System Benchmarks On The Intel X25-E SSD


  • Peter_Cordes
    replied
    The article's conclusion: "EXT4 and XFS were the two file-systems that had performed particularly well on the Intel X25-E Extreme SSD." seems to be based on the several large-file sequential tests they did. The only small-files test that wasn't flat across the board (CPU-limited by what the test program was doing with the data, e.g. compiling C) was the file-server access pattern in IOMeter. JFS destroyed the competition, being twice as fast as ext3/4, and 1.4 times as fast as XFS. (Although XFS might have done better with mkfs.xfs -l lazy-count=1, which IIRC isn't the default yet, to maintain backwards compat. Mounting with -o logbsize=256k helps a ton, too.)
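
    For reference, a minimal sketch of the XFS tuning mentioned above (device and mount point are placeholders):

        # create the filesystem with lazy log counters (not the default, for backwards compat)
        mkfs.xfs -l lazy-count=1 /dev/sdX
        # mount with a larger in-memory log buffer
        mount -o logbsize=256k /dev/sdX /mnt/ssd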

    Since SSDs are small, you'd be crazy to use them for your multimedia bulk storage (except maybe as a scratch drive for one movie you're editing, if you edit your own movies instead of just making dvdrips, or if you have a laptop and therefore only one drive total).

    So, use an extent-based filesystem (XFS or ext4) for your large-file filesystem, probably on a magnetic disk where all their fancy algorithms to improve locality are actually worth their CPU cost. It's well-known that extent-based filesystems are the way to go for large files.

    If you add an SSD to your system, use it for /, /usr, and /home. /usr/local/src is another good one, esp. if you have source trees for a few large projects kicking around.
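
    As an illustration, such a split might look like this in /etc/fstab (devices and mount points are made up):

        # small-file filesystems on the SSD
        /dev/sdb1  /      ext4  defaults,noatime  0  1
        /dev/sdb2  /home  jfs   defaults,noatime  0  2
        # bulk multimedia storage stays on the magnetic disk
        /dev/sda1  /data  xfs   defaults          0  0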

    JFS is probably a good choice for those small-file filesystems on your SSD. JFS's CPU efficiency compared to other FSes is probably what makes it so fast. (i.e. with small files and large directories, probably a lot of stuff will be CPU bound.)

    re: default options:
    Most filesystems' mkfs commands default to making filesystems that will be accessible if you have to put your drive into your old machine with a 2 or 3 year old kernel, so they don't enable the good stuff by default. In the modern era of live CDs and USB keys, this doesn't make as much sense as it used to (i.e. it's not hard to download a recent live CD and boot it to get at your data).

    But I think the solution is this:
    mkfs --no-backwards-compat
    So there's the default defaults, and there's the good defaults that have all the settings enabled that the developers would make default except for maintaining backwards compat. It's what the normal defaults might be a year from now.
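
    Until something like that exists, the closest you can get is listing the non-default features explicitly, e.g. for ext4 (the feature list and device here are illustrative; check what your mke2fs supports):

        # turn on newer on-disk features that are off by default for compat reasons
        mkfs.ext4 -O extent,huge_file,flex_bg,uninit_bg,dir_nlink,extra_isize /dev/sdX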

    That makes it easy for FS developers to put their best foot forward and make a good impression with people trying a filesystem they're not familiar with. Obviously most filesystems have options that are good for some specific workloads, but it's an easy way to enable everything that's recommended across the board for everyone.

    Has anyone ever suggested this before, and if so why hasn't it caught on?

  • Wyatt
    replied
    He doesn't talk about Linux at all, but I would put forth that Anand's article posted yesterday would be very informative for anyone wanting to benchmark SSDs. See it here:
    SSD Anthology: Understanding SSDs and New Drives from OCZ
    Be aware, it's a monster.

    In particular, I'd like to highlight his point that random read and random write performance for small files is much more important than maximum sequential throughput. Gentoo users and hackers on large projects, you can back him up on this, I'm sure.

    And yes, more data sets would be nice. The defaults are certainly useful, but mount options can sometimes change things drastically. Likewise, more filesystems would be interesting to see (I'm going to throw in a vote for SpadFS, as I'm curious how a B-tree-less filesystem behaves under various conditions).

  • chithanh
    replied
    The MySQL performance blog has also done performance testing on the X25-E.

    They came to the conclusion that running with the write cache enabled and write barriers disabled leads to lost transactions (unsurprisingly).

    With write barriers enabled, the performance turned out to be very poor.
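
    For context, the write cache in question is the drive's own volatile cache, which hdparm can toggle (device name hypothetical):

        hdparm -W  /dev/sdX   # query the current write-cache setting
        hdparm -W0 /dev/sdX   # disable the write cache (safe but slow)
        hdparm -W1 /dev/sdX   # re-enable it (needs barriers for safety)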

  • paul_one
    replied
    ANOTHER test with encryption/compression?

    You even mention in the testing that it's CPU-bound, and not drive-bound... So why do them?

    There are no details on how the different filesystems were created - which someone else has already mentioned.
    Due to the wear-levelling, I'd have thought it would be sensible to totally blank the drives after each test to get truly comparative results (this is basically a hardware-level format of the drive - not simply an fdisk operation).
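
    For the record, "totally blank" would mean an ATA Secure Erase rather than repartitioning; with hdparm that looks roughly like this (device and password are placeholders, and it destroys all data on the drive):

        hdparm --user-master u --security-set-pass tmppass /dev/sdX
        hdparm --user-master u --security-erase tmppass /dev/sdX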

    Personally, I'd have preferred different systems set up - but that would require five (or so) X25-Es and Intel laptops. Obviously silly (but also the only way to get TRUE comparisons).

    Maybe run the first filesystem, wipe, do each test, and then re-test the first filesystem (to see whether the earlier runs had any impact).

    Again, no true read/write timing was performed (only **simulated** data reading/writing).
    Again, only single read/write actions were run at a time.

    ... I'm starting to lose faith ...

  • kernelOfTruth
    replied
    noatime,nodiratime (or just noatime, which already covers directories) should be the bare minimum, and should speed things up noticeably
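
    E.g., to try it without touching /etc/fstab (mount point hypothetical):

        mount -o remount,noatime /home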

    Unfortunately, reiser4's delete performance sucks somewhat, but that's the price you pay for a filesystem that is the best in all the other areas.

  • energyman
    replied
    As I feared. Yeah, then you can add 30% to ext3's times (if their devs are to be believed).

    Try barrier=1 for ext3. For XFS and reiserfs the option is not needed; JFS does not support barriers.
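
    As a one-line sketch for /etc/fstab (device hypothetical):

        /dev/sdX1  /  ext3  defaults,barrier=1  0  1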

  • Michael
    replied
    Originally posted by energyman View Post
    that still doesn't answer which options were used to mount the fs. ext3 cheats (speed is more important than data safety) and IMHO after the 'lost kde/gnome/everything in /etc' disaster nobody should use ext4. Ever.

    There are fs that care about your data (reiserfs, reiser4, ext3 with the right mount options) and fs that don't (ext4).
    Everything was left at the Ubuntu defaults.

  • mutlu_inek
    replied
    Regarding Reiser4: There are patchsets for all recent kernels available on kernel.org: http://www.kernel.org/pub/linux/kern...dward/reiser4/
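
    Roughly, applying one of those patchsets goes like this (file and kernel version names are placeholders for whatever matches your tree):

        cd linux-2.6.28
        bzcat ../reiser4-for-2.6.28.patch.bz2 | patch -p1
        # then enable reiser4 (CONFIG_REISER4_FS) under "File systems" and rebuild
        make menuconfig && make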

    It would be great if it were included in these tests.

  • marakaid
    replied
    Originally posted by energyman View Post
    that still doesn't answer which options were used to mount the fs. ext3 cheats (speed is more important than data safety) and IMHO after the 'lost kde/gnome/everything in /etc' disaster nobody should use ext4. Ever.

    There are fs that care about your data (reiserfs, reiser4, ext3 with the right mount options) and fs that don't (ext4).
    What are the right options for ext3 and the others?

  • energyman
    replied
    yeah, you see - that is the problem - ext3 was tuned to look good in benchmarks with 'the defaults'. But 'the defaults' are shit.
