File-System Benchmarks On The Intel X25-E SSD


  • #31
    He doesn't talk about Linux at all, but I'd put forth that Anand's article posted yesterday is very informative for anyone wanting to benchmark SSDs. See it here:
    SSD Anthology: Understanding SSDs and New Drives from OCZ
    Be aware, it's a monster.

    In particular, I'd like to highlight his point that random read and random write performance for small files is much more important than maximum sequential throughput. Gentoo users and hackers on large projects, you can back him up on this, I'm sure.
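
    If you want to see that difference on your own drive, here's a quick sketch with fio (my example, not commands from Anand or the article) comparing 4k random reads against 1M sequential reads:

    fio --name=randread --filename=/dev/sdb --readonly --direct=1 --rw=randread --bs=4k --runtime=30 --time_based
    fio --name=seqread --filename=/dev/sdb --readonly --direct=1 --rw=read --bs=1M --runtime=30 --time_based

    (/dev/sdb is a placeholder for your drive; --readonly keeps fio from writing to it.) A magnetic disk's random-read throughput collapses to a tiny fraction of its sequential rate; a good SSD's holds up far better, which is exactly his point.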

    And yes, more data sets would be nice. The defaults are certainly useful, but mount options can sometimes change things drastically. Likewise, more filesystems would be interesting to see (I'm going to throw in a vote for SpadFS, as I'm curious how a B-tree-less filesystem behaves under various conditions).
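
    As a concrete example of a mount option changing things drastically (mine, not one the article tested): ext3's journaling mode is just a mount option, and data=writeback vs. the usual data=ordered can noticeably swing small-file write results, since writeback stops ordering data blocks against the journal:

    mount -o data=writeback /dev/sda1 /mnt/test
    mount -o data=ordered /dev/sda1 /mnt/test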

    Edit: Link repair



    • #32
      The article's conclusion, "EXT4 and XFS were the two file-systems that had performed particularly well on the Intel X25-E Extreme SSD," seems to be based on the several large-file sequential tests they ran. The only small-files test that wasn't flat across the board (i.e. CPU-limited by whatever the test program was doing with the data, e.g. compiling C) was the file-server access pattern in IOMeter. There JFS destroyed the competition, running twice as fast as ext3/4 and 1.4 times as fast as XFS. (Although XFS might have done better with mkfs.xfs -l lazy-count=1, which IIRC isn't the default yet, to maintain backwards compat. Mounting with -o logbsize=256k helps a ton, too.)
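
      For reference, that tuning looks like this (the flags are real mkfs.xfs and mount options; whether they'd change this particular benchmark is my speculation):

      mkfs.xfs -l lazy-count=1 /dev/sdb1
      mount -o logbsize=256k /dev/sdb1 /mnt

      lazy-count=1 lets XFS update the superblock counters lazily instead of logging every change, and logbsize=256k uses the largest in-memory log buffers; both take pressure off the log under metadata-heavy loads.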

      Since SSDs are small, you'd be crazy to use them for your multimedia bulk storage (except maybe as a scratch drive for the one movie you're editing, if you edit your own movies instead of just making DVD rips, or if you have a laptop and therefore only one drive total).

      So use an extent-based filesystem (XFS or ext4) for your large-file filesystem, probably on a magnetic disk, where all their fancy algorithms for improving locality are actually worth their CPU cost. It's well known that extent-based filesystems are the way to go for large files.

      If you add an SSD to your system, use it for /, /usr, and /home. /usr/local/src is another good one, especially if you have source trees for a few large projects kicking around.

      JFS is probably a good choice for those small-file filesystems on your SSD. JFS's CPU efficiency compared to other FSes is probably what makes it so fast (with small files and large directories, a lot of the work ends up CPU-bound).
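
      To make that concrete, a minimal sketch, assuming the SSD shows up as /dev/sdb (device names, partition layout, and the noatime option are my assumptions, not something from the benchmarks):

      mkfs.jfs -q /dev/sdb1
      mkfs.jfs -q /dev/sdb2

      and then in /etc/fstab, small-file trees on the SSD, bulk storage on the magnetic disk:

      /dev/sdb1 / jfs noatime 0 1
      /dev/sdb2 /home jfs noatime 0 2
      /dev/sda1 /data xfs noatime,logbsize=256k 0 2

      (-q just skips mkfs.jfs's confirmation prompt; noatime cuts a metadata write per file access, which is a nice bonus on flash.)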

      re: default options:
      Most filesystems' mkfs commands default to making filesystems that will still be accessible if you have to put your drive into your old machine with a two- or three-year-old kernel, so they don't enable the good stuff by default. In the modern era of live CDs and USB keys, this doesn't make as much sense as it used to (it's not hard to download a recent live CD and boot it to get at your data).

      But I think the solution is this:
      mkfs --no-backwards-compat
      So there are the default defaults, and there are the good defaults: everything enabled that the developers would make default if not for backwards compat. It's what the normal defaults might be a year from now.
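
      As a rough illustration: the flag itself is hypothetical (no mkfs supports it), but for ext4 it might expand to hand-enabling the newer on-disk features that real mkfs.ext4 already accepts:

      mkfs --no-backwards-compat /dev/sdb1
      mkfs.ext4 -O extent,flex_bg,uninit_bg /dev/sdb1

      i.e. the first command is the proposal, and the second is roughly what it would have to do under the hood today.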

      That makes it easy for FS developers to put their best foot forward and make a good impression on people trying a filesystem they're not familiar with. Obviously most filesystems have options that are good for some specific workloads, but this is an easy way to enable everything that's recommended across the board for everyone.

      Has anyone ever suggested this before, and if so, why hasn't it caught on?
