
Thread: Large HDD/SSD Linux 2.6.38 File-System Comparison

  1. #11

    Quote Originally Posted by TrevorPH View Post
    On CentOS and according to the ext4 doc, the mount option 'discard' is off by default and should probably be turned on for SSD testing to take advantage of TRIM support.
    Sounds like a bug report is in order for them? Btrfs has no problem adapting to a SSD by default.
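    For reference, enabling TRIM on ext4 is just a mount option; a minimal sketch, with placeholder device and mount point (adjust for your own system):

    ```shell
    # /etc/fstab entry enabling TRIM on ext4 ('discard' is off by default;
    # the device and mount point below are placeholders):
    /dev/sda1  /mnt/ssd  ext4  defaults,discard  0  2

    # Equivalent one-off mount:
    #   mount -t ext4 -o discard /dev/sda1 /mnt/ssd
    ```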

  2. #12

    Quote Originally Posted by curaga View Post
    Thanks for finally including JFS. It's surprisingly fast on a SSD.
    Does JFS support barriers? Are they turned on?
    If not, you can disregard the results.

  3. #13

    Where are the overall performance charts? I've been asking for this sort of chart since pts was in beta. These sorts of articles are useful for individual metrics but useless for the average reader who is just trying to choose the best overall option.

    If you take the time to tabulate and average out relative performance, you will see that NILFS2 was the best overall for an HDD, but reading this article won't tell you that. Is this too much to ask from pts?

  4. #14

    I also find it hard to work out which is the best solution. I think it would be best to order the results with the best performance at the top. The current disorder is far harder to analyse.

  5. #15

    Quote Originally Posted by cruiseoveride View Post
    Where are the overall performance charts?

    Hopefully that's never going to happen. Michael should have better sense than that.

    Such things would be so blindingly worthless and counterproductive that they would cancel out any positive benefit the PTS file system benchmarks can offer.

  6. #16

    Quote Originally Posted by locovaca View Post
    Sounds like a bug report is in order for them? Btrfs has no problem adapting to a SSD by default.
    I don't have the ability to mkfs a btrfs filesystem unless I go in search of the utilities and compile them myself, but I did have a quick read of the btrfs source from 2.6.37, and there are several options there relating to SSDs: ssd, ssd_spread, nossd and discard. These all seem to be set independently, and Documentation/filesystems/btrfs.txt says that discard is not the default. Running `mount` would tell you which options were actually in effect, though, and I would expect it to report both ssd and discard if both were active; ssd does not seem to imply or set discard. I also looked at Documentation/filesystems/nilfs.txt, and that also says nodiscard is the default.
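    The "check what's actually in effect" step can also be done by parsing /proc/mounts; a minimal sketch, using a canned sample line in place of a live mount (the device and mount point are hypothetical):

    ```shell
    # A sample /proc/mounts line (hypothetical device and mount point);
    # on a real system you would read /proc/mounts directly, or run `mount`.
    line='/dev/sdb1 /mnt/btr btrfs rw,relatime,ssd,discard,space_cache 0 0'

    # Field 4 is the comma-separated list of options actually in effect.
    opts=$(echo "$line" | awk '{ print $4 }')

    # Check for a specific option, e.g. 'discard':
    case ",$opts," in
      *,discard,*) echo "discard in effect" ;;      # prints this for the sample line
      *)           echo "discard not in effect" ;;
    esac
    ```

    Note that ssd and discard show up independently in the option list, which matches the observation above that ssd does not imply discard.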

  7. #17

    NTFS might have been a nice addition. I think many people use it on mass storage for compatibility with Windows.

  8. #18

    Quote Originally Posted by energyman View Post
    Does JFS support barriers? Are they turned on?
    If not, you can disregard the results.
    JFS does not have an option for turning on barriers, and it does not use any barriers at all, as near as I can tell from searching the code for WRITE_FLUSH, or blkdev_issue_flush().
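    The search described above can be reproduced with grep against a kernel tree; a sketch (the tree path is an assumption, and the inline snippet merely stands in for real source so the demonstration is self-contained):

    ```shell
    # Against a real checkout you would run something like:
    #   grep -rE 'WRITE_FLUSH|blkdev_issue_flush' linux-2.6.38/fs/jfs/
    # and compare with fs/ext4/, which does issue cache flushes.

    # Stand-in demonstration with an inline snippet instead of a source tree:
    snippet='int jfs_fsync(struct file *f, int datasync) { /* commits log, no cache flush */ }'
    if printf '%s\n' "$snippet" | grep -qE 'WRITE_FLUSH|blkdev_issue_flush'; then
      echo "flush/barrier calls present"
    else
      echo "no flush/barrier calls found"   # prints this for the snippet above
    fi
    ```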

    That's my main problem with the Phoronix benchmarks. It doesn't compare apples to apples, but instead uses the "default options", which aren't the same across file systems. The fact that it is apparently running with the garbage collector off for nilfs2 will also give very misleading results. File systems that use copy-on-write also have a tendency to fragment their free space very badly --- but that's something that doesn't show up if you just do a fresh mkfs of the file system between each benchmark run, and you don't try to simulate any kind of file system aging before starting the test.

    As with all benchmarks, you need to take them with a huge grain of salt.

  9. #19

    Quote Originally Posted by tytso View Post
    That's my main problem with the Phoronix benchmarks. It doesn't compare apples to apples, but instead uses the "default options", which aren't the same across file systems.
    That's done because most people just use the defaults given to them by upstream or their distribution, so it's meant to be a real-world comparison - http://www.phoronix.com/scan.php?pag...item&px=OTE4OQ

    If you would like to recommend a particular set of mount options for each file-system, I would be happy to carry out such tests under those conditions as well to complement the default options.

    Quote Originally Posted by tytso View Post
    The fact that it is apparently running with the garbage collector off for nilfs2 also will give very misleading results.
    The cleaner was running, but not quickly enough for dbench; I assume that's a bug in NILFS2.

  10. #20

    Quote Originally Posted by Michael View Post
    That's done because most people just use the defaults given to them by upstream or their distribution so it's try to meant to be a real world comparison - http://www.phoronix.com/scan.php?pag...item&px=OTE4OQ
    The problem with that is that with barrier off, you can lose data --- especially if the system is very busy, and you crash while the disk is thrashing. Disks can sometimes delay writing blocks for full seconds; if you're sending lots of disk traffic to the end of the disk (i.e., a very high block number), and then you update a critical file system metadata block at the beginning of the disk (i.e., a very low block number), and then go back to sending lots of disk writes to the end of the disk, the hard drive can decide that it will avoid seeking to the beginning of the disk, keep it in its volatile RAM cache on the disk, and then focus on handling the writes at the end of the disk. If power drops, the critical file system metadata update could be lost forever. This is why barriers are important.
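    For the file systems that do support them, barriers are controlled per mount; a sketch of the relevant options as of this kernel era (devices and mount points are placeholders, and defaults varied by distribution):

    ```shell
    # /etc/fstab entries (devices and mount points are placeholders):
    # ext3: barriers were off by default upstream; enable with barrier=1
    /dev/sda2  /mnt/data   ext3      defaults,barrier=1      0  2
    # reiserfs uses flag-style values: barrier=flush (on) / barrier=none (off)
    /dev/sda3  /mnt/rdata  reiserfs  defaults,barrier=flush  0  2
    # ext4 enables barriers by default; 'barrier=0' turns them off.
    ```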

    Given that very few people are using reiserfs and JFS from distributions, and those file systems are effectively unmaintained, no one has bothered to fix them so they use barriers either by default, or at all. JFS doesn't have the ability to use barriers at all. In the case of reiserfs, Chris Mason submitted a patch 4 years ago to turn on barriers by default, but Hans Reiser vetoed it. Apparently, to Hans, winning the benchmark demolition derby was more important than his users' data. (It's a sad fact that the desire to win benchmark competitions will sometimes cause developers to cheat, sometimes at the expense of their users.)

    In the case of ext3, it's actually an interesting story. Both Red Hat and SuSE turn on barriers by default in their Enterprise kernels. SuSE, to its credit, did this earlier than Red Hat. We tried to get the default changed in ext3, but it was overruled by Andrew Morton, on the grounds that it would represent a big performance loss, and he didn't think the corruption happened all that often --- despite the fact that Chris Mason had developed a python program that would reliably corrupt an ext3 file system if you ran it and then pulled the power plug. I suppose we should try again to have the default changed, now that Jan Kara is the official ext3 maintainer, and he works at SuSE.

    So when you say, "it's the default as it comes from the distribution", that can be a bit misleading. The enterprise distros change the defaults, for safety's sake. I'm not sure, but I wouldn't be surprised if SuSE forced reiserfs to use barriers by default when shipped in their Enterprise product, given that they were going to support it and paying customers generally get upset when their file systems get fragged. (Also because Chris Mason worked at SuSE when he submitted the reiserfs barrier patch that got vetoed by Hans.) So just because it is one way as shipped by a community distribution, or from upstream, doesn't necessarily mean that is the way enterprise distros will ship things when paying customers have real money on the line.

    If people are just simply reading your reports for entertainment's sake, that's one thing. But if you don't warn them about the dangers of the default options, and they make choices to use a file system based on your report --- should you feel any concern? I suppose that's between you and your conscience and/or your sense of journalistic integrity....

    - Ted
