
Large HDD/SSD Linux 2.6.38 File-System Comparison

  • #41
    Originally posted by locovaca View Post
    I wouldn't want a benchmark for your particular scenario because I'll never enter a situation like that, and I would venture that a majority of users would not either; any such data would skew opinions of file systems unnecessarily. I don't care how well a Corolla tows a 3 ton camper, just tell me how well it drives in basic conditions (city, highway) and I'll go from there.
    Sure but you're begging the question of what is "normal conditions". Are you always going to fill a file system to 10% of capacity, and then reformat it, and then fill it to 10% again? That's what many benchmarkers actually end up testing. And so a file system that depends on the garbage collector for correct long-term operation, but which never has to garbage collect, will look really good. But does that correspond to how you will use the file system?

    What is "basic conditions", anyway? That's fundamentally what I'm pointing out here. And is performance really all people should care about? Where does safety factor into all of this? And to be completely fair to btrfs, it has cool features --- which is cool, if you end up using those features. If you don't then you might be paying for something that you don't need. And can you turn off the features you don't need, and do you get the performance back?

    For example, at $WORK we run ext4 with journalling disabled and barriers disabled. That's because we keep replicated copies of everything at the cluster file system level. If I were to pull a Hans Reiser, and shipped ext4 with its defaults to have the journal and barriers disabled, it would be faster than ext2 and ext3, and most of the other file systems in the Phoronix file system comparison. But that would be bad for the desktop users for ext4, and that to me is more important than winning a benchmark demolition derby.

    -- Ted
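
    The no-journal, no-barrier setup described above can be sketched with standard e2fsprogs and mount options (a minimal sketch; `sdX1` is a placeholder device, and as the post notes this trades away crash safety unless something else provides redundancy):

    ```shell
    # Create ext4 without a journal (sdX1 is a placeholder device):
    mkfs.ext4 -O ^has_journal /dev/sdX1

    # Or strip the journal from an existing, unmounted ext4 file system:
    tune2fs -O ^has_journal /dev/sdX1

    # Mount with write barriers disabled:
    mount -o nobarrier /dev/sdX1 /mnt
    ```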



    • #42
      Weird results

      Hi! I am looking at http://www.phoronix.com/data/img/res...38_large/2.png
      with the results of SQLite...
      Is ext3 really 2 times slower on the SSD? How can this be? Is this the effect of a lack of garbage collection, or of not trimming?

      Thanks for info!
      Adrian



      • #43
        Michael, for the graphs could you put a larger separator between the HDD and the SSD? I see there's a little hash mark, and the colors repeat. But at first glance it was kind of hard to tell where one ends, and the other begins.



        • #44
          Originally posted by adrian_sev View Post
          Hi! I am looking at http://www.phoronix.com/data/img/res...38_large/2.png
          with the results of SQLite...
          Is ext3 really 2 times slower on the SSD? How can this be? Is this the effect of a lack of garbage collection, or of not trimming?
          I'm pretty sure that ext3 is winning very big on the SQLite benchmark because it does a large number of random writes to the same blocks --- and since ext3 has barriers off by default, on the hard drive the disk collapses the writes together and most of the writes don't actually hit the disk platter. Good luck to your data if you have a power hit, but that's why ext3 wins really big on an HDD.

          On an SSD, at least OCZ, it's not merging the writes, and so the random writes result in flash write blocks getting written, so that's why ext3 appears to be much worse on the OCZ SSD. Other SSD's might be able to do a better job of merging writes to the same block, if they have a larger write buffer. This would be very SSD-specific.

          I suspect that JFS didn't run into this problem, even though it also doesn't use barriers, because its write patterns happened to fit within the OCZ's write cache, so it was able to collapse the writes. Personally I don't think it really matters, since running a database like SQLite which is trying to provide ACID properties without barriers enabled is obviously going to (a) result in a failure of the ACID guarantees, and (b) result in very confusing and misleading benchmark results.
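
          The write-cache merging described above can be illustrated with a toy model (purely illustrative --- no real drive firmware works exactly like this; the class and its parameters are invented for the sketch):

          ```python
          # Toy model of a drive write cache that coalesces repeated writes
          # to the same block before they hit the platter or flash.
          class CachedDisk:
              def __init__(self, cache_blocks):
                  self.cache_blocks = cache_blocks
                  self.cache = {}          # block number -> pending data
                  self.media_writes = 0    # writes that actually hit the media

              def write(self, block, data):
                  self.cache[block] = data         # same block: overwrite in cache
                  if len(self.cache) > self.cache_blocks:
                      self.flush()

              def flush(self):
                  self.media_writes += len(self.cache)
                  self.cache.clear()

          # 1000 writes, but only 4 distinct blocks (SQLite-style rewrites):
          small = CachedDisk(cache_blocks=64)   # generous write buffer
          for i in range(1000):
              small.write(i % 4, b"x")
          small.flush()

          tiny = CachedDisk(cache_blocks=2)     # buffer too small to merge
          for i in range(1000):
              tiny.write(i % 4, b"x")
          tiny.flush()

          print(small.media_writes, tiny.media_writes)  # -> 4 1000
          ```

          With a buffer large enough to hold the working set, a thousand rewrites collapse to four media writes; with a too-small buffer, nearly every write reaches the media --- the same effect, in caricature, as the HDD/OCZ difference above.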



          • #45
            Someone said it before, but I don't get the point of benchmarking ext4 on an SSD without the discard option (and maybe noatime).

            An SSD benchmark would in fact be a good place to tell people they should use discard, for the few who wouldn't know it already.
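
            For reference, a sketch of what such a setup might look like (device and mount point are placeholders; discard requires a TRIM-capable SSD and kernel support):

            ```shell
            # Illustrative /etc/fstab line enabling online TRIM and disabling atime:
            # /dev/sda1  /  ext4  discard,noatime,errors=remount-ro  0  1

            # Or remount a live file system with the same options for a quick test:
            mount -o remount,discard,noatime /
            ```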



            • #46
              Originally posted by squirrl View Post
              Reiser3 is still the best all around choice.
              * Fault Tolerant
              * Efficient
              * Static
              But sadly it degenerates and fragments like a motherfokker. After a year and a half it's at 20% of the speed it started at. And there's no known way of defragmenting it, except copying all the files off the filesystem and back again.



              • #47
                Originally posted by tytso View Post
                Sure but you're begging the question of what is "normal conditions". Are you always going to fill a file system to 10% of capacity, and then reformat it, and then fill it to 10% again? That's what many benchmarkers actually end up testing. And so a file system that depends on the garbage collector for correct long-term operation, but which never has to garbage collect, will look really good. But does that correspond to how you will use the file system?

                What is "basic conditions", anyway? That's fundamentally what I'm pointing out here. And is performance really all people should care about? Where does safety factor into all of this? And to be completely fair to btrfs, it has cool features --- which is cool, if you end up using those features. If you don't then you might be paying for something that you don't need. And can you turn off the features you don't need, and do you get the performance back?

                For example, at $WORK we run ext4 with journalling disabled and barriers disabled. That's because we keep replicated copies of everything at the cluster file system level. If I were to pull a Hans Reiser, and shipped ext4 with its defaults to have the journal and barriers disabled, it would be faster than ext2 and ext3, and most of the other file systems in the Phoronix file system comparison. But that would be bad for the desktop users for ext4, and that to me is more important than winning a benchmark demolition derby.

                -- Ted
                Well, since the distribution is the end-user version of Ubuntu, which is marketed to more casual users, I would expect the file system to receive a modest load of files (installation), then see mainly small reads and writes over the course of its lifetime (logs, home folder), with some occasional larger writes (software installation, maybe a CD rip). I believe Ubuntu's default partitioning scheme is one big file system plus a swap partition, so this is the configuration I'd expect to see with this test. So yes, assuming a 10% full file system is probably OK given this set of assumptions.

                Originally posted by TonsOfPeople
                Wah, the default didn't set xxx, that's horrible
                If the defaults of the file system are not OK, link to the bug report; otherwise it's not really an issue.



                • #48
                  Originally posted by stqn View Post
                  Someone said it before, but I don't get the point of benchmarking ext4 on an SSD without the discard option (and maybe noatime).

                  An SSD benchmark would in fact be a good place to tell people they should use discard, for the few who wouldn't know it already.
                  Here are my results with noatime and discard on OCZ Vertex 2:
                  http://openbenchmarking.org/result/1...SKEE-110309125



                  • #49
                    Originally posted by locovaca View Post
                    Well, since the distribution is the end-user version of Ubuntu, which is marketed to more casual users, I would expect the file system to receive a modest load of files (installation), then see mainly small reads and writes over the course of its lifetime (logs, home folder), with some occasional larger writes (software installation, maybe a CD rip). I believe Ubuntu's default partitioning scheme is one big file system plus a swap partition, so this is the configuration I'd expect to see with this test. So yes, assuming a 10% full file system is probably OK given this set of assumptions.
                    Yes, but you're not constantly reformatting the file system (i.e., reinstalling the distribution) over and over again. That is, the file system is allowed to age. So a month later, with a copy-on-write file system, the free space will all have been written to and will potentially be quite fragmented. But the benchmarks don't take this into account. They use a freshly formatted file system each time --- which is good for reproducibility, but it doesn't model what you will see in real life a month or three months later.

                    The right answer would be to use something like the Impressions tool to "age" the file system before doing the timed benchmark part of the test (see: http://www.usenix.org/events/fast09/...es/agrawal.pdf).
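
                    A crude sketch of the idea (illustrative only --- the Impressions tool referenced above models realistic aging far more faithfully): write files of varying sizes, then delete every other one, so that later allocations land in fragmented free space.

                    ```python
                    # Crude free-space aging pass before a benchmark run.
                    import os
                    import tempfile

                    age_dir = tempfile.mkdtemp()

                    # Create 200 files of varying sizes (4 KiB to 64 KiB).
                    for i in range(1, 201):
                        size = 4096 * ((i % 16) + 1)
                        with open(os.path.join(age_dir, "f%d" % i), "wb") as f:
                            f.write(b"\0" * size)

                    # Delete every other file (f1, f3, ..., f199), leaving
                    # holes scattered through the allocated region.
                    for i in range(1, 201, 2):
                        os.unlink(os.path.join(age_dir, "f%d" % i))

                    print(len(os.listdir(age_dir)))  # -> 100 files remain
                    ```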

                    The fundamental question is what are you trying to measure? What is more important? The experience the user gets when the file system is first installed, or what they get a month later, and moving forward after that?



                    • #50
                      Originally posted by locovaca View Post
                      Well, since the distribution is the end-user version of Ubuntu, which is marketed to more casual users, I would expect the file system to receive a modest load of files (installation), then see mainly small reads and writes over the course of its lifetime (logs, home folder), with some occasional larger writes (software installation, maybe a CD rip). I believe Ubuntu's default partitioning scheme is one big file system plus a swap partition, so this is the configuration I'd expect to see with this test. So yes, assuming a 10% full file system is probably OK given this set of assumptions.


                      If the defaults of the file system are not OK, link to the bug report; otherwise it's not really an issue.
                      You do realize that you're responding to Ted Ts'o, the creator of the ext4 file system, right?

