Real World Benchmarks Of The EXT4 File-System


  • #21
    Originally posted by dben View Post
    Most of those tests are kind of ridiculous. There's especially no point in testing a video game; any changes in fps would be negligible here. You load the data from disk once, it gets cached into memory, and that's the end of it. Maybe the occasional disk activity here and there with logs and a new data file or two, but it's nothing. Encryption, compression, encoding, etc., are CPU-intensive operations. They're rather pointless too.


    Real-world benchmarks are great and all, but that doesn't mean you just pick a random program and time it. Find something I/O-intensive if you don't want to run a purely I/O-based benchmark. Boot times, for example.

    Also, without reporting the error, your numbers are misleading and dishonest. It's great that you've averaged three runs, but we need to know how consistent the runs were. I have a suspicion that the error in the games benchmarks far outweighs the differences between the filesystems, given how much disk I/O games do. (Virtually none. Disk accesses are slow, and game developers know to avoid them like the plague.)

    Be more careful in your conclusions and, if nothing else, give us reason to trust them.
    You're missing the point of the game benches. They are there to show that the filesystem really doesn't affect applications such as games. As the article implies, it looks at real-world use and whether there is any impact on those apps. In real-world use, the filesystem really doesn't affect games. Granted, a level load time would have been a bit more informative as to what could potentially have made a bit of a difference.
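
    On the error-reporting point in the quote above, here is a minimal sketch of how run-to-run variation could be shown next to the averages, assuming the elapsed times of the three runs are saved one per line in a file called runs.txt (the file name is purely illustrative):

        # runs.txt is assumed to hold one elapsed time (in seconds) per benchmark run
        awk '{ sum += $1; sumsq += $1 * $1; n++ }
             END {
                 mean = sum / n
                 sd = sqrt((sumsq - n * mean * mean) / (n - 1))   # sample standard deviation
                 printf "mean=%.2fs  stddev=%.2fs  runs=%d\n", mean, sd, n
             }' runs.txt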



    • #22
      Originally posted by spinkham View Post
      If this test is done again, could you include JFS also? JFS is a strong contender for best Linux filesystem at the moment.
      I am also a JFS fan. It doesn't seem to get as many column inches despite being very good - and I have tried EXT3, XFS, ReiserFS and Reiser4. Unlike XFS, you don't get that thrashing that [Knuckles] mentioned, though I am vaguely aware that XFS can be tweaked. I've just never tried it.
      Last edited by Chewi; 03 December 2008, 02:35 PM.



      • #23
        So, I totally agree that testing games was pointless... And you can't defend it by saying you wanted to test "real-life app experience". The way the filesystem influences experience is through map loading, app startup, etc., NOT through fps.

        So: boot time please, GNOME/KDE launch time, file copying, "du -sh /", text search on files, etc.
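
        A rough sketch of how a few of those could be timed, assuming the caches are dropped first so the filesystem actually gets hit (the paths and the test file are placeholders):

            # Drop the page cache so reads really come from disk
            sync && echo 3 | sudo tee /proc/sys/vm/drop_caches

            time cp /path/to/large-test-file /tmp/copy-test         # file copy
            time du -sh /                                           # metadata walk of the whole tree
            time grep -r "needle" /usr/share/doc > /dev/null        # text search across many small files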

        And it would definitely be more attractive to test more filesystems. JFS, Reiser4 and btrfs would add a lot of flavor to such an article.



        • #24
          Interesting results. I would, however, like to see more variety in the Bonnie, IOzone, and IOmeter configurations. These, I feel, actually do represent "real world" tasks, at least for workloads that deal with heavy I/O, which is why trying to simulate a variety of workloads through those tools is important. I can imagine a real fileserver streaming multiple video files while updating the locate database, for example.
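
          For instance, a minimal sketch of IOzone runs with different record sizes to mimic both streaming and small-random workloads (the file and record sizes here are just illustrative):

              # Large sequential records, roughly like streaming a video file
              iozone -i 0 -i 1 -s 2g -r 1024k

              # Small random records, roughly like updating the locate database
              iozone -i 0 -i 2 -s 512m -r 4k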

          But I do think that your philosophy of testing with the defaults (however the software under test comes configured out of the box) is a good one. It does not preclude tuning, but results from tuning should never be shown alone. That said, it would also be interesting to see whether there is any tuning that would make a difference for these filesystems in different cases.



          • #25
            ext4 seems cool! Does it protect against silent corruption? Typically 20% of a modern hard drive is devoted to error-correcting codes. Once in a while you will run into a problem that is not correctable or, what is worse, not detectable. You don't even know that there was an error in your files.

            And I've heard of a large ext3 filesystem taking one week to fsck. Does ext4 suffer from the same problem?
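
            As far as I know neither ext3 nor ext4 checksums file data (ext4 can checksum its journal), so silent corruption has to be caught at a higher level. A minimal sketch using periodic checksums, with /data standing in for whatever you want to watch:

                # Record checksums once...
                find /data -type f -print0 | xargs -0 sha1sum > /root/data.sha1
                # ...and re-verify later; a mismatch on a file you never changed points to silent corruption
                sha1sum -c /root/data.sha1 | grep -v ': OK$'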



            • #26
              EXT4 looks great in the first part of the test, the part that tests large files. It also confirms what I have noticed: that EXT3 sucks compared to XFS with big files (4 to 15 GB).
              Looking forward to using EXT4 on my fileserver, which holds lots of large MKV files.



              • #27
                Regarding JFS:
                I'll not use it again, simply due to the amount of data I have lost on JFS. Same for ReiserFS.

                Regarding XFS:
                If your hardware supports write barriers, XFS doesn't lose or corrupt any data. Just about any hardware you could buy in the last two years supports write barriers properly, so XFS should be fine.
                The default Linux XFS tuning parameters are "wrong" in two ways:
                * The log section is made way too small
                * XFS is mounted with 2 log buffers instead of the maximum 8.

                Why do I mention those two things?
                With more log buffers, XFS handles access to lots of small files much better, and it effectively removes the "thrashing" some people mention. This is a mount-time parameter.
                With a larger log, it can handle deletes and changes much better, since XFS tries to queue up as much as it can at a time to minimise disk seeking. The default log is typically ~4 MB, but enlarge it to 64 MB and you can feel the performance difference. This is a filesystem-create parameter.

                Another useful option is telling XFS how the underlying RAID is configured (if you have any); it then scales extremely well, since it can keep all the disks roughly equally busy. I was amazed at how well I got an XFS filesystem to perform on a 5-disk RAID 5. Truly recommended if you use RAID. This is a filesystem-create parameter. A sketch of all three tweaks follows below.
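
                A minimal sketch of those three tweaks together, assuming a 5-disk RAID 5 with a 64 KB stripe unit on /dev/md0 (device and geometry are placeholders; check mkfs.xfs(8) and mount(8) for your own setup):

                    # Filesystem-create time: bigger log plus RAID geometry hints
                    # su = stripe unit per disk, sw = number of data disks (5-disk RAID 5 -> 4)
                    mkfs.xfs -l size=64m -d su=64k,sw=4 /dev/md0

                    # Mount time: use the maximum number of in-memory log buffers
                    mount -o logbufs=8 /dev/md0 /srv/data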



                Regarding power usage:
                On my notebook I originally used JFS, because it apparently used the least CPU, but this didn't improve battery life at all. In fact, battery life might even have improved since I changed to XFS.

                Yes, XFS uses significantly more CPU power, but it completes the disk work much faster. So I reckon that on most systems a disk seek costs more power than the extra CPU cycles spent avoiding one.



                • #28
                  Regarding XFS, I'm using it on partitions storing only big files, since it's excellent in that regard (as long as defragmentation is done now and then). I once tried to use it as the root filesystem, but that was a mistake, as the performance with small files was really appalling. Since someone mentioned some tweaks to make it perform better on small files, I went looking and found out that adding logbufs=8 (needs >= 128 MB RAM) to the mount options should make it perform better.

                  So it will be interesting to see if it really makes a difference.
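
                  For anyone wanting to try it, a sketch of what a persistent /etc/fstab entry could look like (the device and mount point are placeholders):

                      # /etc/fstab: XFS partition mounted with 8 log buffers
                      /dev/sda2  /home  xfs  defaults,logbufs=8  0  2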

                  Source: http://everything2.com/index.pl?node_id=1479435



                  • #29
                    Have you mounted all filesystems with barriers on, or off?
                    Because ext3 turns them off by default, while XFS and ReiserFS turn them on by default.
                    Barriers can cost around 30% performance on ext3. If you don't make the playing field even, the benchmark is not worth the electricity bill. (Mount options to even it out are sketched below.)
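
                    A sketch of mount options that would level the field by enabling barriers everywhere (devices and mount points are placeholders, and the exact option names depend on the kernel version):

                        # ext3: barriers are off by default, enable them explicitly
                        mount -o barrier=1 /dev/sda1 /mnt/ext3

                        # XFS: barriers are on by default ('nobarrier' turns them off)
                        mount -o barrier /dev/sda2 /mnt/xfs

                        # ReiserFS: on by default; barrier=flush enables, barrier=none disables
                        mount -o barrier=flush /dev/sda3 /mnt/reiser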



                    • #30
                      Originally posted by deanjo View Post
                      XFS is easily tweaked to cure that.
                      XFS defaults are known to suck, and XFS is also known for the great performance increase you get from some simple tweaking.

                      ext3 sucks. But as long as people benchmark ext3 with barriers turned off, ext3 will look good. It is sickening.

