The Performance Of EXT4 Then & Now


  • #21
    Originally posted by deanjo View Post
    Well the biggest performance hit is barriers. If you wish to have close to Ubuntu's Ext3 performance with EXT4, just mount the filesystem with nobarriers as mentioned
    barriers do not explain the 50% drop in 2GB IOZone read performance.

    Also, the article tested with nobarriers in 2.6.33-rc4, and although it helped the TPS for the PostgreSQL test, it still had a significant performance drop.

    I think there must be other important issues than just barriers.



    • #22
      Originally posted by jwilliams View Post
      barriers do not explain the 50% drop in 2GB IOZone read performance.

      Also, the article tested with nobarriers in 2.6.33-rc4, and although it helped the TPS for the PostgreSQL test, it still had a significant performance drop.

      I think there must be other important issues than just barriers.
      Oh it's not the only reason; the default commit parameter hurts performance as well. Having to sync all data and metadata every 5 seconds does carry a pretty big cost.
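
      To make the two knobs concrete, here is a minimal C sketch (not from the article; the device and mount point are placeholders) that remounts an already-mounted ext4 volume with write barriers disabled and a 30-second journal commit interval via mount(2). The same thing is normally done with "mount -o remount,nobarrier,commit=30" or an fstab entry; check your kernel's ext4 documentation for the exact option spellings.

      /* Hedged sketch: remount an ext4 filesystem with barriers off and a
       * longer journal commit interval. /dev/sda1 and /home are illustrative
       * only, and this needs root privileges. */
      #include <stdio.h>
      #include <string.h>
      #include <errno.h>
      #include <sys/mount.h>

      int main(void)
      {
          /* Filesystem-specific options are passed in the "data" string. */
          if (mount("/dev/sda1", "/home", "ext4", MS_REMOUNT,
                    "nobarrier,commit=30") != 0) {
              fprintf(stderr, "remount failed: %s\n", strerror(errno));
              return 1;
          }
          return 0;
      }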



      • #23
        Originally posted by deanjo View Post
        Oh it's not the only reason; the default commit parameter hurts performance as well. Having to sync all data and metadata every 5 seconds does carry a pretty big cost.
        But does that hurt READ performance?

        And I thought ext3 already did the 5sec syncs. Wasn't that the big argument with ext4, which was doing ~30sec syncs at first?



        • #24
          Originally posted by jwilliams View Post
          They have already done some of that, although it would be nice to see it updated now with the latest 2.6.33 release candidate. They probably should have linked to these in the intro:



          http://www.phoronix.com/scan.php?pag..._2632_fs&num=1
          I was thinking along the lines of multiple kernel versions and more filesystems shown in the comparison, to see the development direction for all of the filesystems.



          • #25
            Originally posted by deanjo View Post
            Well the biggest performance hit is barriers. If you wish to have close to Ubuntu's Ext3 performance with EXT4, just mount the filesystem with nobarriers as mentioned before, because pretty much every Linux distro out there, with the exception of openSUSE, defaulted to not using barriers with EXT3. With EXT4 the default is to mount with barriers.
            What's the impact on filesystem consistency/integrity when enabling nobarriers?



            • #26
              Originally posted by jwilliams View Post
              But does that hurt READ performance?

              And I thought ext3 already did the 5sec syncs. Wasn't that the big argument with ext4, which was doing ~30sec syncs at first?
              Sure it will hurt read performance; it's a forced sync no matter what the current operation is. The default commit interval on EXT4 is 5 secs, btw.



              The big point about comparing EXT3 to EXT4 is that EXT4, with its default mount parameters, protects your data at the cost of performance. That security doesn't come free.



              • #27
                Originally posted by jpalko View Post
                What's the impact on filesystem consistency/integrity when enabling nobarriers?
                If you're running a properly configured and monitored UPS or a HW RAID with battery backup, then there really isn't a reason not to use nobarriers. If you're using a power block for your system and care about your data, then barriers should be enabled so as to minimize data loss.



                • #28
                  Originally posted by movieman View Post
                  The problem was not so much that data went missing when you didn't fsync(); it was that you could write to a file, rename it on top of an old file, and then after a reboot discover that your file had been truncated to zero bytes rather than being either the old file or the new file.
                  AFAIK, that happens because the rename (metadata) can be committed before the write (data), and if you really need the write to be committed first, you're supposed to call fsync() between the two. And, unless I'm completely misunderstanding the scenario you're describing, it's not just "after a reboot", but "after a crash/power loss/other abnormal shutdown that occurs between the rename commit and the data commit".
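
                  To make that concrete, here is a minimal C sketch of the pattern being described, assuming a hypothetical file somedir/config (the names and the replace_file() helper are illustrative only): write the new contents to a temporary file, fsync() it, rename() it over the old name, then fsync() the containing directory so the rename itself survives a crash.

                  /* Hedged sketch: atomically replace somedir/config with new
                   * contents. Names are illustrative; short writes and EINTR
                   * are not handled here. */
                  #define _GNU_SOURCE
                  #include <stdio.h>
                  #include <string.h>
                  #include <errno.h>
                  #include <fcntl.h>
                  #include <unistd.h>

                  static int replace_file(const char *dir, const char *tmp,
                                          const char *dst, const char *data,
                                          size_t len)
                  {
                      int fd = open(tmp, O_WRONLY | O_CREAT | O_TRUNC, 0644);
                      if (fd < 0)
                          return -1;
                      if (write(fd, data, len) != (ssize_t)len || fsync(fd) != 0) {
                          close(fd);              /* data is not safely on disk */
                          return -1;
                      }
                      close(fd);

                      if (rename(tmp, dst) != 0)  /* atomic replace of the old name */
                          return -1;

                      /* Sync the directory so the rename itself is durable. */
                      int dfd = open(dir, O_RDONLY | O_DIRECTORY);
                      if (dfd < 0)
                          return -1;
                      int rc = fsync(dfd);
                      close(dfd);
                      return rc;
                  }

                  int main(void)
                  {
                      const char *text = "new configuration\n";
                      if (replace_file("somedir", "somedir/config.tmp",
                                       "somedir/config", text, strlen(text)) != 0) {
                          fprintf(stderr, "replace failed: %s\n", strerror(errno));
                          return 1;
                      }
                      return 0;
                  }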
                  Last edited by Ex-Cyber; 19 January 2010, 01:42 PM.



                  • #29
                    Originally posted by Ex-Cyber View Post
                    AFAIK, that happens because the rename (metadata) can be committed before the write (data), and if you really need the write to be committed first, you're supposed to call fsync() between the two.
                    Except no other current file system requires that, and 99.999% of all existing software doesn't do it. And even if much of that software is 'fixed', probably 90% of the people 'fixing' it won't realise that they also need to sync the directory to ensure that it works.

                    And one of the common uses is in shell scripts, where you'll have to sync the entire disk. Just to safely update a two-line file.

                    And, unless I'm completely misunderstanding the scenario you're describing, it's not just "after a reboot", but "after a crash/power loss/other abnormal shutdown that occurs between the rename commit and the data commit".
                    True, but 99% of Linux systems crash at some point, even if only because of a power failure; and I believe that ext4 as originally implemented could delay the data write up to a couple of minutes after the metadata, so the odds of this happening on a crash were high.

                    Applications should be able to rely on some basic, sane behaviour from a file system (such as a 'rename a b' leaving them with either file a or file b on the disk and not an empty file which never existed in the logical filesystem), with a few exceptions like databases which provide explicit guarantees to their users. File systems which don't behave in such a manner simply won't get used for anything which requires reliable storage, because no matter how fast they are they're not performing their most basic function of storing your data.

                    In addition, different users and different uses have different thresholds for data reliability: for example, I might not care if I lose a data file that I saved two minutes ago so long as I still have the data file which I wrote out five minutes ago... someone else might be incensed if they lose data that they wrote out two seconds ago. That kind of decision should not have to be made on a per-application basis ('Edit/Preferences/Do you care about your data?'), it should be part of the filesystem configuration.

                    The only argument I've seen for this behaviour is that 'Posix doesn't require us to do anything else'. But Posix doesn't require much of anything and I suspect that at least 90% of current software would fail on a system which only implements the absolute minimum Posix requirements.



                    • #30
                      There is some interesting discussion of some of these issues in the comments on Ubuntu bug #317781. Particularly interesting are Theodore Ts'o's comments #45, #54, and #56:

                      I recently installed Kubuntu Jaunty on a new drive, using Ext4 for all my data. The first time I had this problem was a few days ago when, after a power loss, ktimetracker's config file was replaced by a 0-byte version. No idea if anything else was affected... I just noticed ktimetracker right away. Today, I was experimenting with some BIOS settings that made the system crash right after loading the desktop. After a clean reboot pretty much any file written to by any application (during the p...


                      Also, Ted's "Don't fear the fsync" blog entry is worthwhile:

                      After reading the comments on my earlier post, Delayed allocation and the zero-length file problem, as well as some of the comments on the Slashdot story, as well as the Ubuntu bug, it's become very clear to me that there are a lot of myths and misplaced concerns about fsync() and how best to use it. I thought it would be appropriate to correct as many of these misunderstandings about fsync() in one comprehensive blog posting.

