The Performance Of EXT4 Then & Now


  • deanjo
    replied
    Originally posted by Apopas View Post
    I have a question, guys.
    I have two hard disk drives. One is formatted with XFS and the other with EXT4.

    Whenever I copy a large file (e.g. 4 GB) from ext4 to xfs, the speed meter in KDE acts like a cardiogram, jumping between 80 MB/s and 25 MB/s. The total copy time ends up at 58-60 seconds.
    When I copy from xfs to ext4, the meter holds steady at 67-70 MB/s, and the total time is still about one minute.

    At first glance I thought KDE's meter was just a mess, but since it's always stable from xfs to ext4, I suppose that's not the issue.

    So what could be wrong, guys?
    I've noticed this too; it behaves the same as copying from the local machine to a remote one (buffering to RAM, maybe?). In a network situation, copying from a remote machine to the local one displays accurate speeds.
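
    One rough way to take the page cache out of the picture is to time the copy with an explicit sync at the end, e.g. (the paths here are just placeholders):

        # Time the copy including the final flush to disk; the source
        # file and the destination mount point are placeholders.
        time sh -c 'cp /mnt/ext4/bigfile /mnt/xfs/ && sync'

    If the wall-clock times match in both directions, the jumpy meter is just writeback buffering rather than a real throughput difference.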

    Leave a comment:


  • val-gaav
    replied
    Originally posted by frantaylor View Post
    These benchmarks are tough to process by themselves. The older benchmarks are really invalid because the code has a fatal flaw: no data safety. You can make ANY filesystem look fast if you are only pretending to write the data to disk. You might as well benchmark the write performance of /dev/null.
    I think that's not correct... it's just that the new ext4 defaults for data safety are paranoid.

    I've been using ext4 for a year now and haven't experienced any problems with 2.6.29-2.6.30 (using it right now).
    I did have problems with 2.6.28, though.

    I think the settings from 2.6.31 and up are good for a paranoid server admin, but not for regular desktop use. I plan to stay with 2.6.30 for as long as I can, but at some point new Mesa/DRM/X.Org releases will probably force me to upgrade.
    Last edited by val-gaav; 20 January 2010, 03:30 PM.

    Leave a comment:


  • Apopas
    replied
    I have a question guys.
    I have two hard disk drives. One is formatted with XFS and the other one with EXT4.

    Whenever I copy a large file (i.e 4GB) from ext4 to xfs, the speed meter of KDE acts like a cardiogram, with a speed from 80 MB/s to 25 and vice versa. Finally the copy time is 58-60 seconds.
    When I copy from xfs to ext4 the meter is stable from 67-70 MB/s and the total time is still about one minute.

    At first glance I thought the KDE's meter was just a mess, but since it's always stable from xfs to ext4 I suppose this is not the matter.

    So what' could be wrong guys?

    Leave a comment:


  • jwilliams
    replied
    There is some interesting discussion of these issues in the comments on Ubuntu bug #317781. Particularly interesting are Theodore Ts'o's comments #45, #54, and #56:

    I recently installed Kubuntu Jaunty on a new drive, using Ext4 for all my data. The first time I had this problem was a few days ago, when after a power loss ktimetracker's config file was replaced by a 0-byte version. No idea if anything else was affected... I just noticed ktimetracker right away. Today, I was experimenting with some BIOS settings that made the system crash right after loading the desktop. After a clean reboot pretty much any file written to by any application (during the p...


    Also, Ted's "Don't fear the fsync" blog entry is worthwhile:

    After reading the comments on my earlier post, Delayed allocation and the zero-length file problem, as well as some of the comments on the Slashdot story, as well as the Ubuntu bug, it’s become very clear to me that there are a lot of myths and misplaced concerns about fsync() and how best to use it. I thought it would be appropriate to correct as many of these misunderstandings about fsync() in one comprehensive blog posting.

    Leave a comment:


  • movieman
    replied
    Originally posted by Ex-Cyber View Post
    AFAIK, that happens because the rename (metadata) can be committed before the write (data), and if you really need the write to be committed first, you're supposed to call fsync() between the two.
    Except no other current file system requires that, and 99.999% of all existing software doesn't do it. And even if much of that software is 'fixed', probably 90% of the people 'fixing' it won't realise that they also need to sync the directory to ensure that it works.

    And one of the common uses is in shell scripts, where you'd have to sync the entire disk just to safely update a two-line file.

    And, unless I'm completely misunderstanding the scenario you're describing, it's not just "after a reboot", but "after a crash/power loss/other abnormal shutdown that occurs between the rename commit and the data commit".
    True, but 99% of Linux systems crash at some point, even if only because of a power failure; and I believe that ext4 as originally implemented could delay the data write up to a couple of minutes after the metadata, so the odds of this happening on a crash were high.

    Applications should be able to rely on some basic, sane behaviour from a file system (such as a 'rename a b' leaving them with either file a or file b on the disk and not an empty file which never existed in the logical filesystem), with a few exceptions like databases which provide explicit guarantees to their users. File systems which don't behave in such a manner simply won't get used for anything which requires reliable storage, because no matter how fast they are they're not performing their most basic function of storing your data.

    In addition, different users and different uses have different thresholds for data reliability: for example, I might not care if I lose a data file that I saved two minutes ago so long as I still have the data file which I wrote out five minutes ago... someone else might be incensed if they lose data that they wrote out two seconds ago. That kind of decision should not have to be made on a per-application basis ('Edit/Preferences/Do you care about your data?'), it should be part of the filesystem configuration.

    The only argument I've seen for this behaviour is that 'POSIX doesn't require us to do anything else'. But POSIX doesn't require much of anything, and I suspect that at least 90% of current software would fail on a system which only implements the absolute minimum POSIX requirements.
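
    For the record, here's a minimal sketch in C of what "doing it properly" involves; the function name, paths, and trimmed error handling are mine, purely for illustration:

        #define _GNU_SOURCE
        #include <fcntl.h>
        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>

        /* Sketch of a crash-safe "atomic replace": write the new contents
         * to a temp file, fsync it, rename it over the target, then fsync
         * the containing directory so the rename itself is durable. */
        int replace_file(const char *dirpath, const char *tmppath,
                         const char *dstpath, const char *data)
        {
            int fd = open(tmppath, O_WRONLY | O_CREAT | O_TRUNC, 0644);
            if (fd < 0)
                return -1;
            if (write(fd, data, strlen(data)) < 0 || fsync(fd) < 0) {
                close(fd);                      /* data hits the disk first... */
                return -1;
            }
            close(fd);
            if (rename(tmppath, dstpath) < 0)   /* ...then commit the rename */
                return -1;
            int dfd = open(dirpath, O_RDONLY | O_DIRECTORY);
            if (dfd < 0)
                return -1;
            int rc = fsync(dfd);                /* the step almost everyone forgets */
            close(dfd);
            return rc;
        }

    All of that, versus the single rename() that virtually all existing software actually does, just to update a two-line file safely.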

    Leave a comment:


  • Ex-Cyber
    replied
    Originally posted by movieman View Post
    The problem was not so much that data went missing when you didn't fsync(); it was that you could write to a file, rename it on top of an old file, and then after a reboot discover that your file had been truncated to zero bytes rather than being either the old file or the new file.
    AFAIK, that happens because the rename (metadata) can be committed before the write (data), and if you really need the write to be committed first, you're supposed to call fsync() between the two. And, unless I'm completely misunderstanding the scenario you're describing, it's not just "after a reboot", but "after a crash/power loss/other abnormal shutdown that occurs between the rename commit and the data commit".
    Last edited by Ex-Cyber; 19 January 2010, 01:42 PM.

    Leave a comment:


  • deanjo
    replied
    Originally posted by jpalko View Post
    What's the impact to filesystem consistency/integrity when enabling nobarriers?
    If you're running a properly configured and monitored UPS, or a hardware RAID with battery backup, then there really isn't a reason not to use nobarrier. If you're plugging your system into a plain power strip and you care about your data, then barriers should be left enabled to minimize data loss.
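
    For reference, this is the knob in question; sample /etc/fstab lines, with the device and mount point as placeholders:

        # Barriers on (the EXT4 default; safest on commodity hardware):
        /dev/sda2  /home  ext4  defaults,barrier=1  0  2
        # Barriers off (only with a trusted UPS or battery-backed RAID cache):
        /dev/sda2  /home  ext4  defaults,nobarrier  0  2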

    Leave a comment:


  • deanjo
    replied
    Originally posted by jwilliams View Post
    But does that hurt READ performance?

    And I thought ext3 already did the 5sec syncs. Wasn't that the big argument with ext4, which was doing ~30sec syncs at first?
    Sure it will hurt read performance; it's a forced sync no matter what the current operation is. The default commit interval on EXT4 is 5 seconds, by the way.

    The big point about comparing EXT3 to EXT4 is that EXT4, with its default mount parameters, protects your data at the cost of performance. That security doesn't come for free.
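
    The commit interval is tunable per mount, by the way; a sketch, again with a placeholder device and mount point:

        # EXT4 commits its journal every 5 seconds by default (commit=5);
        # a longer interval means fewer forced writes but a wider window
        # of data at risk on a crash.
        /dev/sda2  /home  ext4  defaults,commit=15  0  2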

    Leave a comment:


  • jpalko
    replied
    Originally posted by deanjo View Post
    Well, the biggest performance hit is barriers. If you want close to Ubuntu's ext3 performance with EXT4, just mount the filesystem with nobarrier, as mentioned before; pretty much every Linux distro out there, with the exception of openSUSE, defaulted to not using barriers with EXT3. With EXT4 the default is to mount with barriers.
    What's the impact on filesystem consistency/integrity when mounting with nobarrier?

    Leave a comment:


  • jpalko
    replied
    Originally posted by jwilliams View Post
    They have already done some of that, although it would be nice to see it updated now with the latest 2.6.33 release candidate. They probably should have linked to these in the intro:

    http://www.phoronix.com/scan.php?pag..._2632_fs&num=1
    I was thinking along the lines of multiple kernel versions and more filesystems in the comparison, to see the direction of development for all of the filesystems.

    Leave a comment:
