Ubuntu 12.10 File-Systems: Btrfs, EXT4, XFS


  • brunis
    replied
    Definition of small files?

    Could you add benchmarks that resemble the real world a bit more? 1 MiB files are not small files; 1 KiB files are. Extract the Firefox source or the Linux kernel source to the drive and copy it back out again, and benchmark that. It's a real scenario, and one that would probably uncover weaknesses in Ext4!

    I'm moving my installation to a new Vertex 4 SSD right now from an old 250 GiB WD. Both have Ext4 filesystems. hdparm claims ~65 MiB/s on the WD SATA drive, but copying the Mozilla source gives me 500 KiB/s. Far from your fantasy theoretical numbers!

    Hoping for more realistic benchmarks in the future!

    regards,
    Brunis
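
    A rough C sketch of the kind of small-file workload being suggested here (the directory name, 50,000-file count, and 1 KiB size are made-up parameters for illustration, not anything from the article):

        #include <fcntl.h>
        #include <stdio.h>
        #include <sys/stat.h>
        #include <time.h>
        #include <unistd.h>

        int main(void)
        {
            char name[64];
            char buf[1024] = {0};              /* one 1 KiB "source file" */
            struct timespec t0, t1;

            mkdir("smallfiles", 0755);         /* hypothetical target directory */
            clock_gettime(CLOCK_MONOTONIC, &t0);

            /* Create many small files, like extracting a source tree. */
            for (int i = 0; i < 50000; i++) {
                snprintf(name, sizeof name, "smallfiles/f%05d", i);
                int fd = open(name, O_WRONLY | O_CREAT | O_TRUNC, 0644);
                if (fd < 0) { perror("open"); return 1; }
                if (write(fd, buf, sizeof buf) != (ssize_t)sizeof buf) { perror("write"); return 1; }
                close(fd);
            }

            clock_gettime(CLOCK_MONOTONIC, &t1);
            double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
            printf("50000 x 1 KiB files in %.1f s (%.0f files/s)\n", secs, 50000.0 / secs);
            return 0;
        }

    Timing this (and then a copy of the resulting tree) on each filesystem would exercise the metadata-heavy path that a source-tree extraction actually hits, which large-file IOzone runs do not.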

  • detlef
    replied
    An IOzone test result picture was used twice

    At the bottom of page 2, the result picture for the IOzone v3.405 test
    (Record Size: 1MB - File Size: 8GB - Disk Test: Read Performance) appears twice.
    A simple copy-and-paste failure?
    It can only be a hardware failure of the keyboard...

  • alcalde
    replied
    Ubuntu <> Linux

    Originally posted by TobiSGD View Post
    Not an Ubuntu user, so a small question: should I expect the Ubuntu kernel to be patched in ways that show differences in filesystem performance compared to a stock kernel from kernel.org? If so, wouldn't it make sense to also put the stock kernel into these benchmarks?
    Or is there any other reason to test specifically the Ubuntu kernel and not a stock one, besides having Ubuntu in the title of the article to get more page hits from Ubuntu users?
    With lots of Linux pundits recently declaring that Ubuntu is the future of the Linux desktop, I think the majority of us who don't use Ubuntu simply don't count anymore. :-( It's gotten to the point where Lifehacker articles with titles like "How to customize your Linux desktop" are actually about how to configure Unity in Ubuntu, and Matt Hardy's article "Three Alternatives To Ubuntu" consists of three distros (Pear OS, Mint, Peppermint) that are all derived from Ubuntu.

    Just as Linux users are treated as second-class citizens in the Windows-dominated world, so now are non-Ubuntu users treated in the Linux world. It's starting to get ridiculous, and when I've broached the subject with some of the Linux talking heads, no one wants to open up a debate on it. :-( Unless the "silent majority" gains a voice soon and opens a discussion about what we want the future of Linux to be, and in particular whether we want it dominated by any one interest, that future is going to be decided by default.

    But on your specific point, I agree: I want to know how the file systems are currently running on the latest Linux kernel, not how they run on Ubuntu 12.10 beta and whatever kernel it's using.

  • jabl
    replied
    Originally posted by highlandsun View Post
    Of course, in a dedicated database deployment I would just have a single DB residing in a dedicated filesystem and preallocate all of the space for the DB file(s). At that point metadata updates are irrelevant; they would only occur for the mtime stamps and not for any structural changes, so FS structural corruption would be impossible.
    Yes, in such a situation a database engine can get away with using fdatasync() instead of fsync(). The filesystem still needs to have barrier support in order to provide data integrity guarantees when used on a device with a volatile write cache, though.
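
    To make the fsync()/fdatasync() distinction concrete, here is a minimal C sketch (the file name and sizes are hypothetical): because the file is preallocated up front, later page writes change no block layout, and a data-only flush suffices.

        #include <fcntl.h>
        #include <stdio.h>
        #include <unistd.h>

        int main(void)
        {
            int fd = open("db.dat", O_RDWR | O_CREAT, 0644);   /* hypothetical DB file */
            if (fd < 0) { perror("open"); return 1; }

            /* Preallocate all space up front: later writes never allocate
               blocks, so the only metadata changing afterwards is mtime. */
            int err = posix_fallocate(fd, 0, 1L << 30);        /* 1 GiB */
            if (err != 0) { fprintf(stderr, "posix_fallocate: error %d\n", err); return 1; }

            char page[4096] = {0};                             /* one dirty DB page */
            if (pwrite(fd, page, sizeof page, 0) != (ssize_t)sizeof page) {
                perror("pwrite"); return 1;
            }

            /* fdatasync() flushes the data (plus any metadata needed to read
               it back) but may skip the pure-mtime update that fsync() would
               also force, saving a journal commit per flush. */
            if (fdatasync(fd) != 0) { perror("fdatasync"); return 1; }

            close(fd);
            return 0;
        }

    Even so, the caveat above applies: the filesystem still needs working barriers (or the drive's volatile write cache disabled) for that fdatasync() to mean the page is truly on stable storage.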

  • Dami55an
    replied
    I believe that fsck is as essential as it was with the previous generation of filesystems.

  • highlandsun
    replied
    Originally posted by jabl View Post
    Sure, a filesystem which doesn't support barriers (such as JFS or ext2) will obviously outperform one which does (e.g. ext4, btrfs, xfs) on a test that measures synchronous writes (fsync()). To avoid an apples-to-oranges comparison, you need to do one of the following:

    - Disable the disk write cache when using a filesystem without barrier support (slower but safer).

    - Disable barriers on the filesystems with barrier support (mount with barrier=0) (fast but unsafe).

    - Or, even better, use a device with a non-volatile write cache, e.g. a RAID card with a battery-backed cache (fast AND safe).
    Hm, thanks for pointing that out. Yeah, I probably need to retest with the disk write cache disabled.

    Of course, in a dedicated database deployment I would just have a single DB residing in a dedicated filesystem and preallocate all of the space for the DB file(s). At that point metadata updates are irrelevant; they would only occur for the mtime stamps and not for any structural changes, so FS structural corruption would be impossible.

  • jabl
    replied
    Originally posted by highlandsun View Post
    You should have also tested JFS. In my database tests, JFS outperforms all the others
    Sure, a filesystem which doesn't support barriers (such as JFS or ext2) will obviously outperform one which does (e.g. ext4, btrfs, xfs) on a test that measures synchronous writes (fsync()). To avoid an apples-to-oranges comparison, you need to do one of the following (a small timing sketch follows the list):

    - Disable the disk write cache when using a filesystem without barrier support (slower but safer).

    - Disable barriers on the filesystems with barrier support (mount with barrier=0) (fast but unsafe).

    - Or, even better, use a device with a non-volatile write cache, e.g. a RAID card with a battery-backed cache (fast AND safe).
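
    A minimal C sketch of the kind of synchronous-write test where this matters (the file name and iteration count are made up): every iteration forces a flush, so barrier and cache-flush behavior, not raw disk bandwidth, dominates the number that comes out.

        #include <fcntl.h>
        #include <stdio.h>
        #include <time.h>
        #include <unistd.h>

        int main(void)
        {
            int fd = open("sync-test.dat", O_WRONLY | O_CREAT | O_TRUNC, 0644);
            if (fd < 0) { perror("open"); return 1; }

            char buf[512] = {0};
            struct timespec t0, t1;
            clock_gettime(CLOCK_MONOTONIC, &t0);

            /* 1000 small writes, each followed by fsync(): with barriers
               every fsync() is a real cache flush to the platter; without
               them the drive may acknowledge from its volatile cache. */
            for (int i = 0; i < 1000; i++) {
                if (write(fd, buf, sizeof buf) != (ssize_t)sizeof buf) { perror("write"); return 1; }
                if (fsync(fd) != 0) { perror("fsync"); return 1; }
            }

            clock_gettime(CLOCK_MONOTONIC, &t1);
            double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
            printf("%.0f fsync'd writes/s\n", 1000.0 / secs);

            close(fd);
            return 0;
        }

    The same binary will report very different rates on ext4 with barriers (the default) versus barrier=0 or JFS; that gap is the flush cost, not one filesystem being inherently faster.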

  • highlandsun
    replied
    You should have also tested JFS. In my database tests, JFS outperforms all the others. Also, you should do a test with two HDDs (or a combination of an HDD and an SSD) where the journal for the main filesystem is stored on the second device. Again, in my tests this can make a huge difference in overall throughput.

    My results are posted here:

  • dnebdal
    replied
    Originally posted by devius View Post
    I'm 99.9% sure that's the case, simply because the HDD in this test would never be physically able to achieve such high random-write values. Even the fastest consumer HDD (the 1TB VelociRaptor) wouldn't manage even 1/10 of those 35 MB/s.
    Like jabl said, the way btrfs works (copy-on-write) effectively converts random writes into linear writes; with that in mind, 35 MB/s is fairly modest.

  • Yfrwlf
    replied
    dpkg is incredibly, horribly, painfully slow on btrfs because of its fsync() calls. It would have been nice to see that added to the benchmarks.
