
Btrfs vs. EXT4 vs. XFS vs. F2FS On Linux 3.10


  • phoronix
    started a topic Btrfs vs. EXT4 vs. XFS vs. F2FS On Linux 3.10

    Phoronix: Btrfs vs. EXT4 vs. XFS vs. F2FS On Linux 3.10

    Building upon our F2FS file-system benchmarks from earlier this week is a large comparison of four of the leading Linux file-systems at the moment: Btrfs, EXT4, XFS, and F2FS. Each of the four Linux kernel file-systems was benchmarked on the Linux 3.8, 3.9, and 3.10-rc1 kernels. The results from this large file-system comparison when backed by a solid-state drive are now published on Phoronix.

    http://www.phoronix.com/vr.php?view=18720

  • JS987
    replied
    Why was only an SSD tested? Most SSDs are unreliable in the event of power loss without a UPS.
    Another component in higher performing SSDs is a capacitor or some form of battery. These are necessary to maintain data integrity such that the data in the cache can be flushed to the drive when power is dropped; some may even hold power long enough to maintain data in the cache until power is resumed. In the case of MLC flash memory, a problem called lower page corruption can occur when MLC flash memory loses power while programming an upper page. The result is that data written previously and presumed safe can be corrupted if the memory is not supported by a super capacitor in the event of a sudden power loss. This problem does not exist with SLC flash memory.[36] Most consumer-class SSDs do not have built-in batteries or capacitors;[51] among the exceptions are the Crucial M500 series,[52] the Intel 320 series[53] and the more expensive Intel 710 series.
    http://en.wikipedia.org/wiki/Solid-state_drive



  • jwilliams
    replied
    I thought it was certain types of reads that are typically slower with COW file systems.

    For example, assume you have three large files that are stored mostly sequentially on your drive, and then you open the three files and do various random writes across the three files. With a conventional filesystem, the writes will indeed be random, overwriting old data. With a COW filesystem, the writes will be mostly sequential (in the best case for the COW filesystem), but to a different part of the drive. So the COW filesystem may potentially be faster on the random writes (or not, if it has to do a read-modify-write operation, or if it has to write more data to the drive due to other COW features).

    When you go to read back the three formerly sequential files, they will still be sequential with the conventional filesystem. But with the COW filesystem (assuming auto-defrag has not run yet), the reads will have a random component since some blocks of the files have been relocated to another part of the drive, and that will slow the reads down. This is why auto-defrag is important for btrfs. But if your filesystem is under heavy load, either auto-defrag won't run, or it will load the system down further. And I think btrfs auto-defrag only defragments smallish files. If you have a very large file, you are out of luck (which is one reason why btrfs users recommend turning off COW for large database or VM files).

    Look at how badly btrfs does on the fio test with the fileserver access pattern, which is mostly random reads: btrfs is six times slower than ext4. I wonder if reducing read_ahead_kb in the kernel would help btrfs performance on random reads. (Incidentally, Phoronix needs to update fio; version 1.57 is nearly two years old, and fio is at 2.1 now.)
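
    If anyone wants to try the readahead idea, a rough sketch (assuming the SSD shows up as /dev/sda; adjust the device name to taste):

    ```shell
    # Check the current readahead size for the device (value is in KiB;
    # the kernel default is usually 128)
    cat /sys/block/sda/queue/read_ahead_kb

    # Temporarily drop it so random reads on a fragmented btrfs
    # don't pull in data that will never be used
    echo 16 | sudo tee /sys/block/sda/queue/read_ahead_kb

    # Re-run the fio fileserver workload, compare, then revert
    echo 128 | sudo tee /sys/block/sda/queue/read_ahead_kb
    ```

    The setting is per-device and not persistent across reboots, so it's safe to experiment with.
    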

    I believe ZFS (partially) solves this problem through caches like L2ARC. And that is one of the reasons why ZFS has a reputation as a memory hog. But I suspect even ZFS would beat btrfs in most of the benchmarks in this article. It is a shame ZFS was not included.

    Here is an interesting article that lists workarounds ("tuning") that you can try to optimize Oracle to work with ZFS:

    http://www.solarisinternals.com/wiki..._for_Databases
    Last edited by jwilliams; 05-20-2013, 07:12 AM.



  • benmoran
    replied
    Originally posted by IsacDaavid View Post
    I've been reading many positive critiques of btrfs for a couple of years, and how this promising filesystem is supposed to eventually replace ext4, but after all this time I must admit those benchmarks feel a bit daunting, especially the compilation and database ones. Do you think btrfs will ever get near the speed of ext4, or is it that all those fancy features come at a performance cost?

    I'm also waiting for swap file support.
    I'm by no means a file system expert, but I think that certain types of writes will just always be slow with COW file systems. If you disable COW for certain tests in btrfs, you do see substantial speedups. I suppose if you were on top of a btrfs RAID array, you would still have some redundancy there.
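
    For anyone who wants to try it, COW is disabled per-file or per-directory with the C attribute; it only takes effect for files created after the flag is set, so set it on an empty directory first (the path here is just an example for a database datadir):

    ```shell
    # Create the directory, then mark it NOCOW; files created
    # inside it afterwards inherit the attribute
    mkdir -p /var/lib/mysql
    chattr +C /var/lib/mysql

    # Verify: lsattr shows a 'C' in the attribute column
    lsattr -d /var/lib/mysql
    ```

    Note that turning off COW also disables btrfs checksumming and compression for those files.
    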



  • IsacDaavid
    replied
    I've been reading many positive critiques of btrfs for a couple of years, and how this promising filesystem is supposed to eventually replace ext4, but after all this time I must admit those benchmarks feel a bit daunting, especially the compilation and database ones. Do you think btrfs will ever get near the speed of ext4, or is it that all those fancy features come at a performance cost?

    I'm also waiting for swap file support.



  • GreatEmerald
    replied
    Originally posted by stan View Post
    So let me see... BTRFS is slower than EXT4 pretty much everywhere, sometimes MUCH slower. All the purported features of BTRFS that are supposed to make BTRFS better are pie-in-the-sky as far as I can tell. Has anyone actually used the BTRFS snapshot thing on their Linux Desktop? Is there even a GUI for it, something that can come close to Apple's Time Machine? From a Linux Desktop user's perspective, BTRFS is worthless right now, and development resources going into it are a waste.
    Are you serious? I can't live without snapshots. And yes, there is a GUI tool for snapshots, it's called Snapper. Though it needs YaST for its GUI; otherwise, it's a CLI utility. It also does automatic snapshotting and can show the differences between files in different snapshots.
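
    For those who've never tried it, the CLI side of Snapper looks roughly like this (a sketch; the config name "root" is just the conventional choice):

    ```shell
    # Create a snapper config for the root filesystem (done once)
    snapper -c root create-config /

    # Take a manual snapshot with a description
    snapper -c root create --description "before kernel upgrade"

    # List snapshots, then diff a file between snapshots 1 and 2
    snapper -c root list
    snapper -c root diff 1..2 /etc/fstab
    ```
    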

    Originally posted by set135 View Post
    Oh, this is a Gentoo box, so that core filesystem changes *a lot*, so those 38 snapshots are not trivial.
    About Gentoo: I made an ebuild for Snapper; it's currently in Sunrise. You could probably use it; it makes creating snapshots super easy. On my Gentoo machine I made a "bmerge" script that takes a pre/post snapshot pair for every emerge process, so you can very easily do a perfect unmerge that way.
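
    Something like this wrapper would do it (a sketch of the idea, not GreatEmerald's actual script; it assumes a snapper config named "root"):

    ```shell
    #!/bin/sh
    # bmerge: wrap emerge in a snapper pre/post snapshot pair,
    # so a bad emerge can be inspected and rolled back later

    # Take the "pre" snapshot and remember its number
    PRE=$(snapper -c root create --type pre \
          --print-number --description "emerge $*")

    emerge "$@"

    # Close the pair with a "post" snapshot referencing the pre one
    snapper -c root create --type post --pre-number "$PRE"
    ```

    With the pair in place, `snapper status` and `snapper undochange` can show and revert exactly what the emerge touched.
    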



  • renkin
    replied
    Originally posted by set135 View Post
    Just to note, I have been using btrfs as an alternate backup for my core system (about 80g) for over a year. I rsync to the partition and snapshot that twice a month. Currently, using compression, the btrfs partition is up to 120g and contains 38 snapshots named by date. I have already found this 'time machine' useful for recovering data. I also have another btrfs partition mostly full of large video files. This machine has been killed by brownouts many times. So far, everything is working well. I still use ext4 for most of my filesystems, but I consider btrfs worthy of consideration, particularly if I were going to use RAID or volume management. Oh, this is a Gentoo box, so the core filesystem changes *a lot*; those 38 snapshots are not trivial.
    You might look into experimenting with btrfs send/receive. I imagine it'd be far more efficient than using rsync.
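
    For reference, incremental send/receive looks something like this (a sketch; the paths are examples, and the snapshots have to be read-only for send to accept them):

    ```shell
    # Initial full transfer: take a read-only snapshot and send it
    btrfs subvolume snapshot -r /data /data/snap-2013-05-01
    btrfs send /data/snap-2013-05-01 | btrfs receive /backup

    # Later: send only the delta relative to the previous snapshot
    btrfs subvolume snapshot -r /data /data/snap-2013-05-15
    btrfs send -p /data/snap-2013-05-01 /data/snap-2013-05-15 \
        | btrfs receive /backup
    ```

    Unlike rsync, the delta is computed from btrfs metadata rather than by scanning and comparing every file.
    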



  • a user
    replied
    Originally posted by Ericg View Post
    True, wasn't thinking about the fact that he just does zero-filled files instead of random-filled
    In the case of a randomly-filled file, it would be better for btrfs WITHOUT compression!

    Purely random data cannot be compressed.



  • set135
    replied
    Just to note, I have been using btrfs as an alternate backup for my core system (about 80g) for over a year. I rsync to the partition and snapshot that twice a month. Currently, using compression, the btrfs partition is up to 120g and contains 38 snapshots named by date. I have already found this 'time machine' useful for recovering data. I also have another btrfs partition mostly full of large video files. This machine has been killed by brownouts many times. So far, everything is working well. I still use ext4 for most of my filesystems, but I consider btrfs worthy of consideration, particularly if I were going to use RAID or volume management. Oh, this is a Gentoo box, so the core filesystem changes *a lot*; those 38 snapshots are not trivial.
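
    For anyone wanting a similar setup, the workflow is roughly this (a sketch; the mount point and subvolume names are examples):

    ```shell
    # Sync the live system into a subvolume on the btrfs backup partition
    rsync -aHAX --delete / /mnt/backup/current/

    # Freeze that state as a read-only snapshot named by date
    btrfs subvolume snapshot -r /mnt/backup/current \
        /mnt/backup/snap-$(date +%Y-%m-%d)
    ```

    Thanks to COW, each snapshot only costs the space of what changed since the last rsync, which is how 38 of them fit in 120g.
    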



  • benmoran
    replied
    I've been back on btrfs since the 3.8 Kernel, and don't think I'll be switching to anything else. It's fairly quick with the recent updates, and I use the snapshotting ALL the time.

