
Btrfs vs. EXT4 vs. XFS vs. F2FS On Linux 3.10


  • JS987
    replied
    Why was only an SSD tested? Most SSDs are unreliable in the case of power loss without a UPS.
    Another component in higher performing SSDs is a capacitor or some form of battery. These are necessary to maintain data integrity such that the data in the cache can be flushed to the drive when power is dropped; some may even hold power long enough to maintain data in the cache until power is resumed. In the case of MLC flash memory, a problem called lower page corruption can occur when MLC flash memory loses power while programming an upper page. The result is that data written previously and presumed safe can be corrupted if the memory is not supported by a super capacitor in the event of a sudden power loss. This problem does not exist with SLC flash memory.[36] Most consumer-class SSDs do not have built-in batteries or capacitors;[51] among the exceptions are the Crucial M500 series,[52] the Intel 320 series[53] and the more expensive Intel 710 series.
    http://en.wikipedia.org/wiki/Solid-state_drive



  • jwilliams
    replied
    I thought it was certain types of reads that are typically slower with COW file systems.

    For example, assume you have three large files that are stored mostly sequentially on your drive, and then you open the three files and do various random writes across the three files. With a conventional filesystem, the writes will indeed be random, overwriting old data. With a COW filesystem, the writes will be mostly sequential (in the best case for the COW filesystem), but to a different part of the drive. So the COW filesystem may potentially be faster on the random writes (or not, if it has to do a read-modify-write operation, or if it has to write more data to the drive due to other COW features).

    When you go to read back the three formerly sequential files, they will still be sequential with the conventional filesystem. But with the COW filesystem (assuming auto-defrag has not run yet), the reads will have a random component since some blocks of the files have been relocated to another part of the drive, and that will slow the reads down. This is why auto-defrag is important for btrfs. But if your filesystem is under heavy load, either auto-defrag won't run, or it will load the system down further. And I think btrfs auto-defrag only defragments smallish files. If you have a very large file, you are out of luck (which is one reason why btrfs users recommend turning off COW for large database or VM files).
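    A rough way to observe this relocation in practice is to write a file sequentially, rewrite a few scattered blocks in place, and compare extent counts with filefrag (from e2fsprogs). The mount point here is just an example; this needs to run on an actual btrfs filesystem:

```shell
# Write a 64 MiB file sequentially (example path on a btrfs mount).
dd if=/dev/urandom of=/mnt/btrfs/testfile bs=1M count=64 conv=fsync

# Extent count before any in-place rewrites -- usually very low.
filefrag /mnt/btrfs/testfile

# Rewrite a handful of 4 KiB blocks at scattered offsets. On a COW
# filesystem each rewrite lands in newly allocated space elsewhere.
for off in 3 17 29 41 55; do
    dd if=/dev/urandom of=/mnt/btrfs/testfile bs=4k count=1 \
       seek=$((off * 256)) conv=notrunc,fsync
done

# The extent count typically grows: the once-sequential file is now
# fragmented, so sequential reads gain a random component.
filefrag /mnt/btrfs/testfile
```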

    Look at how badly btrfs does on the fio test with the fileserver access pattern, which is mostly random reads: btrfs is six times slower than ext4. I wonder if reducing read_ahead_kb in the kernel would help btrfs performance on random reads. (Incidentally, Phoronix needs to update fio; version 1.57 is nearly two years old, and fio is at 2.1 now.)
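    Readahead is cheap to experiment with, so that theory is easy to test. Assuming the filesystem sits on /dev/sda (substitute your own device), something like:

```shell
# Show the current readahead window (the kernel default is often 128 KiB).
cat /sys/block/sda/queue/read_ahead_kb

# Temporarily shrink it. For random-read workloads on an SSD a smaller
# window may reduce wasted reads; requires root and resets on reboot.
echo 8 > /sys/block/sda/queue/read_ahead_kb
```

    Whether it actually helps btrfs here is an open question; it's just a quick knob to turn before re-running the benchmark.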

    I believe ZFS (partially) solves this problem through caches like L2ARC. And that is one of the reasons why ZFS has a reputation as a memory hog. But I suspect even ZFS would beat btrfs in most of the benchmarks in this article. It is a shame ZFS was not included.

    Here is an interesting article that lists workarounds ("tuning") that you can try to optimize Oracle to work with ZFS:

    http://www.solarisinternals.com/wiki..._for_Databases
    Last edited by jwilliams; 05-20-2013, 07:12 AM.



  • benmoran
    replied
    Originally posted by IsacDaavid View Post
    I've been reading many positive reviews of btrfs for a couple of years, and how this promising filesystem is supposed to eventually replace ext4, but after all this time I must admit those benchmarks feel a bit daunting, especially the compilation and database ones. Do you think btrfs will ever get near the speed of ext4, or is it that all those fancy features come at a performance cost?

    I'm also waiting for swap file support.
    I'm by no means a file system expert, but I think that certain types of writes will just always be slow with COW file systems. If you disable COW for certain tests in btrfs, you do see substantial speedups. I suppose if you were on top of a btrfs RAID array, you would still have some redundancy there.
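    For reference, COW can be disabled per-file or per-directory with chattr's no-COW attribute. The directory name here is just an example; the attribute only takes effect on files created after it is set:

```shell
# Create a directory for database/VM files and mark it NOCOW so that
# new files inside inherit the attribute.
mkdir /var/lib/mysql-nocow
chattr +C /var/lib/mysql-nocow

# Verify: lsattr shows 'C' when the no-COW attribute is set.
lsattr -d /var/lib/mysql-nocow
```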



  • IsacDaavid
    replied
    I've been reading many positive reviews of btrfs for a couple of years, and how this promising filesystem is supposed to eventually replace ext4, but after all this time I must admit those benchmarks feel a bit daunting, especially the compilation and database ones. Do you think btrfs will ever get near the speed of ext4, or is it that all those fancy features come at a performance cost?

    I'm also waiting for swap file support.



  • GreatEmerald
    replied
    Originally posted by stan View Post
    So let me see... BTRFS is slower than EXT4 pretty much everywhere, sometimes MUCH slower. All the purported features of BTRFS that are supposed to make BTRFS better are pie-in-the-sky as far as I can tell. Has anyone actually used the BTRFS snapshot thing on their Linux Desktop? Is there even a GUI for it, something that can come close to Apple's Time Machine? From a Linux Desktop user's perspective, BTRFS is worthless right now, and development resources going into it are a waste.
    Are you serious? I can't live without snapshots. And yes, there is a GUI tool for snapshots; it's called Snapper. Though it needs YaST for its GUI; otherwise it's a CLI utility. It also does automatic snapshotting and can show the differences between files across snapshots.

    Originally posted by set135 View Post
    Oh, this is a Gentoo box, so that core filesystem changes *a lot*, so those 38 snapshots are not trivial.
    About Gentoo: I made an ebuild for Snapper; it's currently on Sunrise. You could probably use it; it makes creating snapshots super easy. On my Gentoo machine I made a "bmerge" script that takes a pre/post snapshot pair for every emerge, so you can very easily do a perfect unmerge that way.
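    A minimal sketch of such a wrapper, assuming a Snapper config named "root" already exists (the script name and config name are just examples, not the actual bmerge script):

```shell
#!/bin/sh
# bmerge-style wrapper: snapshot before and after an emerge so the
# whole merge can be inspected or rolled back as a pre/post pair.
PRE=$(snapper -c root create --type pre --print-number \
      --description "emerge $*")
emerge "$@"
snapper -c root create --type post --pre-number "$PRE" \
        --description "emerge $*"
```

    Snapper links the two snapshots together, so tools like `snapper status` can then show exactly which files the merge touched.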



  • renkin
    replied
    Originally posted by set135 View Post
    Just to note, I have been using btrfs as an alternate backup for my core system (about 80 GB) for over a year. I rsync to the partition and snapshot it twice a month. Currently, using compression, the btrfs partition is up to 120 GB and contains 38 snapshots named by date. I have already found this 'time machine' useful for recovering data. I also have another btrfs partition mostly full of large video files. This machine has been killed by brownouts many times. So far, everything is working well. I still use ext4 for most of my filesystems, but I consider btrfs worthy of consideration, particularly if I were going to use RAID or volume management. Oh, this is a Gentoo box, so the core filesystem changes *a lot*; those 38 snapshots are not trivial.
    You might look into experimenting with btrfs send/receive. I imagine it'd be way more efficient than using rsync.
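    The basic shape of that workflow, with example paths and date-named read-only snapshots (send/receive needs both sides to be btrfs):

```shell
# Initial full transfer: snapshot the data read-only, then stream it
# to the backup filesystem.
btrfs subvolume snapshot -r /data /data/snap-2013-05-01
btrfs send /data/snap-2013-05-01 | btrfs receive /backup

# Later backups send only the delta relative to the previous snapshot
# (-p names the parent), which is much cheaper than a full rsync scan.
btrfs subvolume snapshot -r /data /data/snap-2013-05-15
btrfs send -p /data/snap-2013-05-01 /data/snap-2013-05-15 \
    | btrfs receive /backup
```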



  • a user
    replied
    Originally posted by Ericg View Post
    True, wasn't thinking about the fact that he just does zero-filled files instead of random-filled
    In the case of a randomly filled file, it would be better for btrfs WITHOUT compression!

    Purely random data cannot be compressed.



  • set135
    replied
    Just to note, I have been using btrfs as an alternate backup for my core system (about 80 GB) for over a year. I rsync to the partition and snapshot it twice a month. Currently, using compression, the btrfs partition is up to 120 GB and contains 38 snapshots named by date. I have already found this 'time machine' useful for recovering data. I also have another btrfs partition mostly full of large video files. This machine has been killed by brownouts many times. So far, everything is working well. I still use ext4 for most of my filesystems, but I consider btrfs worthy of consideration, particularly if I were going to use RAID or volume management. Oh, this is a Gentoo box, so the core filesystem changes *a lot*; those 38 snapshots are not trivial.
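    That routine boils down to something like the following sketch (paths are examples; /backup is assumed to be a btrfs mount, and /backup/current a subvolume created beforehand with `btrfs subvolume create`):

```shell
#!/bin/sh
# rsync the live system into a btrfs subvolume, then freeze the result
# as a read-only snapshot named by date -- the 'time machine' entries.
rsync -aHAX --delete / /backup/current/
btrfs subvolume snapshot -r /backup/current "/backup/$(date +%F)"
```

    With compression mounted on /backup, each dated snapshot shares unchanged blocks with its neighbours, which is how 38 snapshots of an 80 GB system can fit in about 120 GB.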



  • benmoran
    replied
    I've been back on btrfs since the 3.8 Kernel, and don't think I'll be switching to anything else. It's fairly quick with the recent updates, and I use the snapshotting ALL the time.



  • dalingrin
    replied
    A lot of people worry about compression and the latency or CPU load that might come with it. However, one of the most CPU-intensive workloads I run is compiling very large projects (Chromium, Android, etc.); these keep CPU usage pegged at 100% on all cores nearly the whole time, yet it is faster with btrfs compression than without. One caveat: you need to use LZO rather than zlib. It is slightly faster with compression on my SSD, and much faster if the compilation is done on a rotational hard disk.
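    For anyone wanting to try it, LZO compression is just a mount option (device and mount point here are examples):

```shell
# Mount with LZO compression, which is much lighter on the CPU than
# the zlib default.
mount -o compress=lzo /dev/sda2 /home

# Or make it persistent with an /etc/fstab entry:
# /dev/sda2  /home  btrfs  compress=lzo,noatime  0  0
```

    Only data written after the option is enabled gets compressed; existing files stay as they are until rewritten or defragmented.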

    Originally posted by Ericg View Post
    Features that make btrfs better than Ext4...

    1) Built-in compression
    2) Deduplication (being worked on)
    3) Built-in volume manager
    4) Ability to detect even single-bit corruption
    5) Integrity checking that ext4 can never even DREAM of getting
    6) SSD optimizations
    7) Snapshotting
    8) Online resizing
    9) Online defragging
    10) Almost all RAID levels supported in the filesystem

    The only one that is even REMOTELY "pie in the sky" is dedup, and it's in progress. As far as your comment about compression: if you have a modern CPU then the compression takes less than a millisecond except for gigantic files, which would most likely be video files... which it auto-skips on compression anyway, so you're getting zero latency. Also the data takes up less space on disk (important for early SSD adopters like me who only have 128GB SSDs in their laptops), and it can be written TO disk more quickly since more data can fit into the buffer.

    The snapshotting feature doesn't have a GUI yet, no, but its tech is stable and it's being used on SUSE and Fedora to integrate into update managers (the same way Windows does updates: create a snapshot before the update, update, and if it breaks, roll back to that pre-update snapshot).

    Is btrfs slower than ext4? Possibly. Though I'm not even positive it IS, since we all know Michael's tests don't measure real-world performance (writing zeros to files doesn't count), but I don't feel it or notice it on my laptop, and I did a re-install to switch over to btrfs. The development efforts DEFINITELY aren't a waste, and I am looking forward to the day that either btrfs or Tux3 replaces ext4.

    Quite honestly, Stan, you just look like a troll at this point, and a really bad one at that.

    I don't think he's a troll. I think his view is actually the current popular consensus, even if I disagree with it.

