Benchmarks Of ZFS-FUSE On Linux Against EXT4, Btrfs


  • locovaca
    replied
    Originally posted by edogawaconan View Post
    guess what - it is doable with zfs. Just because your filesystem of choice can't do it, it doesn't mean that it is impossible to do.
    "It's doable" doesn't mean it's good practice. It's doable to strip PAM out of Linux and run everything as root, too. Making and trusting backups of open files is very bad business: there is no guarantee that the application has those files in any sort of usable state.



  • edogawaconan
    replied
    Originally posted by locovaca View Post
    What's your business case for a transfer that is going to take 30 minutes but may have been altered from when you started? If you're looking to back up something like a transactional database, making copies of open files is not the way to go.
    guess what - it is doable with zfs. Just because your filesystem of choice can't do it, it doesn't mean that it is impossible to do.
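    For reference, the usual ZFS approach is an atomic snapshot followed by a send of that snapshot, so the backup reflects one instant regardless of open files. A minimal sketch, in which the pool/dataset names (tank/data, pool/backups/data) and the host "backuphost" are all hypothetical:

    ```shell
    # Sketch only -- dataset and host names are made up for illustration.
    # The snapshot is atomic: the sent stream is a single point-in-time view.
    SNAP="tank/data@backup-$(date +%Y%m%d-%H%M)"
    zfs snapshot "$SNAP"                              # instantaneous, point-in-time
    zfs send "$SNAP" | ssh backuphost zfs receive pool/backups/data
    zfs destroy "$SNAP"                               # drop the snapshot when done
    ```

    Note that this gives crash-consistency, not application-consistency, which is locovaca's point: an application with open files may still need to be quiesced (or support recovery, as transactional databases do) for the copy to be usable.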



  • FeRD_NYC
    replied
    Another small typo in the article. In the text below the IOzone 8GB write test chart (the second chart on page 4), judging by the image it's meant to read:
    When carrying out an 8GB write test with a 64Kb block size in IOzone, EXT4 and Btrfs were 1.64~1.67x faster than ZFS-FUSE.
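    For anyone wanting to reproduce a run like that, an IOzone invocation along these lines should match the test described; the target path /mnt/test is hypothetical:

    ```shell
    # Sketch only -- assumes iozone is installed and /mnt/test is the
    # filesystem under test.
    # -i 0 : run the write/rewrite test only
    # -s 8g: 8GB file size      -r 64k: 64KB record size
    # -f   : place the test file on the target filesystem
    iozone -i 0 -s 8g -r 64k -f /mnt/test/iozone.tmp

    # An 8GB file at 64KB records means 131072 sequential record writes:
    echo $((8 * 1024 * 1024 / 64))   # → 131072
    ```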



  • allquixotic
    replied
    Oh, I just wanted to chime in and say that, for my own dedicated server, I'm running a hardware RAID controller (Adaptec 5405 if you're interested) with four 1.5TB Seagate SATA disks, on Ubuntu 10.04.1 Server. I'm using the XFS filesystem due to the way it is tuned for smooth I/O performance and parallel access; I don't need the raw throughput of ext4, but I do need data safety and "fair" scheduling of I/O across processes, two things XFS is very good at.

    I was using ext4 on Software RAID5 before, but I realized my mistake when I was able to quadruple my write performance by moving to Hardware RAID10 and XFS.

    I don't think I will be upgrading anything as low-level as the filesystem on my server for at least a year (I tolerated the ext4 for a year before I tossed it), but if I ever do, I will definitely have to re-evaluate my options and see if btrfs has matured or if native ZFS on Linux is a reality.



  • allquixotic
    replied
    Michael, thanks for the tests. While I still don't think these are really "benchmarks", they certainly provide interesting real-world data, which is what we want, after all. Very good job overall; it must have taken significant effort to get these tests to run as well as they did.

    I'll echo others' concerns that the tests are still being run on a single-disk configuration, meaning they are probably not informative for those seriously considering btrfs or ZFS for server use. But for desktop users, these tests are indeed meaningful.

    I like seeing ext4 being the performance leader almost always, and this is a good justification for using it on desktops. The filesystem-related data loss rates on ext4 are down low enough these days on 2.6.34+ that most desktop users can use it and get the performance benefit. Hopefully said desktop users don't keep any really important data on their computer without backing it up somewhere, like their email or a thumb drive -- 95%+ of desktop computers don't run a redundant RAID array, so that means you are always vulnerable to hardware failure, let alone software failure. So backup backup backup, etc., and then use your awesome ext4 performance to get your work done.

    I do wish ext4 were COW and supported snapshots, but I have a feeling that would also kill some of the places where its performance excels. You can't have it all. Or, who knows, maybe Ted will come out with ext5 that combines all the advantages of ext4 with COW and snapshotting....



  • Wyatt
    replied
    Originally posted by RealNC View Post
    Because they don't have FUSE implementations.
    Ah, I see. For some reason I was given the impression that you can use FUSE with pretty much any FS and never bothered to verify (I don't exactly have any use for it). It just seemed like a quick way to "level the playing field".

    Learned something new.



  • RealNC
    replied
    Originally posted by Wyatt View Post
    Okay, so why aren't ext4 and Btrfs being tested through FUSE too?
    Because they don't have FUSE implementations.



  • Wyatt
    replied
    Okay, so why aren't ext4 and Btrfs being tested through FUSE too?



  • smitty3268
    replied
    Making EXT4 a COW filesystem would have been seriously stupid.

    The whole point of EXT4 was to make incremental changes on top of EXT3 -- changing to a COW design would have required a rewrite from the ground up, which is exactly the point of BTRFS.



  • krogy
    replied
    Originally posted by andrnils View Post
    While it's interesting to see how the different FSes perform, there is so much more to it than speed, imho.

    Like the fact that ext4 will lose your data (it has done so, and no one will trust it for another 5 years). And btrfs is still a bit raw, but has potential; it still needs a few years' worth of enterprise usage to be considered trustworthy.

    It's amazing that Linux has so many filesystems to choose from, but not one really good choice.

    How about this test for a more "real world" example:

    Given /some/dir to be backed up at regular intervals, how much work is involved to do that for the different FSes? To spice things up, the backup has to be of the state of that dir at exactly 1pm.
    ext4 + LVM2 on top of your RAID configuration of choice, and you are done, sir. This approach also protects you from screw-ups in the filesystem itself.
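    The LVM2 route can be sketched as follows; the volume group and LV names (vg0, data), snapshot size, and paths are all hypothetical. Scheduled from cron at 13:00 ("0 13 * * *"), it captures the state of /some/dir at exactly 1pm:

    ```shell
    # Sketch only -- VG/LV names, snapshot size, and paths are made up.
    lvcreate --snapshot --size 5G --name data-snap /dev/vg0/data
    mkdir -p /mnt/snap
    mount -o ro /dev/vg0/data-snap /mnt/snap          # frozen 1pm view of the LV
    rsync -a /mnt/snap/some/dir/ /backup/some-dir-$(date +%Y%m%d)/
    umount /mnt/snap
    lvremove -f /dev/vg0/data-snap                    # discard the snapshot
    ```

    The snapshot's COW space (--size) only needs to absorb the writes that land on the origin LV while the backup runs, so it can be much smaller than the volume itself.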

