
Benchmarks Of ZFS-FUSE On Linux Against EXT4, Btrfs


  • nutznboltz
    replied
    I'd like to see a benchmark that times how long it takes to remove a disk from a ZFS mirror.

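    For what it's worth, a rough sketch of how you could time it yourself (the pool and device names below are made up): zpool detach is what pulls a disk out of a mirror, and zpool attach puts it back and kicks off a resilver.

        # Assume a two-way mirror named "tank" built from sdb and sdc.
        zpool status tank

        # Time how long it takes to drop one side of the mirror.
        time zpool detach tank /dev/sdc

        # Re-attach it; the command returns quickly and the resilver
        # then runs in the background.
        time zpool attach tank /dev/sdb /dev/sdc
        zpool status tank    # watch the resilver progress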



  • devsk
    replied
    Originally posted by marakaid View Post
    Easy question about the Fuse ZFS:

    Is it good enough for a NAS? I care about data integrity, and could live with speeds of 50 MB/s in RaidZ modes.
    In my testing, sequential read speeds are almost at platter speed. Sequential write speeds are substantially lower (about half of platter speed). But with a RAIDZ of 3 drives, you will be able to beat 50 MB/s sequential write speed even with low-end 7200 rpm drives.

    So, yeah, zfs-fuse is ideal for your requirement. You have to throw enough RAM at it (512 MB is not enough; I have mine at 1.5 GB).

    Data integrity is super awesome! I have already seen a sample of it when one of my older files developed a bad block (no idea if the drive introduced the error or what the cause was).
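    If it helps, this is roughly what such a setup looks like (device and pool names here are placeholders, not my real ones): a 3-disk raidz plus a periodic scrub so the checksums actually get verified.

        # Three-disk raidz pool; any single drive can die without data loss.
        zpool create nas raidz /dev/sdb /dev/sdc /dev/sdd

        # A filesystem for the NAS share.
        zfs create nas/share

        # Walk every block and verify its checksum (e.g. weekly from cron).
        zpool scrub nas
        zpool status nas    # shows scrub progress and any checksum errors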



  • marakaid
    replied
    Easy question about the Fuse ZFS:

    Is it good enough for a NAS? I care about data integrity, and could live with speeds of 50 MB/s in RaidZ modes.



  • HisDudeness
    replied
    Originally posted by baryluk View Post
    Very interesting article. This misaligned read could be because he was using a disk slice/partition, not the whole (recommended) disk, and the partition could possibly be unaligned. I hope a future zpool will allow this parameter to be configured at creation time, without needing to patch it.

    I will test whether my ZFS is working correctly on one of my 2TB WD *EARS disks.
    From what I got from ZFS, it will use the block size reported by the device, and at least for the WD EARS HDDs that is 512 bytes. So even if you align your partitions perfectly to the 4 KB boundaries (or use the whole HDD), you'll get <4 KB writes. In those cases the drive has to read the 4 KB block first before writing the changed part.
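    You can see both sides of the problem from the shell (device and pool names below are just examples, and zdb's output format may differ between ports): the drive advertises 512-byte sectors, and zdb shows which ashift the pool was actually created with.

        # What the WD *EARS drive reports: logical and physical sector size.
        blockdev --getss /dev/sdb     # 512 on these drives
        blockdev --getpbsz /dev/sdb   # also 512, even though the media uses 4 KB sectors

        # What ashift the pool actually got (9 = 512 B, 12 = 4 KB).
        zdb -C tank | grep ashift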



  • devsk
    replied
    Originally posted by kraftman View Post
    If this was Ext4's fault, and if this happened in an enterprise system (which it didn't).

    Damn troll. Ext3, Ext4, XFS are great file systems. And no, it's not amazing, but it's something natural, because it's an operating system which is probably present in every environment. What's the good choice in your opinion?
    Troll is an easy word! There is no doubt extX and XFS are great file systems. But they are so 10 years ago! Do they stack up against ZFS or even against Btrfs in features? Where are the consistent snapshots? Where are the data checksums? Where is the built-in compression and RAID support?

    So, yes, they are great file systems, but not for today's storage requirements (checksums are not optional). So calling the guy a troll is a trollish comment in my book.
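    A couple of one-liners to show what I mean (pool/dataset names made up); none of this has an extX/XFS equivalent without extra tooling on top.

        zfs set compression=gzip tank/home      # transparent compression
        zfs snapshot tank/home@before-upgrade   # consistent, instant snapshot
        zfs get checksum,compression tank/home  # checksums are on by default
        zpool scrub tank                        # verify every block end to end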



  • baryluk
    replied
    Originally posted by waucka View Post
    Snapshots only help you recover from "oops, I accidentally deleted a file", not "uh oh, the hard disk just failed".
    But snapshots also help you back up a file system (make a snapshot, then back up that snapshot somewhere else). It is just like switching off the computer and replicating it. For databases and many other workloads it is a perfectly good strategy. Just copying data while the file system is live and programs are constantly changing files would create inconsistencies. An atomic snapshot is a prerequisite for a good and correct backup (it isn't a replacement, but it helps a lot).
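    Concretely, the kind of thing I mean (dataset and file names are invented): snapshot atomically, then stream that frozen state somewhere else while the applications keep running.

        # Atomic, point-in-time snapshot of the live dataset.
        zfs snapshot tank/db@nightly

        # Serialize the snapshot to a file (or pipe it over ssh to another box).
        zfs send tank/db@nightly | gzip > /backup/db-nightly.zfs.gz

        # Incremental follow-up the next night: only the changed blocks.
        zfs snapshot tank/db@nightly2
        zfs send -i tank/db@nightly tank/db@nightly2 > /backup/db-incr.zfs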



  • baryluk
    replied
    Originally posted by HisDudeness View Post
    Since your SSD probably uses 4 KB blocks, have you considered doing the benchmark for ZFS with an ashift of 12 (instead of 9, i.e. 0.5 KB blocks)? For my damn WD drives this was a serious performance boost. http://www.solarismen.de/archives/2010/08/08.html
    Very interesting article. This misaligned read could be because he was using a disk slice/partition, not the whole (recommended) disk, and the partition could possibly be unaligned. I hope a future zpool will allow this parameter to be configured at creation time, without needing to patch it.

    I will test whether my ZFS is working correctly on one of my 2TB WD *EARS disks.



  • HisDudeness
    replied
    Since your SSD probably uses 4 KB blocks, have you considered doing the benchmark for ZFS with an ashift of 12 (instead of 9, i.e. 0.5 KB blocks)? For my damn WD drives this was a serious performance boost. http://www.solarismen.de/archives/2010/08/08.html
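    For reference, on builds where zpool accepts the property directly (newer ZFS-on-Linux ports do; zfs-fuse at the time needed the patch described in the article above), pool creation looks roughly like this, with a made-up pool name:

        # Force 4 KB allocation units instead of the 512 B the drive reports.
        zpool create -o ashift=12 tank /dev/sdb
        zdb -C tank | grep ashift    # should print ashift: 12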



  • Xilanaz
    replied
    Originally posted by locovaca View Post
    It is poor data management. Transaction logs should be the last line of defense against failure, not the first. Timely, application-specific backups should always be your first line of defense. ZFS snapshots, in this case, leave the database engine's emergency recovery process as your only line of defense.
    Very well put. For regular backups you use the tools provided by the database engine you use: online backups, incremental backups, after images, etc. Using a snapshot as a regular backup and then relying on the transaction backout to work is simply bad practice, and, like locovaca, I would be fired within 12 hours if I did that.
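    A sketch of the difference, assuming a PostgreSQL database (database and dataset names are invented): the first command is the application-level backup the posters mean; the snapshot is only a secondary, fast rollback point.

        # Application-level backup: consistent by construction, restorable on its own.
        pg_dump -Fc mydb > /backup/mydb.dump

        # A ZFS snapshot is still handy as an extra safety net, but restoring from
        # it alone means trusting the engine's crash recovery to replay the logs.
        zfs snapshot tank/pgdata@pre-maintenance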



  • p-static
    replied
    Why does KQ Infotech get the press for this article? I thought they were just ripping off LLNL?


