A Quick Look At EXT4 vs. ZFS Performance On Ubuntu 19.10 With An NVMe SSD


  • deusexmachina
    replied
    Isn't ZFS the highest performance Linux filesystem that has bit rot protection? Damn... I want them both (highest performance & anti-bit rot)!



  • deusexmachina
    replied
    Originally posted by jrch2k8 View Post

    Read up on ZFS and you will realize this comparison is wrong on several fronts and is no real basis for judging its speed in any form.

    This test uses ZFS with its defaults, which is something no one wanting to use ZFS should ever do. ZFS is meant to be tuned manually for each volume's workload, and its behaviour also depends on the number of drives, the bus, the scheduler, the amount of RAM, etc.

    Yet for whatever reason Michael keeps including ZFS in these tests, again and again, under conditions no one would ever use, pitted against simpler filesystems that focus only on speed and will always win a pure speed contest, especially against a ZFS setup as poorly configured as this one.

    Please, Michael, stop including ZFS in these benchmarks if you don't have the time to set it up properly, or at least passably. You are only hurting ZFS, because the average Phoronix reader doesn't have the context to understand why these results are so poor, why this setup is so wrong, and why it will never show the real-world performance or benefits of using ZFS in the first place.

    All most readers here take away is "ext4 is faster, hence ZFS is broken or buggy", which is about as far from the truth as you can get if you use ZFS properly. (Spoiler: ZoL is among the fastest ZFS implementations and is very much enterprise-ready as well.)
    Can you be more specific in your recommendations for ZFS configuration?



  • S.Pam
    replied
    Originally posted by cjcox View Post

    Not saying bit rot isn't a problem, just noting that 99% of all large-scale enterprise storage subsystems don't deal with it effectively. I think you just like seeing Btrfs lose in (most) every benchmark. But if you don't sleep well thinking about bit rot, you make a very valid point.
    I personally do a lot of photography and can say from experience that bit rot is real. It's not only Btrfs that can protect against it: so can ZFS, and ReFS on Windows.

    Some reading if you like: https://arstechnica.com/information-...n-filesystems/
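
    As a loose illustration of why checksumming filesystems matter here: a scrub boils down to recording checksums and re-verifying them later. The sketch below (file and directory names are made up) does the same thing by hand with sha256sums, simulates a single corrupted byte, and catches it:

```shell
# Manual stand-in for what a ZFS/Btrfs scrub does in principle:
# checksum the data once, then re-verify to catch silent corruption.
mkdir -p /tmp/bitrot-demo && cd /tmp/bitrot-demo
echo "important photo data" > photo.raw
sha256sum photo.raw > manifest.sha256        # baseline integrity record
# Simulate bit rot: overwrite one byte in place without truncating the file
printf 'X' | dd of=photo.raw bs=1 seek=3 count=1 conv=notrunc 2>/dev/null
sha256sum -c manifest.sha256 || echo "bit rot detected"
```

    Note that a plain backup of photo.raw taken after the corruption would faithfully preserve the rotten copy, which is S.Pam's point about backups not helping.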



  • ernstp
    replied
    Originally posted by cynic View Post
    would be nice to see an updated comparison of zfs against btrfs!
    Yeah, this is the interesting benchmark. Though it's also tricky, because you can tune both of them so much...



  • cjcox
    replied
    Originally posted by Spam View Post

    Mostly agree. Though ext4, LVM, and md RAID do not detect or protect against bit rot. So if your data is valuable... Btrfs or ZFS is the way to go. Backups do not help against bit rot, since you usually don't detect it before the rot has been copied into the backups.
    Not saying bit rot isn't a problem, just noting that 99% of all large-scale enterprise storage subsystems don't deal with it effectively. I think you just like seeing Btrfs lose in (most) every benchmark. But if you don't sleep well thinking about bit rot, you make a very valid point.



  • reavertm
    replied
    Originally posted by jrch2k8 View Post
    All most readers here take away is "ext4 is faster, hence ZFS is broken or buggy", which is about as far from the truth as you can get if you use ZFS properly.
    Good, so it will discourage users who are not truly interested in it anyway.



  • birdie
    replied
    I for one don't use filesystems that are only supported by Linux; it makes restoring data from them nearly impossible. Ext4, on the other hand, is well supported by R-Studio.



  • jrch2k8
    replied
    Originally posted by smartalgorithm View Post
    XFS always worked well for me... plain, simple and fast... ZFS looks cool but with a very big performance penalty for now...
    Read up on ZFS and you will realize this comparison is wrong on several fronts and is no real basis for judging its speed in any form.

    This test uses ZFS with its defaults, which is something no one wanting to use ZFS should ever do. ZFS is meant to be tuned manually for each volume's workload, and its behaviour also depends on the number of drives, the bus, the scheduler, the amount of RAM, etc.

    Yet for whatever reason Michael keeps including ZFS in these tests, again and again, under conditions no one would ever use, pitted against simpler filesystems that focus only on speed and will always win a pure speed contest, especially against a ZFS setup as poorly configured as this one.

    Please, Michael, stop including ZFS in these benchmarks if you don't have the time to set it up properly, or at least passably. You are only hurting ZFS, because the average Phoronix reader doesn't have the context to understand why these results are so poor, why this setup is so wrong, and why it will never show the real-world performance or benefits of using ZFS in the first place.

    All most readers here take away is "ext4 is faster, hence ZFS is broken or buggy", which is about as far from the truth as you can get if you use ZFS properly. (Spoiler: ZoL is among the fastest ZFS implementations and is very much enterprise-ready as well.)
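
    For readers wondering what "setting it properly" might look like, here is a minimal, hypothetical sketch of the kind of per-workload tuning being described. The device, pool, and dataset names are invented, and this is not necessarily what jrch2k8 has in mind, but the properties themselves are standard OpenZFS ones:

```shell
# Hypothetical single-disk pool mirroring the article's test setup;
# device, pool, and dataset names are made up for illustration.
zpool create tank /dev/nvme0n1
zfs set compression=lz4 tank              # cheap inline compression, often a net win
zfs set atime=off tank                    # skip access-time writes on every read
zfs set xattr=sa tank                     # store xattrs in the dnode (Linux/ZoL)
zfs create -o recordsize=16K tank/db      # small records to match database pages
zfs create -o recordsize=1M tank/media    # large records for sequential workloads
```

    The point being made is that defaults like recordsize=128K are a compromise, and benchmarks that hit one specific I/O pattern will punish any filesystem left on its compromise settings.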



  • stormcrow
    replied
    Originally posted by Vistaus View Post

    On SSDs too? 'Cause I'm still using ext4, but I'm considering switching to XFS on my next reinstall (unless there are conversion tools to do it right now?) if it's also a good fit for SSDs.
    I have used it on SSDs, and the performance for my particular desktop use is comparable: I didn't notice any real difference between XFS and ext4. There were, and AFAIK still are, some gotchas, though. One is that Ubuntu's GRUB traditionally doesn't support booting from XFS; I don't know if that has changed recently. The other is that, for some strange reason, some of my GOG games (I don't remember which ones) would inexplicably crash when the drive they were installed on used XFS. No clue why, but after changing it to ext4 all was fine.

    Keeping those couple of caveats in mind, I don't see a reason not to use XFS over ext4 on a desktop. Use cases and performance vary, however.



  • S.Pam
    replied
    Originally posted by cjcox View Post
    Just me, but IMHO, for ZFS you really need its multiple-disk aspect. Otherwise, you can do most everything else with LVM. ZFS can be looked at as a volume manager, but it was meant to be that plus a replacement for hardware RAID. Again, IMHO.

    So... a better comparison might be ext4 over some sort of software RAID (md?) versus various ZFS RAID setups. That could be done across NVMe; just realize that ZFS might be more interesting with more than a couple or few drives (which might be difficult M.2-wise today). Btw, those tests would need to include failure scenarios: time to rebuild (where applicable), ease of notification on failure, and ease of replacement (e.g. hot-swappability).

    And of course, I'm still really talking about spinning disks for ZFS. As I said, I'd probably go with ext4 and LVM for a low count of NVMe drives.
    Mostly agree. Though ext4, LVM, and md RAID do not detect or protect against bit rot. So if your data is valuable... Btrfs or ZFS is the way to go. Backups do not help against bit rot, since you usually don't detect it before the rot has been copied into the backups.
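
    For what it's worth, the comparison cjcox proposes would roughly come down to something like the following sketch; the device names are invented, and both sides assume the same four drives:

```shell
# ext4 on Linux md software RAID-5 (hypothetical four-drive array)
mdadm --create /dev/md0 --level=5 --raid-devices=4 \
    /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1
mkfs.ext4 /dev/md0

# versus a ZFS raidz pool over the same hypothetical drives
zpool create tank raidz /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1
```

    Only the ZFS side would self-heal checksum errors on read and during scrubs; the md side repairs parity inconsistencies without knowing which copy of the data was the correct one.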

