
ZFS On Linux Is Called Stable & Production Ready


  • #21
    jabl, ryao:

    thanks a lot!

    will try to get IOMMU running (this box is using a motherboard (P9D WS) & CPU (Xeon 1245v3) that should hopefully support this)


    @blackiwid:

    well, the most important argument against greater Btrfs usage is risk of "data loss" - take a look at the list of recent reports that popped up on the mailing list:



    (I believe several appeared in July & August)


    if you don't care that much about your data - feel free to use Btrfs

    meanwhile I'm only partially trusting Btrfs, at least until it has attained stability comparable to ZFS, and am mainly putting data on ZFS pools



    luckily the ENOSPC problem seems to have been fixed (at least in theory):


    will see how that fix works in the long-term


    the latency spikes (throttling? affecting other parts of the kernel?) I saw with Btrfs during heavy I/O (e.g. while compiling Firefox, Chromium, etc.) in RAM with zram seem to have been fixed just recently, to an (almost) full extent

    there's also been great progress with Btrfs in the field of fsync issues (and incomplete data, data corruption) just recently, mainly thanks to Filipe Manana!



    • #22
      Originally posted by kernelOfTruth
      @blackiwid:

      well, the most important argument against greater Btrfs usage is risk of "data loss" - take a look at the list of recent reports that popped up on the mailing list:
      yes, but like you say yourself, even if nothing else happens to the Btrfs design, they will at least fix every bug they find; we can be sure of that, if of nothing else.

      I think at the moment ZFS really is better for heavy loads and professional use, but for smaller tasks Btrfs is better, and in the long run I don't see advantages of ZFS over Btrfs; it will always feel more alien, less integrated into Linux, and it has its patent and other issues.

      If I were someone who had waited for such stuff for 5 years and couldn't wait to use it on a big scale, maybe I would also cheer for ZFS, but Btrfs works kind of good enough for my current tasks. But I am not looking for a backup fs; that's a bit the marketing of ZFS, that it's there to keep your data safe, and I thought backups are for that.

      But I am not yet all in on Btrfs, and use it mostly for Linux SSDs, not so much for file servers. In reality I find such filesystems much more interesting there than for file servers, because I don't resize my file server space that often, nor do I snapshot my file server (except maybe for a backup, to send it), nor do I need compression for movies, pictures and other typical file server files; but for /etc and the home dir I like it.

      And snapshotting is also good for operating system package installs etc. So while ZFS is, at least for now, extremely difficult to use as the root fs, Btrfs is ideal for it.
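The snapshot-before-upgrade workflow described above can be sketched with standard btrfs-progs commands (the /.snapshots path and the snapshot name are assumptions, not a fixed layout):

```shell
# Take a read-only snapshot of the root subvolume before upgrading
# packages; /.snapshots/pre-upgrade is a hypothetical location.
btrfs subvolume snapshot -r / /.snapshots/pre-upgrade

# ...run the package upgrade...

# If the upgrade breaks something, list subvolumes to find the
# snapshot's ID and make it the default subvolume for the next boot.
btrfs subvolume list /
btrfs subvolume set-default <snapshot-id> /
```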

      So for me ZFS does not solve any problems I had before. I keep my data on a single hard disk because I also use torrents on it, and it makes no sense to write these small bytes all the time to 2 hard disks in a RAID 1. On the other hand, a backup script or something like that, copying to another hard disk, is much safer than some RAID as a backup.
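A backup script of the kind mentioned above can be as small as a single rsync call (the mount points and schedule are assumptions for illustration):

```shell
# Mirror the data disk to a second disk, e.g. from a nightly cron job;
# -a preserves permissions and timestamps, -H preserves hard links,
# and --delete makes the backup an exact copy of the source.
rsync -aH --delete /mnt/data/ /mnt/backup/data/
```

Unlike RAID 1, an accidental deletion only propagates to the backup on the next run, so there is a window to recover.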

      But of course if, I don't know, 3 TB hard disks are not big enough for you, you need to use RAID, and for that maybe this ZFS is a good thing.



      • #23
        agreed,

        the problem for me, also, is that ZFS isn't quite easy to get working on the root partition, and it's kinda annoying that it isn't built into the kernel by default (but it can be patched in! - of course it can't be distributed that way)

        3-4 TB seems enough for now

        the biggest obstacle was using it on /home: I had already used it on my laptop, and suspend to RAM etc. simply works (whereas e.g. realtime kernel support and suspend support in ZFS are low priority), but I had to leave it due to data corruption upon crashes or unclean shutdowns

        just recently there have been fixes posted on the mailing list, so, like you wrote, yeah - it'll eventually get there, to be fully functional without "side effects"


        but in the back of my mind I, from time to time, always have to think about Edward Shishkin's mention of a bad design, or a bad design choice: http://lwn.net/Articles/393144/

        which meanwhile, of course, could have been fixed (I didn't follow up on that closely)



        • #24
          Originally posted by kernelOfTruth
          agreed,

          but in the back of my mind I, from time to time, always have to think about Edward Shishkin's mention of a bad design, or a bad design choice: http://lwn.net/Articles/393144/

          which meanwhile, of course, could have been fixed (I didn't follow up on that closely)

          Even that very old LWN article mentions at the end that it's not true that it was a bad design choice:

          That fix has not been posted as of this writing, so its effectiveness cannot yet be judged. But, chances are, this is not a case of a filesystem needing a fundamental redesign. Instead, all it needs is more extensive testing, some performance tuning, and, inevitably, some bug fixes. The good news is that the process seems to be working as it should be: these problems have been found before any sort of wide-scale deployment of this very new filesystem.



          • #25
            nice



            • #26
              ZFS can do n-way mirrors; Btrfs is still stuck with 2 disks.
              ZFS has raidz1/2/3 and stripes of them; Btrfs support for raid5/6 is in its early stages.

              People are praising Btrfs' ability to mix RAID levels on the same disks - it is a nice feature to store important documents mirrored and a pron collection striped on just two HDDs - but it will be interesting to see how Btrfs will handle more RAID levels at once.

              Also, from the mkfs.btrfs command line it is not exactly obvious to me how Btrfs mirrors disks in raid10, or how to make sure the mirrors are on different controllers.
              I'm really curious what creation of a raid50 would look like... If it follows the current style, "mkfs.btrfs -d raid50 -m raid50 [12 disks]" has at least 3 ways to combine the drives into RAID groups: 2x6, 3x4, 4x3.
              The ZFS command line syntax is much better on this point.
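To illustrate the syntax difference being described: with zpool the grouping of disks into redundancy groups is spelled out explicitly on the command line, while mkfs.btrfs only takes a pool-wide profile (all device names here are hypothetical):

```shell
# ZFS: the vdev grouping is explicit -- this creates a pool striped
# across two raidz1 groups of 3 disks each (a "raid50"-like layout).
zpool create tank raidz1 sda sdb sdc raidz1 sdd sde sdf

# ZFS n-way mirror: three copies of the data, one per disk.
zpool create tank2 mirror sdg sdh sdi

# Btrfs: the profile is pool-wide and the grouping is implicit;
# raid1 here still means exactly two copies, however many disks
# are listed, and which disks pair up is not controllable.
mkfs.btrfs -d raid1 -m raid1 /dev/sdg /dev/sdh /dev/sdi
```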

