Oracle Talks Up Btrfs Rather Than ZFS For Their Unbreakable Enterprise Kernel 6


  • #71
    Originally posted by intelfx View Post

    Care to show where exactly we were discussing enterprise anything?
    ZFS's main purpose lies in the enterprise. That's what drives both its development and its porting. What you choose to play with at home is your private concern; it may or may not align with the interests of the actual developers.
    That's also the main reason your "toy features" haven't actually made it in, in the form you complain about - the real devs driving development don't need that particular "feature". If they did, it'd only be because they approached the deployment without any pre-planning.
    Are you really trying to imply that systems where "terabytes cost thousands of dollars" (quote from one of the previous posts) are in practice deployed without any pre-planning, and that sysadmins have to drastically re-configure already-live production systems? Such idiot sysadmins should be fired on the spot.

    What you are really advocating here is nothing but demagogy: take a feature from the filesystem you deem "correct" in your subjective ideological purview, a feature which is lacking in the "incorrect" filesystem ("incorrect" also per your subjective ideological purview), and keep hammering on said lack, trying to portray it as an important problem and thus establish the implied superiority of your "correct" filesystem. Demagogy.

    Comment


    • #72
      Originally posted by drjohnnyfever View Post

      I disagree. I think one of ZFS's key innovations was to reevaluate and redesign the relationship between the filesystem and the volume manager. Because of the prejudices of the Btrfs and other Linux developers, Btrfs took a clear step backwards in that regard.

      If you want to discuss technical details I'm happy to do it.
      What does ZFS offer, in terms of "reevaluating and redesigning the relationship between the filesystem and the volume manager", that btrfs doesn't?

      I used ZFS a long time ago (and I've been a happy btrfs user since 2008), so I might have missed something in between, but as far as I can tell they are quite similar in this area (they're both a filesystem and a volume manager).


      Comment


      • #73
        Fragmentation: workload does matter, possibly more than free space.


        Btrfs raid56: it still needs some things fixed so it doesn't need as much hand-holding, but it can be stable on stable hardware. The problem is that hardware isn't stable: firmware has bugs, people have power failures or crashes, and there are (shockingly) kernel bugs.
        - Scrub after a power failure or crash. For big file systems it's expensive, but it's recommended until the write hole is fixed. Note that if you do get hit by the write hole, a checksum mismatch still warns you; if metadata is affected by it, it can be fatal.
        - Use one of the raid1 profiles for metadata.
        - Use space_cache=v2. It uses a dedicated tree, so it resides in metadata block groups and is subject to checksumming, whereas the v1 cache consists of bitmaps stored in data block groups, which are effectively nocow, with no checksumming.
        - Consider disabling the write cache on all drives, or do a lot of power-failure testing to see whether the drives really honor fsync/FUA.
        - Timeout mismatches apply to Btrfs too, not just mdadm and lvm raid.

        - Keep backups. Raid is not a backup.
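        A minimal sketch of that checklist as commands (the device names, mount point, and 4-disk layout are assumptions for illustration, not from this thread):

```shell
# Hypothetical 4-disk array: raid5 for data, raid1 for metadata
mkfs.btrfs -d raid5 -m raid1 /dev/sda /dev/sdb /dev/sdc /dev/sdd

# Mount with the v2 free-space cache (stored in a checksummed metadata tree)
mount -o space_cache=v2 /dev/sda /mnt

# After a power failure or crash, scrub to surface write-hole damage early
btrfs scrub start -B /mnt

# Optionally disable the volatile write cache on each drive
hdparm -W 0 /dev/sda
```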

        Btrfs development is very active; it has slowed down lately, with more emphasis on stability. There are still many contributors from many companies, with hundreds of commits per kernel cycle. Facebook and openSUSE have tens of thousands of installations, covering regular desktop users, servers, and cloud. Google's Crostini uses Btrfs on Chromebooks to make it possible to run Linux apps natively.

        It doesn't fit everyone's use case, but it checks off a lot of boxes people consistently say they want: full metadata and data integrity via checksumming, online grow and shrink, cheap snapshots, transparent compression, easier multiple-device support (not just raid, but also online replacement of a drive), etc.
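        For concreteness, the snapshot, compression, and multi-device features mentioned above map to simple commands (the paths and device names here are hypothetical):

```shell
# Cheap, read-only snapshot of a subvolume
btrfs subvolume snapshot -r /home /home/.snapshots/home-before-upgrade

# Enable transparent zstd compression on an existing mount
mount -o remount,compress=zstd:3 /home

# Replace a failing drive online, without unmounting
btrfs replace start /dev/sdb /dev/sdd /home

# Shrink the filesystem online by 10 GiB
btrfs filesystem resize -10g /home
```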

        Comment


        • #74
          Originally posted by jacob View Post

          I don't see that happening. Only Canonical is barracking for ZFS, no-one else in the Linux community seems to care much for it.
          In all fairness, I can see why. It is an incredibly "heavy" technology. For example, unlike the alternatives, it takes almost all the RAM of a development laptop just "running". On FreeBSD almost all of my installs stick with UFS (same as when I used to run Solaris, in fact; the only difference was that there ZFS was the default).

          So if you are saying that it isn't seeing much uptake, I am a little happy about that.

          Comment


          • #75
            Originally posted by kpedersen View Post
            In all fairness, I can see why. It is an incredibly "heavy" technology. For example, unlike the alternatives, it takes almost all the RAM of a development laptop just "running".
            ZFS is pretty aggressive about caching, I guess. It certainly was developed for use on systems with quite a lot less memory than today's.

            Hilariously, I use btrfs even on a Raspberry Pi Zero: noatime,space_cache=v2,compress=zstd:1.

            Code:
            ./lzbench -v -b128 -c2 -ezstd,1,3/lz4/lzo1 -m400 ~/silesia.tar


            Code:
            $ sudo hdparm -t /dev/mmcblk0p3
            /dev/mmcblk0p3:
            HDIO_DRIVE_CMD(identify) failed: Invalid argument
            Timing buffered disk reads: 58 MB in 3.09 seconds = 18.76 MB/sec
            Writes might take a bit of a hit from compressing, it's hard to tell. But reads might be improved, since decompression rates exceed the device's throughput.
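            A back-of-envelope model of why compression can speed up reads on slow media: reading compressed data pulls fewer bytes off the device, at the cost of some decompression time. The 2:1 ratio and 500 MB/s zstd decompression rate below are assumptions for illustration; only the ~18.8 MB/s figure comes from the hdparm run above.

```python
def effective_read_mb_s(device_mb_s, ratio, decompress_mb_s):
    """Effective logical read speed with transparent compression."""
    # Time to deliver 1 MB of logical data: read (1/ratio) MB from the
    # device, then decompress into 1 MB of output.
    read_time = (1.0 / ratio) / device_mb_s
    decompress_time = 1.0 / decompress_mb_s
    return 1.0 / (read_time + decompress_time)

# SD card at ~18.76 MB/s, assumed 2:1 zstd ratio and 500 MB/s decompression
print(round(effective_read_mb_s(18.76, 2.0, 500.0), 1))  # ~34.9 MB/s
```

With an incompressible file (ratio 1:1) the model predicts a slight slowdown instead, which is why btrfs bails out of compressing data that doesn't shrink.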

            Comment


            • #76
              For me, the main drawback of zfs was the inability to expand the array by adding a new disk.
              I don’t know if such a feature exists now, but I chose btrfs precisely because of the ease of expanding the array.
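              For reference, the btrfs expansion flow is a two-step online operation (the device name and mount point below are assumptions):

```shell
# Add the new disk to a mounted filesystem
btrfs device add /dev/sdc /mnt

# Rebalance so existing data and redundancy spread across the new disk
btrfs balance start /mnt
```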




              Last edited by Eraserstp; 22 May 2020, 08:13 AM.

              Comment


              • #77
                Originally posted by zxy_thf View Post
                The lack of rock-solid RAID-Z/RAID-Z2 support forces NAS users to use ZFS over btrfs.
                I've been reading about RAID1C3 and RAID1C4 in btrfs, and that they are more or less equivalent to RAID-Z and RAID-Z2, though I can't really tell whether that's true or why. Anyway, assuming it's not true, and thus that RAID-Z is superior, that leads me to think the only advantage of ZFS is on bare metal, while if I'm using, say, LXD in a KVM instance I can safely go with btrfs, right?


                Originally posted by zxy_thf View Post
                2. Professionals/prosumers don't feel like using it because their archive servers need RAID;
                I assume you mean they need RAID-Z, because btrfs provides some RAID too.

                Originally posted by zxy_thf View Post
                3. The vast majority of features are not attractive to normal users because they're counter-intuitive to how they normally "use a hard drive".
                Doesn't that hold true for ZFS too?
                Last edited by lucrus; 22 May 2020, 10:47 AM.

                Comment


                • #78
                  Originally posted by aht0 View Post
                  <...> What you really are trying <...>
                  What you really are trying to do here is shifting the goalposts. We were discussing raw feature parity, and your "BUT... BUT... REASONS" when confronted isn't one bit convincing or relevant. Goodbye.
                  Last edited by intelfx; 22 May 2020, 09:56 AM.

                  Comment


                  • #79
                    Originally posted by intelfx View Post

                    What you really are trying to do here is shifting the goalposts. We were discussing raw feature parity, and your "BUT... BUT... REASONS" when confronted isn't one bit convincing or relevant. Goodbye.
                    Discussing feature parity is kind of pointless when the features in question are not equally important. The one you are banging the drum about is a "toy feature" (from my point of view, at least). An enterprise user knows in advance what he/she is about to do, avoiding the need for your "important feature" altogether. When do you need such a feature? At home, maybe, when you play around with your PC, add/remove drives and rework your configuration all the time. You do not play with production systems that way.

                    Comment


                    • #80
                      Originally posted by jegp View Post
                      I think the new raid modes of Btrfs (raid1c3 and raid1c4) are a kind of new and safe replacement for raid5 and raid6.
                      https://kdave.github.io/btrfs-hilights-5.5-raid1c34/
                      From a fault-tolerance point of view, raid1 is like raid5 (single-fault proof), raid1c3 is like raid6 (two-fault proof), and raid1c4 can tolerate up to 3 faults.
                      However from a space efficiency point of view, raid1[cX] is worse than raid5/6.

                      If you want 1TB of usable space with raid1c3, you need 3TB of raw space. With raid6 (so the fault tolerance is comparable), supposing disks with a size of 512GB, you need 4 disks for a total raw space of 2TB. So from a space-efficiency POV, raid6 uses about 33% less raw space than raid1c3.
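                      The arithmetic above generalizes: raid1c3 always needs 3x the usable size in raw space, while raid6 needs n/(n-2) times it for n disks. A small sketch of that idealized math (it ignores metadata and btrfs allocation overhead):

```python
def raw_needed_tb(usable_tb, profile, n_disks=None):
    """Raw capacity needed for a given usable size, idealized."""
    if profile == "raid1c3":
        # Every extent is stored three times
        return usable_tb * 3
    if profile == "raid6":
        # n disks per stripe, two of which hold parity
        return usable_tb * n_disks / (n_disks - 2)
    raise ValueError(profile)

print(raw_needed_tb(1, "raid1c3"))   # 3 TB raw for 1 TB usable
print(raw_needed_tb(1, "raid6", 4))  # 2.0 TB raw for 1 TB usable
```

Note that raid6's advantage grows with disk count: with 8 disks it needs only 8/6 = 1.33x the usable size.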

                      Of course there are other factors to consider:
                      - supposing disks of 512GB, for raid1c3 you need 6 disks; it's reasonable to assume the likelihood of a failure is proportional to the number of components, so raid1c3 has a higher likelihood of a fault.
                      - raid5/6 in btrfs is less used and less tested, so the likelihood of an issue is higher (and there is the problem of the write hole, even though it is not so frequent).

                      Comment
