
4-Disk Btrfs Native RAID Performance On Linux 4.10


  • #31
    Originally posted by AndyChow View Post

    What?!? ZFS is a LOT slower than btrfs, on every metric.
    What I've heard is the opposite, but that was a while ago. At least ZFS uses much more RAM, thus giving the impression of heavy caching... that could yield faster performance in certain situations.

    As there seems to be contradictory information among us, I'd really like to see a ZFS vs. btrfs comparison with memory usage. Also with and without a cache disk (some fast NVMe storage).

    Comment


    • #32
      Originally posted by Zucca View Post
      I don't think you can do RAID-1E on btrfs.

      Also, for those who wonder about btrfs RAID-1 speed: RAID-1 in btrfs is basically all disks smashed together as one JBOD and then split in two with mirroring. So theoretically you'd get a maximum of 2x single-disk speed from btrfs RAID-1 no matter how many disks you have in the pool. At least that's how I've understood it. Anyone is welcome to correct me if I'm wrong.
      Yeah, that's the theory.

      The reason there is a performance hit here is that, afaik, btrfs does not have the logic to figure out how the data is split across drives and to balance reads between them; it just fetches the first copy it finds, regardless of which drive in the array it's on, period.

      Reading from all drives at the same time in a block-level RAID-1 is a piece of cake, as each drive is a block-level copy of the others; on btrfs... it's not.
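      For context: btrfs has historically (reportedly, as of kernels around this era) picked which mirror to read by `pid % number_of_copies`, so a single reading process always hammers the same disk, whereas a block-level RAID1 driver can alternate between disks. This isn't btrfs source code, just a toy sketch of the difference between the two policies:

```python
# Toy sketch (NOT btrfs source): why a PID-based mirror pick can serialize reads.
# btrfs has reportedly chosen the mirror as pid % num_copies, so one reader
# process always lands on the same disk; a block-level RAID1 driver is free
# to round-robin reads across mirrors instead.

def pick_mirror_pid(pid: int, num_copies: int = 2) -> int:
    """btrfs-style policy: every read from one process hits one disk."""
    return pid % num_copies

def pick_mirror_round_robin(counter: int, num_copies: int = 2) -> int:
    """md/RAID1-style idea: alternate disks so both spindles do work."""
    return counter % num_copies

# One process (pid 4321) issuing 4 sequential reads:
pid = 4321
print([pick_mirror_pid(pid) for _ in range(4)])        # same disk every time: [1, 1, 1, 1]
print([pick_mirror_round_robin(i) for i in range(4)])  # alternates: [0, 1, 0, 1]
```

      With the PID policy, a single `dd` or benchmark process never sees more than one disk's worth of read bandwidth, which would fit the numbers in the article.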

      Comment


      • #33
        Originally posted by jacob View Post
        That's a different thing entirely. With a traditional journaling filesystem (not log-structured), each update operation (file write, creation, deletion, etc.) involves writing metadata into the journal, which is at a fixed location on the disk. That means that there are a couple of blocks which are constantly written over and over and over, which reduces the lifetime of an SSD quite a bit.
        Nope, SSDs (and even SD cards nowadays) have wear-leveling, so while to the block layer it is the same block, for the actual flash cells it's not.
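        A toy illustration of the point, not any real drive's FTL: the flash translation layer remaps each rewrite of the same logical block to a fresh physical cell, so hammering the journal block spreads wear across the whole device.

```python
# Toy FTL (flash translation layer) sketch: the host rewrites logical block 0
# (think: a journal) over and over, but each write lands on a fresh physical
# cell, so the wear is spread evenly instead of burning out one cell.

class ToyFTL:
    def __init__(self, num_cells: int):
        self.mapping = {}                   # logical block -> physical cell
        self.erase_counts = [0] * num_cells # per-cell program/erase count
        self.free_cells = list(range(num_cells))

    def write(self, logical_block: int):
        old = self.mapping.get(logical_block)
        cell = self.free_cells.pop(0)       # always program a fresh cell
        self.mapping[logical_block] = cell
        self.erase_counts[cell] += 1
        if old is not None:
            self.free_cells.append(old)     # old cell goes back to the pool

ftl = ToyFTL(num_cells=4)
for _ in range(8):
    ftl.write(0)                            # hammer the same logical block
print(ftl.erase_counts)                     # wear spread evenly: [2, 2, 2, 2]
```

        Without the remapping, all 8 writes would hit one cell; with it, no cell sees more than its fair share.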

        Comment


        • #34
          Originally posted by nomadewolf View Post
          Is it me, or do these benchmarks only serve to illustrate how poorly RAID has been implemented?
          To decide if its implementation is poor or not, you should compare it with the only other filesystem that offers similar features: ZFS.

          Of course a dumb block-level RAID is faster; it does not have to deal with all the stuff btrfs must offer.

          Comment


          • #35
            Originally posted by AsuMagic View Post
            RAID1 is data redundancy; RAID0 is data striping meant for performance.
            I know what they are; my question still stands. You can read from both drives of a RAID1 at the same time.

            Comment


            • #36
              Originally posted by Spacefish View Post
              ZFS could be a nice comparison too, as it offers most of the features btrfs offers without being that slow!
              lol, how can you tell that without comparisons? ZFS offers some of the features btrfs offers, without existing for Linux.

              Comment


              • #37
                Originally posted by Zucca View Post
                At least ZFS uses much more RAM, thus giving the impression of heavy caching...
                Heavy caching is done by the page cache; ZFS uses much more RAM due to its obsolete design.
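                For what it's worth, the big RAM use is the ZFS ARC, and on ZFS-on-Linux it can be capped so it doesn't crowd out the page cache. A sketch, assuming the `zfs` kernel module is loaded; the 4 GiB value is just an example:

```shell
# Cap the ZFS ARC at 4 GiB (4294967296 bytes) on the running system
echo 4294967296 | sudo tee /sys/module/zfs/parameters/zfs_arc_max

# Make the cap persistent across reboots via a modprobe option
echo "options zfs zfs_arc_max=4294967296" | sudo tee /etc/modprobe.d/zfs.conf
```

                That kind of tuning is exactly why a fair ZFS vs. btrfs memory comparison is tricky: out of the box the ARC will happily grab a large share of RAM.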

                Comment


                • #38
                  Originally posted by pal666 View Post
                  lol, how can you tell that without comparisons? ZFS offers some of the features btrfs offers, without existing for Linux.
                  sudo apt install zfs

                  Comment


                  • #39
                    Originally posted by starshipeleven View Post
                    Nope, ssds (and even SD cards nowadays) have wear-leveling so while for the block layer it is the same block, for the actual flash cell it's not.
                    That makes sense, thanks. Does it affect burst transfer performance though?

                    Comment


                    • #40
                      Originally posted by Zucca View Post
                      What I've heard is the opposite, but that was a while ago. At least ZFS uses much more RAM, thus giving the impression of heavy caching... that could yield faster performance in certain situations.

                      As there seems to be contradictory information among us, I'd really like to see a ZFS vs. btrfs comparison with memory usage. Also with and without a cache disk (some fast NVMe storage).
                      Afaik ZFS tanks hard without a decent amount of RAM for cache or an SSD used as cache. With RAIDZ-1 (the RAID5 equivalent) or better, of course.

                      But yeah, I would like to see some fair ZFS vs. btrfs comparisons.

                      Comment
