
Btrfs/EXT4/XFS/F2FS RAID 0/1/5/6/10 Linux Benchmarks On Four SSDs


  • #11
    Originally posted by stiiixy View Post
    At least they are open to Linux, and have mega-dollar companies developing, building and supporting it on real systems. Can't say the same for some countries.
    And everything they develop is open source (aside from some gov't contract work, I imagine).
    Once again, I really don't understand the rational basis for the dislike of Red Hat. What more can they possibly do?

    Comment


    • #12
      Looking at the results, it seems that EXT4 in RAID 5/6 is better for development use, while native BTRFS in RAID 5/6 seems better for serial work like video editing etc.
      I do both, so I am still undecided about the use of EXT4 vs BTRFS. Also, the warnings that RAID 5/6 is still under development and thus not stable within BTRFS hold me back too.

      Am I missing something?

      I leave F2FS out of the equation because I only use HDDs. Maybe in the future I might switch to SSDs, but replacing all the HDDs in my main workstation is too costly for now.

      Frans.

      Comment


      • #13
        Originally posted by cjcox View Post
        Uh... done deal already. Unless you are trapped in the whole USA, Linux === Red Hat thing... in which case I think it's best to stop saying Linux or Linux distro and just say Red Hat (but beware of the trademark police).
        The only major distro that has switched to Btrfs so far is openSUSE. I am pretty sure Fedora will switch before Ubuntu does, so it's not a Red Hat thing.

        But I agree that there is no wondering about IF everybody will have switched to Btrfs in a few years; it's out of the question that this won't happen. There are many features Linux absolutely needs that Btrfs provides, and they are quite useful even for desktop users who don't use RAID. There is no other realistic path to get these features. Heck, look at what Lennart wants to do with systemd and Btrfs.

        It's not a question of whether it will happen in a few years; the question is exactly how fast: in 6 months, in 12 months, or in 2 years, and when the last major distro switches (I guess that would be Debian?). I think Debian will start using it as default within 2 years. But who knows, maybe it takes 3-5 years. The question is when exactly it happens, not if it will happen.

        Comment


        • #14
          I think Btrfs in RAID 5/6 mode is still considered unstable.

          So if you need RAID 5/6... use one of the others.

          If you just want RAID 1, then Btrfs is a good choice.
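          For anyone taking the Btrfs RAID 1 route, a minimal sketch of the setup (the device names and mount point here are placeholders):

```shell
# Create a Btrfs filesystem mirroring both data (-d) and
# metadata (-m) across two drives
mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc

# Mount it; either member device name works
mount /dev/sdb /mnt

# Show how data and metadata are allocated per profile
btrfs filesystem df /mnt
```

          Unlike mdadm, the redundancy lives inside the filesystem itself, so no separate md device is needed.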

          Comment


          • #15
            @Michael:

            Have you considered creating the RAID10 arrays with the 'far' layout? It should give you read speeds comparable to RAID0 and write speeds comparable to the standard RAID10 'near' layout, since seeks on SSDs aren't constrained by having to physically move a mechanical arm, nor by write speeds dropping towards the inner end of rotating disk media.

            Code:
            mdadm --create /dev/mdX -l10 -n4 -p2f /dev/sd{b,c,d,e}1
            ^^ something like that.
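            If such an array were created, the chosen layout can be confirmed afterwards; a quick check (the md device name is a placeholder):

```shell
# Show the array details; for a RAID10 array the output includes
# a "Layout :" line reporting the near/far/offset copy arrangement
mdadm --detail /dev/md0 | grep -i layout

# The raw layout value is also exposed via sysfs
cat /sys/block/md0/md/layout
```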

            Comment


            • #16
              Originally posted by ermo View Post
              @Michael:

              Have you considered creating the RAID10 arrays with the 'far' layout? It should give you read speeds comparable to RAID0 and write speeds comparable to the standard RAID10 'near' layout, since seeks on SSDs aren't constrained by having to physically move a mechanical arm, nor by write speeds dropping towards the inner end of rotating disk media.

              Code:
              mdadm --create /dev/mdX -l10 -n4 -p2f /dev/sd{b,c,d,e}1
              ^^ something like that.
              Nope; testing stock/out-of-the-box is easiest for others to reproduce. Once you get into running a few tuning benchmarks, it becomes a slippery slope of people wanting all sorts of tuned results, which ends up being more time-consuming than it's worth, not to mention I haven't even broken even yet on these tests in terms of the costs. But if an mdadm wiki had some very well documented performance-recommended setup, I would be happy to run such a test, similar to other areas.
              Michael Larabel
              https://www.michaellarabel.com/

              Comment


              • #17
                Great work. Very interesting.

                Regarding your preceding post, one downside of not creating the RAID 10 array with the 'far' layout is that people reading this article may be put off by the poor performance of mdadm RAID 10 compared to RAID 5.

                Maybe it would be useful to point out this detail in future benchmarks, or to include tests for both the 'near' and 'far' layouts.

                Also, it is worth noting that Btrfs does not offer such flexibility in configuring RAID 10.
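                To illustrate that difference in flexibility (device names are placeholders): mdadm exposes the RAID10 layout as a creation-time option, while mkfs.btrfs only accepts a profile name with no layout knob:

```shell
# mdadm: the RAID10 layout is selectable at creation time
mdadm --create /dev/md0 -l10 -n4 -p n2 /dev/sd{b,c,d,e}1   # 'near' (default)
mdadm --create /dev/md0 -l10 -n4 -p f2 /dev/sd{b,c,d,e}1   # 'far'
mdadm --create /dev/md0 -l10 -n4 -p o2 /dev/sd{b,c,d,e}1   # 'offset'

# Btrfs: only the RAID profile can be chosen
mkfs.btrfs -d raid10 -m raid10 /dev/sd{b,c,d,e}
```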

                Comment


                • #18
                  Again, very strange test results. RAID0 should always be faster than RAID5 and RAID6. It can lose to RAID1 or RAID10 in some cases, but never ever to the parity RAID levels. Some external factor is skewing the test results.

                  Comment
