Optane SSD RAID Performance With ZFS On Linux, EXT4, XFS, Btrfs, F2FS


  • #11
    With those PCIe 4.0 devices coming out at 5 and 6.5 GB/s, it kinda makes me wonder where Optane is going to end up in the mix.

    Comment


    • #12
      Originally posted by bug77 View Post
      This is one of the weirdest tests I've read lately. RAID1 performing at the same level as RAID0, and in some cases better than no RAID? Either we need new tests for Optane/XPoint or there's something wrong with the whole setup.
      I guess the lesson here is that Optane is already so fast that any extra complexity you add can make it worse (depending on the workload).

      Comment


      • #13
        Originally posted by jrch2k8 View Post

        Hi Michael, could you publish your ZFS configuration? Those results are far too poor for my liking, considering I can reach some of them with spinning disks instead of SSDs, so I assume you are using a single default pool with no volumes and whatever Ubuntu includes as "defaults", which is in no way right for benchmarking.

        ZFS should never be used on the bare pool with default values.

        Some helpful commands to debug that performance:

        zpool status -v
        zfs list
        zfs get all

        This one can also help to check whether multi-queue is active on all disks:

        cat /sys/block/your_drive_here/queue/scheduler
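
        For example, a minimal sketch that checks every NVMe drive at once (the nvme* glob is an assumption about the device naming; the bracketed entry in the output marks the active scheduler):

        # print the active I/O scheduler for each NVMe block device
        for d in /sys/block/nvme*/queue/scheduler; do
            echo "$d: $(cat "$d")"
        done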

        Also, did you create a RAID0 with ZFS, i.e. something akin to zpool create -f [new pool name] /dev/sdx /dev/sdy? That is the worst possible scenario for ZFS, and honestly the one scenario where no one should use it, because you get ZERO data protection but 100% of the overhead: each disk has to write metadata and checksums while waiting for the other disk to do the same, which translates into zero scaling. You can add 100 drives to the stripe and your top speed will never be more than about ±10% of the fastest single disk in the best case, and in the real world the more drives you add to the stripe, the worse the performance gets.
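
        For illustration, a rough sketch of the difference (pool, dataset, and device names are placeholders, and the property values are only examples, not tuned recommendations for this hardware):

        # bare striped pool with defaults -- the worst-case scenario described above
        zpool create -f tank /dev/nvme0n1 /dev/nvme1n1

        # versus creating the pool with an explicit ashift and a dataset tuned per workload
        zpool create -f -o ashift=12 tank /dev/nvme0n1 /dev/nvme1n1
        zfs create -o recordsize=16K -o compression=lz4 -o atime=off tank/bench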

        Caveat:
        I understand that you are benchmarking the out-of-the-box settings in scenarios a regular user would be familiar with, I do, but ZFS is not and never was meant for desktops or OOB settings. ZFS was designed to be tuned per dataset for whatever you need, as is often the case in the enterprise, so the defaults are close to worst-case settings for 99% of the tasks a regular user will run, and especially for benchmarking.

        If you post some of that relevant data, I have no problem giving you a hand getting the basics right to improve your ZFS numbers. There are also several gems on the Internet, like the Arch Wiki and the Percona site:

        https://wiki.archlinux.org/index.php/ZFS (the basics done right)
        http://open-zfs.org/wiki/Performance_tuning (the medium level optimizations)
        https://www.percona.com/blog/2018/05...fs-performance (some high level percona magic )

        Also, you need a kernel patch to bring back hardware acceleration in ZFS if you don't have it.

        Thank you very much for your hard work
        I think the most important question here is, "Is the ashift right?"
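
        For what it's worth, that is quick to verify (a minimal sketch; "tank" is a placeholder pool name, and an ashift of 0 reported on the pool property just means it was left on auto-detect):

        # report the ashift property on the pool
        zpool get ashift tank

        # cross-check the actual per-vdev ashift recorded in the pool configuration
        zdb -C tank | grep ashift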

        Comment


        • #14
          All the bars are the same color. What happened to the different colors?

          Comment


          • #15
            What about the performance of the 970 EVO with F2FS, ZFS, and XFS?

            Comment


            • #16
              Originally posted by jrch2k8 View Post
              ZFS should never be used
              I couldn't have said it better myself.

              Comment


              • #17
                Originally posted by ThoreauHD View Post
                With those PCIe 4.0 devices coming out at 5 and 6.5 GB/s, it kinda makes me wonder where Optane is going to end up in the mix.
                Upcoming 6.5 GB/s Optane will obviously be faster than current Optane.

                Comment


                • #18
                  Originally posted by MaxToTheMax View Post
                  I guess the lesson here is that Optane is already so fast that any extra complexity you add can make it worse (depending on the workload).
                  RAID1 is extra complexity over no RAID.

                  Comment


                  • #19
                    Originally posted by pal666 View Post
                    RAID1 is extra complexity over no RAID.
                    Is it? On my Ryzen, even the more complex RAID6 is still I/O bound:
                    [ 0.180194] raid6: avx2x4 gen() 30855 MB/s

                    So, give me a medium that delivers more than 30.8 GB/s.

                    Note that PCIe and memory will be a bottleneck WAY before RAID is.

                    Especially with RAID1 - ANY read test on a RAID1 array will be faster than no RAID; the CPU will be idling waiting for I/O anyway.
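
                    For anyone curious about their own box, the kernel logs that benchmark at boot (a minimal sketch, assuming the boot messages have not rotated out of the ring buffer):

                    # show the raid6 algorithm benchmark chosen by the md driver at boot
                    dmesg | grep -i raid6

                    # the xor benchmark used by the raid5 parity path is logged the same way
                    dmesg | grep -i 'xor:'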

                    Comment


                    • #20
                      Can't wait for GNU GRUB 2.04 to be stable and get officially merged into coreboot so I can change all of my file systems to F2FS and get better performance.
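
                      Until then, trying F2FS on a spare partition is a two-liner (a minimal sketch; the device path and mount point are placeholders, and mkfs will destroy whatever is on that partition):

                      # format the partition with F2FS and mount it
                      mkfs.f2fs -l scratch /dev/nvme0n1p3
                      mount -t f2fs /dev/nvme0n1p3 /mnt/scratch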

                      Comment
