With those PCIe 4.0 devices coming out at 5 and 6.5 GB/s, it kinda makes me wonder where Optane is going to end up in the mix.
Optane SSD RAID Performance With ZFS On Linux, EXT4, XFS, Btrfs, F2FS
-
Originally posted by bug77 View Post
This is one of the weirdest tests I've read lately. RAID1 performing on the same level as RAID0, and in some cases better than no RAID? Either we need new tests for Optane/XPoint or there's something wrong with the whole setup.
-
Originally posted by jrch2k8 View Post
Hi Michael, could you publish your ZFS configuration? Those results are way too atrocious for my liking, given that I can reach some of them with spinning disks instead of SSDs, so I assume you are using a single default pool with no volumes and whatever Ubuntu ships as "defaults", which are in no way right for benchmarking.
ZFS should never be used on the bare pool with default values.
Some helpful commands to debug that performance:
zpool status -v
zfs list
zfs get all
This one can also help to check whether multi-queue is active on all disks:
cat /sys/block/your_drive_here/queue/scheduler
Also, did you create the RAID0 with ZFS, i.e. something akin to zpool create -f [new pool name] /dev/sdx /dev/sdy? Because that is the worst possible scenario for ZFS, and honestly the one scenario nobody should use ZFS for: you get ZERO data protection but 100% of the overhead, since each disk has to write metadata and checksums while waiting for the other disk to do the same. That translates into zero scaling; you can add 100 drives to the stripe and your top speed will never be more than roughly +/-10% of the fastest single disk in the best case, and in the real world the more drives you add to the stripe, the worse the performance gets.
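For reference, a minimal sketch of the two layouts being discussed; the pool name "bench" and the NVMe device names are just placeholders, not what was actually used in the article:
# plain stripe: each disk is its own top-level vdev, no redundancy at all
zpool create -f bench /dev/nvme0n1 /dev/nvme1n1
# mirror: the same two disks, but data is duplicated and reads can come from either side
zpool create -f bench mirror /dev/nvme0n1 /dev/nvme1n1
# confirm the resulting vdev layout
zpool status -v bench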
Caveat:
I do understand that you are benchmarking the out-of-the-box settings in scenarios a regular user would be familiar with, I do, but ZFS is not and never was meant for desktops or OOB settings. ZFS was designed specifically to be optimized per volume for whatever you need, as is often the case in the enterprise, hence the defaults are close to the worst-case settings for 99% of the tasks a regular user will run, and especially for benchmarking (a rough sketch of that kind of per-dataset tuning follows the links below).
If you post some of that relevant data, I have no problem giving you a hand getting some basics right to improve your ZFS numbers. There are also several gems on the Internet, like the Arch wiki and the Percona blog:
https://wiki.archlinux.org/index.php/ZFS (the basics done right)
http://open-zfs.org/wiki/Performance_tuning (the medium level optimizations)
https://www.percona.com/blog/2018/05...fs-performance (some high level percona magic )
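To give a concrete idea of what "optimized per volume" means, here is a rough sketch along the lines of those guides; the dataset name bench/pts is hypothetical and the values are only examples, since the right recordsize depends entirely on the benchmark's I/O size:
# never benchmark on the bare pool; create a dedicated dataset
zfs create bench/pts
# match recordsize to the workload (small for random I/O, large for streaming)
zfs set recordsize=16K bench/pts
# skip access-time updates on every read
zfs set atime=off bench/pts
# lz4 is cheap and often a net win even on fast storage
zfs set compression=lz4 bench/pts
# store extended attributes in the dnode instead of hidden directories
zfs set xattr=sa bench/pts
# verify what the dataset actually ended up with
zfs get recordsize,atime,compression,xattr bench/pts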
Also, you need a kernel patch to bring back hardware acceleration in ZFS if you don't have it.
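Assuming this refers to the SIMD checksum acceleration that recent kernels broke for ZFS On Linux, a quick way to check what the module is actually using (paths may vary between ZoL versions, so treat these as a sketch):
# which fletcher4 implementation is selected (scalar vs the sse/avx variants)
cat /sys/module/zfs/parameters/zfs_fletcher_4_impl
# per-implementation benchmark results measured at module load
cat /proc/spl/kstat/zfs/fletcher_4_bench
If only the scalar implementation shows up as selected, the acceleration is indeed missing.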
Thank you very much for your hard work
-
Originally posted by pal666 View Post
raid1 is extra complexity over no raid
[ 0.180194] raid6: avx2x4 gen() 30855 MB/s
So, give me a medium that gives me more than 30.8 GB/s.
Note: PCIe and memory will be an issue WAY before RAID is.
Especially with RAID1: ANY read test on a RAID1 array will be faster than no RAID, the CPU will be idling waiting for I/O anyway.
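For context, that 30.8 GB/s figure is the kernel's own RAID6 algorithm benchmark run at boot; it can be pulled from the log with something like:
# list the gen()/xor() speeds measured for each SIMD variant and the one chosen
dmesg | grep -i raid6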