HDD/SSD Performance With MDADM RAID, BCache On Linux 4.14
Originally posted by cthart:
For those interested in benchmarks on ZFS with server hardware: we have a server with a 6-disk RAID 10 array (real, hardware RAID, on a Dell PERC controller with 1 GiB of battery-backed RAM). This array is faster than any single SSD in all the tests we ran. Using an SSD for the ZFS cache would actually slow it down. Of course, the battery-backed RAM allowing the array to operate in write-back mode is the key here: RAM is orders of magnitude faster than flash, while flash itself is not orders of magnitude faster than spinning disks.
Instead of hybrid disks with a small flash cache, I'd like to see hybrid disks with a battery-backed RAM cache.
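For anyone who does want to experiment with an SSD cache on ZFS, attaching and detaching one is reversible; a sketch with hypothetical pool and device names (pool "tank", SSD partition /dev/sdf1):

```shell
# Attach the SSD as an L2ARC read cache (pool and device names are placeholders).
zpool add tank cache /dev/sdf1
zpool iostat -v tank        # the cache vdev should appear in the listing
# Detach it again; pool data is unaffected, so benchmarking both ways is cheap.
zpool remove tank /dev/sdf1
```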
Comment
These are interesting results, and it would be nice to see them expanded to cover RAID1, RAID10, RAID5, or RAID6 as the HDD backing devices. Comparing bcache performance against an all-SSD setup isn't very useful, since the cost of going all-SSD is prohibitive; tests with larger and more complex RAID HDD setups would be more valuable to me. I can see how someone with a two-bay laptop might want a single HDD/SSD combo, but honestly, at that point two SSDs make more sense just from a reliability standpoint.
But there's still a big restriction with bcache as I understand it: once you've set a device up to use it, you cannot remove bcache without also blowing away the device. That's a huge problem. It's a commitment you can't back out of without major work. Not fun.
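For context, this is roughly what a typical bcache setup looks like (a sketch; device names are placeholders and make-bcache overwrites the devices). Note that the filesystem ends up on /dev/bcache0 rather than the raw disk, which is exactly the commitment being described:

```shell
# Placeholders: /dev/sdb = HDD backing device, /dev/sdc = SSD cache.
# WARNING: make-bcache destroys existing data on both devices.
make-bcache -B /dev/sdb        # backing device, exposed as /dev/bcache0
make-bcache -C /dev/sdc        # format the SSD as a cache device
bcache-super-show /dev/sdc     # note the cset.uuid line
echo <cset.uuid> > /sys/block/bcache0/bcache/attach
mkfs.ext4 /dev/bcache0         # the filesystem lives on /dev/bcache0,
                               # not on /dev/sdb directly
```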
Which is why I use lvcache instead. So it would be really nice to see a comparison of bcache vs lvcache vs the plain HDD setups above. My personal setup *seems* faster, but I haven't really stressed it much. I'm also running RAID1 HDDs with a cache on mirrored SSDs, since I don't trust disks not to fail on me.
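For comparison, an lvcache (dm-cache via LVM) setup can be detached in place; a sketch with hypothetical names (VG "vg0", an existing HDD-backed LV "data", SSD PV /dev/sdc1 -- newer LVM supports --cachevol, older versions use a cache pool instead):

```shell
# Create a cache volume on the SSD and attach it to the existing LV.
lvcreate -L 50G -n cache0 vg0 /dev/sdc1
lvconvert --type cache --cachevol cache0 vg0/data
# Later, the cache can be flushed and detached without touching the data LV:
lvconvert --splitcache vg0/data
```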
Thanks again for doing these tests.
Comment
Originally posted by digitus2001:
A couple of points:
- I wish the article had detailed whether bcache was warmed up at all: in my experience, Btrfs with native RAID1 over two bcached 6 TB HDDs plus a 200 GB SSD needs nearly a full week of normal activity before the cache is 'warm'.
- Tests that do random IO over the whole filesystem will of course be terrible; that's a nearly meaningless test.
- Repeated IO over a data set smaller than the cache device is where bcache shines. It would be nice to put more test emphasis on these use cases.
Disclosure: my experience with bcache+btrfs is great, but anecdotal. I would never want to use an HDD any other way...
But finding a proper benchmark is the key here. I can see why Michael might not be interested in writing a specific benchmark, though; he's more into the benchmarking process in general.
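The "repeated IO over a small working set" case described above can be approximated with fio; a hypothetical job (file path and sizes are placeholders) doing random reads over a working set far smaller than a typical SSD cache:

```shell
# Random 4k reads over a 10 GiB file on the bcache-backed filesystem.
# Run the job twice: the first pass populates the SSD cache, the second
# should be served largely from flash.
fio --name=cached-randread \
    --filename=/mnt/bcache/fio-testfile --size=10g \
    --rw=randread --bs=4k --ioengine=libaio --direct=1 \
    --iodepth=32 --runtime=120 --time_based
```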
Comment