HDD/SSD Performance With MDADM RAID, BCache On Linux 4.14

  • #21
    Originally posted by cthart View Post
    For those interested in benchmarks on ZFS with server hardware: we have a server with a 6-disk RAID 10 array (real, hardware RAID, on a Dell PERC controller with 1GiB of battery-backed RAM). This array is faster than any single SSD in all tests that we did.
    So what kind of disks are those? For the record, a Samsung 960 EVO SSD can read about 3 GB/s, while the theoretical maximum for six typical disks is 6 * 150 MB/s = 0.9 GB/s. I've never seen a spinning disk with a 500 MB/s read speed.
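
    To make that comparison concrete, here is a tiny back-of-the-envelope sketch in Python; the per-device figures are the ballpark numbers quoted in this thread, not measurements:

    ```python
    # Rough throughput comparison using the figures quoted above (assumptions,
    # not benchmark results): ~150 MB/s sequential per 7200 RPM HDD and
    # ~3 GB/s sequential read for a Samsung 960 EVO class NVMe SSD.
    hdd_mb_s = 150      # assumed sequential read per spinning disk
    nvme_mb_s = 3000    # assumed sequential read for one NVMe SSD
    disks = 6

    print(f"6-HDD theoretical aggregate: {disks * hdd_mb_s / 1000:.1f} GB/s")  # 0.9 GB/s
    print(f"single NVMe SSD:             {nvme_mb_s / 1000:.1f} GB/s")         # 3.0 GB/s
    ```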

    Comment


    • #22
      Don't we all want to know what would have happened if the Samsung had been used for bcache?

      Comment


      • #23
        Yeah, I wish there were a boot-speed comparison here; that's a better indicator of real-life use cases. It would also be interesting to compare a scenario where /home is on an HDD and / is on an SSD.

        Comment


        • #24
          Originally posted by cthart View Post
          For those interested in benchmarks on ZFS with server hardware: we have a server with a 6-disk RAID 10 array (real, hardware RAID, on a Dell PERC controller with 1GiB of battery-backed RAM). This array is faster than any single SSD in all tests that we did. Using an SSD for the ZFS cache would actually slow it down. Of course, the battery-backed RAM allowing the array to operate in write-back mode is the key here. RAM is orders of magnitude faster than flash, while flash itself is not orders of magnitude faster than spinning disks.

          Instead of hybrid disks with a small flash cache, I'd like to see hybrid disks with a battery-backed RAM cache.
          Well, if you can add arbitrary hardware, let's use 8x NVMe SSDs and the result is around 28 GB/s (https://www.techpowerup.com/237384/e...8-gb-s-barrier). The benchmark is synthetic, of course, but it shows the potential of SSDs.
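
          For scale, that 28 GB/s result is roughly what simple aggregation predicts if you assume about 3.5 GB/s of sequential read per PCIe 3.0 x4 NVMe drive (an assumed ballpark, not a figure from the linked article):

          ```python
          # Back-of-the-envelope check with an assumed per-drive figure,
          # not a measurement from the linked article.
          per_nvme_gb_s = 3.5   # ballpark sequential read for a PCIe 3.0 x4 NVMe SSD
          drives = 8
          print(f"{drives}x NVMe aggregate: ~{drives * per_nvme_gb_s:.0f} GB/s")  # ~28 GB/s
          ```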

          Comment


          • #25
            Originally posted by Think View Post

            Talking about comparisons, ZFS would be interesting as well.
            And probably DragonFlyBSD's swapcache as well, at least after the next release, which should also bring HAMMER2.
            Swapcache is bcache's equivalent on DragonFly.

            Comment


            • #26
              These are interesting results, and it would be nice if they could be expanded to show how bcache performs with RAID1, RAID10, RAID5 or RAID6 as the HDD backing device. Comparing bcache performance with an all-SSD setup isn't really useful, since the cost of the all-SSD setup is so prohibitive; tests with larger and more complex RAID HDD setups would be more useful to me. I can see how people with a two-drive-bay laptop might want a single HDD/SSD combo, but honestly, in that case two SSDs make more sense just from a reliability standpoint.

              But there's still a big restriction with bcache as I understand it: once you've set up a device to use it, you cannot remove the cache without also blowing away the device. That is a huge problem; it's a commitment you can't back out of without major work. Not fun.

              Which is why I use lvmcache instead. So it would be really nice to see a comparison of bcache vs lvmcache vs plain HDD setups as above (a rough sketch of an lvmcache layout follows below). My personal setup *seems* faster, but I haven't really stressed it much. I'm also running RAID1 HDDs with the cache on mirrored SSDs, since I don't trust disks not to fail on me.

              Thanks again for doing these tests.
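
              For anyone curious what the lvmcache route looks like, here is a minimal sketch of attaching a cache pool and later detaching it again without touching the data. The device names (/dev/md0 for the HDD RAID1, /dev/md1 for the SSD RAID1) and the sizes are hypothetical, not my actual layout:

              ```python
              # Minimal lvmcache attach/detach sketch. Hypothetical devices:
              # /dev/md0 is the HDD RAID1, /dev/md1 the SSD RAID1; sizes are made up.
              # Run as root; requires the LVM2 tools.
              import subprocess

              def run(*cmd):
                  print("+", " ".join(cmd))
                  subprocess.run(cmd, check=True)

              # Put both arrays into one volume group.
              run("pvcreate", "/dev/md0", "/dev/md1")
              run("vgcreate", "vg0", "/dev/md0", "/dev/md1")

              # Origin LV on the HDD mirror, cache pool on the SSD mirror.
              run("lvcreate", "-y", "-n", "data", "-L", "500G", "vg0", "/dev/md0")
              run("lvcreate", "-y", "--type", "cache-pool", "-n", "fast", "-L", "100G", "vg0", "/dev/md1")

              # Attach the cache to the origin...
              run("lvconvert", "-y", "--type", "cache", "--cachepool", "vg0/fast", "vg0/data")

              # ...and later detach it again; the origin LV and its data stay intact,
              # which is the part bcache makes painful.
              run("lvconvert", "-y", "--uncache", "vg0/data")
              ```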

              Comment


              • #27
                Originally posted by digitus2001 View Post
                A couple of points:
                - I wish the article detailed whether bcache was warmed up at all: in my experience with Btrfs native RAID1 on two bcached 6 TB HDDs plus a 200 GB SSD, it takes nearly a full week of normal activity for the cache to be 'warm'.
                - Tests that involve random IO over the whole filesystem will of course be terrible, and are nearly meaningless.
                - Repeated IO over a data set smaller than the cache device is where bcache shines. It would be nice to put more test emphasis on these use cases.

                Disclosure: My experience with bcache+btrfs is great but anecdotal. I would never want to use an HDD in any other way...
                This is a great point. The tests don't really tell us anything until the cache is warmed up. So maybe you need to do a git pull and build of the Linux kernel, then clean and do another pull? Or do the FS-Mark tests have an option to warm up the cache before running? In my experience at home, my lvmcache has been pretty darn good; it smooths out performance, in my anecdotal experience. (A simple warm-up sketch is below.)

                But finding a proper benchmark is the key here. I can see why Michael might not be interested in writing a specific benchmark, though; he's more into the benchmarking process in general.
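
                Something like this minimal warm-up sketch is what I have in mind; the path is hypothetical and the working set is assumed to be smaller than the SSD cache. Drop the page cache between passes (echo 3 > /proc/sys/vm/drop_caches as root), otherwise RAM caching hides the difference between a cold and a warm SSD cache:

                ```python
                # Warm-up sketch: read the same working set several times and time each
                # pass. On a cold bcache/lvmcache the first pass should run at HDD speed
                # and later passes closer to SSD speed. WORKING_SET is a hypothetical
                # directory on the cached device, assumed smaller than the SSD cache.
                import os
                import time

                WORKING_SET = "/mnt/cached/linux-src"   # hypothetical path

                def read_tree(root):
                    total = 0
                    for dirpath, _, filenames in os.walk(root):
                        for name in filenames:
                            try:
                                with open(os.path.join(dirpath, name), "rb") as f:
                                    while chunk := f.read(1 << 20):
                                        total += len(chunk)
                            except OSError:
                                pass   # skip files that vanish or can't be read
                    return total

                for i in range(3):
                    start = time.time()
                    nbytes = read_tree(WORKING_SET)
                    elapsed = time.time() - start
                    print(f"pass {i + 1}: {nbytes / 1e6:.0f} MB in {elapsed:.1f} s "
                          f"({nbytes / 1e6 / max(elapsed, 1e-9):.0f} MB/s)")
                ```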

                Comment
