
HDD/SSD Performance With MDADM RAID, BCache On Linux 4.14


• #11
  For a moment, I thought this was a test of bcachefs... lel



• #12
  Typo:

  Originally posted by phoronix:
  BlogBench ran into problems with BCache's wirteback mode.



• #13
  I have another idea for a comparative benchmark: mdadm RAID with an SSD for caching vs mdadm RAID with an SSD for the LVM thin-provisioning metadata volume.

  LVM thin provisioning has been around for some time now. The documentation mentions that it is possible to have the metadata volume for an LVM thin pool on a separate disk, and it actually suggests doing that for better performance (a rough sketch of such a setup follows below). However, I have not been able to find *any* benchmarks on how significant these performance benefits might be.

  Now, if I have a large RAID5 or RAID6 array consisting of SSDs or HDDs and then a (significantly faster) SSD for caching, what should I do?

  Should I use the SSD as a cache for mdadm with bcache, or should I do thin-provisioned LVM with the metadata on the SSD?

  This question, unfortunately, goes a lot deeper. Having the SSD serve as a cache for mdadm allows me to use a filesystem directly on top of it, without the additional layer of LVM. However, for various reasons I might want to use LVM anyway, and that opens up more possibilities: mdadm plus cache also lets me use non-thin-provisioned LVM. Maybe that performs better?

  Also, when storing the metadata on a separate disk outside the RAID, I probably want at least two SSDs in a RAID1 configuration for that. But since the maximum size of an LVM thin-pool metadata volume is 16 GiB, I can buy very small SSDs.

  So the benchmarks should probably be:

  - mdadm RAID 5/6 plus mdadm RAID1 (SSDs) with LVM thin provisioning, the metadata volume being on the RAID1
  - mdadm RAID 5/6 with an SSD for caching, the FS directly on top of the RAID
  - mdadm RAID 5/6 with an SSD for caching, traditional LVM on top of the RAID and then the FS on top of the LVM
  - mdadm RAID 5/6 with an SSD for caching, LVM thin provisioning on top of the RAID and then the FS on top of the LVM

  And for bonus:
  - mdadm RAID 5/6 with an SSD for caching plus RAID1 (SSDs) with LVM thin provisioning, the metadata volume being on the RAID1
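
  To make the metadata idea concrete, here is a rough sketch (all device and volume names are made up; assume /dev/md0 is the big RAID5/6 array and /dev/md1 the small SSD RAID1):

      pvcreate /dev/md0 /dev/md1
      vgcreate vg0 /dev/md0 /dev/md1
      # data LV on the big array, metadata LV pinned to the SSD RAID1
      lvcreate -n pool0 -L 10T vg0 /dev/md0
      lvcreate -n pool0meta -L 8G vg0 /dev/md1
      # combine them into a thin pool, then carve out a thin volume
      lvconvert --type thin-pool --poolmetadata vg0/pool0meta vg0/pool0
      lvcreate -V 2T --thin -n data vg0/pool0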



• #14
  I really hope some bcache expert could comment on these results. They look really disappointing.



          • #15
            Originally posted by kravemir View Post
            I'm thinking about 1TB HDD + 128GB SSD cache setup for laptop usage.
            dont bother... just buy a 1tb ssd and be done with it.



• #16
  The performance hit of RAID1 was surprising... and disappointing.



• #17
  If you don't tune bcache, the performance is typically horrible... It was designed when SSDs were in their infancy and significantly slower, so it has all kinds of knobs and monitors to disable itself and just pass I/O through to the backing device. For example, if it detects sequential I/O it bypasses the cache, and if it detects more than X microseconds of latency to the cache device it also bypasses it.

  So if not properly tuned, it will most likely just add overhead and be slower in most cases.
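
  For reference, these are the knobs in question (a sketch; the cache-set UUID path varies per setup, and bcache0 is illustrative):

      # never bypass the cache for sequential I/O (default cutoff is 4 MB)
      echo 0 > /sys/block/bcache0/bcache/sequential_cutoff
      # disable the latency-based bypass (thresholds are in microseconds)
      echo 0 > /sys/fs/bcache/<cache-set-uuid>/congested_read_threshold_us
      echo 0 > /sys/fs/bcache/<cache-set-uuid>/congested_write_threshold_us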



                • #18
                  Couple of points:
                  - I wish the article detailed if bcache was warmed up at all: My experience with BTRFS with native raid1 of two bcached 6tb HDD + 200gb SSD requires nearly a full week of normal activity for the cache to be 'warm'.
                  - The tests that involve random IO over the whole filesystem will of course be terrible and is nearly a meaningless test.
                  - Repeated IO over a data set smaller than the cache device is where bcache shines. Would be nice to put more test emphasis these use cases.

                  Disclosure: My experience with bcache+btrfs is great but anecdotal. I would never want to use an HDD in any other way...
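
  A rough way to judge cache warmth (bcache0 is illustrative) is to watch the hit ratio bcache reports:

      # cache hit ratio (percent) since the device was registered
      cat /sys/block/bcache0/bcache/stats_total/cache_hit_ratio
      # the same over shorter windows
      cat /sys/block/bcache0/bcache/stats_day/cache_hit_ratio
      cat /sys/block/bcache0/bcache/stats_hour/cache_hit_ratio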



                  • #19
                    Originally posted by vimja View Post
                    I have another idea for a comparative benchmark: mdadm RAID with an SSD for caching vs mdadm RAID with an SSD for LVM thin-provisioning metadata volume.
                    I also would like to know how well lvmcache does in comparison to bcache.
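
  For what it's worth, a minimal lvmcache sketch to benchmark against (device names made up; /dev/md0 is the array, /dev/sdx the SSD):

      vgcreate vg0 /dev/md0 /dev/sdx
      lvcreate -n data -L 1T vg0 /dev/md0
      # build a cache pool on the SSD and attach it to the data LV
      lvcreate --type cache-pool -n cpool -L 100G vg0 /dev/sdx
      lvconvert --type cache --cachepool vg0/cpool vg0/data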



                    • #20
                      For those interested in benchmarks on ZFS with server hardware: we have a server with a 6-disk RAID 10 array (real, hardware RAID, on a Dell PERC controller with 1GiB of battery-backed RAM). This array is faster than any single SSD in all tests that we did. Using an SSD for the ZFS cache would actually slow it down. Of course, the battery-backed RAM allowing the array to operate in write-back mode is the key here. RAM is orders of magnitude faster than flash, while flash itself is not orders of magnitude faster than spinning disks.

                      Instead of hybrid disks with a small flash cache, I'd like to see hybrid disks with a battery-backed RAM cache.

