Testing The BCache SSD Cache For HDDs On Linux 4.8


  • #1
    Phoronix: Testing The BCache SSD Cache For HDDs On Linux 4.8

    It has been over a year since we last tested the mainline Linux kernel's BCache support, the block cache that allows solid-state drives to act as a cache for slower hard disk drives. Here are some fresh benchmarks of a SATA 3.0 SSD + HDD with BCache on the Linux 4.8 Git kernel.


  • #2
    Interesting results, thanks. Much slower than I expected, though. I guess there is not much point in testing the faster drives, since the SSD doesn't seem to be the bottleneck here.

    • #3
      Doing write-back caching on a single SSD is unsafe, although in this test there is also only a single HDD. For any data that was important, both would be in RAID 1 pairs.
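      For what it's worth, a rough sketch of that kind of setup (all device names here are placeholders, not the article's hardware): mirror both tiers with mdadm, then layer bcache on top.

        # HDD pair for the backing store, SSD pair for the cache (placeholder devices).
        mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
        mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc /dev/sdd

        make-bcache -B /dev/md0    # backing device
        make-bcache -C /dev/md1    # cache device

        # Attach the cache set (UUID from bcache-super-show /dev/md1), then enable write-back,
        # which is only sane once both sides are mirrored.
        echo <cset-uuid> > /sys/block/bcache0/bcache/attach
        echo writeback > /sys/block/bcache0/bcache/cache_mode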

      • #4
        I'd like to see btrfs support some form of SSD caching. Using bcache seems cumbersome, as you need to plan your SSDs and HDDs, install your filesystem, and once that's done you can't easily change it. With btrfs I would expect to just throw in additional HDDs or SSDs as they fill up or wear out. Too bad nothing seems to be happening in that area.

        • #5
          I wonder if the block (hard sector) and bucket (erase block) sizes are suited to the SSD? make-bcache does try to get the block size automatically, but doesn't know anything about the SSD erase block size:

          -b bucket-size
          Specifies the bucket size. Allocation is done in terms of buckets, and cache hits are counted per bucket; thus a
          smaller bucket size will give better cache utilization, but poorer write performance. The bucket size is intended
          to be equal to the size of your SSD's erase blocks, which seems to be 128k-512k for most SSDs. Must be a power of
          two; accepts human readable units. Defaults to 128k.
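          If the default doesn't match the drive, the bucket size can be pinned explicitly at format time. Something like this (512k is just an assumed erase-block size, and the device names are placeholders; check the drive's datasheet):

            make-bcache -b 512k -C /dev/sdc    # cache device with an explicit bucket size
            make-bcache -B /dev/sdb            # backing device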

          • #6
            Originally posted by ferry View Post
            I'd like to see btrfs support some form of SSD caching. Using bcache seems cumbersome, as you need to plan your SSDs and HDDs, install your filesystem, and once that's done you can't easily change it. With btrfs I would expect to just throw in additional HDDs or SSDs as they fill up or wear out. Too bad nothing seems to be happening in that area.
            You can easily add and/or remove cache devices from a cache set, or equally attach and detach backing devices. The only thing you really need to do differently up front is prepare your backing store and create your filesystem on the bcache block device node instead of on the underlying block device. You don't even need a cache device present.
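            Roughly, for anyone curious (device names are placeholders; the sysfs paths are the ones from the kernel's bcache documentation):

              # Backing device only -- no cache yet; the filesystem goes on /dev/bcache0.
              make-bcache -B /dev/sdb
              mkfs.ext4 /dev/bcache0

              # Later, format an SSD as a cache and attach it by cache-set UUID
              # (the UUID comes from bcache-super-show /dev/sdc).
              make-bcache -C /dev/sdc
              echo <cset-uuid> > /sys/block/bcache0/bcache/attach

              # And detach it again whenever you want, with the filesystem still mounted.
              echo 1 > /sys/block/bcache0/bcache/detach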

            • #7
              How does it compare against lvm cache?

              • #8
                Why are two random Intel SSDs highlighted in the graphs instead of the actual test subjects?

                • #9
                  Originally posted by devius View Post
                  Why are two random Intel SSDs highlighted in the graphs instead of the actual test subjects?
                  +1 [filling space, since message has to be 5 chars long]

                  • #10
                    Originally posted by Thaodan View Post
                    How does it compare against lvm cache?
                    I'm using LVM cache with mirrored SSDs and mirrored disks; any other setup would be insane for reliable storage. But for testing... not a bad idea.

                    The big issue I saw with bcache is that it's all or nothing: if you wanted to remove the cache, you had to move all your data elsewhere to free up the cached devices before you could delete and reuse them. With lvmcache it's much simpler, and I can use the same SSDs to create multiple cache devices (LVs) for different LVs within a VG.

                    That's another gotcha: all your cache devices need to be in the same VG as the LVs you want to protect. So in that case you would need to partition the SSD(s), add the partitions to each VG, and then create the cache LVs inside, roughly as sketched below.

                    But again, I'd love to see some results, if at all possible, comparing the two.
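                    Something like this (vg0, lv_data and /dev/sdc1 are placeholders, not the article's hardware; one cache pool per LV you want cached):

                      # Assumed layout: data LV vg0/lv_data on HDDs, SSD partition /dev/sdc1 for the cache.
                      pvcreate /dev/sdc1                    # turn the SSD partition into a PV
                      vgextend vg0 /dev/sdc1                # the cache must live in the same VG as the data LV
                      lvcreate --type cache-pool -L 50G -n lv_data_cache vg0 /dev/sdc1
                      lvconvert --type cache --cachepool vg0/lv_data_cache vg0/lv_data
                      # Undoing it later is a single step, no shuffling data around:
                      lvconvert --uncache vg0/lv_data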
