Attempting To Try Out BCache On The Linux 4.1 Kernel


  • Attempting To Try Out BCache On The Linux 4.1 Kernel

    Phoronix: Attempting To Try Out BCache On The Linux 4.1 Kernel

    A few days ago I set out to try BCache on the Linux 4.1 kernel, now that this caching feature has had some time to mature in the mainline kernel. BCache operates at the Linux kernel's block layer, letting a solid-state drive (or other faster drive) act as a cache for a larger-capacity, traditional rotating hard drive.
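
    For anyone who hasn't set it up before, the basic bcache-tools workflow is only a few commands. A rough sketch follows; the device names (/dev/sdb for the HDD, /dev/sdc for the SSD) are placeholders, and make-bcache overwrites whatever is on them:

    Code:
    # Format the rotating drive as the backing device and the SSD as the cache device
    make-bcache -B /dev/sdb          # backing device (HDD)
    make-bcache -C /dev/sdc          # cache device (SSD); prints the cache set UUID

    # Register the devices if udev has not already done so
    echo /dev/sdb > /sys/fs/bcache/register
    echo /dev/sdc > /sys/fs/bcache/register

    # Attach the cache set to the backing device using the UUID printed above
    echo <cset-uuid> > /sys/block/bcache0/bcache/attach

    # Put a filesystem on the combined device and mount it as usual
    mkfs.ext4 /dev/bcache0
    mount /dev/bcache0 /mnt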


  • #2
    Yeah, I think synthetic tests where files are created on the fly just aren't a good fit for how BCache works. A boot time comparison would be more interesting, I believe.

    I'm considering switching to BCache myself (and am also on a Toshiba SSD), but it makes me wonder if it wouldn't actually be better to keep my current setup (/ on SSD, /home on HDD, ~/.cache and ~/.config symlinked to some place on /).
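
    For reference, the symlink approach is nothing more than relocating the chatty dot-directories onto the SSD-backed root and leaving links behind; a rough sketch (the /ssd-cache location is just an example):

    Code:
    # Run as the user, with nothing using the directories; / is on the SSD, /home on the HDD
    mkdir -p /ssd-cache/$USER
    for d in .cache .config; do
        mv "$HOME/$d" "/ssd-cache/$USER/$d"
        ln -s "/ssd-cache/$USER/$d" "$HOME/$d"
    done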



    • #3
      Originally posted by GreatEmerald View Post
      Yeah, I think synthetic tests where files are created on the fly just aren't a good fit for how BCache works. A boot time comparison would be more interesting, I believe.
      I have yet to figure out how to bcache my root directory. The last time I tried (with the initial Debian jessie release, IIRC) it didn't work: the installer didn't support it, and a manual setup didn't boot, so it's probably initramfs related (a quick way to check the bcache state is sketched below). However, it works great on my home directory (a 512GB 840 Pro caching an 8x3TB RAID6 with btrfs).

      Michael, I saw good results with compiling.
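
      When a manual setup fails to come up, it helps to confirm the superblocks and the attachment state before blaming the initramfs. A rough check, assuming the bcache device shows up as bcache0 and /dev/sdb1 and /dev/sdc are the backing and cache devices:

      Code:
      # Inspect the superblocks written by make-bcache
      bcache-super-show /dev/sdb1      # backing device
      bcache-super-show /dev/sdc       # cache device

      # Check that the backing device actually found its cache set
      cat /sys/block/bcache0/bcache/state   # "clean"/"dirty" = attached, "no cache" = not attached
      ls /sys/fs/bcache/                    # registered cache set UUIDs show up here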



      • #4
        I think I have a 20GB SSD lying around somewhere that would be great to use for this, but I'm definitely interested to see what can be done to improve the results.



        • #5
          Hi Michael,

          I think there is an opportunity here for a storage subsystem showdown: bcache + single disk and bcache + md array versus ZFS on a single disk, ZFS raidz, and ZFS with a ZIL cache device.
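
          The ZFS side of such a comparison is quick to set up; a rough sketch of a raidz pool with a separate log (ZIL/SLOG) device and an L2ARC cache device, with placeholder disk names:

          Code:
          # Three HDDs in raidz, one SSD partition as SLOG (ZIL) and another as L2ARC
          zpool create tank raidz /dev/sdb /dev/sdc /dev/sdd \
              log /dev/sde1 \
              cache /dev/sde2
          zpool status tank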



          • #6
            What is the SQLite benchmark doing that ends up being faster on a regular HDD than on an SSD? If that workload can degrade performance to sub-HDD levels, then we can't be sure that BCache isn't triggering similar issues.



            • #7
              While I don't know much about the nature of the benchmarks used, IIRC bcache actually bypasses the cache for long contiguous accesses (think copying/streaming a file); the sysfs knobs that control this are sketched below.

              Originally posted by gigaplex View Post
              What is the SQLite benchmark doing that ends up being faster on a regular HDD than on an SSD? If that workload can degrade performance to sub-HDD levels, then we can't be sure that BCache isn't triggering similar issues.
              The variance on that one result was pretty high, but not on the others. That suggests to me that it's either a different issue or that it's inconsistently triggering the bug.
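
              For benchmarking it can be worth turning that bypass off; a rough sketch, assuming the device is bcache0 (the cache set UUID is whatever appears under /sys/fs/bcache/):

              Code:
              # Sequential requests larger than this threshold skip the cache (default ~4 MB)
              cat /sys/block/bcache0/bcache/sequential_cutoff
              echo 0 > /sys/block/bcache0/bcache/sequential_cutoff   # 0 disables the bypass

              # bcache also bypasses the cache when it looks congested; 0 disables those checks
              echo 0 > /sys/fs/bcache/<cset-uuid>/congested_read_threshold_us
              echo 0 > /sys/fs/bcache/<cset-uuid>/congested_write_threshold_us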



              • #8
                I already have a few systems with bcache in production. My gaming computer is the more interesting case: since the gaming machine uses a lot of power but games use a lot of space, I moved the hard disks to a Thecus N4200ECO. The Thecus runs plain Debian with my own compiled kernel.
                The Thecus uses RAID1 over WD Reds, then LVM, and then exports those LVM volumes over FCoE to a dedicated VLAN on a gigabit switch.
                On my gaming system I have a single SSD as the bcache cache with the FCoE devices as the backing store.
                I use btrfs on bcache (local SSD cache, FCoE backing store) for my Steam partition, and it works wonderfully well.
                Writeback is a must, since it crawls in writethrough mode (it's a one-line sysfs switch, sketched below).
                I also have colocation servers where I use part of a dedicated RAID1 on SSDs (Samsung 840 EVO) for essential vservers, and bcache with an SSD cache and WD Reds as backing for the non-essential stuff.
                I once wondered why copying a tree was so slow; then I turned on writeback, and bam, finished.
                Bcache with writeback can push your random I/O throughput very high. It will not make normal streaming writes faster, and it will not speed up applications that never fsync(). The kernel's block cache already takes a lot of the pain out, but once you start doing sync activity on the disk, the sync completes as soon as the data is on the SSD. That takes the IOPS of a normal disk (really a function of the rotational speed of the disk, as seek times are lower than that these days) from a mere 200 IO/s to 40k or more IO/s.
                It all depends on your type of system load, though, and not everything needs writeback. A system partition can benefit a little from the caching (usually most of it is already in memory), but the few times you write to it...

                Anyway: good luck trying to get a fitting test on that :-). A lot of virtual servers doing the same test might do the trick.
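
                For what it's worth, switching between writethrough and writeback is a one-line sysfs write, and an fsync-heavy fio run is one way to see the difference; a rough sketch, assuming the device is bcache0 and it's mounted under /mnt/bcache:

                Code:
                # The active cache mode is shown in [brackets]
                cat /sys/block/bcache0/bcache/cache_mode
                echo writeback > /sys/block/bcache0/bcache/cache_mode

                # Small random writes with an fsync after each one; this is where writeback shines
                fio --name=syncwrite --filename=/mnt/bcache/testfile --size=1G \
                    --rw=randwrite --bs=4k --fsync=1 --runtime=60 --time_based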



                • #9
                  I like how it says "6Gb/s SATA 3" on the SSD.



                  • #10
                    If you cache on the SSD, the kernel has to read data back off the SSD in the background and write it out to the underlying electromechanical drive. So if you don't have enough memory bandwidth or free CPU, could that become the bottleneck as the memory and SATA buses get saturated?
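
                    One way to check is to watch how much dirty data is waiting to be flushed and how busy each device is while the test runs; a rough sketch, assuming the cache/backing pair behind bcache0 is sda (SSD) and sdb (HDD):

                    Code:
                    # Dirty data still waiting to be written back to the HDD, and the writeback target
                    cat /sys/block/bcache0/bcache/dirty_data
                    cat /sys/block/bcache0/bcache/writeback_percent

                    # If the HDD sits near 100% utilisation during the benchmark,
                    # background writeback is competing with the test I/O
                    iostat -x 1 sda sdb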

