EnhanceIO: New Solid State Drive Caching For Linux

  • EnhanceIO: New Solid State Drive Caching For Linux

    Phoronix: EnhanceIO: New Solid State Drive Caching For Linux

    A commercial company has opened up their Linux driver that is based upon their SSD (Solid-State Drive) caching software product. This code is designed to use SSDs as cache devices for traditional rotating hard drives. This new SSD caching driver is based upon Facebook's Flashcache...


  • #2
    Good news; I had zero confidence that Facebook would ever get it included in the Linux kernel.

    There's another external module doing what flashcache does; I wonder what became of it...



    • #3
      Nice! I hope they will push to get it included in mainline.



      • #4
        This is good news. I've been using the zfs/zfsonlinux L2ARC, but was just testing out flashcache again yesterday on non-ZFS filesystems. Bcache is the only other current project providing a comparable caching solution.

        I may be wrong, but I couldn't see a way to build Bcache against a pre-existing Linux kernel--the install method is to download a full-blown, pre-patched kernel--too much work and too intrusive. Let me know if there is a simpler way.

        The latest flashcache was a small git clone and compile--it took just a few minutes, with no problems against my installed 3.7 kernel source. I did have an issue with flashcache not working, which I need to look into more: it caused sector read errors when used on an md RAID device (a raid0 built from two different-sized raid1 devices). Flashcache worked fine on a simple md device.
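        For reference, the flashcache build described above is roughly the following (repository URL, kernel paths, and device names here are illustrative; check the project's README for current instructions):

```shell
# Clone and build flashcache out of tree against the running kernel
# (assumes matching kernel headers/source are installed).
git clone https://github.com/facebookarchive/flashcache.git
cd flashcache
make KERNEL_TREE=/lib/modules/$(uname -r)/build
sudo make install
sudo modprobe flashcache

# Bind an SSD partition as a write-back cache in front of an HDD
# partition, then mount the resulting device-mapper target:
sudo flashcache_create -p back cachedev /dev/sdb1 /dev/sda1
sudo mount /dev/mapper/cachedev /mnt/cached
```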



        • #5
          Originally posted by mgmartin:
          This is good news. I've been using the zfs/zfsonlinux L2ARC, but was just testing out flashcache again yesterday on non-ZFS filesystems. [...]
          You can use L2ARC and SLOG devices with non-ZFS filesystems. Just put them on a zvol.
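          A minimal sketch of that zvol approach (pool name, device names, and size are made up):

```shell
# Build a pool on an HDD with one SSD as L2ARC (read cache) and a
# second SSD as SLOG (separate intent log); devices are illustrative.
sudo zpool create tank /dev/sda cache /dev/nvme0n1 log /dev/nvme1n1

# Carve a 100G block device (zvol) out of the pool...
sudo zfs create -V 100G tank/extvol

# ...and put any non-ZFS filesystem on it; reads and synchronous
# writes still pass through the pool's L2ARC/SLOG underneath.
sudo mkfs.ext4 /dev/zvol/tank/extvol
sudo mount /dev/zvol/tank/extvol /mnt/ext4-on-zfs
```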



          • #6
            Understood. The goal was to test a cache device without pulling in the zfs/spl dependencies.



            • #7
              A follow-up to the flashcache issue I mentioned: it seems flashcache has problems with my 3TB 4K-sector drive.
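              One quick way to see whether a drive is 4K-native or 512-byte-emulated is to compare the logical and physical block sizes the kernel reports (pure sysfs reads, so this is safe to run anywhere):

```shell
# Print logical vs. physical sector size for every block device.
# A 4K-native drive shows physical=4096; "512e" drives report
# logical=512 with physical=4096, a common trigger for block-layer
# corner cases in caching drivers.
for q in /sys/block/*/queue; do
    [ -e "$q/physical_block_size" ] || continue
    printf '%s: logical=%s physical=%s\n' \
        "$(basename "$(dirname "$q")")" \
        "$(cat "$q/logical_block_size")" \
        "$(cat "$q/physical_block_size")"
done
```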



              • #8
                Good news. I only use obsolete magnetic disks for bulk data storage, where caching is irrelevant.

                I wouldn't waste an SSD on this.



                • #9
                  My initial impressions with EnhanceIO are very positive.

                  The installation was simple: copy the source directory into a Linux kernel source tree, run a patch to hook the source into the kernel make system, then compile the kernel modules (I just did a full make to rebuild my entire kernel). It's a little more work than flashcache, which builds its modules entirely outside the kernel source tree, but the directions and process were clear enough. The code also looks fairly small, which hopefully means it will be easy to maintain.
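                  As a rough sketch, those steps look like the following (the repository URL and directory layout are assumptions based on the EnhanceIO project as published; the exact patch mechanism and module names may differ, so treat this as illustrative):

```shell
# Fetch EnhanceIO and drop the driver into an existing kernel tree.
git clone https://github.com/stec-inc/EnhanceIO.git
cp -r EnhanceIO/Driver/enhanceio /usr/src/linux/drivers/block/

# After applying the repo's patch to hook the driver into the
# kernel's Kconfig/Makefile, rebuild -- a full kernel build as the
# post describes, or just the modules:
cd /usr/src/linux
make && sudo make modules_install
sudo modprobe enhanceio
```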

                  A few things I really like with EnhanceIO:

                  1. The cache is completely transparent. You can add a cache device to, or remove one from, a mounted disk or individual partition. I added a cache device to a mounted, in-use partition, then removed it with no issues. It also means no separate dm mapping to the physical device, so you continue to mount and access the cached device through its default /dev entry.

                  2. Along with being transparent, the SSD cache device can fail and reads/writes to the actual device will continue. To prevent data loss in a write-back configuration, you can mirror SSD devices.

                  3. Everything is done through the /proc interface. There is one Python script used to create and manage the cache devices, and lots of stats are available through /proc.

                  4. Different cache replacement policies: random, FIFO, and LRU.
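                  Day-to-day management through that Python script looks roughly like this (flag spellings are from the EnhanceIO README as I recall it -- verify with eio_cli --help; device and cache names are made up):

```shell
# Create an LRU, write-through cache named "eiocache" pairing an SSD
# partition with an already-mounted HDD partition. Note there is no
# new /dev node: /dev/sda1 continues to be used directly.
sudo eio_cli create -d /dev/sda1 -s /dev/sdb1 -p lru -m wt -c eiocache

# Statistics and state are exposed under /proc:
cat /proc/enhanceio/eiocache/stats

# The cache can be removed while the partition stays mounted:
sudo eio_cli delete -c eiocache
```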

                  My favorite feature, and the one I think sets EnhanceIO apart from other cache solutions (outside of zfs), is the transparency. I'm used to adding cache devices to running zfs filesystems, so it was strange at first, when setting up flashcache, to have to create the cache before mounting and accessing the underlying cached device.

                  What we need now is a feature matrix comparing the available cache solutions as they continue to mature and prepare for inclusion in the kernel along with some performance benchmarks.

