Linux DM-VDO "Virtual Data Optimizer" Preparing To Land In The Upstream Kernel


    Phoronix: Linux DM-VDO "Virtual Data Optimizer" Preparing To Land In The Upstream Kernel

    The Linux DeviceMapper code is preparing to introduce DM-VDO as the Virtual Data Optimizer that can provide inline deduplication, compression, zero-block elimination, thin provisioning, and other features. DM-VDO has long existed out-of-tree and should be a very useful addition to mainline...
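The features the article lists can be illustrated with a toy model. The sketch below is purely conceptual, not VDO's actual implementation (VDO uses its UDS index and on-disk structures rather than an in-memory SHA-256 map), but it shows the basic idea of inline deduplication with reference counting and zero-block elimination at 4 KiB granularity:

```python
import hashlib

BLOCK_SIZE = 4096  # VDO deduplicates at 4 KiB block granularity

class DedupStore:
    """Toy in-memory model of inline block deduplication."""

    def __init__(self):
        self.blocks = {}   # digest -> (data, refcount)
        self.logical = []  # logical block map: digest, or None for a zero block

    def write(self, data: bytes):
        assert len(data) == BLOCK_SIZE
        if data == b"\x00" * BLOCK_SIZE:
            # Zero-block elimination: record the mapping, store nothing.
            self.logical.append(None)
            return
        digest = hashlib.sha256(data).hexdigest()
        if digest in self.blocks:
            # Dedup hit: bump the reference count instead of storing again.
            stored, refs = self.blocks[digest]
            self.blocks[digest] = (stored, refs + 1)
        else:
            self.blocks[digest] = (data, 1)
        self.logical.append(digest)

    def physical_blocks(self):
        return len(self.blocks)

store = DedupStore()
a = b"A" * BLOCK_SIZE
b = b"B" * BLOCK_SIZE
for blk in (a, b, a, a, b"\x00" * BLOCK_SIZE):
    store.write(blk)
print(store.physical_blocks())  # prints 2: five logical writes, two physical blocks
```

The real target additionally compresses the blocks it does store, and packs compressed blocks together on disk.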


  • #2
    VDO is quite cool, and if it's integrated into Stratis it could bring Stratis quite a bit closer to Btrfs and even ZFS, thanks to its online deduplication.



    • #3
      It's a filesystem-agnostic solution. It would be interesting to see some benchmarks with it, especially after some time in use, to evaluate the effects of fragmentation, deduplication and compression.



      • #4
        Ha, I always wanted to try this on my home file server. Does anyone have performance tests? I only need to saturate a 1 Mbit/s network with an old Phenom II quad-core. Most of the impact will probably be on writes (which can be cached), and I don't have many small files with duplication, so I might get away with much bigger block sizes.



        • #5
          Would this be sandwiched between LUKS and a filesystem?



          • #6
            Originally posted by Girolamo_Cavazzoni View Post
            Would this be sandwiched between LUKS and a filesystem?
            It's a device-mapper target, so any combination that fits your needs works: directly on your block device, or even a file on your file system attached through a loop device, then DM-VDO -> file system.

            Of course, LUKS above DM-VDO doesn't make much sense, because you can't deduplicate encrypted data.
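One possible stack that keeps deduplication effective puts dm-crypt *below* VDO, so VDO sees plaintext: filesystem -> dm-vdo -> LUKS -> block device. A rough command sketch, assuming LVM's VDO integration (`lvcreate --type vdo`) and hypothetical device/VG names:

```shell
# Open an encrypted lower layer first (device name is an example).
cryptsetup luksFormat /dev/sdb1
cryptsetup open /dev/sdb1 secure

# Put LVM on the plaintext mapping and create a VDO LV on top of it.
pvcreate /dev/mapper/secure
vgcreate vg0 /dev/mapper/secure
lvcreate --type vdo --size 100G --virtualsize 1T --name vdo0 vg0

# Finally, a filesystem on the thin VDO volume.
mkfs.xfs /dev/vg0/vdo0
```

This is a setup recipe, not something tested here; sizes and names would need adapting, and the virtual size should reflect the dedup/compression ratio you actually expect.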



            • #7
              Cool! Can't wait for some benchmarks on both HDDs and SSDs... but I think the workloads to test something like this must be carefully crafted (and compared with the no-VDO case on latency/bandwidth metrics).

              Later edit: given that I just put a raid1/dm-integrity setup into production, and after reading the VDO documentation, I was wondering what stopped them from adding a block checksum, since they do this anyway for reference counting...
              Last edited by adriansev; 05 March 2024, 02:30 PM.



              • #8
                Originally posted by Anux View Post
                Ha, I always wanted to try this on my home file server. Does anyone have performance tests? I only need to saturate a 1 Mbit/s network with an old Phenom II quad-core. Most of the impact will probably be on writes (which can be cached), and I don't have many small files with duplication, so I might get away with much bigger block sizes.
                It works reasonably well, but sometimes I/O becomes much slower.
                However, it has poor tolerance to power failures. The last time I tested it on my laptop, with Btrfs above it, my metadata became so corrupted (after a few power failures) that I had to use btrfs restore to retrieve my data (which thankfully worked remarkably well).



                • #9
                  Originally posted by aviallon View Post

                  It works reasonably well, but sometimes I/O becomes much slower.
                  However, it has poor tolerance to power failures. The last time I tested it on my laptop, with Btrfs above it, my metadata became so corrupted (after a few power failures) that I had to use btrfs restore to retrieve my data (which thankfully worked remarkably well).
                  That kind of sucks. I was hoping I could get some feature parity with ZFS by using some-journaling-filesystem+dm-vdo+dm-crypt+mdraid, but it doesn't look like it's safe enough.



                  • #10
                    Originally posted by aviallon View Post
                    However, it has poor tolerance to power failures.
                    Of course, there is much more work to be done before the data actually reaches your block device. This is only intended for RAID controllers with battery backup and servers/PCs with UPSes.

                    I'm not sure if it respects write barriers, or whether they could even help here. For normal home use, a filesystem that includes all those features might be a more robust solution.

