F2FS With Linux 5.12 Lets You Configure The Zstd/LZ4 Compression Ratio

    Phoronix: F2FS With Linux 5.12 Lets You Configure The Zstd/LZ4 Compression Ratio

    The Flash-Friendly File-System (F2FS) with the Linux 5.12 kernel will allow configuring the compression ratio when enabling the transparent file-system compression support with LZ4 or Zstd...
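    For a rough idea of how the new knob is exposed, the level is appended to the existing compress_algorithm mount option; going by the 5.12 patch series, plain lz4 has no level and the level-capable variant is selected as lz4hc. The device path, mount point and level values below are only illustrative:

    Code:
    # mount -o compress_algorithm=zstd:6 /dev/sdXn /mnt/f2fs
    # mount -o compress_algorithm=lz4hc:9 /dev/sdXn /mnt/f2fs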


  • #2
    When are they going to add transparent compression to ext4? I recently tried ZFS with the lz4 compression option. Not only is disk I/O faster with compression enabled, but I'm also getting a 1.30x compression ratio on mixed data. That's effectively 30% more disk space for free! LZ4 compression is a win-win and should be a no-brainer default for any modern filesystem.
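    For anyone who wants to check the same thing on their own pool, it's roughly this (the dataset name tank/data is just a placeholder):

    Code:
    # zfs set compression=lz4 tank/data
    # zfs get compression,compressratio tank/data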

    • #3
      Curious if F2FS or similar is useful for an old laptop that only has an internal IDE HDD and USB 2.0 ports. Compression would probably help work against the I/O bottleneck a bit. It's sporting 2GB of RAM with another 2GB of ZRAM (which has helped a fair bit), and the CPU isn't under much load.
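      (For reference, zramctl from util-linux and swapon --show give a quick read on how much the ZRAM side is actually compressing:)

      Code:
      # zramctl
      # swapon --show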

      • #4
        There is still no way to enable f2fs compression on existing f2fs partitions.
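        As far as I can tell the compression feature flags have to be set at mkfs time, so the only route for an existing partition is to back it up and reformat, along these lines (the device name is a placeholder):

        Code:
        # mkfs.f2fs -f -O extra_attr,inode_checksum,sb_checksum,compression /dev/sdXn
        # mount -o compress_algorithm=lz4,compress_extension=txt /dev/sdXn /mnt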

        • #5
          Originally posted by polarathene
          Curious if F2FS or similar is useful for an old laptop that only has an internal IDE HDD and USB 2.0 ports. Compression would probably help work against the I/O bottleneck a bit. It's sporting 2GB of RAM with another 2GB of ZRAM (which has helped a fair bit), and the CPU isn't under much load.
          I'm not sure F2FS is a good fit for an HDD: because of its log-structured approach it will generate a lot of fragmentation. It's perfect for flash media, though, where that same approach reduces write amplification and the overhead on the microcontroller.
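          If you do try it on the HDD anyway, filefrag from e2fsprogs gives a quick read on how badly files end up fragmented on any filesystem that supports FIEMAP (the path below is just an example):

          Code:
          # filefrag -v /path/to/some/large/file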

          • #6
            Can anyone using F2FS with compression see what I'm doing wrong here?

            I'm testing F2FS on Debian 11, running its stock 5.10 kernel.

            Here's me testing BtrFS first. I take an empty logical volume (purely for testing), wipefs it, format it, and mount it with compression enabled. Then I check the disk space with "df", write a 1GB file of zeros, and check the disk space again:

            Code:
            # wipefs -af /dev/vg0/ftest
            # mkfs.btrfs -f -msingle -dsingle /dev/vg0/ftest
            # mount -o compress-force=zstd /dev/vg0/ftest /f
            # cd /f
            
            # df -hT ./
            Filesystem Type Size Used Avail Use% Mounted on
            /dev/mapper/vg0-ftest btrfs 5.0G 3.4M 5.0G 1% /f
            
            # dd if=/dev/zero of=test bs=1M count=1024
            # sync
            # ls -lah
            -rw-r--r-- 1 root root 1.0G Feb 14 10:42 test
            
            # df -hT ./
            Filesystem Type Size Used Avail Use% Mounted on
            /dev/mapper/vg0-ftest btrfs 5.0G 37M 5.0G 1% /f
            The 1GB file of zeros takes up only about 30MB of space with BtrFS using zstd compression.
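            df is only a rough estimate on BtrFS, by the way; the compsize tool (a separate package) reports the actual on-disk vs. uncompressed size per file:

            Code:
            # compsize /f/test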

            Repeating the process with F2FS:

            Code:
            # wipefs -af /dev/vg0/ftest
            # mkfs.f2fs -f -O extra_attr,inode_checksum,sb_checksum,compression /dev/vg0/ftest
            # mount -o compress_algorithm=zstd,compress_extension=txt /dev/vg0/ftest /f
            # chattr -R +c /f
            # cd /f
            
            # df -hT ./
            Filesystem Type Size Used Avail Use% Mounted on
            /dev/mapper/vg0-ftest f2fs 5.0G 339M 4.7G 7% /f
            
            # dd if=/dev/zero of=test.txt bs=1M count=1024
            # sync
            # ls -lah
            -rw-r--r-- 1 root root 1.0G Feb 14 10:48 test.txt
            
            # df -hT ./
            Filesystem Type Size Used Avail Use% Mounted on
            /dev/mapper/vg0-ftest f2fs 5.0G 1.4G 3.7G 27% /f
            The 1GB file of zeros takes up a full 1GB, apparently uncompressed. It's a 5GB volume, and writing 5GB of zero data fills it completely. Compare that to BtrFS, where I can write many times the volume's size and compression keeps it from filling up.

            I've repeated the test by creating an empty file first, setting the +c attribute, and then using both dd and cat to append zero data instead of writing a new file. Same result.
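            Roughly, that second attempt looked like this (the file name is arbitrary, and the dd line can equally be a cat append):

            Code:
            # touch test2.txt
            # chattr +c test2.txt
            # dd if=/dev/zero bs=1M count=1024 >> test2.txt
            # sync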

            I've also repeated the test with lzo and lzo-rle, same deal.

            Am I missing something? Either in how I'm preparing the volume, mounting it, attempting to force compression, or something else?

            • #7
              I get the same results. F2FS compression is working (high CPU usage on flushes), but it doesn't save any space, so it is pretty useless.
              phoronix Michael, can you please investigate the issue? I can't believe the F2FS devs have been working hard for months, adding new functionality to a feature that apparently doesn't work (and maybe never did). I'm currently using Manjaro with a 5.12 RC kernel, and yet F2FS saves literally nothing even on easily compressible data. Just a waste of CPU time. /sys/kernel/debug/f2fs/status confirms that compression is indeed enabled and working. Thanks!
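              For anyone reproducing this, the check is just reading the debugfs stats (mount debugfs first if it isn't already; the exact counters shown vary by kernel version):

              Code:
              # mount -t debugfs none /sys/kernel/debug
              # cat /sys/kernel/debug/f2fs/status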
