
F2FS With Linux 5.11 To Support Casefolding With Encryption


  • F2FS With Linux 5.11 To Support Casefolding With Encryption

    Phoronix: F2FS With Linux 5.11 To Support Casefolding With Encryption

    For over a year now the Flash-Friendly File-System (F2FS) has supported case-folding for optional case-insensitive file/folder lookups. For the past several years F2FS has also supported fscrypt-based file encryption. Now, as we roll into 2021, support for combining casefolding with encryption finally appears ready for mainlining...

    http://www.phoronix.com/scan.php?pag...g-With-Encrypt

  • #2
    Every time I read an F2FS article I think that it would make a great root FS. While it doesn't have ZFS or BTRFS levels of options, the options it does offer are superb and exactly what I'm looking for in a file system. Built-in encryption is always good. Case-insensitivity support for Windows programs is terrific because I'm a desktop user and use Steam/Proton and Wine. Using LZ4 is awesome because it's my favorite fire-and-forget compressor: no configuring, no tweaking, just set and forget. Zstd requires switches and whatnot to hit the same codec speeds. And F2FS is optimized for various kinds of removable media, so it'd be great as a file system for a USB-based recovery system, a portable Linux setup, and SBCs/ARM boards with NAND storage, SD cards, and the like.
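    For anyone wanting to experiment with that, F2FS compression is opt-in per mount on kernels 5.6+. A minimal fstab sketch; the device path and extension here are placeholders, not a recommendation:

```
# /etc/fstab sketch - F2FS root with LZ4 compression (device path is a placeholder)
# Only files flagged with `chattr +c`, or matching compress_extension, get compressed.
/dev/nvme0n1p2  /  f2fs  compress_algorithm=lz4,compress_extension=txt,noatime  0 1
```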

    Comment


    • #3
      Originally posted by skeevy420 View Post
      Every time I read an F2FS article I think that it would make a great root FS.
      I've been using it for a couple of months already as my only partition and it works fine. My NVMe drive is at 100% health and I've had no FS corruption even after multiple unsafe shutdowns. There are some known issues, I think, like the xfstests failures, but I wanted to try something different after all these years.

      Comment


      • #4
        Does anyone know of any device or manufacturer that uses F2FS as the default installed file system other than Lenovo in their Moto phones and tablets?

        Comment


        • #5
          Originally posted by skeevy420 View Post
          Using LZ4 is awesome because it's my favorite dumb-fire and forget compressor...no configuring, no tweaking, just set and forget. Zstd requires switches and whatnot for the same codec speeds.
          Can you elaborate a bit more? I thought that zstd was a ready-to-use straightforward replacement for LZ4. All the advantages with no disadvantages.

          You only need to choose LZ4 over zstd when you need compression speed over 500MB/sec. Right?

          Comment


          • #6
            Originally posted by C8292 View Post

            Can you elaborate a bit more? I thought that zstd was a ready-to-use straightforward replacement for LZ4. All the advantages with no disadvantages.

            You only need to choose LZ4 over zstd when you need compression speed over 500MB/sec. Right?
            The TLDR:
            It's because you can't use all the spiffy Zstd settings via most file systems' mount options.

            The Not TLDR:
            It depends on your use case. If you need the fastest compression and decompression, use LZ4. If you can live with slower compression speeds, use Zstd. At default settings, LZ4 is still much, much faster than Zstd. Zstd has the edge in compression ratio, but LZ4 has the edge in decompression speed.

            You can run Zstd with --format=lz4 or with --fast=5000 to get the two to have about the same decompression speeds. A tuned Zstd is really close in performance to LZ4. The problem is that --fast=X isn't something that can be set via BTRFS mount options, and ZFS has only limited support for it. AFAIK, no file system supports setting Zstd's --format, which can make Zstd emit XZ, LZ4, and other formats.
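            For anyone who wants to see those two knobs in action, they're plain zstd(1) flags and can be tried on any file; a quick sketch, assuming a zstd binary built with LZ4 support (most distro builds are):

```shell
# Build a small, compressible sample file.
printf 'hello zstd\n%.0s' $(seq 1 1000) > /tmp/sample.txt

# Accelerated/negative levels: trade ratio for LZ4-like speed.
zstd -q -f --fast=5000 /tmp/sample.txt -o /tmp/sample.fast.zst

# Emit a real LZ4 frame instead of a Zstd one (needs liblz4 support).
zstd -q -f --format=lz4 /tmp/sample.txt -o /tmp/sample.lz4

# The fast-level output still round-trips back to the original bytes.
zstd -q -d -f /tmp/sample.fast.zst -o /tmp/roundtrip.txt
cmp /tmp/roundtrip.txt /tmp/sample.txt
```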

            FWIW, Zstd up to level 9 is just fine for most people for places like /usr, /lib, /bin, or /etc: places that are written once and read a bunch. Above level 9 you had better be using a spinning HDD, unless you just don't care about codec speeds, because that's around where compression starts dropping below SSD speeds.

            It's places like /var, /home, /tmp, and other mounts under active read-write where LZ4 is the better choice. In other words, you don't want to bog down a make && make install because Zstd is tuned too high.

            Basically, if you're on spinning disks, run a low-tuned Zstd all day long because it's still faster than the drive (provided you don't tune it too high). If you're on an SSD, it depends on how write-heavy the partition is going to be; since LZ4 won't bottleneck you anywhere and you can't improperly tune it, it makes a great fire-and-forget compressor.

            In ZFS land the rule of thumb is essentially: use LZ4 for speed and Zstd:2 for a good mix of compression and speed. I use Zstd:2 on my BTRFS mounts.

            Personally, I wish LZ4-HC were an option, because it would be great for the mounts that aren't under active read-write, the write-once, read-a-bunch mounts. Just saying that I'd take a package-install speed hit if it meant better drive-space utilization. The problem with using Zstd for that use case is that its decompression speed at high compression levels is slower than my SSD, whereas LZ4-HC's isn't.

            Oh, and a big tip for BTRFS is to use Zstd with the compress-force mount option. compress-force swaps the compressibility test from the kernel's default heuristic to the compressor's own. Basically, with plain compress the kernel checks a file's compressibility and only hands it to a compressor if it deems it worthwhile, whereas compress-force sends it straight to Zstd, and Zstd's own compressibility check is more efficient and faster. I'd go into F2FS assuming that LZ4 behaves similarly to Zstd in that regard.
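            For reference, the Btrfs option is spelled compress-force=<algo>[:level]; a non-runnable sketch, with the mount point and device made up for illustration:

```
# Remount an existing Btrfs volume with forced Zstd (mount point is hypothetical)
mount -o remount,compress-force=zstd:2 /data

# Or persistently, in /etc/fstab:
/dev/sdb1  /data  btrfs  compress-force=zstd:2,noatime  0 2
```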

            I'm not sure if it has changed recently, but the Zstd version first merged into the kernel was something like 1.3.1 or 1.3.3, and that's what it still is. A lot of speed enhancements came during 1.4.x, so if you're looking at 1.4.x Zstd benchmarks, remember that the kernel's version isn't as fast. If that's changed, someone please correct me.

            ZFS ships its own built-in Zstd, currently 1.4.5, so if you want the best Zstd support, ZFS is the FS to use.
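            On the OpenZFS 2.0 side, the Zstd levels are exposed as values of the per-dataset compression property; a sketch, with the pool/dataset names being hypothetical:

```
# zfs(8) sketch - per-dataset compressor choice (names are made up)
zfs set compression=lz4 tank/scratch       # busy, write-heavy dataset
zfs set compression=zstd-2 tank/archive    # mostly-read dataset, better ratio
zfs get compression tank/archive           # confirm the property took effect
```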

            Comment


            • #7
              Originally posted by skeevy420 View Post

              Thank YOU so much for the detailed response. I read every bit of it!
              Now I understand the differences and compromises. Funny that LZ4, being that non-bottleneck compressor, isn't more widely adopted.

              Once again, thank you.

              Comment


              • #8
                Originally posted by C8292 View Post

                Thank YOU so much for the detailed response. I read every bit of it!
                Now I understand the differences and compromises. Funny that LZ4, being that non-bottleneck compressor, isn't more widely adopted.

                Once again, thank you.
                No problem.

                From what I understand, at least in regards to BTRFS, it's because the kernel already had LZO and that was deemed good enough back in the day (and LZO, along with zlib, is what all the file systems that offered compression had).

                FWIW, there's basically only F2FS, ZFS, and BTRFS to choose from these days among the Linux file systems we'd use for physical disks that offer compression, and two out of those three support LZ4.

                Comment
