
F2FS File-System Gets Even Better With Linux 3.18


  • F2FS File-System Gets Even Better With Linux 3.18

    Phoronix: F2FS File-System Gets Even Better With Linux 3.18

    The Flash-Friendly File-System (F2FS) has been running well in our latest SSD benchmarks but with the forthcoming Linux 3.18 kernel it's going to be in even better shape...


  • #2
    I wonder if F2FS is useful for prolonging the life of flash-based storage devices and for increasing write speed for small files (USB sticks are often extremely slow when writing small files). Since its release I still have no idea what this filesystem is good for or why it was created.



    • #3
      File system benchmarks

      A file-system comparison would be interesting not only on high-end SSDs but also on low-cost USB storage devices. Ext4 without journalling is also commonly used on these devices and could be included.
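
      For anyone who wants to run that comparison on a stick, a minimal sketch of setting up journal-less ext4 looks something like this (the /dev/sdX1 device node is a placeholder - check lsblk before formatting anything):

        # create ext4 without a journal on a hypothetical USB partition
        mkfs.ext4 -O ^has_journal -L usbstick /dev/sdX1
        # or strip the journal from an existing (unmounted) ext4 filesystem
        tune2fs -O ^has_journal /dev/sdX1
        mount /dev/sdX1 /mnt/usb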



      • #4
        Until f2fs actually shows some performance or wear advantage over ext4 on flash storage (or enough feature parity with btrfs that the performance hit is no longer justified), I don't see the point. I've thrown it on several SSDs, thumb drives, and SD cards I have, and none of them show any improvement over ext4 in aggregate.

        Hopefully 3.18 can change that, because it seems intuitive that a filesystem designed for flash should have some performance advantage; it just is not manifesting yet, for me at least.
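
        For anyone wanting to repeat that kind of test, here is a minimal sketch (with /dev/sdX1 as a placeholder for a spare device, and f2fs-tools installed):

          # format the spare device with f2fs and mount it
          mkfs.f2fs -l f2fs-test /dev/sdX1
          mount -t f2fs /dev/sdX1 /mnt/test
          # reformat the same device with ext4 for the comparison run
          mkfs.ext4 -L ext4-test /dev/sdX1
          mount -t ext4 /dev/sdX1 /mnt/test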



        • #5
          Originally posted by zanny
          Until f2fs actually shows some performance or wear advantage over ext4 on flash storage (or enough feature parity with btrfs that the performance hit is no longer justified), I don't see the point. I've thrown it on several SSDs, thumb drives, and SD cards I have, and none of them show any improvement over ext4 in aggregate.

          Hopefully 3.18 can change that, because it seems intuitive that a filesystem designed for flash should have some performance advantage; it just is not manifesting yet, for me at least.
          What is your impression of BTRFS on SSDs? Can you compare the wear impact of BTRFS vs. EXT4? Thanks.



          • #6
            Originally posted by Drago
            What is your impression of BTRFS on SSDs? Can you compare the wear impact of BTRFS vs. EXT4? Thanks.
            The higher-end the SSD, the bigger the performance impact. The oldest SSD I have is a two-year-old 830 that spent a year on ext4 and a year on btrfs - so far it still has no reallocated sectors, because it doesn't see enough write load to wear it down.

            I'd estimate btrfs is within 5 - 10% of the peak read / write speeds of ext4 on that disk. Same with a Corsair Force LS drive I've tested. On higher-throughput drives with higher IOPS, like the MX100 or 840 Pro, I've seen up to 20% performance loss on btrfs against ext4, but again, not enough load to wear the drives down - my main drive is the 840 Pro and even after 5TB of writes it still has no reallocated sectors.

            Note that none of these disks are SandForce-based, so btrfs is getting the benefit of lzo / gzip compression (depending on the install). On SandForce devices I could easily see btrfs being 30% or more slower.

            Note that these figures have improved over time - I'm comparing 3.11 ext4 vs 3.14 btrfs, and the 840 Pro and MX100 were on 3.16 kernels very recently.

            On mechanical disks or SATA 2 SSDs I never see a performance difference anymore.

            In the general case, since both btrfs and ext4 do write caching (and all the SSDs support it as well), the write patterns that would naively wear down an SSD are avoided. btrfs has more data-structure overhead than ext4, so in aggregate it "probably" wears a drive out faster, but just consider how TechReport was wear-testing SSDs on decade-old NTFS and they lasted over a petabyte. I would not worry about wear impact from any modern Linux FS unless you have some major breakage between the SSD controller, SATA controller, or filesystem write buffers that somehow constantly pushes very small writes to the disk with no caching.
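
            The reallocated-sector and write-volume numbers are easy to check with smartmontools; attribute names vary by vendor, but on Samsung drives the interesting ones are noted below (the device node is a placeholder):

              # dump the SMART attribute table
              smartctl -A /dev/sda
              # rows to watch: Reallocated_Sector_Ct, Wear_Leveling_Count,
              # and Total_LBAs_Written (multiply by the sector size for bytes written)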

            Just a quick, terribly inaccurate measure of throughput differences: the first number is the hdparm raw read value, the second is the read speed copying a 1GB junk file to a ramdisk from btrfs, and the third is the same with ext4 (all in MB/s). All disks except the 256GB 840 Pro are 128GB SSDs.

            Drive      hdparm read   btrfs copy   ext4 copy
            840 Pro    450           370          400
            830        350           320          330
            MX100      420           350          390
            Force LS   300           290          290
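
            Something like the following reproduces that kind of measurement (the file path, sizes, and mount points are placeholders, and this is only a sketch of the procedure described above):

              # raw sequential read straight off the device
              hdparm -t /dev/sda
              # ramdisk target so the copy is limited by the source filesystem
              mount -t tmpfs -o size=2G tmpfs /mnt/ram
              # drop the page cache so the read actually hits the disk
              sync && echo 3 > /proc/sys/vm/drop_caches
              # dd reports the throughput of copying a ~1GB junk file
              dd if=/mnt/btrfs/junk.bin of=/mnt/ram/junk.bin bs=1M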
            Last edited by zanny; 08 October 2014, 03:22 PM.



            • #7
              Originally posted by zanny
              The higher-end the SSD, the bigger the performance impact. The oldest SSD I have is a two-year-old 830 that spent a year on ext4 and a year on btrfs - so far it still has no reallocated sectors, because it doesn't see enough write load to wear it down.

              I'd estimate btrfs is within 5 - 10% of the peak read / write speeds of ext4 on that disk. Same with a Corsair Force LS drive I've tested. On higher-throughput drives with higher IOPS, like the MX100 or 840 Pro, I've seen up to 20% performance loss on btrfs against ext4, but again, not enough load to wear the drives down - my main drive is the 840 Pro and even after 5TB of writes it still has no reallocated sectors.

              Note that none of these disks are SandForce-based, so btrfs is getting the benefit of lzo / gzip compression (depending on the install). On SandForce devices I could easily see btrfs being 30% or more slower.

              Note that these figures have improved over time - I'm comparing 3.11 ext4 vs 3.14 btrfs, and the 840 Pro and MX100 were on 3.16 kernels very recently.

              On mechanical disks or SATA 2 SSDs I never see a performance difference anymore.

              In the general case, since both btrfs and ext4 do write caching (and all the SSDs support it as well), the write patterns that would naively wear down an SSD are avoided. btrfs has more data-structure overhead than ext4, so in aggregate it "probably" wears a drive out faster, but just consider how TechReport was wear-testing SSDs on decade-old NTFS and they lasted over a petabyte. I would not worry about wear impact from any modern Linux FS unless you have some major breakage between the SSD controller, SATA controller, or filesystem write buffers that somehow constantly pushes very small writes to the disk with no caching.

              Just a quick, terribly inaccurate measure of throughput differences: the first number is the hdparm raw read value, the second is the read speed copying a 1GB junk file to a ramdisk from btrfs, and the third is the same with ext4 (all in MB/s). All disks except the 256GB 840 Pro are 128GB SSDs.

              Drive      hdparm read   btrfs copy   ext4 copy
              840 Pro    450           370          400
              830        350           320          330
              MX100      420           350          390
              Force LS   300           290          290
              Thanks for the extended answer. I am receiving a 250GB 840 EVO drive next week, and since I will reinstall Fedora 21, I was considering BTRFS instead of EXT4. You say that I need to enable lzo compression, right? I thought that SSD controllers compressed the data themselves. The Chromium web browser used to write small patches of data to disk even when nothing was touched. This bothers me.



              • #8
                Originally posted by Drago
                Thanks for the extended answer. I am receiving a 250GB 840 EVO drive next week, and since I will reinstall Fedora 21, I was considering BTRFS instead of EXT4. You say that I need to enable lzo compression, right? I thought that SSD controllers compressed the data themselves. The Chromium web browser used to write small patches of data to disk even when nothing was touched. This bothers me.
                The only modern SSD controller still doing data compression in firmware is the SandForce controller and the dozens of drives using it. Samsung does not, so using lzo will net you tangible benefits on btrfs. Use lzo for speed, or zlib (gzip) for space.

                I mean, I use it on all my drives. The performance difference does not really impact me, because I'm still getting at least 350MB/s real-world sequential read speeds, and filesystem overhead barely impacts IOPS performance, which is where the real tangible difference is in my book.



                • #9
                  Originally posted by zanny
                  The only modern SSD controller still doing data compression in firmware is the SandForce controller and the dozens of drives using it. Samsung does not, so using lzo will net you tangible benefits on btrfs. Use lzo for speed, or zlib (gzip) for space.

                  I mean, I use it on all my drives. The performance difference does not really impact me, because I'm still getting at least 350MB/s real-world sequential read speeds, and filesystem overhead barely impacts IOPS performance, which is where the real tangible difference is in my book.
                  If I may chime in with a question: when configuring a drive for a BTRFS compression type, when, where, and how do you actually set that option? I know the fstab option, but I was under the impression that you need to set up the drive 'somehow' before loading data onto it (kind of hard with most distros running their install right after creating partitions, so you would have to do 'fancy' partitioning beforehand), or anything already written before compression was enabled wouldn't pick up the new option. My information is probably outdated and I've been holding out on BTRFS for too long now, so go easy =D



                  • #10
                    Whenever you mount it, just do mount -o compress=lzo. fstab options are the same options you would pass to mount -o. If your installer is automounting the drive, remount it before installing with the compress option, like mount -o remount,compress=lzo.
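
                    A minimal sketch of what that looks like in practice (the UUID and mount points below are placeholders, not values from this thread):

                      # one-off remount with compression enabled
                      mount -o remount,compress=lzo /
                      # persistent version in /etc/fstab
                      UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /  btrfs  defaults,compress=lzo  0 0
                      # compression only applies to data written after the option is set;
                      # anything already on disk can be recompressed afterwards with
                      btrfs filesystem defragment -r -clzo /home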

