F2FS vs. EXT4 File-System Performance With Intel's Clear Linux

  • #21
    Originally posted by ypnos
    What are the reasons for using F2FS right now as compared to ext4?
    Although borderline TMI (Too Much Information) and not focused on a small set of strong contenders, the various tables in this Wikipedia page seem like a good starting point.

    https://en.wikipedia.org/wiki/Compar...f_file_systems

    According to it (and we don't know if it's up-to-date), F2FS lacks snapshotting, data de-duplication (probably most interesting as an enabler of snapshotting), and data checksums (maybe because it's semi-redundant with NAND-level checksums?). So, I expect to stick with a combination of BTRFS and XFS, for the foreseeable future.



    • #22
      I wonder when we will see F2FS root filesystem support for Ubuntu? An initial step for Debian has already been made.



      • #23
        Originally posted by dc_coder_84
        I wonder when we will see F2FS root filesystem support for Ubuntu?
        For devices without FTL, this makes sense. However, if you're using a modern SSD, wouldn't it be better if existing filesystems just adopted some of F2FS's performance features?

        It seems to me like it'd be better if F2FS stayed close to the metal, for use cases like embedded devices and cloud hyperscalers, which either don't need the higher-level features of other filesystems or get those capabilities through some other means.

        I'm genuinely skeptical of how much sense F2FS really makes when run atop a SATA or NVMe SSD, especially without knowing any of the low-level parameters of the flash.



        • #24
          Originally posted by coder
          For devices without FTL, this makes sense. However, if you're using a modern SSD, wouldn't it be better if existing filesystems just adopted some of F2FS's performance features?

          It seems to me like it'd be better if F2FS stayed close to the metal, for use cases like embedded devices and cloud hyperscalers, which either don't need the higher-level features of other filesystems or get those capabilities through some other means.

          I'm genuinely skeptical of how much sense F2FS really makes when run atop a SATA or NVMe SSD, especially without knowing any of the low-level parameters of the flash.
          As I said, not much. It brings features that the SSD/NVMe controller already offers anyway, and more often than not the one impedes the performance of the other.
          On simple media, like SD cards, USB sticks, etc., F2FS performs _much_ better.



          • #25
            Originally posted by Snaipersky
            I've noticed that a thrashed ext4 drive hurts GUI responsiveness more than a taxed F2FS or XFS filesystem does. Anyone else notice this, or have any insight?
            Nope, you're not alone; that behavior is largely responsible for Android lag, which is why it's promising to see F2FS make such progress.



            • #26
              Originally posted by ypnos
              What are the reasons for using F2FS right now as compared to ext4?
              Android devices. It helps greatly with lag, especially when the storage device fills up.



              • #27
                Originally posted by kcrudup
                Android devices. It helps greatly with lag, especially when the storage device fills up.
                Thank you!



                • #28
                  Is it possible to also measure write amplification in the tests?
                  Because I think that especially compiles do a lot of different writes, which might or might not be amplified by the filesystem.
                  An easy figure would be to just see the number of bytes written to the disk before and after all the tests.
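                  For the host-side half of that figure, something along these lines would work. It's only a rough sketch: the device name (nvme0n1) and the benchmark command are placeholders, and it only counts what the kernel sent to the device, not what the flash does internally.

                  Code:
                  #!/usr/bin/env python3
                  # Rough sketch: sample the kernel's per-device write counter before and
                  # after a benchmark run to get the host-side bytes written.
                  import subprocess
                  import sys

                  SECTOR = 512  # /sys/block/<dev>/stat always counts 512-byte sectors

                  def sectors_written(dev):
                      """Cumulative sectors written to a block device since boot."""
                      with open(f"/sys/block/{dev}/stat") as f:
                          return int(f.read().split()[6])  # field 7: sectors written

                  if __name__ == "__main__":
                      dev, cmd = sys.argv[1], sys.argv[2:]  # e.g. nvme0n1 make -j16
                      before = sectors_written(dev)
                      subprocess.run(cmd, check=True)
                      after = sectors_written(dev)
                      print(f"host writes: {(after - before) * SECTOR / 1e9:.2f} GB")

                  Comparing that number against the amount of data a test logically produces shows how much the filesystem itself amplifies the writes; whatever the controller does on top of that is a separate question.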



                  • #29
                    Originally posted by Ardje
                    Is it possible to also measure write amplification in the tests?
                    You can usually figure out write amplification based on the SMART stats. I recommend browsing your drive's stats using gsmartcontrol.

                    In particular, pay attention to the media wear indicator statistic, which estimates the % of the SSD's life that has so far been consumed.
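                    If you'd rather script it than click through gsmartcontrol, a sketch like this reads the same counters through smartctl's JSON output. It assumes smartmontools 7+ and an NVMe drive at /dev/nvme0 (SATA drives expose similar data through vendor-specific attributes such as Total_LBAs_Written instead), and keep in mind that the standard health log reports host writes and endurance used; an exact NAND-side write figure usually hides behind vendor-specific counters.

                    Code:
                    #!/usr/bin/env python3
                    # Rough sketch: read wear and write counters from an NVMe drive's SMART
                    # health log via smartctl's JSON output (smartmontools 7+ assumed).
                    import json
                    import subprocess

                    def nvme_health(dev="/dev/nvme0"):
                        out = subprocess.run(["smartctl", "-A", "-j", dev],
                                             capture_output=True, text=True).stdout
                        return json.loads(out)["nvme_smart_health_information_log"]

                    if __name__ == "__main__":
                        log = nvme_health()
                        # data_units_written counts 512,000-byte units of host writes
                        tb = log["data_units_written"] * 512_000 / 1e12
                        print(f"host data written: {tb:.2f} TB")
                        print(f"rated endurance used: {log['percentage_used']}%")

                    Sampling it before and after a heavy workload is a cheap way to watch the numbers move.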

                    Originally posted by Ardje
                    Because I think that especially compiles do a lot of different writes, which might or might not be amplified by the filesystem.
                    Look at the size of the generated files. Of course, there will probably be some small ones used to track dependencies (varies by buildsystem), but the bulk of writes are files many times larger than the block size.

                    The most write amplification-intensive operation you're likely to see is checking out or untar'ing large source trees. So, maybe an automated build server would suffer from a decent amount of write amplification. For the average developer, I expect it wouldn't tend to be an issue.



                    • #30
                      Originally posted by coder
                      You can usually figure out write amplification based on the SMART stats. I recommend browsing your drive's stats using gsmartcontrol.
                      This was a question for Michael. I know how I can measure write amplification myself, but it's an important statistic that's missing from the published results.
                      Originally posted by coder
                      In particular, pay attention to the media wear indicator statistic, which estimates the % of the SSD's life that has so far been consumed.
                      Unless it's an old OCZ, in which case it will always read 0%. But wear indicator statistics are no indication of write amplification; just reading the kernel write stats is much better.

                      Originally posted by coder
                      Look at the size of the generated files. Of course, there will probably be some small ones used to track dependencies (varies by buildsystem), but the bulk of writes are files many times larger than the block size.

                      The most write amplification-intensive operation you're likely to see is checking out or untar'ing large source trees. So, maybe an automated build server would suffer from a decent amount of write amplification. For the average developer, I expect it wouldn't tend to be an issue.
                      Usually, modifying a file in place will suffer from write amplification, especially on btrfs. Untarring should be a straight stream and should never result in amplification.
                      The issue is a big one, because in the end it determines whether your Tesla borks after a few years of writing logs (and slowly appending to logs can suffer from amplification). Maybe, just maybe, things would have been better if they had used f2fs instead of ext4?
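                      You can see the in-place modification cost for yourself with a quick experiment like this rough sketch: flip one byte in the middle of a large existing file and look at how much actually hits the device. The device name and file path are placeholders, and anything else writing to the same device at the time will add noise.

                      Code:
                      #!/usr/bin/env python3
                      # Rough sketch: overwrite one byte of an existing file and report how
                      # many bytes the block layer actually wrote as a result.
                      import os
                      import sys

                      def sectors_written(dev):
                          with open(f"/sys/block/{dev}/stat") as f:
                              return int(f.read().split()[6])  # cumulative 512-byte sectors written

                      if __name__ == "__main__":
                          dev, path = sys.argv[1], sys.argv[2]  # e.g. nvme0n1 /mnt/test/bigfile
                          before = sectors_written(dev)
                          fd = os.open(path, os.O_WRONLY)
                          os.pwrite(fd, b"x", 1_000_000)  # change 1 byte around the 1 MB mark
                          os.fsync(fd)                    # push data and metadata to the device
                          os.close(fd)
                          after = sectors_written(dev)
                          print(f"1 byte changed -> {(after - before) * 512} bytes written to {dev}")

                      On ext4 you'll typically see at least a 4 kB block plus journal traffic for that single byte; on a CoW filesystem like btrfs the metadata updates usually push the multiplier higher still.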

                      So these statistics are very important. You are not going to use btrfs or ext4 in a Tesla (anymore). But would you use f2fs on a high-end compile server, where other factors might be more important?

                      Anyway, filesystems differ so much from each other in their write behaviour... I once had to investigate a problem with very high loads on a mail server using ocfs2 and drbd. It took me a while to realise that the systems were A-OK and perfectly capable of handling everything, except that ocfs2 was severely fragmented and the mail storage layout was not a good fit for current mail usage (40 MB per e-mail instead of 4 kB...). So after reformatting the cluster and restoring the backup, the load dropped to 0.4 instead of 40.
                      Until then, I had never seen a fragmented filesystem on Linux. I've seen cluttered and corrupted btrfs filesystems, but that's all in the past.

                      So yeah, these figures are important.
