Running The Flash-Friendly File-System On A Hard Drive? Benchmarks Of F2FS On An HDD


  • #11
    Originally posted by cybertraveler View Post
    f2fs is part of the OS and it writes data all over the drive (or rather the part of the drive where the partition is).
    Lol, no it does not. It writes data to all sectors of the Flash Translation Layer, which is an abstraction (a lie) created by the storage controller.

    How it is actually written to the flash depends on what the storage controller thinks is best.

    I think the f2fs design is useful.
    It is useful only if the storage controller can't do it already. SSD storage controllers don't need much help, as they are designed to work well even under NTFS, which is completely unaware of anything.
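    The FTL indirection described above can be sketched with a toy model (all names and sizes here are made up for illustration; real controllers are vastly more complex):

```python
# Toy model of a Flash Translation Layer (hypothetical, illustrative only):
# the OS rewrites "sector X" in place, but the controller remaps every write
# to the next free physical page, so the physical placement keeps moving.

class ToyFTL:
    def __init__(self, physical_pages):
        self.mapping = {}              # logical sector -> physical page
        self.next_free = 0
        self.physical_pages = physical_pages

    def write(self, logical_sector):
        # NAND pages can't be overwritten in place; allocate a fresh page
        phys = self.next_free % self.physical_pages
        self.next_free += 1
        self.mapping[logical_sector] = phys
        return phys

ftl = ToyFTL(physical_pages=8)
# The OS thinks it rewrites sector 5 three times; the FTL spreads the writes.
placements = [ftl.write(5) for _ in range(3)]
print(placements)  # -> [0, 1, 2]: three different physical pages
```

    The point being: no matter what layout the filesystem asks for, the controller's mapping table has the final say on physical placement.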



    • #12
      IMHO:
      Michael should have tested F2FS with an SMR HDD, NOT with a PMR HDD.

      I believe F2FS's advantage on an HDD would then show up more clearly, because it would minimize the shingling rewrites that completely ruin HDD performance, thrash the heads around, and wear the drive out prematurely if you also use it to store the OS or for any write-intensive workload.
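      A back-of-the-envelope sketch of why shingling rewrites hurt (zone size, update size, and offset are assumed numbers, not measurements): on a drive-managed SMR disk, an in-place update inside a shingled zone can force the drive to rewrite everything from the update point to the end of the zone. A log-structured filesystem like F2FS largely sidesteps this by not updating in place.

```python
# Assumed figures for a drive-managed SMR disk (illustrative only).
ZONE_MB = 256          # shingled-zone size
UPDATE_MB = 4          # the data the OS actually changed
OFFSET_MB = 16         # where in the zone the update lands

# An in-place write at OFFSET_MB forces re-shingling of the zone's tail.
rewritten = ZONE_MB - OFFSET_MB
amplification = rewritten / UPDATE_MB
print(f"{rewritten} MB rewritten for a {UPDATE_MB} MB update "
      f"(~{amplification:.0f}x write amplification)")
# -> 240 MB rewritten for a 4 MB update (~60x write amplification)
```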

      FYI:
      All 2.5" HDDs with 128MB of RAM cache ARE SMR, *even* if the OEM says (or initially said) otherwise in its specifications, or doesn't specify whether the drive is SMR or PMR.

      This happens with Seagate, WD and Toshiba alike.



      • #13

        Originally posted by starshipeleven View Post

        Originally posted by cybertraveler
        f2fs is part of the OS and it writes data all over the drive (or rather the part of the drive where the partition is).
        Lol, no it does not. It writes data to all sectors of the Flash Translation Layer, which is an abstraction (a lie) created by the storage controller.

        How it is actually written to the flash depends on what the storage controller thinks is best.
        Originally posted by cybertraveler
        I think the f2fs design is useful.
        It is useful only if the storage controller can't do it already. SSD storage controllers don't need much help, as they are designed to work well even under NTFS, which is completely unaware of anything.
        see bold

        ...



        • #14
          Wow, look at that random-write performance difference.



          • #15
            Originally posted by cybertraveler View Post
            see bold...
            https://www.phoronix.com/forums/foru...34#post1076234

            see bold

            SSDs will spread writes all right; what you see in partitioning/defrag software is actually a lie for flash-based media (the Flash Translation Layer). The data can be written all over the drive even if your OS thinks it is writing to sector X all the time.

            F2FS is relevant only for eMMC and USB sticks/SD cards. And even then the controllers have some form of wear leveling too; all devices where you could physically kill the cells by overwriting the same "sectors" many times are long obsolete.



            • #16
              Secondly, I'd love to see this ext4-xfs-f2fs benchmark re-run with the BFQ sched.



              • #17
                Originally posted by starshipeleven View Post
                SSDs will spread writes all right; what you see in partitioning/defrag software is actually a lie for flash-based media (the Flash Translation Layer). The data can be written all over the drive even if your OS thinks it is writing to sector X all the time.

                F2FS is relevant only for eMMC and USB sticks/SD cards. And even then the controllers have some form of wear leveling too; all devices where you could physically kill the cells by overwriting the same "sectors" many times are long obsolete.
                Wear leveling increases write amplification. Of course, it probably doesn't kick in if you write to an empty (TRIMed) block, so F2FS avoids it, in theory. Filesystems can be much smarter before they write data, thanks to RAM caching and delayed writes. An SSD controller only sees blocks and probably doesn't have that much RAM of its own to sort things out (not to mention it has to flush quickly, since the user expects a fast sync after "writing" data to the SSD).
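                The write-amplification point can be made concrete with the usual ratio, WAF = NAND writes / host writes; relocations done by garbage collection and wear leveling land in the numerator (the figures below are invented for illustration):

```python
# Write amplification factor (WAF) = total NAND writes / host writes.
# All numbers here are made up to illustrate the bookkeeping.
host_writes_gb = 100
gc_copies_gb = 25          # valid pages copied during garbage collection
wear_level_moves_gb = 10   # static wear-leveling relocations

waf = (host_writes_gb + gc_copies_gb + wear_level_moves_gb) / host_writes_gb
print(f"WAF = {waf:.2f}")  # -> WAF = 1.35
```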



                • #18
                  Originally posted by Weasel View Post
                  Wear leveling increases write amplification. Of course, it probably doesn't kick in if you write to an empty (TRIMed) block, so F2FS avoids it, in theory.
                  Wear leveling is a multi-step process. The storage controller writes whatever you send to the first available free blocks; it does not give two shits that the data is supposed to be in "sector X" or "sector Y" in the little abstraction it shows to the operating system.

                  This is the part where, IMHO, the filesystem is completely irrelevant. You can't convince a flash storage controller NOT to do this, no matter what filesystem you use.

                  Then there is other stuff that causes write amplification: garbage collection, consolidating data so that flash blocks are actually full and truly empty blocks are freed, writing more fragmented than would be advisable to absorb a spike of activity, and more.
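                  The garbage-collection cost mentioned above can be sketched as a toy pass (block and page counts are made up): the controller picks the block with the fewest valid pages, relocates those pages, then erases the block; every relocated page is extra NAND traffic that the host never asked for.

```python
# Toy garbage-collection pass (illustrative): GC picks the victim block with
# the fewest valid pages, copies those pages to a fresh block, and erases
# the old one.  Every copied page is write amplification.

blocks = {                 # block id -> list of still-valid page ids
    "A": [1, 2],           # half valid
    "B": [3],              # mostly stale: cheapest to reclaim
    "C": [4, 5, 6, 7],     # fully valid: never worth collecting
}

victim = min(blocks, key=lambda b: len(blocks[b]))
copied = len(blocks[victim])      # pages relocated before the erase
print(f"GC erases block {victim}, relocating {copied} valid page(s)")
# -> GC erases block B, relocating 1 valid page(s)
```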

                  Filesystems can be much smarter before they write data, thanks to RAM caching and delayed writes.
                  This is where I think a "flash-friendly filesystem" can actually do something, since caching for an SSD may or may not be the same as caching for a hard drive; but this matters mostly for flash controllers that aren't in SSDs.

                  An SSD controller only sees blocks and probably doesn't have that much RAM of its own to sort things out (not to mention it has to flush quickly, since the user expects a fast sync after "writing" data to the SSD).
                  It's pretty much common knowledge that SSDs and even hard drives routinely cheat hard and report "sync done" while the data is still only in the cache.
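                  For what it's worth, the most an application can do from the host side is request a flush; whether the device honors it is firmware's business. A minimal sketch using only the standard library:

```python
import os

# Ask the kernel (and, via a cache-flush command, the device) to persist
# the data.  As noted above, a drive with a volatile write cache may still
# acknowledge before the bits are truly on the medium.
with open("example.tmp", "wb") as f:
    f.write(b"important data")
    f.flush()               # drain Python's userspace buffer to the kernel
    os.fsync(f.fileno())    # request kernel page cache + device flush
os.remove("example.tmp")
print("synced")
```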

                  Also, I'd like to disprove your belief about SSD cache RAM.

                  SSDs do have a pretty fucking significant amount of cache RAM on board. The Samsung 850 EVO has 512MB or even 1GB depending on the model https://www.guru3d.com/articles-page...-review,2.html
                  And the 860 QVO goes up to 4GB of fucking RAM for the 4TB model. https://www.guru3d.com/articles-page...-review,2.html

                  That's enough to run Linux comfortably in there, and the SSD controllers themselves are triple-core ARM Cortex-R4 chips that could (theoretically, anyway) run Linux too https://stackoverflow.com/questions/...-arm-cortex-r4

                  Really, this is the hardware running an SSD; I really doubt you can do much better with a filesystem.

