F2FS File-System Shows Hope, Runs Against Btrfs & EXT4

  • #11
    Michael, I think you ran the two recent filesystem tests backwards: Btrfs should have been running on the HDD and F2FS on the SSD. It's kind of silly to run a flash-targeted filesystem on something that's *not* flash.



    • #12
      Pretty useless tests without a mixed environment where a boot SSD houses the OS and a separate data drive handles application-specific reads and writes.

      Build a test suite around that common scenario.



      • #13
        I really want this for my systems running from SD cards (OpenPandora / Raspberry Pi). I've had a couple of SD cards corrupt on me (one a fairly expensive 64 GB card), and while I can't say the filesystem had anything to do with it, I'd still like the peace of mind that a filesystem especially suited to flash would bring.



        • #14
          Originally posted by Pallidus View Post
          I have SD cards, flash pens, external HDs, etc etc


          they are all formatted as either exFAT or FAT32

          so FUCK YES this needs to happen and happen fast


          I also question phoronix:


          WHY THE FUCK are you comparing f2fs to ext4 etc? and in a linux install? who the fuck cares


          here's an idea: HOW ABOUT A RELEVANT FUCKING TEST, like comparing f2fs performance to exFAT and FAT32 on SD cards and USB pens???

          why are people so stupid ffs
          Some people have SSDs inside their computers, are currently using ext4 on them, and might consider migrating to f2fs; you see, not everybody uses flash exclusively on external devices, and not everybody is interested only in exFAT or FAT32. I would even dare suggest that most of those using exFAT / FAT32 on an external device need to keep compatibility with other computers, and are thus less likely to consider f2fs as an alternative.
          Last edited by eduperez; 19 February 2013, 06:28 AM.



          • #15
            I am not that much into filesystems, and I might sound ignorant, but instead of creating a new filesystem from scratch, wouldn't it make more sense to enhance the established ones so they work better on flash drives, for example by adding a mount option?



            • #16
              Originally posted by a user View Post
              why should that be a reason???
              As far as I understand, it's log-structured, so it could (at least in theory) revert incomplete writes and recover a valid old state simply by discarding incomplete log entries. In fact, the standard fsync() semantics are quite awkward and unnatural for log-structured / copy-on-write designs, where multiple versions of the data exist at the same time. But the APIs and the software were designed in days when CoW-like designs were uncommon, so those semantics were fine for the filesystems of that era.

              And you see, most 'classic' filesystems journal only metadata, not data. So while they ensure the metadata state is correct, they don't really care about the actual data. If you rewrite a file and power is lost at that very moment, the filesystem can restore a correct metadata state from the journal, so the metadata describes some valid blocks; the actual file content, however, could be half-old and half-new. 'Classic' designs usually do not perform FULL journalling, because it implies a huge speed penalty: they first have to write the intent to the journal and then commit it to the data area. If all data goes into the journal and then into the main area, it is written twice, so there is at least a 2x penalty in write speed. As a result, most filesystems settled for journalling only metadata, so at least you don't have to run fsck for hours to repair broken metadata. Some filesystems like EXT4 still offer a full-journalling option, but it is slow for the reasons above, so almost nobody uses it, and some filesystems never implemented full journalling at all.
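
              Since metadata-only journalling doesn't protect file contents across a crash, applications that care usually do the rewrite safely themselves with a write-temp / fsync / rename sequence. A minimal sketch in Python (the function name, path and payload are just placeholders of mine, not anything from the article):

                  import os

                  def atomic_rewrite(path, data):
                      # Replace the contents of `path` so a crash leaves either the old
                      # version or the new one, never a half-old / half-new mix.
                      tmp = path + ".tmp"
                      with open(tmp, "wb") as f:
                          f.write(data)           # write the new content to a temp file
                          f.flush()
                          os.fsync(f.fileno())    # push the new bytes to stable storage
                      os.replace(tmp, path)       # atomically swap the temp file into place
                      dirfd = os.open(os.path.dirname(os.path.abspath(path)), os.O_RDONLY)
                      try:
                          os.fsync(dirfd)         # make the rename itself durable
                      finally:
                          os.close(dirfd)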

              Then we come to CoW / log-structured designs. Some bright head(s) decided to let the whole storage area be the journal. There is no dedicated data area - the device is one huge journal - so there are no separate commits to a data area, no double writes, and no write-speed penalty, while all the properties of full journalling are retained. A write does not overwrite the old data; it goes to a new place (hence "copy-on-write"), so in fact TWO versions of the data appear: the old one and the new one. The old state is reconstructed by simply ignoring the newly written fragment, the new state by taking it into account, and multiple states can co-exist. That also makes snapshot creation fast and easy: all the data and metadata are already there, so it's enough to declare them a snapshot. After a crash it's enough to discard incomplete writes and you have the old state, automatically. Either a write completes and you have the new state, or it is simply discarded - which is why this is also called "non-destructive write". Modern filesystems often use this design because of those advantages, even though it brings some complications as well (a garbage collector is needed, etc.). Btrfs, NILFS or ZFS are good examples, and as far as I understand F2FS does something like that too.
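
              Just to illustrate the idea (my own toy sketch, nothing like the real F2FS on-disk layout): an append-only key/value log where every update writes a new record, nothing is overwritten, and recovery simply stops at the first torn record, so you always get back some consistent older state.

                  import json, os, zlib

                  class TinyLogStore:
                      # Toy log-structured store: each put() appends a new version of the
                      # key; replay ignores a torn record left behind by a crash.
                      def __init__(self, path):
                          self.path = path
                          self.table = {}              # key -> latest value seen in the log
                          self._replay()

                      def _replay(self):
                          if not os.path.exists(self.path):
                              return
                          with open(self.path, "rb") as f:
                              for line in f:
                                  try:
                                      rec = json.loads(line)
                                      if rec["crc"] != zlib.crc32(rec["value"].encode()):
                                          break        # torn record from a crash: discard it
                                      self.table[rec["key"]] = rec["value"]
                                  except (ValueError, KeyError):
                                      break            # incomplete trailing line: discard it

                      def put(self, key, value):
                          rec = {"key": key, "value": value,
                                 "crc": zlib.crc32(value.encode())}
                          with open(self.path, "ab") as f:
                              f.write(json.dumps(rec).encode() + b"\n")   # never overwrite
                          self.table[key] = value

                      def get(self, key):
                          return self.table.get(key)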

              As for me, it's highly debatable whether a log-structured design that ignores fsync() would cause more data loss than a classic journalling design that doesn't care about file rewrites or the actual data state after a crash. Many file formats become unreadable if they end up half-old and half-new. In fact, fsync() as applications currently use it would just slow down CoW-based designs without any good reason. In an ideal world there would be calls to mark "pre-transaction" and "post-transaction" states for fairly large transactions - but that's not how the POSIX file access API was designed...



              • #17
                Originally posted by Pallidus View Post
                my MP3s/FLACs are the most precious things I have, I'm not happy with exFAT
                If they're precious, why do you only have one copy of them?

                backups, backups, backups.

                Seriously: it's not rocket science. Loss of media should be an irritation, not a disaster. If you don't have multi-generational backups (each one on discrete media!) then you're a fool.

                BTW: if you keep your backups near the computer, that's also cause for concern. We've had a number of cases where employees were burgled and the thieves took USB external drives (containing their backups) along with their machines.

                Consider using cloud backups as well as local ones - but never instead of. Just ask users of Mega why that's a bad idea.



                • #18
                  Originally posted by 0xBADCODE View Post
                  And you see, most 'classic' filesystems journal *only* metadata, not data. So while they ensure the metadata state is correct, they don't really care about the actual data. If you rewrite a file and power is lost at that very moment, the filesystem can restore a correct metadata state from the journal, so the metadata describes some valid blocks; the actual file content, however, could be half-old and half-new. 'Classic' designs usually do not perform FULL journalling.
                  I'm glad you said "usually"

                  The first thing I do when setting up a filesystem is to set the mount options to full journalling - flash or spinning rust.

                  You can set journalling to writeback when setting up a disk, but once you're using it for real live data, that's a kick in the pants waiting to happen. Yes, it costs speed. I consider data integrity more important than fast access to corrupted data (and I use ZFS on machines that are physically capable of taking multiple drives).

                  On a flash drive there's virtually no gain in delaying writes more than a few ms, so why do it?
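
                  If you want to check what your own mounts are actually doing, here's a quick sketch of mine (not from the thread) that reads /proc/mounts and reports any explicit data= journalling mode on ext3/ext4 filesystems; note that many kernels omit the default from the options list, so no data= entry usually just means the stock ordered mode:

                      # List ext3/ext4 mounts and their data journalling mode, if explicitly set.
                      with open("/proc/mounts") as mounts:
                          for line in mounts:
                              device, mountpoint, fstype, options = line.split()[:4]
                              if fstype in ("ext3", "ext4"):
                                  mode = next((opt.split("=", 1)[1]
                                               for opt in options.split(",")
                                               if opt.startswith("data=")),
                                              "default (typically ordered)")
                                  print(f"{mountpoint:20} {fstype:5} data journalling: {mode}")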

