XFS Reverse-Mapping Proposed For Linux 4.8: Getting Ready For New File-System Features

  • #11
    Originally posted by duby229 View Post
    So I'm trying to be as realistic here as possible, and honestly it's been my experience that data integrity in power-failure situations depends highly on the drive. Drives are designed specifically to handle those circumstances.
    Try power failures with one filesystem that has journaling and one that doesn't, and you will see the difference. The non-journalled filesystem has a good chance to fuck itself up, or at the very least to require a filesystem check before it is functional again.
    The journalled filesystem will recover itself (the metadata), but the data may or may not have been written correctly; there is no way to know.

    Hard drives cannot cope with a data transfer that is abruptly interrupted, because they have no fucking idea what is happening: they see only blocks, as they are block-level devices. They don't know filesystems; that's not their job.

    Unless we are talking about a Drobo, which is a self-sufficient device running a proprietary RAID filesystem and showing itself as a single drive over USB, eSATA or whatever.

    So drives make sure they write all the blocks they received before the power was cut (hopefully), but what those blocks mean and where they go is decided at the filesystem level, and if the filesystem does not cope with power failures you corrupt data or the whole filesystem.
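
    That data-vs-metadata gap is also why careful applications do the write-to-temp, fsync, rename dance themselves instead of trusting the filesystem. A minimal sketch in Python (simplified illustration, not a library):
    Code:
    import os

    def atomic_replace(path, data):
        """Replace `path` with bytes `data` so a power cut leaves either
        the old or the new contents, never a half-written file."""
        tmp = path + ".tmp"
        fd = os.open(tmp, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
        try:
            os.write(fd, data)
            os.fsync(fd)              # force the data itself to disk first
        finally:
            os.close(fd)
        os.rename(tmp, path)          # atomic switch to the new file
        dfd = os.open(os.path.dirname(path) or ".", os.O_RDONLY)
        try:
            os.fsync(dfd)             # make the rename itself durable
        finally:
            os.close(dfd)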



  • #12
    Originally posted by starshipeleven View Post
    Can dd make backups while the filesystem is still online and in use?

    Because CoW filesystems can do so. Snapshot, then start transferring the data away while other processes still work on it.

    Taking a server offline to do backups isn't cool.
    Failure to plan downtime isn't cool. I really do hate it when people brag about uptime; then you know for sure the machine hasn't been backed up or updated for at least that long.



  • #13
    Originally posted by duby229 View Post
    Can somebody please explain to me what benefit CoW techniques actually bring? In every scenario I can imagine it seems like it would perform much worse, write-thrashing the shit out of your drives, with free-space fragmentation becoming unmanageable because of that.
    Imagine doing development with snapshots, for instance when setting up new application stacks. With previous-generation filesystems you would need to clone huge trees and install everything separately. Not cool. It's a mess. Modern DevOps ftw.
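
    A rough idea of what that looks like, assuming /srv/stacks/base is an existing btrfs subvolume (the paths are made up for the example):
    Code:
    import subprocess

    def clone_stack(base, work):
        """Create a writable snapshot of an application tree in O(1),
        instead of copying the whole directory tree."""
        subprocess.run(["btrfs", "subvolume", "snapshot", base, work],
                       check=True)

    # experiment inside the snapshot; the base stack stays untouched
    clone_stack("/srv/stacks/base", "/srv/stacks/experiment")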



  • #14
    Sorry, would the XFS CoW feature be fully CoW like in btrfs, or metadata only? And if I dare ask, what's the difference between protecting metadata and protecting the whole file?
    Cheers



  • #15
    Originally posted by duby229 View Post
    Failure to plan downtime isn't cool.
    I said taking a server offline isn't cool, not that it's better to skip backups. Go troll elsewhere.

    That said, the best backup strategy is the one that requires the least human interaction, and with a proper CoW filesystem you can automate everything.
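
    For example, a cron-able sketch using btrfs snapshots plus send/receive; SRC and DST are hypothetical paths, with SRC assumed to be a btrfs subvolume and a second btrfs filesystem mounted at DST:
    Code:
    import datetime
    import os
    import subprocess

    SRC = "/srv/data"      # subvolume to protect (assumed path)
    DST = "/mnt/backup"    # backup filesystem (assumed path)

    def backup():
        os.makedirs(f"{SRC}/.snapshots", exist_ok=True)
        snap = f"{SRC}/.snapshots/{datetime.date.today()}"
        # read-only, point-in-time snapshot; services keep writing to SRC
        subprocess.run(["btrfs", "subvolume", "snapshot", "-r", SRC, snap],
                       check=True)
        # stream the frozen snapshot to the backup disk
        send = subprocess.Popen(["btrfs", "send", snap],
                                stdout=subprocess.PIPE)
        subprocess.run(["btrfs", "receive", DST],
                       stdin=send.stdout, check=True)
        send.wait()

    if __name__ == "__main__":
        backup()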



  • #16
    Originally posted by horizonbrave View Post
    Sorry, would the XFS CoW feature be fully CoW like in btrfs, or metadata only?
    XFS has journaled metadata for a long time already. A CoW filesystem protects both data and metadata; CoW is a more efficient way to get there than full data journalling (which would murder performance), but the data ends up protected all the same.

    And if I dare ask, what's the difference between protecting metadata and protecting the whole file?
    Metadata is the filesystem's auxiliary structures: the indexes and other bookkeeping needed to know where the actual data is stored (still something physically written on disk). If something bad happens and you are not protecting metadata, you can fuck up the whole filesystem and lose all data therein.
    FAT32 has no metadata protection (no journalling), and you can easily see this with a USB flash drive: write to it, yank it out of the USB port, and see what happens. More often than not it will need a filesystem check before it is writable again, or it may come back garbled and unreadable (requiring a full reformat). Continuing to use a "dirty" filesystem without checking it is also likely to lead to random issues eventually.

    Most (not-so-)modern filesystems like NTFS, ext3/4 and XFS have metadata journalling, so if power is lost in the middle of a write operation (or a program crashes mid-write, or whatever), the journal lets the system rebuild the metadata; the filesystem itself is fine and will not require a filesystem check to be usable safely.

    The point is, the filesystem itself is OK, but the data you were writing isn't protected, so if you were modifying a file when shit happened, that file is likely corrupted or unreadable or whatever.

    A full CoW filesystem protects data and metadata by first writing the new filesystem state to disk (modified file and modified metadata), THEN and ONLY THEN updating the pointers to the newly written blocks, and finally freeing the old blocks (unless they are part of a snapshot, or deduplicated with something else, or whatever).
    In this case, if a write operation fails for whatever reason, the old data and metadata are always safe, and the partially written blocks are discarded the next time the filesystem is mounted.
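
    A toy model of that ordering in Python (an in-memory stand-in; a real filesystem does this with block allocations and write barriers, not object references):
    Code:
    class CowFile:
        """Illustrative only: `live` plays the role of the on-disk pointer."""

        def __init__(self, blocks):
            self.live = list(blocks)   # the committed version

        def update(self, new_blocks):
            staged = list(new_blocks)  # 1. write the new copy elsewhere;
                                       #    a crash here leaves `live` intact
            self.live = staged         # 2. single pointer flip = the commit
                                       # 3. the old blocks are now unreferenced
                                       #    and can be freed (unless a snapshot
                                       #    still holds them)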



  • #17
    No RAID or volume management? Shame. I hope they'll consider it in the future.



  • #18
    Originally posted by thelongdivider View Post
    It's as if they looked at btrfs and said "why would people use this over XFS?", then set out to fix all the answers.
    lol, when they finish they will have reproduced btrfs



  • #19
    Originally posted by duby229 View Post
    Can somebody please explain to me what benefit CoW techniques actually bring? In every scenario I can imagine it seems like it would perform much worse
    It avoids the copy, which is faster than copying. And in the scenarios where it would hurt, you can always disable CoW for specific files on a CoW-capable filesystem, while you can't add CoW for specific files on one that isn't capable.
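
    On btrfs the per-file knob is the NOCOW attribute; it only takes effect on empty files, so set it right after creation and before writing any data. A small sketch (the image path is just an example):
    Code:
    import subprocess

    def create_nocow(path):
        """Create an empty file and mark it NOCOW (chattr +C) before
        any data is written, e.g. for VM images or database files."""
        open(path, "w").close()
        subprocess.run(["chattr", "+C", path], check=True)

    create_nocow("/var/lib/images/disk.raw")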



  • #20
    Originally posted by duby229 View Post
    I'm personally convinced my backup strategy is a lot safer and probably faster too.
    You are probably mistaken.

