XFS File-System Picks Up New Features With Linux 5.1 Kernel

  • #21
    Originally posted by linner View Post
    The last time I ran fsck on a much smaller ext filesystem it corrupted itself and destroyed the whole thing. I've had ext systems end up dead even shutting down normally. I hate that filesystem.
    Can't reproduce here with ext4. I had multi-TB arrays with mdadm and ext4 on top for a long time. Does ext4 crap out once it reaches some specific size, or what?

    • #22
      Originally posted by linner View Post

      Holy hell just hope you never have to run a fsck on that thing. How long did it take to format? I can't imagine; maybe you have expanded it over time and didn't notice. If you want data integrity then ext would not be high on my list. The last time I ran fsck on a much smaller ext filesystem it corrupted itself and destroyed the whole thing. I've had ext systems end up dead even shutting down normally. I hate that filesystem.
      That has been the filesystem's size from day 1. It took maybe a couple of minutes to format; nothing out of the ordinary. Note that earlier versions of ext would write a backup superblock every X blocks; ext4 doesn't do this anymore. Also, you can and should configure the amount of space reserved for inodes. When you mostly have large files, you will need far fewer inodes than the default.
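
      The inode-reservation point can be sketched with some quick arithmetic. Everything here is illustrative: the 100 TiB size is hypothetical, 16384 bytes-per-inode is the usual mke2fs default (tunable with `mkfs.ext4 -i` or a `-T` usage profile), and the 4 MiB ratio mirrors the `largefile4` profile:

      ```shell
      # mkfs.ext4 creates one inode per "bytes-per-inode" of capacity (-i flag).
      # Both the filesystem size and the ratios below are illustrative.
      fs_bytes=$((100 * 1024 * 1024 * 1024 * 1024))   # hypothetical 100 TiB volume
      default_ratio=16384                             # common /etc/mke2fs.conf default
      largefile_ratio=$((4 * 1024 * 1024))            # largefile4 profile: 4 MiB per inode
      echo "default ratio:    $((fs_bytes / default_ratio)) inodes"     # 6710886400
      echo "largefile4 ratio: $((fs_bytes / largefile_ratio)) inodes"   # 26214400
      ```

      Assuming 256-byte inodes, the default ratio would spend roughly 1.6 TiB on the inode table alone, which is why raising the ratio pays off on large-file workloads.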

      I really don't expect the fs to fall apart by itself. Not in the year 2019.

      • #23
        Originally posted by Vistaus View Post
        Is there a way to convert EXT4 to XFS without losing data (like there is for EXT4 to Btrfs conversion)?
        Some guy wrote a userspace tool for that. It basically creates a huge sparse file on the source filesystem. Within that sparse file, the new filesystem is created and then files are moved over piece by piece, growing the huge sparse file on the block level. In the end, it covers the whole partition and the partition is then re-mounted as the new filesystem.

        It works, but I consider it an academic exercise, or for people who are in it for the thrill. https://github.com/cosmos72/fstransform
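
        The sparse-file trick the tool builds on is easy to see in isolation. This sketch just creates a sparse file and compares its apparent size with the blocks actually allocated; paths are temporary and nothing here touches a real partition:

        ```shell
        # A sparse file reports a large size but allocates no blocks until written,
        # which is what lets a tool like fstransform grow a new filesystem image
        # inside the free space of the old one.
        tmp=$(mktemp -d)
        truncate -s 1G "$tmp/image"            # 1 GiB apparent size, no data written
        apparent=$(stat -c %s "$tmp/image")    # size recorded in the inode (bytes)
        allocated=$(( $(stat -c %b "$tmp/image") * 512 ))  # 512-byte blocks on disk
        echo "apparent=$apparent bytes, allocated=$allocated bytes"
        rm -r "$tmp"
        ```

        On most Linux filesystems the allocated figure stays near zero until data lands in the file.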

        • #24
          Originally posted by starshipeleven View Post
          Not for your usecase. I mean really, you could run anything on a hardware RAID and it would be the same as the card has its own RAM cache.

          What makes or breaks the performance of a hardware RAID is the card. As long as you are using a stable filesystem on top of that it's all fine.
          Actually, believe it or not, even on a server with a 48 (or 60) x 12TB-drive RAID 60 array, you can see major performance differences between XFS and ext4.
          E.g., at least in our case, ext4 is considerably faster when it comes to creating and deleting (many, many) billions of files (though, in ext4's case, we are forced to carve our storage into ~64+ partitions to stay under the inode limit); on the other hand, XFS tends to be faster in file append operations, fsync latency, etc.
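
          The partition carving falls out of ext4's 32-bit inode counter. A back-of-the-envelope check, where the 200-billion file count is an invented stand-in for "many billions":

          ```shell
          # ext4 inode numbers are 32-bit, capping any single filesystem at
          # 2^32 - 1 inodes no matter how large the volume is.
          max_inodes=4294967295                 # 2^32 - 1
          files=200000000000                    # hypothetical: 200 billion files
          parts=$(( (files + max_inodes - 1) / max_inodes ))  # ceiling division
          echo "minimum ext4 partitions needed: $parts"       # -> 47
          ```

          At that (assumed) scale, ~47 filesystems is the floor, so carving into 64 partitions leaves headroom for files being spread unevenly.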

          Moreover, we are seeing the same results (though the performance difference is far smaller) when dealing with 24 x 1.92TB MU SSDs.

          - Gilboa
          Last edited by gilboa; 03-09-2019, 02:53 PM.
          DEV: Intel S2600C0, 2xE52658V2, 32GB, 4x2TB + 2x3TB, GTX1080, F28/x86_64, Dell UP3216Q 4K.
          SRV: Intel S5520SC, 2xX5680, 36GB, 4x2TB, GTX550, F28/x86_64, Dell U2711..
          BAK: Tyan Tempest i5400XT, 2xE5335, 8GB, 3x1.5TB, 9800GTX, F28/x86-64.
          LAP: ASUS Strix GL502V, i7-6700HQ, 32GB, 1TB+256GB, 1070M, F29/x86_64.

          • #25
            Originally posted by linner View Post
            The last time I ran fsck on a much smaller ext filesystem it corrupted itself and destroyed the whole thing. I've had ext systems end up dead even shutting down normally. I hate that filesystem.
            One of our production servers, a machine w/ 24 SSDs (in RAID50, single ext4 partition) and 48 HDDs (in RAID60, 64 ext4 partitions), suffered a catastrophic hardware failure. As it took us some time to replace it (~40 days), we had to keep it running even though it crashed every ~2-3 days.
            Even though it crashed at least 20 times, beyond some minor open-file corruption (and we're talking about a system that stores billions of files) we had zero issues getting the machine up again.

            When the replacement machine came, we simply rsync'ed the files to the new server and continued working where we left off.
            ext4 is **very** robust.

            - Gilboa


            • #26
              Originally posted by ypnos View Post

              Some guy wrote a userspace tool for that. It basically creates a huge sparse file on the source filesystem. Within that sparse file, the new filesystem is created and then files are moved over piece by piece, growing the huge sparse file on the block level. In the end, it covers the whole partition and the partition is then re-mounted as the new filesystem.

              It works, but I consider it an academic exercise, or for people who are in it for the thrill. https://github.com/cosmos72/fstransform
              Thanks a lot! :-)
