Linux 5.14 SSD Benchmarks With Btrfs vs. EXT4 vs. F2FS vs. XFS

  • #91
    Originally posted by bug77 View Post

    Not much "wow!" there when you realize it's 12yo.
    Making great software takes time.
    Linux also took its time.
    Windows wasn't really useful for anything before version 3.11, and even then it wasn't very good.
    Mac OS didn't have real multitasking before OS X.
    The ext file system wasn't really adopted in the mainstream before ext3.

    But I am just glad that BTRFS is where it is now.
    Last edited by pracedru; 05 September 2021, 05:03 AM.



    • #92
      Originally posted by fkoehler View Post
      Well, you can do snapshots during the night hours and weekends,
      Snapshots aren't the problem. They're basically free. We use snapperd to make hourly snapshots on our departmental fileserver, which makes them actually useful!

      The overhead from snapshots comes when they're deleted. I don't know if snapperd has added any options to schedule that off-hours, but deletion is the part you'd want to schedule.
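If snapperd itself doesn't expose such an option, the cleanup pass can be rescheduled at the systemd level. A sketch, assuming a systemd-based snapper install; the timer name matches snapper's stock unit, but the time chosen is illustrative:

```shell
# move snapper's cleanup pass (where the deletion overhead lives) to 03:00
sudo systemctl edit snapper-cleanup.timer
# then in the drop-in override:
#   [Timer]
#   OnUnitActiveSec=
#   OnCalendar=*-*-* 03:00:00
```

The snapshots themselves can stay hourly; only the reference-updating deletion work moves off-hours.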

      Originally posted by fkoehler View Post
      Also, correct me if I'm wrong, but while theoretically the nodatacow files should not get deduplicated during snapshot, practically files that deserve "nodatacow" should be heavily used and pretty much always change between snaps, so there should be no difference in disk space requirements ...
      Depends on how often you do snapshots.

      Anyway, my solution is to create a subvolume with snapshots disabled, and try to get all the high-turnover stuff located there. The thing about high-turnover data is that it also tends to be low value. So, there's less benefit in snapshotting it, anyway.

      An exception to this might be databases, although they each have their own backup mechanism.
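The subvolume trick works because btrfs snapshots stop at subvolume boundaries, so a nested subvolume is simply not included when its parent is snapshotted. A minimal sketch (paths are illustrative):

```shell
# nested subvolumes are excluded from snapshots taken of the parent
sudo btrfs subvolume create /home/scratch
# optionally also mark it nodatacow; newly created files inherit the attribute
sudo chattr +C /home/scratch
```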



      • #93
        Originally posted by curfew View Post
        Taking a snapshot doesn't copy any data. The copying happens on-demand at the next time when you are changing the file. So the copying will always happen when you are actively using the computer and therefore you take the performance hit.

        Taking a single snapshot on a daily basis is "frequent" in the same sense as darkbasic used it. Frequently would be a few times each day or even hourly similar to Apple's Time Machine.
        I kind of see your point, but have you actually measured whether this affects you in any meaningful way? Don't most file formats need a full copy on save anyway? Video, JPEG, all the office formats... So for Joe Averageuser doing actual work, there would be very little difference. The only productivity use case I can personally recall, from my thesis, is HyperChem molecular dynamics trajectories: a fixed-size binary format that could be (ab)used as a pseudo "structure database". Very niche.
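The reason whole-file saves dodge the CoW penalty is the "safe save" pattern most editors and office suites use: write the whole document to a temp file, then atomically rename it over the old one. A minimal sketch (paths and contents are illustrative):

```shell
# write the full new version to a temporary file
printf 'edited document contents\n' > /tmp/doc.txt.tmp
# atomic replace; the old file's extents are simply freed
mv /tmp/doc.txt.tmp /tmp/doc.txt
# the data blocks are brand-new either way, so a prior snapshot
# adds no extra copying to this save
cat /tmp/doc.txt
```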

        Then there are databases, which most people will run with nodatacow, or else they have good reasons to go slow and safe.

        The only thing I can see that could actually matter is humongous logfiles that really are written in O_APPEND mode, and those are only slow for the first write after each snapshot. So if you snapshot at 10-minute intervals and have about 10 transactions per second that each cause a log write, less than 0.1% of your transactions get slowed down.
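The arithmetic behind that bound, spelled out (numbers are the hypothetical workload from above):

```shell
# snapshots every 10 minutes, ~10 log-appending transactions per second
interval_s=600
tps=10
writes_per_interval=$((tps * interval_s))   # 6000 appends between snapshots
# only the first append after each snapshot has to CoW the file's tail block
awk "BEGIN { printf \"slowed fraction: %.3f%%\n\", 100 / $writes_per_interval }"
```

That works out to roughly 0.017% of transactions, comfortably under the 0.1% figure.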

        Premature optimization is the root ....



        • #94
          Originally posted by Azrael5 View Post
          It looks like F2Fs be the best solution for SSD and I assume USB as well.
          I have been using it for over a year, and it rocks: Btrfs-like features (checksumming, compression, casefolding, native encryption, and so on, albeit no snapshots) with basically none of the speed compromises or write amplification of Btrfs.


          My biggest issue is that its default flags are suboptimal and it's missing clear documentation. For example, I had to figure out from the source code that there's a 16-extension limit for the whitelist/blacklist, and I *still* can't figure out why compress_cache refuses to be enabled.



          • #95
            Originally posted by Sakuretsu View Post
            After seeing this F2FS is really picking my interest now.
            Originally posted by Azrael5 View Post
            It looks like F2Fs be the best solution for SSD and I assume USB as well.
            I have been using it for over a year, and it rocks: Btrfs-like features (checksumming, compression, casefolding, native encryption, and so on, albeit no snapshots or RAID) with basically none of the speed compromises or write amplification of Btrfs. It also handles torturous workloads (like writing a million PNGs) better than ext4.


            My biggest issue is that its default flags are suboptimal and it's missing clear documentation. For example, I had to figure out from the source code that there's a 16-extension limit for the compression whitelist/blacklist, and I *still* can't figure out why compress_cache refuses to work.
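For reference, the compression knobs mentioned here are f2fs mount options; a sketch based on the f2fs kernel documentation (device, mountpoint, and extensions are illustrative):

```shell
# compress only whitelisted extensions; repeat compress_extension per entry.
# the whitelist is capped at 16 entries (COMPRESS_EXT_NUM in fs/f2fs/f2fs.h),
# a limit only discoverable from the source
sudo mount -t f2fs \
  -o compress_algorithm=zstd,compress_extension=log,compress_extension=txt \
  /dev/nvme0n1p2 /mnt/data
```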
            Last edited by brucethemoose; 16 November 2021, 03:23 PM.



            • #96
              Originally posted by curfew View Post
              Taking a snapshot doesn't copy any data. The copying happens on-demand at the next time when you are changing the file. So the copying will always happen when you are actively using the computer and therefore you take the performance hit.

              Taking a single snapshot on a daily basis is "frequent" in the same sense as darkbasic used it. Frequently would be a few times each day or even hourly similar to Apple's Time Machine.
              No, not really.

              A CoW filesystem implies that each new write goes into a new extent, so it makes no difference whether a file is snapshotted or not: the write lands in a new place anyway.

              However, if you have lots of snapshots or reflinks, deleting them takes longer because Btrfs has to update the references from the shared snapshots. This happens in the background via the btrfs-cleaner kernel thread.
              Last edited by S.Pam; 09 August 2022, 03:55 PM.



              • #97
                I was just wondering about the bad Flexible IO random-write result for Btrfs in this article. The buffered/direct settings appear to differ from an earlier test:


                Earlier article:
                Btrfs: 215k; EXT4: 245k
                Flexible IO Tester v3.18
                Type: Random Write
                Engine: IO_uring
                Buffered: Yes !!!
                Direct: No !!!
                Block Size: 4KB
                Disk Target: Default Test Directory

                This article:
                Btrfs: 143k; EXT4: 712k
                Flexible IO Tester v3.25
                Type: Random Write
                Engine: IO_uring
                Buffered: No !!!
                Direct: Yes !!!
                Block Size: 4KB
                Disk Target: Default Test Directory
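Assuming otherwise identical workload settings, the two runs differ in a single fio flag; a sketch of the invocations (sizes, runtime, and filename are illustrative):

```shell
# earlier article: buffered writes go through the page cache
fio --name=randwrite --ioengine=io_uring --rw=randwrite --bs=4k \
    --direct=0 --size=1g --runtime=60 --time_based --filename=fio.test
# this article: O_DIRECT bypasses the page cache, so btrfs's CoW cost
# hits every write instead of being absorbed by writeback
fio --name=randwrite --ioengine=io_uring --rw=randwrite --bs=4k \
    --direct=1 --size=1g --runtime=60 --time_based --filename=fio.test
```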



                • #98
                  Hoping Michael will do ZFS next round. Has anyone seen any quality ZFS vs. XFS benchmarks on NVMe drives in the last year?

