Linux 5.14 SSD Benchmarks With Btrfs vs. EXT4 vs. F2FS vs. XFS


  • #31
    Is there a way to compare each filesystem's effect on SSD wear? That would be interesting. Specifically, I'd like to know how much of a negative effect journaling has. It may be really bad or it may be nearly negligible.



    • #32
      It'd be interesting to see how btrfs compression affects performance on such a fast SSD. And of course how ZFS and bcachefs compare to btrfs.



      • #33
        Michael thank you but these tests are of near-zero use for most people out there.

        What about testing something more mundane and user-oriented?
        • The time to boot a distro of your choice (could be quite time-consuming to set up, actually)
        • The time to launch Gimp (yeah, that takes a lot of time as well). For fun I'd also test launching Photoshop CS2 in Wine, as it involves reading a ton of files
        • The time to unpack e.g. the Linux 5.13 source, followed by sync (just to make sure buffers are flushed)
        • The time to tar/archive a large enough number of small and medium-size files + sync
        • The time to copy a decent-size tree of files + sync
        • The time to install a big enough rpm/deb file or a number of such files - sync is not normally necessary, but I'd invoke it just in case
        These are all simple tests which are quite relevant for users, versus e.g. eight-thread SQLite write performance.
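        For what it's worth, the tar + sync idea above is trivial to script. A minimal sketch (the source-tree and output paths are placeholders, not anything from a real PTS profile):

        ```shell
        # Time archiving a directory tree plus sync; prints elapsed seconds.
        bench_tar() {
            local src="$1" out="$2"
            local start end
            start=$(date +%s.%N)
            tar -cf "$out" -C "$(dirname "$src")" "$(basename "$src")"
            sync    # make sure dirty buffers actually hit the disk
            end=$(date +%s.%N)
            awk -v a="$start" -v b="$end" 'BEGIN { printf "%.2f\n", b - a }'
        }

        # Example (hypothetical paths):
        # bench_tar /usr/src/linux-5.13 /tmp/linux.tar
        ```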

        Time and time again you continue to benchmark for high-load enterprise servers, not end users.

        I've got a lot more ideas for things you could test, but considering I've voiced them many times already, I've got no hope. It's sad, but what can I do? Those tests could be highly relevant for users.
        Last edited by avem; 27 August 2021, 03:27 PM.



        • #34
          Originally posted by avem View Post
          Michael thank you but these tests are of near-zero use for most people out there.

          ...

          Time and time again you continue to benchmark for high-load enterprise servers, not end users.
          And what if nearly all of those tests showed almost no difference?

          To really tease apart the differences between whatever he's testing, you often have to focus on the extremes.

          And I don't even consider DB or VM testing with CoW to be out of bounds, although it would probably also make sense to test with nodatacow, or at least chattr +C.
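          For reference, the chattr +C approach looks roughly like this (the directory name is hypothetical; the flag only has an effect on btrfs and must be set before files are created, since existing files keep their CoW extents):

          ```shell
          # Pre-create a no-CoW directory for database or VM image files.
          setup_nocow_dir() {
              local dir="$1"
              mkdir -p "$dir" || return 1
              # +C = No_COW; new files created inside inherit the attribute.
              # On non-btrfs filesystems this is simply not supported.
              chattr +C "$dir" 2>/dev/null || echo "chattr +C not supported here" >&2
              lsattr -d "$dir" 2>/dev/null || true   # a 'C' flag confirms it on btrfs
          }

          # Example (hypothetical path): setup_nocow_dir /srv/vm-images
          ```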

          Originally posted by avem View Post
          I've got a lot more ideas for things which you could test but considering I've voiced them many times already I've got no hope. It's sad but what can I do.
          Did you try the normal submission process for adding to PTS? Keep in mind that Michael is only one guy, and he has to cover news and maintain the site/forums as well. I doubt you'll get everything you want, but if you contribute some benchmarks, he might accept at least some of them.



          • #35
            Originally posted by avem View Post
            Michael thank you but these tests are of near-zero use for most people out there.

             What about testing something more mundane and user-oriented?
             ...For fun I'd test launching Photoshop CS2 in Wine as well as it involves reading a ton of files...
            How do you automate that?



            • #36
              Originally posted by S.Pam View Post
              You mean people that value data integrity?
              Databases have their own data integrity mechanisms; they don't rely on the filesystem for that. Databases and VMs are the two use cases where disabling CoW makes sense.



              • #37
                Originally posted by jacob View Post

                Databases have their own data integrity mechanisms, they don't rely on the filesystem for that. Databases and VMs are the two use cases where disabling CoW makes sense.
                Maybe, but these mechanisms don't integrate with the self-healing features of checksumming filesystems. VMs don't have any "data integrity mechanisms" of their own either (unless a similar filesystem is used on the guest).

                If I'm running a database or a VM in production, I'll have it on RAID1. If I have the performance headroom to use a fancy filesystem, then I'll use the filesystem-specific facilities for RAID1. And if I'm doing all that, it would be quite silly of me to deprive myself of the self-healing properties of this storage stack.



                • #38
                  Originally posted by flower View Post

                  Many HDDs and SSDs use CRC32 internally to verify data. As btrfs uses CRC32 too, it's pretty useless.
                  And there is still integritysetup - if you use an external drive for integrity, it doesn't have ANY performance penalty. I've been using this in a RAID10 setup for quite a while (checksum is SHA256).
                  Show me a single non-esoteric HDD or SSD that uses CRC32 internally. What they really use are parity-based ECCs like Reed-Solomon or LDPC, and they use them not for data-integrity reasons but because they have to - otherwise you would experience constant bit errors.

                  Filesystems like ZFS and btrfs were created to detect and autocorrect (if set up to use multiple copies) the silent bitrot that the simpler ECCs cannot catch. There have been studies done on ZFS: https://research.cs.wisc.edu/adsl/Pu...ion-fast10.pdf
                  Disk corruptions are prevalent across a broad range of modern drives. In a recent study of 1.53 million disk drives over 41 months [7], Bairavasundaram et al. show that more than 400,000 blocks had checksum mismatches, 8% of which were discovered during RAID reconstruction, creating the possibility of real data loss.
                  Last edited by F.Ultra; 27 August 2021, 07:33 PM.



                  • #39
                    Originally posted by Etherman View Post
                    How do you automate that?
                    Which part? Gimp/Photoshop?

                    1. Run them once, take a screenshot and store it; then run them a second time with a screen-recording application in the background - when it "sees" the stored picture, you measure the time.
                    2. Run them once and note how much data has been read from the disk; then run them a second time and check how long it takes to read that amount of data from the disk. It would also require either running `echo 3 > /proc/sys/vm/drop_caches` beforehand or using nocache.

                    The second option is quite easy to automate.
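                    Roughly like this, say (a sketch only: dropping caches needs root, and the Gimp invocation at the end is just an illustrative stand-in for whatever app you're measuring):

                    ```shell
                    # Flush dirty buffers and drop the page cache so the next
                    # launch is a cold start (needs root to write drop_caches).
                    drop_caches() {
                        sync
                        if [ -w /proc/sys/vm/drop_caches ]; then
                            echo 3 > /proc/sys/vm/drop_caches
                        else
                            echo "not root: caches stay warm" >&2
                        fi
                    }

                    # Time an arbitrary command; prints elapsed seconds.
                    time_cmd() {
                        local start end
                        start=$(date +%s.%N)
                        "$@" >/dev/null 2>&1
                        end=$(date +%s.%N)
                        awk -v a="$start" -v b="$end" 'BEGIN { printf "%.2f\n", b - a }'
                    }

                    # Example: drop_caches; time_cmd gimp -i -b '(gimp-quit 0)'
                    ```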
                    Last edited by avem; 27 August 2021, 07:59 PM.



                    • #40
                      Originally posted by shoarmapapi View Post
                      How come btrfs is so slow compared to the older filesystems?
                      It's only slow on SQLite, which should use a non-CoW file, because SQLite does its own copy-on-write.

