Btrfs / EXT4 / F2FS / XFS Benchmarks On The Linux 4.12 Kernel


  • #11
Originally posted by Brane215
I like how f2fs is performing. It means it's perfectly OK for working from some cheap USB key, CF card, etc.
On crappy flash devices like that it does well: it increases write speed significantly, especially if you have a decent amount of RAM, since it also caches writes in RAM aggressively.
I have a couple of 32GB flash drives that are slow as molasses on FAT32; they take hours to write a few GB of files that other (normal) flash drives write in a dozen minutes.

    With f2fs the write seems to complete "instantaneously" (it actually goes to the RAM cache) and is then written to the flash at normal speed. Writing 1GB of stuff takes a few minutes, not half an hour as it did with FAT32.
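For anyone who wants to try the same thing, setting a stick up with f2fs is short. The /dev/sdX1 device name below is a placeholder; check lsblk first, since mkfs destroys everything on the target:

```shell
# double-check which device is the USB stick before formatting
lsblk -o NAME,SIZE,MODEL

# create the filesystem (mkfs.f2fs comes from the f2fs-tools package)
sudo mkfs.f2fs -l usbstick /dev/sdX1

# mount and copy; writes hit the page cache first, so run sync to
# see the real device write speed
sudo mount -t f2fs /dev/sdX1 /mnt
sync
```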



    • #12
Guess running some benchmarks like that is the obvious next step, now that I've plugged an NVMe SSD on a PCIe riser into my aging dual-Xeon Mac Pro: https://www.youtube.com/watch?v=5Od0YNnZD_k



      • #13
Originally posted by Michael

At the moment, none come to mind, but there are around a thousand tests/suites and I do lose track of some of the ones in there at times... Any good open-source I/O latency tests come to mind?
fio will output latency percentiles for any workload you run, so long as you don't turn off the latency-gathering component.
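For reference, a minimal fio invocation that reports completion-latency percentiles might look like the following (the file path, size, and job parameters here are just illustrative, not from the article's test setup):

```shell
# 4K random reads against a scratch file; fio prints a clat percentile
# table (50th, 99th, 99.9th, ...) by default for each job
fio --name=randread-latency \
    --filename=/tmp/fio-scratch --size=256M \
    --rw=randread --bs=4k --ioengine=psync \
    --direct=1 --runtime=30 --time_based
```

The percentile table appears in the per-job output under "clat percentiles".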



        • #14
I second the remark about ZFS; a comparison with ZFS on Linux would be great.
Also beware: I think RAID1 on a 4-disk Btrfs array is basically treated as RAID10 for redundancy purposes. You do not have four copies of the data, only one extra copy.



          • #15
If that were true, all one would need to do is remove drives to test it.



            • #16
Is F2FS going to get boot support in Linux distros any time soon, other than on Android? Or is that forever going to be relegated to EXT4? And will we ever see an EXT5 built for solid-state storage someday?



              • #17
Originally posted by SkOrPn
Is F2FS going to get boot support in Linux distros any time soon, other than on Android?
The patch adding f2fs support to GRUB (the bootloader) was sent a year ago, but the GRUB maintainer wanted to wait until after the release to include it: http://lists.gnu.org/archive/cgi-bin...mal&sort=score

There are patched GRUBs that can boot Linux from f2fs using that code: https://aur.archlinux.org/packages/grub-f2fs/
If you care about that, post in the GRUB mailing list to ask when they plan to merge it.

There are also pre-compiled EFI drivers you can use for rEFInd or whatever else (also GRUB, with some effort), derived from the same code sent for GRUB: http://efi.akeo.ie/ (tested personally, and it works). Not just for f2fs, either.
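For the rEFInd route, installing one of those drivers is just a matter of copying it into the drivers directory on the EFI system partition. The paths and the f2fs_x64.efi filename below are assumptions based on a typical x86_64 rEFInd install; adjust for your layout:

```shell
# ESP assumed mounted at /boot/efi with rEFInd in EFI/refind
sudo mkdir -p /boot/efi/EFI/refind/drivers_x64
sudo cp f2fs_x64.efi /boot/efi/EFI/refind/drivers_x64/
# rEFInd scans drivers_x64/ at boot and loads any drivers it finds
```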

                And will we ever see EXT5 built for solid state storage someday?
AFAIK no, as that would require a total rewrite: SSDs are very different from traditional hard drives, and at that point it wouldn't make sense to call it "ext" anymore. Ext4 is likely the last of the "ext" filesystems; it will receive more features but won't be redesigned for SSDs.



                • #18
Originally posted by starshipeleven
AFAIK no, as that would require a total rewrite: SSDs are very different from traditional hard drives, and at that point it wouldn't make sense to call it "ext" anymore. Ext4 is likely the last of the "ext" filesystems; it will receive more features but won't be redesigned for SSDs.
OK, great information, thank you. Let's say that in a few years SSD manufacturers push capacity well beyond HDDs and at the same time manage to bring costs well below them, and that in turn causes everyone to adopt SSDs exclusively. Do you think we will still be using Ext4 for solid state in that scenario, or will something replace it eventually, such as F2FS or something else like Btrfs?

I adopted SSDs way back when OCZ released their Vertex series and haven't looked back; that's maybe ten years now. Only my server still uses HDDs, and that is because of capacity. Once capacity and cost are no longer the deciding reasons to use HDDs over SSDs, I plan on replacing every single HDD with at least an equivalent SSD, regardless of whether the HDDs are still healthy. I'm just concerned that we are still using something designed for HDDs so long after SSDs became available.



                  • #19
Originally posted by SkOrPn
OK, great information, thank you. Let's say that in a few years SSD manufacturers push capacity well beyond HDDs and at the same time manage to bring costs well below them, and that in turn causes everyone to adopt SSDs exclusively. Do you think we will still be using Ext4 for solid state in that scenario, or will something replace it eventually, such as F2FS or something else like Btrfs?
I think that ext4 will eventually die with HDDs. Sure, it all depends on distros deciding to adopt a new filesystem as default, but everyone made the switch to ext4 back then, so I don't see why they wouldn't hop on the next best filesystem.

Detecting SSDs during install isn't exactly difficult (the kernel detects it and exposes a "non-rotational" attribute for most flash devices), so it's possible to have the installer pick the default filesystem based on that.
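That "non-rotational" attribute is exposed in sysfs and is a one-liner to check (sda and nvme0n1 are example device names; substitute your own):

```shell
# 0 = non-rotational (SSD/NVMe), 1 = rotational (spinning disk)
cat /sys/block/sda/queue/rotational

# the same attribute exists for NVMe devices
cat /sys/block/nvme0n1/queue/rotational
```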

Once GRUB integrates f2fs support, distros will probably start offering it as an option at install time. (Currently only some do, like Antergos/Manjaro/Chakra, even though it is not supported officially by the bootloader, so you need a separate /boot partition formatted with a filesystem GRUB can read.)



                    • #20
Originally posted by starshipeleven
I think that ext4 will eventually die with HDDs. Sure, it all depends on distros deciding to adopt a new filesystem as default, but everyone made the switch to ext4 back then, so I don't see why they wouldn't hop on the next best filesystem.

Detecting SSDs during install isn't exactly difficult (the kernel detects it and exposes a "non-rotational" attribute for most flash devices), so it's possible to have the installer pick the default filesystem based on that.

Once GRUB integrates f2fs support, distros will probably start offering it as an option at install time. (Currently only some do, like Antergos/Manjaro/Chakra, even though it is not supported officially by the bootloader, so you need a separate /boot partition formatted with a filesystem GRUB can read.)
Well, I just did a clean install of Fedora 26 on my Samsung SM961 256GB PCIe SSD (same as the 960 Pro). I have pulled all other SSDs out of the system and permanently disabled all SATA ports in the BIOS. I am now exclusively using this NVMe SSD for / and /home, and using my 12TB NAS plus a USB 3.1 1TB SSD for all other storage purposes.

Because of this exclusive switch to PCIe NVMe in my main system, I am now very much wondering whether I should adopt a NAND-friendly F2FS + blk-mq setup, or something along those lines, or at least BFQ once 4.12 is released in the next few days. I also see that Kyber is already getting mainline support, which seems far more like something you would use for a low-latency NVMe device. It feels like 2017, or maybe 2018, is the year to start seriously thinking about moving on from things like AHCI/SATA/EXT4/CFQ. I have no plans to ever go back to HDDs or SATA, not locally on my main PC anyway; not until SATA is at least as fast as PCIe, which I doubt will ever happen.
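As a rough sketch of what that change looks like in practice once you're on a blk-mq capable kernel (nvme0n1 is an example device; which schedulers are listed depends on what your kernel was built with, and the change is not persistent across reboots):

```shell
# list available blk-mq schedulers; the active one is shown in [brackets]
cat /sys/block/nvme0n1/queue/scheduler

# switch to kyber (or bfq) for this device until the next reboot
echo kyber | sudo tee /sys/block/nvme0n1/queue/scheduler
```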

Does anyone have an idea what the best setup would be for the operating system on NVMe solid-state devices? I do a little bit of everything: gaming, encoding, file transfers, creating and extracting archives, writing ISOs to USB, copying between partitions and between systems, lots of web browsing, image editing, lots of videos, Plex, YouTube, music, lots of cloud syncing, etc. What would be the best filesystem and scheduler for my immediate future?

                      Or should I stick with the defaults of EXT4 and CFQ for a while longer?

EDIT: I just realized that I have to use the Clover UEFI bootloader, and I don't think it supports F2FS, so I'm not sure whether it can boot my Fedora install. EXT4 it is for a while longer, then. I'll do some extensive reading on Kyber and blk-mq.
                      Last edited by SkOrPn; 23 July 2017, 02:58 AM.

