Linux RAID Performance On NVMe M.2 SSDs

• #11
Very interesting tests, thanks a lot.

Wondering if there is any reason why all of the fio tests had fsync disabled (fio's default of --fsync=0 disables fsync entirely)? For a few types of database-related workloads it is one of the most important I/O metrics to look at. Any chance of including it in some future tests?

Cheers!
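For reference, a minimal sketch of the kind of run meant here, assuming fio is installed; the file name, size and runtime are placeholders, and the JSON field names can vary slightly between fio versions:

# Run a 4k random-write job with an fsync after every write and pull the
# basic numbers out of fio's JSON report. Paths and sizes are placeholders.
import json
import subprocess

cmd = [
    "fio", "--name=fsync-randwrite", "--filename=/mnt/test/fio.dat",
    "--rw=randwrite", "--bs=4k", "--size=256M",
    "--runtime=60", "--time_based",
    "--fsync=1",                      # issue an fsync after every write
    "--output-format=json",
]
result = subprocess.run(cmd, capture_output=True, text=True, check=True)
job = json.loads(result.stdout)["jobs"][0]["write"]

print(f"write IOPS: {job['iops']:.0f}")
print(f"mean write latency: {job['lat_ns']['mean'] / 1e6:.2f} ms")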

• #12
Originally posted by sdack View Post
The Btrfs numbers in one word: confusing.

Yeah, though they sort of justify how I am using Btrfs these days: for directories with source code and builds. That is what it is really good at.

• #13
Originally posted by varikonniemi View Post
Has anyone got an explanation for how Btrfs is seriously trailing the bunch in almost all tests, but then in the Linux compilation test it is the clear winner?

It has a slow fsync. It used to be much worse (I remember hitting this nasty performance bug several years back: https://bugs.launchpad.net/ubuntu/+s...kg/+bug/601299), but it really does not like syncing, which makes it poor at most of the artificial benchmarks that issue syncs to make sure all the work has been completed. It is better in real life.
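To see what that means in practice, here is a tiny timing sketch (the path and counts are placeholders, not anything from the article); running it once on a Btrfs mount and once on an ext4 mount shows how much the fsync path costs:

# Time N small appends, each followed by an fsync(), in a given directory.
import os
import sys
import time

target_dir = sys.argv[1] if len(sys.argv) > 1 else "."
path = os.path.join(target_dir, "fsync-test.dat")
n, block = 1000, b"x" * 4096

fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
start = time.monotonic()
for _ in range(n):
    os.write(fd, block)
    os.fsync(fd)            # force data and metadata out to stable storage
elapsed = time.monotonic() - start
os.close(fd)
os.unlink(path)

print(f"{n} write+fsync pairs in {elapsed:.2f} s -> {n / elapsed:.0f} fsyncs/s")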

• #14
It's interesting how SSD endurance has dropped over the years. My Intel X25-E's from 2009 are only 64 GB but have 1 petabyte of write endurance. I don't think there are any modern SATA or M.2 drives that even come close to that. I'm still using these X25-E's today as the OS disks in my workstation; Fedora 25 boots almost instantly from them. They won't wear out any time soon, so why replace them?

• #15
Originally posted by torsionbar28 View Post
It's interesting how SSD endurance has dropped over the years. My Intel X25-E's from 2009 are only 64 GB but have 1 petabyte of write endurance.

Not really surprising; it's simply a consequence of newer production nodes. Fewer nanometres means less charge in a cell and less time it can remain healthy. That's all.

• #16
Generally my Linux partitions boot from ext4, with the occasional Btrfs, etc. All of my notebook's operating systems use the same NTFS-compressed data partitions.
The commercial Microsoft version of NTFS is different from the Linux NTFS driver. It would be interesting to compare the traditional Linux partitions with the two NTFS variants.
Some of the newer notebooks allow two or more SSD drives of exactly the same hardware. My Dell XPS 15 notebook (old 2013 model) only allows one of each type of SSD: M.2 and SATA. My data drives are in Microsoft's NTFS-compressed format, for the three Windows 10 partitions and their data partitions. In addition, I have ten Linux partitions for up to ten Linux operating systems, each with proper Linux partitioning.

If the Linux operating system is one of the many derived from Ubuntu, it is extremely easy to multi-boot into any installed Linux kernel from a menu of available kernels. Grub Customizer (or its forks) is installed by default in some Linux operating systems and can also be installed from the command line. The problems reported here with Linux kernel 4.13 might be resolved by one of the bug-fix releases that come out roughly every week for several weeks after a stable kernel is released.

Re-testing on a different kernel is easy to do with Grub Customizer. Eventually the "correct" kernel might be found, whether one of the newer weekly bug-fix releases or an older release.

A few independent researchers have tried running RAID 0, etc., using USB flash drives. That could be of interest to some users as well.

• #17
Not sure if this was pointed out in the article, or by someone else in this thread, but Btrfs has an 'ssd' mount option to optimise its copy-on-write behaviour for SSDs.
It would be good to see this benchmark with the 'ssd' option turned on vs. off.

EDIT: Just looked at the Btrfs mount options in the article; it looks like it was turned on in these tests.
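For anyone who wants to double-check on their own box, a quick sketch (the mount point is a placeholder); the kernel normally adds 'ssd' to the options automatically when it detects a non-rotational device, and the active options show up in /proc/self/mounts:

# Report whether the 'ssd' (and 'ssd_spread') options are active on a
# given Btrfs mount point by parsing /proc/self/mounts.
mountpoint = "/mnt/btrfs"   # placeholder, change to the mount being checked

with open("/proc/self/mounts") as f:
    for line in f:
        device, mnt, fstype, options, *_ = line.split()
        if mnt == mountpoint and fstype == "btrfs":
            opts = options.split(",")
            print(f"{device} on {mnt}: ssd={'ssd' in opts}, "
                  f"ssd_spread={'ssd_spread' in opts}")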

• #18
Originally posted by torsionbar28 View Post
It's interesting how SSD endurance has dropped over the years. My Intel X25-E's from 2009 are only 64 GB but have 1 petabyte of write endurance.

Are you seriously comparing an enterprise-grade SLC SSD to consumer MLC SSDs?

SLC cells can be rewritten something like 10-100 times more often than MLC cells (but store about a quarter or less of the data), and this is well known.
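To put rough numbers on it, here is a back-of-the-envelope sketch; the P/E cycle counts and the write amplification factor are ballpark assumptions, not datasheet values:

# Approximate total host writes before wear-out, in terabytes.
def endurance_tb(capacity_gb, pe_cycles, write_amplification=2.0):
    return capacity_gb * pe_cycles / write_amplification / 1000

# 64 GB SLC drive, assuming ~100,000 P/E cycles (X25-E class)
print(f"SLC, 64 GB: ~{endurance_tb(64, 100_000):,.0f} TB")
# 1 TB consumer TLC drive, assuming ~1,000 P/E cycles
print(f"TLC,  1 TB: ~{endurance_tb(1000, 1_000):,.0f} TB")

Even with a fraction of the capacity, the old SLC drive comes out ahead simply because each cell survives so many more erase cycles.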

• #19
Thank you for the interesting test!
It is kind of alarming that even in RAID 0 Btrfs seemed to have issues. I had hoped for Btrfs to become a possible replacement in my production systems, but I have nearly given up on it already.
If I understood correctly, it is an mdadm RAID. Did you see any relevant CPU usage while running those NVMe drives in RAID?
I am currently thinking of building a compile station with two of the new Epyc CPUs and 960 Pro NVMe drives in RAID 1, as this seems like the perfect scenario for those CPUs, where the threads don't communicate with each other.
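Something along these lines (a rough sketch; the thread-name match and the sampling window are assumptions) can give a feel for how much CPU the md kernel threads burn while a benchmark runs on the array:

# Sample the CPU time consumed by md/raid kernel threads (e.g. "md0_raid1")
# over an interval while a workload is running on the array.
import os
import time

HZ = os.sysconf("SC_CLK_TCK")

def md_cpu_ticks():
    total = 0
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            with open(f"/proc/{pid}/comm") as f:
                name = f.read().strip()
            if "raid" not in name:
                continue
            with open(f"/proc/{pid}/stat") as f:
                fields = f.read().rsplit(")", 1)[1].split()
            total += int(fields[11]) + int(fields[12])   # utime + stime
        except FileNotFoundError:
            pass                                         # thread exited mid-scan
    return total

interval = 10   # seconds; run fio or the compile job during this window
before = md_cpu_ticks()
time.sleep(interval)
after = md_cpu_ticks()
print(f"md/raid threads used ~{(after - before) / HZ / interval * 100:.1f}% of one CPU")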
