Linux RAID Performance On Dual NVMe SSDs


  • Linux RAID Performance On Dual NVMe SSDs

    Phoronix: Linux RAID Performance On Dual NVMe SSDs

    Here are our latest Linux RAID benchmarks using the brand-new Linux 4.16 kernel with two high-end Samsung 960 EVO 500GB NVMe solid-state drives on Ubuntu 18.04 LTS. Tested atop mdadm Linux software RAID were EXT4, F2FS, and XFS, while Btrfs RAID0/RAID1 was also tested using that file-system's integrated/native RAID capabilities.

    http://www.phoronix.com/vr.php?view=26177
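
    The mdadm and Btrfs-native RAID configurations the article compares can be sketched roughly as follows. Device names are placeholders for the two NVMe drives, the commands are destructive, and they need root:

    ```shell
    # Placeholder device names; substitute your own NVMe drives.
    # Two-drive mdadm RAID0, with a conventional filesystem created on top:
    sudo mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1
    sudo mkfs.ext4 /dev/md0        # or mkfs.xfs / mkfs.f2fs

    # Btrfs using its integrated RAID instead of MD (raid0 shown; use raid1 for mirroring):
    sudo mkfs.btrfs -d raid0 -m raid0 /dev/nvme0n1 /dev/nvme1n1
    ```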

  • #2
    After all these years since first being the file system on the Unix-based Silicon Graphics machines, XFS still shines, and shines brighter than most. Well done, team XFS!



    • #3
      I'm rather pleased by how competitive BTRFS is with PostgreSQL. I guess in many common server workloads that's the dominant factor. With a classical webapp, the code usually doesn't change and can be kept in RAM; the filesystem should usually only touch the database and maybe some media files.

      In this scenario, BTRFS-RAID1 seems to be the perfect choice. It supports snapshots, so one can make backups of the database while it's running, without it even noticing. Media files would be safe from bit-corruption.

      Anyway, that's what my company has been doing for a couple of years now, and I always thought the BTRFS features came at a higher price. Our reasons for using it were mostly snapshots, which make backups much easier, deduplication (we use LXC containers a lot), and the bit-rot protection.
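
      A minimal sketch of that snapshot-based backup flow, assuming the PostgreSQL data directory lives on a Btrfs subvolume at a hypothetical /srv/pgdata (a careful setup would also checkpoint the database first):

      ```shell
      # Hypothetical subvolume path; adjust to your own layout.
      # Take a read-only snapshot of the live subvolume; the database keeps running.
      sudo btrfs subvolume snapshot -r /srv/pgdata /srv/pgdata-snap

      # Copy the frozen snapshot off-machine at leisure, then drop it.
      sudo btrfs subvolume delete /srv/pgdata-snap
      ```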



      • #4
        Originally posted by treba View Post
        I'm rather pleased by how competitive BTRFS is with PostgreSQL. I guess in many common server workloads that's the dominant factor. With a classical webapp, the code usually doesn't change and can be kept in RAM; the filesystem should usually only touch the database and maybe some media files.

        In this scenario, BTRFS-RAID1 seems to be the perfect choice. It supports snapshots, so one can make backups of the database while it's running, without it even noticing. Media files would be safe from bit-corruption.

        Anyway, that's what my company has been doing for a couple of years now, and I always thought the BTRFS features came at a higher price. Our reasons for using it were mostly snapshots, which make backups much easier, deduplication (we use LXC containers a lot), and the bit-rot protection.
        Not only snapshots; it's great for unionfs-style containers too. Overlayfs consumes some RAM with each extra layer.



        • #5
          Does anyone know how to run a PTS test profile (like the one from the article: phoronix-test-suite benchmark 1803273-FO-SAMSUNG9663) on another/secondary disk? Is that done by changing the environment variable PTS_TEST_INSTALL_ROOT_PATH to point at the relevant disk?



          • #6
            Originally posted by Davidovitch View Post
            Does anyone know how to run a PTS test profile (like the one from the article: phoronix-test-suite benchmark 1803273-FO-SAMSUNG9663) on another/secondary disk? Is that done by changing the environment variable PTS_TEST_INSTALL_ROOT_PATH to point at the relevant disk?
            Yes, that is the easy/universal way (or you can get the same effect by editing EnvironmentDirectory in user-config.xml/phoronix-test-suite.xml).
            Michael Larabel
            http://www.michaellarabel.com/
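
            In practice, that environment-variable approach looks something like the following; the mount point for the secondary disk is an assumption:

            ```shell
            # Assumed mount point for the secondary disk; adjust to your system.
            export PTS_TEST_INSTALL_ROOT_PATH=/mnt/second-disk/
            phoronix-test-suite benchmark 1803273-FO-SAMSUNG9663
            ```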



            • #7
              I would be interested to see something like this but including ZFS, especially with ZFS's SSD caching. I'd be curious whether Btrfs, being native to the kernel, would win out.



              • #8
                Michael, have you ever tried running an FS comparison on the popular cloud platforms? I've tried searching and couldn't find anything.

                If not, it would be great to see the same tests running on AWS and Azure.



                • #9
                  MD RAID tests should be done with RAID10 (with layout o2 or f2) instead of RAID1.
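
                  For reference, mdadm can build a two-drive RAID10 with the far (f2) or offset (o2) layout, which mirrors the data while striping reads much like RAID0; device names are placeholders:

                  ```shell
                  # Placeholder devices; f2 = "far" layout, two copies spread far apart on each drive.
                  sudo mdadm --create /dev/md0 --level=10 --layout=f2 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1

                  # Or the "offset" layout, copies offset by one chunk:
                  sudo mdadm --create /dev/md0 --level=10 --layout=o2 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1
                  ```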



                  • #10
                    Originally posted by Jumbotron View Post
                    After all these years since first being the file system on the Unix-based Silicon Graphics machines, XFS still trades blows with ext4 and f2fs depending on load. Well done, team XFS!
                    Fixed.

