
Btrfs On 4 x Intel SSDs In RAID 0/1/5/6/10


  • #11
    Originally posted by gigaplex View Post
    Then again, I'm not sure how non-RAID sequential reads are significantly faster than a single disk, since if it's actually a linear span like Michael claimed, it won't have multiple sources to read from.
    Well, I guess it depends on how it is submitted to the kernel. If the read requests are small and submitted sequentially (i.e. the next request is issued after the previous one completes), then there's nothing the kernel can do about it. But if you submit a read request for a big chunk of data, the kernel (well, the multi-device implementation in the kernel, be it LVM, MD or btrfs) should at least in principle be able to split the request into smaller requests for non-overlapping areas of the different drives that comprise the mirror set (RAID 1 assumed here). But maybe that's too much work and/or tricky to implement.

    In any case, it should be possible when several read requests are submitted asynchronously or by several threads - the kernel simply needs to choose which block device should service each request, and no transformation of the requests is involved.

    Obviously, this can only work for reading. When writing to a mirror set, the same data needs to be written to all devices, so writes will only be as fast as the slowest device.
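
    To make the splitting idea concrete, here's a rough userspace sketch, assuming a two-way mirror with naive round-robin leg selection. The file names and the 4 KiB chunk size are placeholders, and a real implementation (MD, btrfs) does this in-kernel and issues the chunk reads in parallel rather than one after another as done here:

        #include <fcntl.h>
        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>

        #define CHUNK 4096  /* split granularity for mirror reads */

        /* Read [off, off+len) by alternating chunks between the two legs;
           both legs hold identical data, so either may service any chunk. */
        static ssize_t mirror_read(int leg[2], char *buf, size_t len, off_t off)
        {
            size_t done = 0;
            int next = 0;  /* naive round-robin leg selection */
            while (done < len) {
                size_t n = len - done < CHUNK ? len - done : CHUNK;
                ssize_t r = pread(leg[next], buf + done, n, off + done);
                if (r <= 0)
                    return -1;
                done += (size_t)r;
                next ^= 1;
            }
            return (ssize_t)done;
        }

        /* Writes cannot be split: the same data must reach every leg, which
           is why a mirror only writes as fast as its slowest member. */
        static ssize_t mirror_write(int leg[2], const char *buf, size_t len, off_t off)
        {
            for (int i = 0; i < 2; i++)
                if (pwrite(leg[i], buf, len, off) != (ssize_t)len)
                    return -1;
            return (ssize_t)len;
        }

        int main(void)
        {
            static char out[8192], in[8192];
            int leg[2];

            /* Two ordinary files stand in for the mirror members here. */
            leg[0] = open("leg0.img", O_RDWR | O_CREAT | O_TRUNC, 0644);
            leg[1] = open("leg1.img", O_RDWR | O_CREAT | O_TRUNC, 0644);
            if (leg[0] < 0 || leg[1] < 0) { perror("open"); return 1; }

            memset(out, 'x', sizeof(out));
            if (mirror_write(leg, out, sizeof(out), 0) < 0) { perror("write"); return 1; }
            if (mirror_read(leg, in, sizeof(in), 0) < 0) { perror("read"); return 1; }
            printf("round-trip %s\n", memcmp(in, out, sizeof(out)) ? "FAILED" : "ok");
            return 0;
        }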

    Comment


    • #12
      Originally posted by gigaplex View Post
      Then again, I'm not sure how non-RAID sequential reads are significantly faster than a single disk, since if it's actually a linear span like Michael claimed, it won't have multiple sources to read from.
      Sorry, I misunderstood your post. It is, of course, curious that the non-RAID set is faster in some circumstances.

      It may be possible if there are simultaneous requests for different areas of the logical device that would land on different physical devices, but that seems a bit contrived.
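
      For what it's worth, the mapping itself is trivial. A minimal sketch, assuming a linear concatenation of equal-sized members (the 240 GB member size is just an illustrative figure): only requests whose logical offsets resolve to different members can be serviced by different drives at the same time.

          #include <stdint.h>
          #include <stdio.h>

          #define MEMBER_SZ (240ULL * 1000 * 1000 * 1000)  /* assumed 240 GB per drive */

          /* Map a logical offset on the concatenated device to (member, offset). */
          static void linear_map(uint64_t logical, int *dev, uint64_t *off)
          {
              *dev = (int)(logical / MEMBER_SZ);
              *off = logical % MEMBER_SZ;
          }

          int main(void)
          {
              int dev;
              uint64_t off;

              /* Requests far enough apart land on different drives and could
                 proceed concurrently; nearby requests hit the same drive. */
              linear_map(10ULL << 30, &dev, &off);
              printf("offset 10 GiB  -> drive %d (offset %llu)\n", dev, (unsigned long long)off);
              linear_map(500ULL << 30, &dev, &off);
              printf("offset 500 GiB -> drive %d (offset %llu)\n", dev, (unsigned long long)off);
              return 0;
          }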

      Comment


      • #13
        Comparison to md-raid with ext4

        Would it be easy to add an md-raid/ext4 comparison?

        Comment


        • #14
          Originally posted by vladpetric View Post
          Would it be easy to add an md-raid/ext4 comparison?
          I would also like to see the exact same experiment, but with mdadm and ext4. I feel that combination is currently more popular on Linux.

          Comment


          • #15
            Originally posted by zeroepoch View Post
            I would also like to see the exact same experiment, but with mdadm and ext4. I feel that combination is currently more popular on Linux.
            mdadm is what my other RAID tests are being done with.
            Michael Larabel
            https://www.michaellarabel.com/

            Comment


            • #16
              Originally posted by Michael View Post
              Likely too much work without multiple premium users and/or donors requesting such tests.
              One premium user here. With ZFS On Linux seeing much recent work, it would be nice to see how it compares to btrfs.
              FreeBSD results would also be nice, to show to what extent ZFSoL has reached its potential.

              Comment


              • #17
                Never enough technical detail

                Michael,

                It's critical to have complete technical detail so that readers can have confidence in your results.

                I suggest you model your benchmarks on those of Anandtech.com (when Anand Lal Shimpi was still at the helm).

                The missing details for this review were:
                • What hardware (motherboard, CPU and RAM) was used in this benchmark?
                • What storage related BIOS/UEFI settings were used?
                • What tunables for the system or filesystem were used (if any)?
                • How was TRIM handled? i.e.:
                  • What TRIM related filesystem options were enabled or disabled?
                  • Were the SSDs securely erased in between each test?


                Without this kind of detail I cannot rely on your benchmarking.

                I suggest a paragraph or two discussing the issues of Linux filesystem benchmarking with SSDs to show the reader that you know what you're doing.

                e.g. For this article, it would have been sensible to discuss whether btrfs supports TRIM (and if there are limitations to this support).
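
                For what it's worth, btrfs does support both online discard (the "discard" mount option) and batched TRIM via the FITRIM ioctl, which is what fstrim(8) uses. A minimal sketch of issuing a batched trim, with "/mnt/btrfs" as a placeholder mount point (requires root and a filesystem that implements FITRIM):

                    #include <fcntl.h>
                    #include <linux/fs.h>    /* FITRIM, struct fstrim_range */
                    #include <stdio.h>
                    #include <string.h>
                    #include <sys/ioctl.h>
                    #include <unistd.h>

                    int main(void)
                    {
                        struct fstrim_range range;
                        int fd = open("/mnt/btrfs", O_RDONLY);  /* placeholder mount point */
                        if (fd < 0) { perror("open"); return 1; }

                        memset(&range, 0, sizeof(range));
                        range.len = (__u64)-1;  /* cover the whole filesystem */
                        range.minlen = 0;       /* let the kernel pick a minimum extent size */

                        if (ioctl(fd, FITRIM, &range) < 0) { perror("FITRIM"); close(fd); return 1; }

                        /* On success the kernel updates range.len to the bytes trimmed. */
                        printf("trimmed %llu bytes\n", (unsigned long long)range.len);
                        close(fd);
                        return 0;
                    }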

                I do appreciate that you are one of the few people who even attempts to benchmark Linux and BSD systems, but please recognise that you have a way to go before your benchmarks can be fully trusted.

                Comment


                • #18
                  Originally posted by AusMatt View Post
                  The missing details for this review were:
                  • What hardware (motherboard, CPU and RAM) was used in this benchmark?
                  • What storage related BIOS/UEFI settings were used?
                  • What tunables for the system or filesystem were used (if any)?
                  • How was TRIM handled? i.e.:
                    • What TRIM related filesystem options were enabled or disabled?
                    • Were the SSDs securely erased in between each test?

                  The hardware info is always shown in the OpenBenchmarking.org system table included in the articles.

                  Most of the other information can be obtained by going to the OpenBenchmarking.org results page, where you have access to all of the system logs and other data from the benchmarking runs.

                  It's all very transparent, and unless otherwise noted, readers know to expect that the stock settings of the given software were used.
                  Michael Larabel
                  https://www.michaellarabel.com/

                  Comment
