Optimizing Linux MD Bitmap Code Yields 89% Throughput Boost For Quad SSDs

  • phoronix
    Administrator
    • Jan 2007
    • 67050

    Phoronix: Optimizing Linux MD Bitmap Code Yields 89% Throughput Boost For Quad SSDs

    A promising patch for the Linux kernel is optimizing the locking contention and scattered address space for the MD bitmap code to improve both the storage throughput and latency...
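
    For context, the bitmap in question is md's write-intent bitmap. A minimal sketch for checking whether an existing array carries one, assuming placeholder device names like /dev/md0 and /dev/nvme0n1p2:

      # List md arrays and their current state
      cat /proc/mdstat
      # The detail output reports the intent bitmap when one is enabled
      mdadm --detail /dev/md0
      # Dump the bitmap superblock from one of the array's member devices
      mdadm --examine-bitmap /dev/nvme0n1p2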

  • bezirg
    Senior Member
    • May 2016
    • 156

    #2
    Impressive... Michael, bless us with such RAID benchmarks!

  • fermino
    Junior Member
    • Sep 2024
    • 4

    #3
    I know it is sequential throughput, but those numbers are wild nonetheless!

  • Anux
    Senior Member
    • Nov 2021
    • 1878

    #4
    Strange, shouldn't bitmaps only influence random write performance? Like atime does?

    I would be interested in the difference between mdadm with and without bitmaps, or better yet internal vs. external bitmaps.

    PS: here is an old article with 10k RPM HDDs: https://louwrentius.com/the-impact-o...rformance.html
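
    A rough sketch for running that comparison with mdadm, assuming a placeholder array /dev/md0 (an external bitmap file has to live on a filesystem outside the array itself):

      # Drop the write-intent bitmap entirely
      mdadm --grow /dev/md0 --bitmap=none
      # Re-add an internal bitmap, stored in the members' metadata
      mdadm --grow /dev/md0 --bitmap=internal
      # Or point the array at an external bitmap file instead
      mdadm --grow /dev/md0 --bitmap=/srv/bitmaps/md0.bitmap

    Re-running the same random-write workload after each change should expose the bitmap's overhead.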

  • zexelon
    Senior Member
    • May 2019
    • 731

    #5
    Now this is really interesting! LVM RAID is md under the hood, so would this significantly improve LVM RAID setups?
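
    It is: LVM's raid segment type is implemented by the dm-raid device-mapper target, which reuses the kernel's MD personalities, though whether this particular patch carries over to LVM RAID volumes would still need benchmarking. A quick sketch for confirming the raid target is in use, with placeholder VG/LV names:

      # Show the segment type backing each logical volume
      lvs -a -o lv_name,segtype,devices vg0
      # raid LVs show up as "raid" targets in the device-mapper tables
      dmsetup table | grep ' raid '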

  • Gamer1227
    Phoronix Member
    • Mar 2024
    • 56

    #6
    Originally posted by fermino:
    "I know it is sequential throughput, but those numbers are wild nonetheless!"
    He said tail latency was reduced by 85%, which is the main factor for random performance.
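
    A rough fio sketch for measuring that kind of tail latency on an array, assuming a placeholder target /dev/md0 (writing to it directly is destructive):

      fio --name=randwrite-tail --filename=/dev/md0 --direct=1 \
          --rw=randwrite --bs=4k --iodepth=32 --numjobs=4 --group_reporting \
          --ioengine=libaio --runtime=60 --time_based \
          --percentile_list=50:95:99:99.9

    The completion-latency percentiles in the output are where an 85% tail-latency reduction would show up.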

  • Raka555
    Junior Member
    • Nov 2018
    • 672

    #7
    I always have to be that guy ...

    I wonder if this still plays nice with hard disks, since many people use Linux as a NAS for bulk storage on rotating media...

  • Espionage724
    Senior Member
    • Sep 2024
    • 319

    #8
    Oddly specific optimization, but I'd totally be RAID0'ing 4 NVMe drives just to do it. (I had an Acer Predator Skylake laptop that came with two 256GB NVMe drives RAID0'd out of the box through Intel RST, which was pretty cool.)
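
    For reference, a minimal mdadm sketch for striping four NVMe drives like that, with placeholder device names (--create destroys existing data). Worth noting: write-intent bitmaps only apply to redundant levels such as RAID1/5/6/10, so a pure RAID0 array would not carry one in the first place.

      # Build a 4-drive RAID0 array and put a filesystem on it
      mdadm --create /dev/md0 --level=0 --raid-devices=4 \
          /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1
      mkfs.ext4 /dev/md0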

  • ernsteiswuerfel
    Phoronix Member
    • Feb 2009
    • 65

    #9
    I wonder if (and by how much) RAID performance with traditional hard disks has increased too? Still got my RAID5 array running with ye goode olde el cheapo rustbuckets. Didn't see any details about that in the LKML message.

  • schmidtbag
    Senior Member
    • Dec 2010
    • 6599

    #10
    I doubt HDDs would be impacted. Perhaps 12Gbps SAS drives could be, since they have larger caches and faster read/write speeds, but the thing about HDDs is that they tend to sit behind a single controller, whereas NVMe drives connect directly to the PCIe bus and therefore differ in their CPU overhead.
    Having said that, I doubt even SATA SSDs are really affected much by this.
