Linux MD RAID Gets Some Improvements For 4.9 Kernel


  • Linux MD RAID Gets Some Improvements For 4.9 Kernel

    Phoronix: Linux MD RAID Gets Some Improvements For 4.9 Kernel

    The MD subsystem updates were sent out earlier this week for the Linux 4.9 kernel merge window with a few improvements/features to note...

    http://www.phoronix.com/scan.php?pag...ux-4.9-MD-RAID

  • #2
    A few years ago I discovered that /sys/block/md0/md/stripe_cache_size can be tuned for both RAID5 and RAID6.
    The default appears to be set very low to avoid starving machines of memory.

    I may be wrong on this, but the default value is 256 and the memory used appears to be stripe_cache_size * 4096 bytes * number of disks. So in my case, 8 disks configured in RAID6, it will use about 8 MB of memory, or in other words 1 MB per disk.

    Since most disks have 32 MB of cache or more, I assume I can safely increase the value until it roughly matches the cache size of each disk.
    I have set mine to 8192 (* 4096 * number of disks), which is about 268 MB, or 268/8 = 33.5 MB per disk.

    My RAID is insanely much faster, and I think the risk of corruption is no worse than with a lower value, since it probably depends on the disk's cache size anyway.
    My array has gone down several times thanks to power interruptions and I have never lost anything at all, though a few times I have noticed the RAID fixing silent data corruption on read and during periodic checking (scrubbing).
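    The arithmetic above can be sketched at the shell. The array name (md0) and the 8-disk count are taken from the post and are assumptions for any other setup; the write at the end needs root, and the setting does not persist across reboots:

    ```shell
    # Sketch: inspect and raise stripe_cache_size on an md array.
    # md0 and an 8-disk RAID6 are assumptions from the post; adjust both.
    MD=md0
    DISKS=8
    PAGE=4096    # the value is counted in pages (4 KiB) per member disk
    SIZE=8192    # candidate setting; the kernel default is 256

    # Memory the stripe cache will consume at a given setting:
    echo "$(( SIZE * PAGE * DISKS )) bytes"    # 268435456 bytes (~256 MiB)

    SYSFS=/sys/block/$MD/md/stripe_cache_size
    if [ -e "$SYSFS" ]; then
        cat "$SYSFS"               # current value
        echo "$SIZE" > "$SYSFS"    # needs root; not persistent across reboots
    fi
    ```

    At the default of 256 the same formula gives 256 * 4096 * 8 = 8388608 bytes, i.e. the ~8 MB the post mentions.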


    • #3
      Originally posted by waxhead
      A few years ago I discovered that /sys/block/md0/md/stripe_cache_size can be tuned for both RAID5 and RAID6.
      The default appears to be set very low to avoid starving machines of memory.
      Yeah, yet another cache whose default is set unrealistically low for modern computers.
      It is a system cache that uses RAM; it has nothing to do with the disk's own cache (RAM bolted onto and operated by the drive itself).

      Btw, according to the docs it appears to matter only for RAID5: https://github.com/torvalds/linux/bl...on/md.txt#L603


      • #4
        Those docs must be out of date, because there are very significant performance improvements on RAID6 as well.

        On some of my servers 8192 is best, while others prefer 2048. These are all different 12-bay servers with the same drives, so it probably comes down more to system bandwidth or the number of SATA channels through the SAS expanders.
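        Since the sweet spot evidently varies per machine, one way to find it is a simple sweep over candidate values. A minimal sketch, again assuming an array at md0; the actual benchmark command is left as a comment so the loop on its own is harmless:

        ```shell
        # Sweep a few stripe_cache_size candidates; md0 and the candidate
        # list are assumptions, tune both to your hardware.
        SYSFS=/sys/block/md0/md/stripe_cache_size

        for size in 256 1024 2048 4096 8192; do
            echo "candidate stripe_cache_size=$size"
            if [ -w "$SYSFS" ]; then
                echo "$size" > "$SYSFS"
                # Benchmark the array here and record throughput, e.g.:
                # dd if=/dev/zero of=/mnt/raid/bench bs=1M count=4096 oflag=direct
                # rm -f /mnt/raid/bench
            fi
        done
        ```

        Run the workload you actually care about rather than a synthetic streaming write if the two differ; as #4 notes, the best value depends on the whole system, not just the drives.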
