Btrfs RAID Testing Begins With Linux 4.0

  • Btrfs RAID Testing Begins With Linux 4.0

    Phoronix: Btrfs RAID Testing Begins With Linux 4.0

    With Btrfs recently landing RAID 5/6 improvements and other enhancements, I've been working on some fresh Btrfs RAID benchmarks using the Linux 4.0 kernel...

    http://www.phoronix.com/scan.php?pag...inux-4.0-Btrfs
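
    For anyone wanting to reproduce this kind of setup, a multi-device Btrfs array can be created with mkfs.btrfs. This is only a minimal sketch, not the article's actual configuration; the device names and mount point are placeholders:

    ```shell
    # Create a three-device Btrfs array: RAID5 for data, RAID1 for metadata.
    # /dev/sdb, /dev/sdc, /dev/sdd are placeholders; adjust to your hardware.
    mkfs.btrfs -f -d raid5 -m raid1 /dev/sdb /dev/sdc /dev/sdd

    # Mount any member device and inspect how data/metadata are laid out.
    mount /dev/sdb /mnt/btrfs-test
    btrfs filesystem df /mnt/btrfs-test
    btrfs filesystem show /mnt/btrfs-test
    ```

    Mixing profiles (raid5 data, raid1 metadata) is a common hedge while Btrfs RAID 5/6 code is still maturing, since metadata loss is far more damaging than data-block loss.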

  • #2
    Where's the article?



    • #3
      RAID5 please.
      ## VGA ##
      AMD: X1950XTX, HD3870, HD5870
      Intel: GMA45, HD3000 (Core i5 2500K)



      • #4
        Would you mind testing things like rebalancing when one drive fails and you replace it?

        Basically, whether it's anything but crazy to actually run RAID 5, even on a non-critical server.
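
        The failed-drive scenario asked about here is normally exercised with btrfs replace rather than a manual rebalance. A rough sketch, with placeholder device names and mount point:

        ```shell
        # Assume one member of the array has died and /dev/sde is its replacement.
        # If the dead device is already gone, mount in degraded mode first:
        mount -o degraded /dev/sdb /mnt/btrfs-test

        # Replace the missing device; '2' is its devid
        # (look it up with: btrfs filesystem show /mnt/btrfs-test).
        btrfs replace start -B 2 /dev/sde /mnt/btrfs-test
        btrfs replace status /mnt/btrfs-test

        # Scrub afterwards to verify the redundancy was rebuilt correctly.
        btrfs scrub start /mnt/btrfs-test
        ```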



        • #5
          HGST discs are best for RAID

          WD discs are not a good idea for RAID; they fail at a high rate.
          Recently I had a WD enterprise disc fail within 3 months, and that wasn't the first problem I've had with WD discs (though it was a new record).

          Use HGST discs. I've never had a problem with those, and after the recent WD failure I decided to stick with HGST exclusively from now on.



          • #6
            Originally posted by fhuberts
            WD discs are not a good idea for RAID; they fail at a high rate.
            Recently I had a WD enterprise disc fail within 3 months, and that wasn't the first problem I've had with WD discs (though it was a new record).

            Use HGST discs. I've never had a problem with those, and after the recent WD failure I decided to stick with HGST exclusively from now on.
            I don't fully understand it, but it has something to do with the color of the model. Caviar Blue drives, for example, don't have the firmware features needed to keep their access patterns synchronized with the other disks. Technically the RAID will work just fine, but the hardware just thrashes all to hell.



            • #7
              Originally posted by fhuberts
              WD discs are not a good idea for RAID; they fail at a high rate.
              I'm gonna go out on a limb and say that Michael knows about the WD Green reputation, but they're also the cheapest. I don't think he's suggesting they be used in real deployments, but they're fine for benchmarking.



              • #8
                WD Greens have an insanely low head-parking timeout (8 s, maybe?). Maybe that's the culprit. I have it disabled on my NAS. I was lucky to spot an article about this the week after I bought the disks.
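
                Whether a drive is affected is easy to check from its SMART data. A quick sketch, assuming smartmontools is installed and /dev/sda is a placeholder for the drive in question:

                ```shell
                # Attribute 193 (Load_Cycle_Count) tracks head-parking cycles;
                # on an affected WD Green it climbs alarmingly fast relative to
                # attribute 9 (Power_On_Hours).
                smartctl -A /dev/sda | grep -E 'Load_Cycle_Count|Power_On_Hours'
                ```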



                • #9
                  ...and do not forget about the "WD idle3 timeout".

                  Originally posted by fhuberts
                  WD discs are not a good idea for RAID; they fail at a high rate.
                  Recently I had a WD enterprise disc fail within 3 months, and that wasn't the first problem I've had with WD discs (though it was a new record).

                  Use HGST discs. I've never had a problem with those, and after the recent WD failure I decided to stick with HGST exclusively from now on.
                  HGST would be my favorite, too. One of the HGST drives I've got has logged 70,000 power-on hours (roughly eight years) and still shows a perfect S.M.A.R.T. state. Wow, that's how one should manufacture HDDs.

                  Still, WDs are my second favorite if Hitachis aren't an option for some reason. They're worse than HGST drives, but far better than Seagate, which is legendary for its failure rates (the Backblaze stats are pretty clear on that, btw).

                  But there is one catch with WD: check the "WD idle3" timeout state, and if it is enabled, disable it. hdparm can do that (even though it requires a power cycle afterwards). The caveat: if idle3 is enabled (and it is on most "green" drives), the HDD parks its heads after a short timeout and then unparks them on any activity. This is unwise behavior, and it has led to 140,000 head-parking cycles in a mere 1,000 hours of uptime. Needless to say, that wears out the mechanics, and the drive can die within a few months (IIRC, drives can usually withstand about 300-600K parking cycles). However, things return to normal after you disable the WD idle3 timeout and cycle power; many WDs will then last far longer than they otherwise would.

                  Also, the cheap WD series seem to have issues with the gold plating on the PCB (or, to be exact, an almost complete lack of it), which leads to PCB oxidation within a very foreseeable time (around one year). Eventually the PCB stops conducting somewhere (often at the magnetic-head block connection); the drive then starts to malfunction and can die in ways that make data recovery costly.
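
                  A sketch of the idle3 check/disable described above. Recent hdparm versions handle the WD idle3 timer via -J, and idle3ctl from idle3-tools is the common alternative; the device name is a placeholder, and a full power cycle (not just a reboot) is needed for the change to take effect:

                  ```shell
                  # Read the current WD idle3 timer (newer hdparm versions):
                  hdparm -J /dev/sda

                  # Disable it entirely (0 = off); hdparm insists on this safety flag:
                  hdparm -J 0 --please-destroy-my-drive /dev/sda

                  # Equivalent with idle3-tools:
                  idle3ctl -g /dev/sda   # get the current value
                  idle3ctl -d /dev/sda   # disable the timer
                  ```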



                  • #10
                    Originally posted by fhuberts
                    Use HGST discs. I've never had a problem with those, and after the recent WD failure I decided to stick with HGST exclusively from now on.
                    That's funny, because I have an HGST disk sitting on my desk looking pretty, and that's pretty much it. Mind you, it's not that it failed, and nothing shows up wrong in SMART either, but reads are hilariously slow. Slower than writes, to the point that launching Firefox takes two minutes from a cold boot. So I could use it to store things like pictures and other data that don't depend on drive performance at all, but that's pretty much it...

