The Linux 4.0 EXT4 RAID Corruption Bug Has Been Uncovered

  • #11
    This isn't a problem for Linux users on distributions like RHEL, Ubuntu, and other fixed-release distributions that don't tend to update major versions of their kernel post-release, but this corruption issue has already become a problem for Arch Linux and other rolling-release distributions with users who quickly jump to new versions of upstream software.
    Originally posted by wargames View Post
    So, basically, you are implying that one should stick to LTS releases and throw their graphics card through the window? And YOU pretend Linux is ready for the desktop? LOL
    No, I don't think he was implying that at all. He was just trying not to cause a panic among those folks who aren't running a bugged kernel, which is probably 90% of the readership here.

    On topic: I'm glad this was solved. I've been running my personal desktop on a bleeding-edge kernel but don't have any EXT4 on RAID. I was just holding my breath until I got hit with this bug. (Only / is EXT4, so not a huge deal if it breaks.) Glad to hear I probably won't be.

    Comment


    • #12
      Michael, thanks for actually looking at my bug report and at least linking to the patch along with some context. Other media sites, it seems, don't want to put in that effort. Keep up the good work.

      Comment


      • #13
        Originally posted by cjcox View Post
        A friend of mine has a prison pen pal who insists there are other filesystems besides ext4.
        xfs master-race

        Comment


        • #14
          Originally posted by pal666 View Post
          wrong
          you need both mesa and llvm
          Try again. You definitely need to keep the kernel up to date for KMS-based drivers. Also, LLVM is only required for some of the drivers.

          Comment


          • #15
            Originally posted by sl1pkn07 View Post
            only RAID users are affected, right?
            It seems that only raid0 users are affected. I've had 4.0.3 compiled with only raid1 support running for a week now, and my array is fine.
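A quick triage sketch (mine, not from the thread): since the corruption only hit ext4 on md raid0 under the early 4.0 kernels, grepping /proc/mdstat for a raid0 personality is a fast way to tell whether a box is even in the risk group. The helper name is mine; on a real system it reads the live /proc/mdstat.

```shell
# Minimal check for a raid0 md array (hedged sketch; check_mdstat is a made-up helper).
check_mdstat() {
    # $1: path to an mdstat-format file (defaults to the real /proc/mdstat)
    if grep -q 'raid0' "${1:-/proc/mdstat}" 2>/dev/null; then
        echo "raid0 array present"
    else
        echo "no raid0 arrays found"
    fi
}
check_mdstat   # inspects the live /proc/mdstat when run on a Linux box with md
```

If it reports a raid0 array, check `uname -r` and stay off the affected 4.0.x kernels until the fix lands.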

            Comment


            • #16
              Originally posted by cjcox View Post
              A friend of mine has a prison pen pal who insists there are other filesystems besides ext4.
              LMAO!! Well played, sir. I had forgotten about good ole Hans.
              Last edited by torsionbar28; 22 May 2015, 10:42 AM.

              Comment


              • #17
                That photo with a RAID of WD Green drives makes me cringe. RAID on Green drives is a recipe for data loss.

                Comment


                • #18
                  Originally posted by torsionbar28 View Post
                  That photo with a RAID of WD Green drives makes me cringe. RAID on Green drives is a recipe for data loss.
                  I know... It's such a bad idea. Those drives have a short lifespan as it is.

                  EDIT: I don't understand how BTRFS does RAID, but I don't think it works in the traditional way. So the limitations of the Green drives' firmware may not apply, for all I know.
                  Last edited by duby229; 22 May 2015, 11:56 AM.

                  Comment


                  • #19
                    Originally posted by duby229 View Post
                    EDIT: I don't understand how BTRFS does RAID, but I don't think it works in the traditional way. So the hardware limitations of the green drives firmware microcode may not apply for all I know.
                    What limitations? The only thing I know of is that they by default do head parking way too often, but that's fixable.
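For anyone curious, the fix alluded to above can be sketched roughly like this (my example, not from the thread; assumes hdparm 9.25+ and root access, and /dev/sdX is a placeholder for the actual WD Green drive). hdparm's -J flag reads or changes the WD "idle3" head-parking timer, which defaults to around 8 seconds and inflates the Load_Cycle_Count SMART attribute.

```shell
# Hedged sketch: show (and optionally disable) the WD Green idle3 head-parking
# timer via hdparm -J. show_idle3 is a made-up helper name.
show_idle3() {
    disk=$1
    if [ -b "$disk" ]; then
        hdparm -J "$disk"        # print the current idle3 timeout
        # hdparm -J 0 "$disk"    # uncomment to disable head parking entirely
    else
        echo "no block device at $disk"
    fi
}
show_idle3 /dev/sdX              # replace /dev/sdX with the real device
```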

                    Comment


                    • #20
                      Originally posted by GreatEmerald View Post

                      What limitations? The only thing I know of is that they by default do head parking way too often, but that's fixable.
                      I can't say I fully understand it. It has something to do with limitations WD imposes at the firmware level. One problem Green drives have is that they lack TLER (time-limited error recovery), so a long error-recovery attempt can cause a RAID controller to mark the drive as bad. Another is that they don't have the logic to keep seeking synchronized between drives, which adds a lot of wear and tear. These are issues that only occur in RAID setups.
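The TLER point can be checked from userspace (my sketch, not from the thread): TLER is WD's name for SCT Error Recovery Control, and smartctl can query or set the recovery timers on drives that support it. /dev/sdX is a placeholder; the helper name is mine, and the timer units are 100 ms, so 70 means 7 seconds.

```shell
# Hedged sketch: query (and optionally cap) a drive's SCT error-recovery timers
# with smartctl. show_erc is a made-up helper name; requires root.
show_erc() {
    disk=$1
    if [ -b "$disk" ]; then
        smartctl -l scterc "$disk"          # show read/write recovery timers
        # smartctl -l scterc,70,70 "$disk"  # uncomment to cap retries at 7 s
    else
        echo "no block device at $disk"
    fi
}
show_erc /dev/sdX                           # replace /dev/sdX with the real device
```

Drives that report the feature as unsupported (as the Greens reportedly do) can spend minutes retrying a bad sector, which is exactly what makes a RAID controller drop them.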

                      Comment
