Systemd 219 Released With A Huge Amount Of New Features

  • #41
    Originally posted by duby229 View Post
    If it finds thousands of bad sectors it isn't worth trying to fix. Really best bet is get a new drive.
    Right, I actually remember dropping this HDD; I wonder if that was the cause.

    When I dropped the drive it wasn't connected or powered at the time, and the drop was only about 30cm (onto a hardwood floor).

    Thanks for the help, and sorry about the complaining.

    Sigh.
    Last edited by ihatemichael; 16 February 2015, 11:12 PM.



    • #42
      Originally posted by ihatemichael View Post
      Oh fuck, badblocks is actually reporting bad blocks now:

      Code:
      [root@myhost ~]# badblocks -v /dev/sdb
      Checking blocks 0 to 976762583
      Checking for bad blocks (read-only test): 18853240
      18853241
      18853242
      18853243
      18853244
      18853245
      18853246
      18853247
      18853248
      18853249
      18853250
      18853251
      18853264
      A small workaround until you get a new drive: https://wiki.archlinux.org/index.php...lesystem_Check
      This will make ext4 aware of the bad blocks and avoid using them. Next time, slap Btrfs on the drive; it will report immediately if there's a problem (saved me one time)
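A minimal sketch of that ext4 workaround (the device name /dev/sdb1 is hypothetical, and the real commands are left commented out since they need the actual disk; the runnable part just simulates the badblocks output file format):

```shell
# badblocks writes one bad block number per line; simulate its output file:
printf '18853240\n18853241\n18853264\n' > /tmp/badblocks.txt
wc -l < /tmp/badblocks.txt   # number of bad blocks recorded
# On the real (hypothetical) device you would run instead:
#   badblocks -sv -o /tmp/badblocks.txt /dev/sdb1   # read-only scan
#   e2fsck -l /tmp/badblocks.txt /dev/sdb1          # add them to ext4's bad-block list
```

e2fsck can also run the scan itself with `e2fsck -c` (non-destructive read-only test).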



      • #43
        Originally posted by ihatemichael View Post
        Thanks, and sorry about complaining about systemd.

        systemd is actually very nice.

        I feel like an idiot now.
        Oh don't worry, there will be plenty of other chances for systemd to piss you off. This just happened to be not one of them



        • #44
          Well, hang on guys. We can still maybe bitch at systemd for this yet.

          Let that badblocks run to completion. When we test hard drives for customers, we typically don't worry much about a single run of bad blocks, or even a few. When a drive screws something up and makes sectors unreadable, it usually does it in streaks: sometimes 4 sectors, sometimes 60, sometimes one bad sector, a few good ones, and then a streak of bad ones. I don't even think it's a surface problem (or at least, if it is, it's not continuing to degrade); I think the drive just flipped its shit and wrote some data incorrectly, maybe during a bad power-down or who knows what.

          But the drives that develop many separate runs of bad sectors are a problem case for two reasons: it's likely there is a major problem with the drive, and even if there isn't, there is a quality issue, and given the rate these problems are appearing at, we would prefer to rule out any possibility of data loss.

          Now, for the SMART readout: I'm too lazy to look back at who said there's a problem due to pending sectors, and they might be right IF the SMART readout were actually indicating pending sectors. But here's the line for pending sectors:

          197 Current_Pending_Sector 0x0032 252 252 000 Old_age Always - 0

          And the directly relevant number is the 0 at the end: it says there are 0 pending bad sectors. (197 is just the attribute number.) But not all drives provide the raw number at the end, so a zero can be ambiguous, which is why it's also important to look at the 252, 252, 000. Those mean: right now the drive rates the health of this attribute at 252, the worst it has ever been is 252, and the attribute is considered failed if it drops to zero.

          So the SMART readout, at least before running badblocks, was clean. I bet if you run it now that the drive has found some issues, it will currently show pending sectors.
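That attribute line splits into smartctl's standard columns, which makes the reading above easy to check mechanically; a quick sketch using the exact line quoted:

```shell
# smartctl -A columns: ID NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW
line='197 Current_Pending_Sector 0x0032 252 252 000 Old_age Always - 0'
set -- $line                 # split on whitespace into $1..$10
echo "name=$2 value=$4 worst=$5 thresh=$6 raw=${10}"
# raw (the trailing 0) is the actual pending-sector count; value/worst/thresh
# are the drive's normalized rating, failed only when value <= thresh.
```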



          • #45
            Originally posted by alaviss View Post
            A small workaround until you get a new drive: https://wiki.archlinux.org/index.php...lesystem_Check
            This will make ext4 aware of the bad blocks and avoid using them. Next time, slap Btrfs on the drive; it will report immediately if there's a problem (saved me one time)
            Thanks, do you use BTRFS for your rootfs and also for your home partition?



            • #46
              I do a lot of PC repair for a living, so I deal with bad drives almost daily.

              What the pending sector count indicated was that there had been 252 bad sectors detected that had data in them that failed to read, and that then got re-allocated on the next write attempt. It's those same bad sectors that are listed in the re-allocated sector count. Either way, the drive has bad sectors. If, as the owner suggests, it was caused by shock, then it's likely the surrounding sectors are reading slowly and will fail soon too. There just isn't any good way to repair that. You can try writing to those sectors a few times to strengthen their local domain, but that rarely actually works.

              Any time more than a "few" bad sectors are found, the drive should just be replaced. Especially when you believe it's physical shock damage.



              • #47
                Do smartmontools (smartctl) and badblocks also apply to SSDs?

                I have an 830 SSD on a ThinkPad and I'm curious to know what the health on that SSD is like.
                Last edited by ihatemichael; 16 February 2015, 11:49 PM.



                • #48
                  Originally posted by ihatemichael View Post
                  Do smartmontools (smartctl) and badblocks also apply to SSDs?

                  I have an 830 SSD on a ThinkPad and I'm curious to know what the health on that SSD is like.
                  Yeah, it should work on anything SATA. Blocks on an SSD are different from sectors on a hard drive, though. It's highly unlikely to get bad blocks on an SSD until it starts write-wearing. On Windows, SSDs tend to start write-wearing near the beginning of the drive. I haven't experienced any wearing on Linux yet, so I'm not sure what it would look like there.
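smartctl reads a SATA SSD the same way; only the attribute names differ by vendor. A sketch (the device path and the wear attribute shown are assumptions, not taken from this drive's report — Samsung SSDs typically expose a wear-levelling attribute, but check your own `smartctl -A` output):

```shell
# On the real (hypothetical) SSD you would run:
#   smartctl -a /dev/sda
# Simulate picking a wear-related attribute out of a captured report:
printf '177 Wear_Leveling_Count 0x0013 099 099 000 Pre-fail Always - 10\n' |
    awk '{print $2, "normalized=" $4, "raw=" $10}'
```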



                  • #49
                    Originally posted by ihatemichael View Post
                    Thanks, do you use BTRFS for your rootfs and also for your home partition?
                    Yes, I use subvolumes for that (quite convenient, no need for multiple partitions)
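A sketch of that layout (the subvolume names @ and @home are just a common convention, not anything alaviss specified, and /dev/sdb1 is hypothetical; the mkfs/mount commands are commented out since they need a real disk):

```shell
# One btrfs partition, separate subvolumes instead of separate partitions:
#   mkfs.btrfs /dev/sdb1
#   mount /dev/sdb1 /mnt
#   btrfs subvolume create /mnt/@
#   btrfs subvolume create /mnt/@home
# The corresponding /etc/fstab entries, one mount per subvolume:
cat <<'EOF' > /tmp/fstab.example
/dev/sdb1  /      btrfs  subvol=@      0 0
/dev/sdb1  /home  btrfs  subvol=@home  0 0
EOF
cat /tmp/fstab.example
```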



                    • #50
                      Originally posted by duby229 View Post
                      I do a lot of PC repair for a living, so I deal with bad drives almost daily.

                      What the pending sector count indicated was that there had been 252 bad sectors detected that had data in them that failed to read, and that then got re-allocated on the next write attempt. It's those same bad sectors that are listed in the re-allocated sector count. Either way, the drive has bad sectors. If, as the owner suggests, it was caused by shock, then it's likely the surrounding sectors are reading slowly and will fail soon too. There just isn't any good way to repair that. You can try writing to those sectors a few times to strengthen their local domain, but that rarely actually works.

                      Any time more than a "few" bad sectors are found, the drive should just be replaced. Especially when you believe it's physical shock damage.
                      This isn't what the 252 means, I promise. The drive only considers that attribute to be in a failure state if the current value drops to 0 (at or below the third of those numbers, the threshold).

                      Those numbers are the drive's interpretation of that particular characteristic. There's a reason half of the other attributes also have a 252 there: that's just the highest number this drive uses (253, 254, and 255 may have special meanings). It's not that 252 of each of those different events happened... that would be extraordinarily unlikely.

                      Some other drives use a percentage scale instead, and you'll see a bunch of 100s on the attributes the drive considers to be in perfect condition.
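Whatever the scale, the failure test is the same comparison; a minimal sketch (generic smartctl semantics, not specific to this drive):

```shell
# An attribute is flagged as failed when its normalized VALUE
# has dropped to or below its THRESH column.
check() { [ "$1" -le "$2" ] && echo FAILING || echo ok; }
check 252 000   # this drive's Current_Pending_Sector: plenty of headroom
check 90 97     # a value that has crossed its threshold
```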
                      Last edited by BradN; 17 February 2015, 12:11 AM.

