EXT4 File-System Updated For Linux 3.11 Kernel

  • EXT4 File-System Updated For Linux 3.11 Kernel

    Phoronix: EXT4 File-System Updated For Linux 3.11 Kernel

    Ted Ts'o has already sent in his pull request for EXT4 file-system changes targeting the Linux 3.11 kernel...


  • #2
    I have a personal vendetta against Ts'o. I know he is not the maintainer of badblocks, but he has personally rebuffed hundreds of people who complained that the program is broken and can literally take two days to scan a hard drive. I am now forced to use ddrescue and send its output to /dev/null. What a shame.

    • #3
      badblocks is a fine program, I use it frequently.

      By default, the write-mode test makes four write/read passes, one for each of its built-in bit patterns (0xaa, 0x55, 0xff, 0x00). If you don't want that many passes, you can specify a single test pattern on the command line ( -t ) to cut it down to one.

      With regard to the speed of each pass, I have no difficulty getting it to write and read at the top speed of my HDDs. If you are having trouble with that, you may want to experiment with using 4096 for the block size ( -b ) and increasing the number of blocks tested at a time ( -c ). Also, if you care about speed, never use the non-destructive read-write mode ( -n ) -- always use the write-mode test ( -w ).
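
      For instance, a single-pass run might look something like this (a sketch, not a recommendation -- adjust the device name to your system, and note that -w destroys all data on the drive):

      Code:
      # one write/read pass with a single pattern (0xaa), 4096-byte blocks,
      # 65536 blocks tested per batch; -s shows progress, -v is verbose
      # WARNING: -w overwrites the entire device
      badblocks -wsv -t 0xaa -b 4096 -c 65536 /dev/sdX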

      BTW, exaggerate much? "personally rebuffed hundreds of people".
      Last edited by jwilliams; 03 July 2013, 01:37 AM.

      • #4
        Snapshots

        What happened to snapshots in EXT4?
        IIRC there was some beta/staging code to implement snapshots in EXT4 last year. Where did it go?
        I really would like snapshots in EXT4.

        • #5
          Originally posted by jwilliams View Post
          badblocks is a fine program, I use it frequently.

          BTW, exaggerate much? "personally rebuffed hundreds of people".
          There is a Debian bug listing with a thousand-plus comments regarding this issue. They may have fixed it since, but it was an absolute abomination on anything bigger than 160 GB.

          • #6
            Originally posted by garegin View Post
            There is a Debian bug listing with a thousand-plus comments regarding this issue. They may have fixed it since, but it was an absolute abomination on anything bigger than 160 GB.
            That's interesting; I've never had performance problems with badblocks, and I've used it primarily on 1 TB+ HDDs on a few dozen different machines (from netbooks up to enterprise RAID arrays). Sure, the test takes days to run on a 1 TB drive, but that is not a problem in badblocks. It is the effect of the widening gap between read/write speed (which grows roughly linearly) and storage density (which grows roughly exponentially), plus the fact that badblocks tests the drive multiple times with different bit patterns (four, to be exact). In other words, I always saw nearly the maximum read/write speed of the drive.
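
            As a rough back-of-envelope (my own figures, assuming roughly 150 MB/s sustained throughput, not a measurement from this thread):

            Code:
            # 1 TB / 150 MB/s  ~  6,700 s  ~  1.9 hours per pass
            # default -w mode: 4 write + 4 read passes  ~  15 hours minimum
            # slower drives or slow inner tracks easily stretch that to days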

            • #7
              Originally posted by garegin View Post
              I have a personal vendetta against Ts'o. I know he is not the maintainer of badblocks, but he has personally rebuffed hundreds of people who complained that the program is broken and can literally take two days to scan a hard drive. I am now forced to use ddrescue and send its output to /dev/null. What a shame.
              It depends on your disk controller. You might want to tune TLER/CCTL/ERC via smartctl (smartmontools):

              Code:
              smartctl -l scterc,10,10 /dev/disk
              The above sets the read/write retry timeout to 1 second (10 deciseconds). See the smartctl manpage for details. This option helped me greatly when I was recovering a 1 TB drive from (what I assume was) a head crash that damaged a handful of sectors.
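
              If you just want to check whether the drive supports SCT ERC, and what the current timeouts are, you can query it first:

              Code:
              # print the current SCT Error Recovery Control settings
              smartctl -l scterc /dev/disk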

              • #8
                I don't think that it's that complicated. Other surface scanners work just fine. ddrescue always runs at full speed, and it's not even a surface scanner!

                • #9
                  Originally posted by garegin View Post
                  I don't think that it's that complicated. Other surface scanners work just fine. ddrescue always runs at full speed, and it's not even a surface scanner!
                  ddrescue does not perform the same job as badblocks. ddrescue cannot write one or more patterns to all blocks on a drive and then read them back to see if the patterns are correct.

                  But if you want to only read all the sectors from a drive, skipping over large areas of unreadable sectors, then perhaps ddrescue is a better choice than badblocks, since ddrescue has logic to more quickly get past large groups of bad sectors.
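
                  For that read-only use, an invocation along these lines is typical (a sketch; the device name and mapfile name are placeholders):

                  Code:
                  # read every sector, discard the data, and record unreadable
                  # regions in the mapfile so they can be retried later
                  ddrescue /dev/sdX /dev/null sdX.map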

                  I use badblocks mostly to test new drives. If a drive has a lot of bad sectors, I do not care about the speed of testing -- I return the drive. badblocks is a tool for testing drives that have few bad sectors. ddrescue is designed to rescue data from drives with a lot of bad sectors. Two different jobs, two different tools.

                  With badblocks, you can specify a maximum number of bad blocks ( -e ) before aborting the test. This is useful if you are qualifying drives and your criteria specify failure at a certain number of bad blocks -- no need to keep testing once the drive has already failed.
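
                  For example, something like this (with a hypothetical threshold of ten) stops as soon as the drive flunks:

                  Code:
                  # destructive write-mode test; abort once 10 bad blocks are found
                  badblocks -wsv -e 10 /dev/sdX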
