Watch Out For BCache Corruption Issues On Linux 5.0 & GCC 9


  • Watch Out For BCache Corruption Issues On Linux 5.0 & GCC 9

    Phoronix: Watch Out For BCache Corruption Issues On Linux 5.0 & GCC 9

    If you make use of BCache as a Linux block cache, pairing an SSD cache with a slower HDD, watch out: there is an active corruption bug...
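
    For anyone wanting a quick sanity check before reading further, here is a minimal sketch in shell. The sysfs path and the version matching are assumptions based on mainline kernels, not details from the article:

    ```shell
    #!/bin/sh
    # Rough check: is bcache active, and is this a 5.0-series kernel?
    # /sys/fs/bcache is the standard sysfs location once the module
    # has registered at least one cache set.

    if [ -d /sys/fs/bcache ] && [ -n "$(ls -A /sys/fs/bcache 2>/dev/null)" ]; then
        echo "bcache appears to be in use"
    else
        echo "no active bcache devices found"
    fi

    kver=$(uname -r)
    case "$kver" in
        5.0*) echo "kernel $kver is in the affected 5.0 series; check how it was compiled" ;;
        *)    echo "kernel $kver is not a 5.0 kernel" ;;
    esac
    ```

    This only flags potential exposure; whether a given 5.0 kernel is actually affected depends on the compiler used to build it.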


  • #2
    I always heard that bcache wasn't stable. A shame though.

    Comment


    • #3
      Linux operating systems have had problems with HDD/SSD management for at least 10-15 years. HDDs struggle with I/O operations. The same hard drive used under XP is not as strained, completing operations quickly and without any noise compared to every Linux operating system. This kind of thing happens when the drive is heavily fragmented or there are problems in the mapping of data.

      Comment


      • #4
        Azrael5 in my experience Windows causes much more noise and slowdown when running off an HDD. I'm not considering ancient versions like XP, though.

        Comment


        • #5
          Ooooooh, scary.

          Thank you for the warning Michael!

          Comment


          • #6
            Originally posted by Azrael5
            Linux operating systems have had problems with HDD/SSD management for at least 10-15 years. HDDs struggle with I/O operations. The same hard drive used under XP is not as strained, completing operations quickly and without any noise compared to every Linux operating system. This kind of thing happens when the drive is heavily fragmented or there are problems in the mapping of data.
            That's a surprising statement to me, given the nature of Linux filesystems as compared to FAT32 and NTFS. Both of the latter have massive problems with fragmentation, while ext? and others are very good at avoiding it through clever heuristics in design and implementation. That's the theoretical part.

            Now for the hands-on part: I would add that Linux I/O in general fares better than Windows I/O, and not only in benchmarks. That is no surprise given the Unix background of Linux and its dominant use as a server system. It would be surprising, however, for Linux to be the dominant OS in file-server deployments if its I/O were not competitive. The BSDs have some strengths here as well, but Windows?

            Comment


            • #7
              Thanks for the report; glad I'm not using a rolling-release distro.

              Comment


              • #8
                Originally posted by ypnos

                That's a surprising statement to me, given the nature of Linux filesystems as compared to FAT32 and NTFS. Both of the latter have massive problems with fragmentation, while ext? and others are very good at avoiding it through clever heuristics in design and implementation. That's the theoretical part.

                Now for the hands-on part: I would add that Linux I/O in general fares better than Windows I/O, and not only in benchmarks. That is no surprise given the Unix background of Linux and its dominant use as a server system. It would be surprising, however, for Linux to be the dominant OS in file-server deployments if its I/O were not competitive. The BSDs have some strengths here as well, but Windows?
                ext4 has no fragmentation? Looooooool. I used to have a MySQL production server where the database files occupied over 40K (!) fragments. That was over 10 years ago, so no SSD, and read speed on those files was below 10MB/sec. Also, tell me: how would you defrag free space on ext4? I'm not saying NTFS is perfect - in regard to file fragmentation it's an extremely bad filesystem - but ext4 is far from excellent: no free-space defragmentation, the number of inodes can't be changed after FS creation, no file compression, and random files can't be defragmented for no apparent reason.
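
                As an aside, for anyone who wants to measure this themselves: `filefrag` from e2fsprogs reports a file's extent count, which is the usual proxy for fragmentation on ext4. A quick sketch; the temp-file path is purely illustrative:

                ```shell
                #!/bin/sh
                # Inspect the fragmentation of a file with filefrag (e2fsprogs).
                # A sequentially written file should map to very few extents; a
                # heavily rewritten database file can accumulate thousands.

                f=$(mktemp /tmp/frag_demo.XXXXXX)
                dd if=/dev/zero of="$f" bs=1M count=8 2>/dev/null

                # Prints something like: "/tmp/frag_demo.XXXXXX: 1 extent found"
                filefrag "$f"

                rm -f "$f"
                ```

                For a read-only report over a whole directory tree, `e4defrag -c <dir>` from the same package scores fragmentation without rewriting anything.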

                Comment


                • #9
                  Originally posted by birdie
                  whatever
                  Oh, it's you again. Your little anecdote does not trump the common knowledge in filesystem research.

                  Comment


                  • #10
                    Originally posted by ypnos
                    Oh, it's you again. Your little anecdote does not trump the common knowledge in filesystem research.
                    He stated facts though.

                    Comment
