Linux 5.16.5 Released To Fix Up Btrfs' Botched Up Defragging


  • #21
    Originally posted by Jannik2099 View Post
    Not sure why this is getting blown up so much. [...]
    It attracts visitors.



    • #22
      Originally posted by onlyLinuxLuvUBack View Post
      I could imagine a phoronix benchmark:

      buy several HP z420s on eBay
      add the same WD Blue 2 TB SATA HDD to each
      have a source server copy files (cat pics or youtube-dl'ed videos) over the network to each z420 in parallel
      run a Btrfs defrag, then check the sha512 sums
      one z420 could have the newest Btrfs
      another z420 could have an old distro with Btrfs
      measure defrag time
      measure copy time
      count the number of sha512 hash comparison failures
      But then Phoronix could also have a "long in the oven" benchmark category with monthly/weekly highlights; maybe use other z420s and see if ext4, ZFS, or XFS barfs first?
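The verification step in the proposal above (snapshot checksums, defrag, compare) can be sketched in Python. This is a hypothetical harness, not anything Phoronix actually runs; the function names are mine, and the defrag itself (e.g. `btrfs filesystem defragment -r`) would happen between the two snapshots.

```python
import hashlib
from pathlib import Path

def sha512_tree(root):
    """Return {relative_path: sha512 hex digest} for every file under root."""
    digests = {}
    for path in sorted(Path(root).rglob("*")):
        if path.is_file():
            # Whole-file read is fine for a sketch; chunked reads would suit big videos.
            digests[str(path.relative_to(root))] = hashlib.sha512(
                path.read_bytes()
            ).hexdigest()
    return digests

def count_mismatches(before, after):
    """Count files whose checksum changed (or vanished) between two snapshots."""
    return sum(1 for name, digest in before.items() if after.get(name) != digest)
```

Usage would be: `before = sha512_tree("/mnt/test")`, run the defrag, then `count_mismatches(before, sha512_tree("/mnt/test"))` should be zero if no data was altered.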



      • #23
        Originally posted by partcyborg View Post
        Yet just a few weeks ago force powercycling my htpc caused irreparable corruption on my last remaining btrfs partition. When your filesystem is so broken that it can't survive unclean unmounts, performance optimization should be the least of your concern.

        The one positive thing to come out of this is that as I said above this was the last remaining btrfs partition on any of my machines. I won't make the mistake of trying it again.
        Already two years old, and before Fedora made Btrfs their default fs... sure that it isn't the controller?


        "Bacik concluded with a summary of what has worked well and what has not. He told the story of tracking down a bug where Btrfs kept reporting checksum errors when working with a specific RAID controller. Experience has led him to assume that such things are Btrfs bugs, but this time it turned out that the RAID controller was writing some random data to the middle of the disk on every reboot. This problem had been happening for years, silently corrupting filesystems; Btrfs flagged it almost immediately. That is when he started to think that, perhaps, it's time to start trusting Btrfs a bit more."



        • #24
          Originally posted by RahulSundaram View Post

          Reputation should reflect the quality. If there are regular regressions, it should affect the reputation, not just of a single fs but of the kernel as a whole. The goal should be to push for more quality control - automated tests and so forth - rather than hide the problem.
          In actual reality, the proof of the pudding is in the eating. If you really care, you can't use cutting edge kernels.



          • #25
            Mind that I never used Btrfs outside a VM (where I never had problems with it) and mostly read the bad news about it, like the broken RAID 5/6 or the missing repair tool; hopefully those got fixed by now? In all that time I can remember one thing with ext4, and that really wasn't a bug in ext4 but in how toolkits/DEs used it.
            Reading bad things about a filesystem whose main purpose is to safely store your files creates a certain fear. If mdadm had a problem corrupting files, it would also get blown up and create fear of using it in production.
            Maybe Btrfs is now a flawless filesystem, but it has lost trust, and now everyone is looking for the tiniest thing because of its troubled history.



            • #26
              Originally posted by CochainComplex View Post

              Already two years old, and before Fedora made Btrfs their default fs... sure that it isn't the controller?

              I wouldn't bother. They have already made up their mind. Every time in the last two or three years that someone has brought up the "it corrupted itself" FUD, they never did proper troubleshooting. Surviving a crash or sudden power cycle is the most-tested feature of Btrfs.

              The transid mismatch has become a meme at this point; pro/consumer hard drives and SSDs suck big time. And RAM too (too bad memtest is useless in some cases). And PSUs too.

              It puzzles me that people can't recall serious bugs in other filesystems (just look at OpenZFS's issue tracker on GitHub, for example). ext4 and XFS had data-eating bugs in the past when they were already "production ready" (does anyone remember zero-byte files after an unclean unmount?).



              • #27
                There are also plugged memory leaks, device property fixes, plenty of offset and truncation fixes, and a few div-by-zeros fixed. There is even a packet routing fix for Realtek. 5.16.5 is not just about Btrfs.

                I love how there's a single fix from a Siemens engineer, for tty, of course!



                • #28
                  Originally posted by jo-erlend View Post

                  In actual reality, the proof of the pudding is in the eating. If you really care, you can't use cutting edge kernels.
                  That entirely depends on what you care about. If you care about kernel quality, you should be using the latest versions to test and provide feedback.



                  • #29
                    Unfortunate bugs do occur. I got hit by this one:

                    Which made me glad for my backups.

                    The issue is not so much that bugs occur, but how often and of what type. Too many and/or slapdash bugs can give a project a poor reputation. I am carefully not saying that this is the case for Btrfs; it is a risk for any project.

                    People living on the (b)leeding edge are providing a great service to others by testing new kernels, and presumably have good backups and mitigation routines. If you are using new/recent kernels, especially release candidates, without doing so, then you have a greater appetite for risk than I do.



                    • #30
                      Originally posted by Jannik2099 View Post
                      Not sure why this is getting blown up so much. Performance regression in a new major release that now got fixed. It's also far from the only thing that got fixed in 5.16.5, or that was broken in 5.16.0.
                      Indeed, this is getting blown up too much. I have btrfs on three systems with SSDs and only one of them was mounted with autodefrag (which was added by me, not by default in the distro). For a few days I was wondering why a kernel thread called btrfs-cleaner was hammering my CPU constantly. I had already disabled that mount option after some PSA on reddit, but I'm glad they fixed this one!
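For context, autodefrag (the mount option implicated in the regression discussed above) is enabled per mount. A hypothetical /etc/fstab entry might look like the following; the UUID and mount point are placeholders, not values from any poster's system:

```
# Hypothetical example: enable automatic defragmentation on a Btrfs mount.
# Replace the UUID and mount point with your own.
UUID=<your-filesystem-uuid>  /data  btrfs  defaults,autodefrag  0  0
```

Dropping the `autodefrag` keyword and remounting disables the behavior again, which is what the poster describes having done after the Reddit PSA.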

