There's A Proposal To Switch Fedora 33 On The Desktop To Using Btrfs


  • Originally posted by pal666 View Post
    this is idiotic conclusion. databases should use nocow files.
    So the advantage of having to manually tune fs features, versus not needing to care at all, is...?



    • Originally posted by pal666 View Post
      lol, btrfs is better on all those metrics. it is faster than ext4 if you know what you are doing, it is more stable than xfs (facebook statistics), and it has more advanced features than zfs from day one
      Could you show how to configure it to be faster than EXT4?



      • Originally posted by Neuro-Chef View Post
        So the advantage of having to manually tune fs features, versus not needing to care at all, is...?
        that you get btrfs features and you don't tank performance?



        • Originally posted by Space Heater View Post
          Checksumming is great when you have redundant copies of data so that you can repair the corruption, but btrfs' raid capabilities seem to still be incomplete/immature even with raid 1. In the end, detecting corruption is always nice, but it's not nice if it results in dropping a user to a rescue shell where the average user is going to be completely confused and will assume the filesystem is toast. From what I have seen and experienced, there appears to be zero empathy for the user experience in the btrfs community; unfriendly or confusing behavior is just brushed off as an edge case.

          I truly hope I'm completely wrong about btrfs and that it's done a 180 from where it was just a few years ago, but you casually mentioning that you lost data as recently as kernel 5.2 is the opposite of convincing (also Fedora does not stick to LTS kernels). In the end, no amount of advanced features matters if the file system itself goes sour.
          Well, with BTRFS you are basically opting in to a system that protects your data, meaning it has to go read-only instead of returning garbage data. Other filesystems may return faulty data. You have to decide what you prefer.

          I do agree with you that there are quite a few things that should be improved from a user's perspective, but then again auto-recovery from failure and spare devices is not straightforward with btrfs due to the complex configurations that are possible. Besides, the user interface is clean and simple to use and gives you more control than any automated system. Going read-only is a very sane response in case of corruption. Just think about it.

          As for the data loss in kernel 5.2, that was the only royal screwup in years, and it was an early-release kernel that was patched later. Nobody should run filesystems on non-LTS kernels unless they are prepared to use their backups anyway. (A scrub/device-stats sketch follows this post.)

          http://www.dirtcellar.net

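A minimal sketch of the detection side discussed above, assuming a hypothetical mount point and the stock btrfs-progs commands (run as root); it is not anyone's actual setup, just an illustration of how a scrub plus the per-device error counters surface corruption instead of handing back garbage:

```python
#!/usr/bin/env python3
"""Check a btrfs filesystem for recorded corruption (sketch, run as root)."""
import subprocess

MOUNTPOINT = "/mnt/data"  # hypothetical btrfs mount point


def run(cmd):
    """Run a command, capturing output, without raising on failure."""
    return subprocess.run(cmd, capture_output=True, text=True)


# Start a scrub: every block is read back and verified against its checksum.
run(["btrfs", "scrub", "start", MOUNTPOINT])

# Per-device error counters; with -c the command exits non-zero if any
# read/write/flush/corruption/generation counter is above zero.
stats = run(["btrfs", "device", "stats", "-c", MOUNTPOINT])
print(stats.stdout)
if stats.returncode != 0:
    print("errors recorded -- time to reach for the backups")
else:
    print("no errors recorded so far")
```

On a typical single-disk install a scrub can detect damaged data but usually cannot repair it (metadata is normally duplicated by default, data is not), which is exactly the trade-off being argued in this thread.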


          • Originally posted by waxhead View Post
            Well, with BTRFS you are basically opting in to a system that protects your data, meaning it has to go read-only instead of returning garbage data. Other filesystems may return faulty data. You have to decide what you prefer.
            Looks like btrfs is *more* likely to lose your data in the case of a drive failure than ext4 is. The common case is a single disk, and most users suck at making backups, remember that. So ultimately btrfs being able to warn users of corruption (but not being able to do anything about it in the *common* case) is tempered by the fact that when disk failures happen btrfs will generally behave worse.

            Originally posted by waxhead View Post
            I do agree with you that there are quite a few things that should be improved from a user's perspective
            Great, so you agree and you won't brush it off as an edge case right?

            Originally posted by waxhead View Post
            but then again auto-recovery from failure and spare devices is not straightforward with btrfs due to the complex configurations that are possible. Besides, the user interface is clean and simple to use and gives you more control than any automated system. Going read-only is a very sane response in case of corruption. Just think about it.
            Oh right, dealing with errors is an edge case so it's ok.

            Originally posted by waxhead View Post
            As for the data loss in kernel 5.2, that was the only royal screwup in years, and it was an early-release kernel that was patched later. Nobody should run filesystems on non-LTS kernels unless they are prepared to use their backups anyway.
            Blaming the users for data loss. It's like you have zero empathy for users, what a surprise.

            Just think about it.



            • Originally posted by starshipeleven View Post
              with btrfs you can nocow on a file/folder basis too, so no need for partitions.

              But in general I always, always separate OS partition from payload partitions as this allows me to quickly clone around generic OS images in a snap, without pulling around terabytes of databases and other random crap from one server to the next.
              Using nocow on a btrfs partition still carries a performance disadvantage compared to e.g. XFS, though (setting nocow per directory is sketched below).

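To make the per-file/folder nocow point above concrete, here is a minimal sketch; the database directory path is hypothetical, and note that the flag must be set while the directory is still empty, since it only affects files created afterwards and such files also lose checksumming and compression:

```python
#!/usr/bin/env python3
"""Mark a new, empty directory NOCOW so database files skip copy-on-write."""
import subprocess
from pathlib import Path

DB_DIR = Path("/srv/postgres")  # hypothetical database directory on btrfs

DB_DIR.mkdir(parents=True, exist_ok=True)

# +C on a directory makes files created inside it inherit the NOCOW flag;
# it has no effect on data already written, so do this before initializing
# the database.  NOCOW files are not checksummed or compressed.
subprocess.run(["chattr", "+C", str(DB_DIR)], check=True)

# Sanity check: 'C' should appear in the attribute listing.
out = subprocess.run(["lsattr", "-d", str(DB_DIR)],
                     capture_output=True, text=True, check=True)
print(out.stdout.strip())
```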


              • I support this. Almost all the buzz about performance comes from people who do nothing relevant enough to need protection and who, as such, should be using something else where riding a unicycle on a tightrope is tolerable. ZFS would be a good choice, but let's stop this childish arguing; it is just more legal juggling with knives.

                My only complaint is quota performance: it results in freezes while rebalancing, deleting subvolumes, etc., and as this really affects the user experience it needs to be worked on before btrfs becomes a default (a qgroup check/disable sketch follows this post). Another point that bothers me is the lack of native encryption, but it's not as if there is an easy, desktop-friendly and flexible solution native to the kernel among the other legally safe filesystems either.

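A minimal sketch of the quota workaround mentioned above, assuming a hypothetical mount point: check whether qgroups are enabled and, if their overhead is not worth it, turn quota accounting off, which is the usual fix for the rebalance and subvolume-deletion stalls:

```python
#!/usr/bin/env python3
"""Show qgroup usage and disable quotas if their overhead is not worth it."""
import subprocess

MOUNTPOINT = "/home"  # hypothetical btrfs mount with quotas enabled

# `btrfs qgroup show` fails if quotas were never enabled on this filesystem.
show = subprocess.run(["btrfs", "qgroup", "show", MOUNTPOINT],
                      capture_output=True, text=True)
if show.returncode != 0:
    print("quotas are not enabled on", MOUNTPOINT)
else:
    print(show.stdout)
    # Quota accounting is what makes balance and subvolume deletion stall;
    # turning it off is the blunt but effective workaround.
    subprocess.run(["btrfs", "quota", "disable", MOUNTPOINT], check=True)
```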


                • Originally posted by phoronix View Post
                  better handling when running out of disk space
                  Can I get a citation for that? I do wonder which filesystem performs better when the disk is nearly or completely full (a sketch of btrfs's allocation reporting and chunk reclaim follows this post).

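For reference on the "running out of disk space" point quoted above, this is roughly what handling a nearly-full btrfs looks like today; the mount point is hypothetical and the 50% usage threshold is just an example value:

```python
#!/usr/bin/env python3
"""Inspect btrfs space allocation and compact half-empty data chunks."""
import subprocess

MOUNTPOINT = "/"  # hypothetical btrfs root filesystem

# Unlike plain `df`, this shows how space is split between data, metadata
# and unallocated chunks -- the usual source of "disk full" surprises.
usage = subprocess.run(["btrfs", "filesystem", "usage", MOUNTPOINT],
                       capture_output=True, text=True, check=True)
print(usage.stdout)

# Rewrite only data block groups that are at most 50% used, handing the
# freed chunks back to the unallocated pool so metadata can still grow.
subprocess.run(["btrfs", "balance", "start", "-dusage=50", MOUNTPOINT],
               check=True)
```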


                  • Originally posted by Space Heater View Post

                    Looks like btrfs is *more* likely to lose your data in the case of a drive failure than ext4 is. The common case is a single disk, and most users suck at making backups, remember that. So ultimately btrfs being able to warn users of corruption (but not being able to do anything about it in the *common* case) is tempered by the fact that when disk failures happen btrfs will generally behave worse.
                    You seem to forget that corruption is also lost data. Ext2/3/4 has nothing that does anything about corrupted data either, e.g. it can't fix it, so you are left with a "seems to work, but may not work, and you sure as hell would not know if it does not work" situation. If you want to compare with ext4 then you have to compare with the same feature set.

                    Originally posted by Space Heater View Post
                    Great, so you agree and you won't brush it off as an edge case right?
                    Yes and no. The biggest issue I currently have with BTRFS is that it will not kick or temporarily blacklist a misbehaving disk from a pool. That can easily be scripted (a minimal monitoring sketch follows this post), but there may be better solutions, such as giving the disk a lower priority, or making it read-only if it only fails on writes, for example.

                    Originally posted by Space Heater View Post
                    Oh right, dealing with errors is an edge case so it's ok.
                    No, you WANT to misunderstand. Try to understand instead...

                    Originally posted by Space Heater View Post
                    Blaming the users for data loss. It's like you have zero empathy for users, what a surprise.
                    And you seem to have zero empathy for developers. People work on something that you can use for free; they run tests on BTRFS and try to catch all regressions, etc. That is very hard to do regardless of the software project. Have you ever written code yourself?! The bug that appeared in 5.2 (or maybe it was 5.1) was not initially obvious; I did quite a few things with my filesystem at just about the time that bug was present. Had I not been so eager to test stuff and waited a bit, I would have avoided this problem entirely. Accidents happen, which is why you need backups if you value your data at all. Any sane user with a minimal technical understanding should realize that.

                    Originally posted by Space Heater View Post
                    Just think about it.
                    Perhaps you should think a bit as well.

                    http://www.dirtcellar.net

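A minimal sketch of the "can easily be scripted" monitoring idea above, with a hypothetical pool path; note that btrfs has no built-in "kick this device" command, so the script only reports a misbehaving disk and leaves the reaction (remove the device, remount read-only, page someone) to the admin:

```python
#!/usr/bin/env python3
"""Report devices in a btrfs pool that are accumulating I/O or checksum errors."""
import re
import subprocess

POOL = "/mnt/pool"  # hypothetical multi-device btrfs mount

stats = subprocess.run(["btrfs", "device", "stats", POOL],
                       capture_output=True, text=True, check=True).stdout

# Output lines look like: "[/dev/sdb].write_io_errs   3"
suspect = {}
for line in stats.splitlines():
    m = re.match(r"\[(?P<dev>[^\]]+)\]\.(?P<counter>\S+)\s+(?P<value>\d+)", line)
    if m and int(m.group("value")) > 0:
        suspect.setdefault(m.group("dev"), []).append(
            f"{m.group('counter')}={m.group('value')}")

if suspect:
    for dev, errors in suspect.items():
        print(f"{dev} is misbehaving: {', '.join(errors)}")
    # What happens next is policy, not something btrfs decides for you.
else:
    print("all devices clean")
```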


                    • Originally posted by waxhead View Post
                      You seem to forget that corruption is also lost data. Ext2/3/4 has nothing that does anything about corrupted data either, e.g. it can't fix it, so you are left with a "seems to work, but may not work, and you sure as hell would not know if it does not work" situation. If you want to compare with ext4 then you have to compare with the same feature set.
                      You seem to forget that I clearly stated the common case is a single-disk system. In this case, btrfs will only be able to tell the user that corruption occurred; the warnings are not actionable by the user. Most users do not have good backups, so the ability to detect corruption is strongly countered by btrfs being more likely to lose your data in the event of disk failure.

                      File system durability is the foundation; it doesn't matter how many extra features a file system has if in the end you are sacrificing some of its ability to not corrupt data. Not to mention that end users will not be exposed to the advanced features of btrfs, and therefore few if any will take advantage of them.

                      Originally posted by waxhead View Post
                      And you seem to have zero empathy for developers. People work on something that you can use for free; they run tests on BTRFS and try to catch all regressions, etc. That is very hard to do regardless of the software project.
                      I'm glad you're not denying that you don't have empathy for users, and so there's not much more to discuss about btrfs being ready as a default. You've openly said that anyone not using an LTS kernel should expect/deserve data loss; that's not how kernel development works at all, and that's certainly not how Fedora delivers kernels to its users.

                      As for your dubious claim about me: saying a file system handles failure worse than ext4 and has worse data recovery abilities than ext4 is not some personal insult to the developers; it's reality. You're unable to respond to what I'm saying and citing other than to dismiss it and then say I'm being mean to btrfs developers for citing an academic paper and pointing out common shortcomings users run into. Further, btrfs developers have received multiple reports about its unfriendly behavior, and the response from them has been radio silence; I'd say that's in line with a lack of empathy for end users.

                      Why do you have zero empathy for the ext4 developers who worked hard to make their file system more durable? How do other file system developers manage to avoid dropping their users to a recovery shell as often as btrfs does?

                      Originally posted by waxhead View Post
                      Have you ever written code yourself?!
                      Yes I have written code all by myself, have you?

                      Originally posted by waxhead View Post
                      The bug that appeared in 5.2 (or maybe it was 5.1) was not initially obvious; I did quite a few things with my filesystem at just about the time that bug was present. Had I not been so eager to test stuff and waited a bit, I would have avoided this problem entirely. Accidents happen, which is why you need backups if you value your data at all. Any sane user with a minimal technical understanding should realize that.
                      Yeah, I'm sure any sane user should blame themselves when a file system loses their data. You're divorced from reality if you think most users have backups, regardless of whether or not they should have them. Once you wrap your head around that, you will realize that a file system losing data is no trivial matter, especially for a file system that would be used by default. You are not the average user.

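Since backups come up in almost every post above, here is a minimal sketch of the snapshot-and-send approach btrfs itself offers; all paths are hypothetical, the source is assumed to be a subvolume, and a second btrfs filesystem is assumed to be mounted as the backup target:

```python
#!/usr/bin/env python3
"""Snapshot a subvolume and stream it to a second btrfs filesystem."""
import subprocess
import time
from pathlib import Path

SOURCE = "/home"               # hypothetical subvolume to protect
SNAP_DIR = "/home/.snapshots"  # plain directory holding local snapshots
BACKUP = "/mnt/backup"         # hypothetical second btrfs filesystem

Path(SNAP_DIR).mkdir(parents=True, exist_ok=True)
snapshot = f"{SNAP_DIR}/home-{time.strftime('%Y%m%d-%H%M%S')}"

# 1. Read-only snapshot: atomic, nearly free, and required for `btrfs send`.
subprocess.run(["btrfs", "subvolume", "snapshot", "-r", SOURCE, snapshot],
               check=True)

# 2. Serialize the snapshot and replay it on the backup filesystem.
send = subprocess.Popen(["btrfs", "send", snapshot], stdout=subprocess.PIPE)
subprocess.run(["btrfs", "receive", BACKUP], stdin=send.stdout, check=True)
send.stdout.close()
if send.wait() != 0:
    raise RuntimeError("btrfs send failed")
```

Incremental backups would pass `-p <previous-snapshot>` to `btrfs send`; that detail is left out of this sketch.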
