Bcachefs File-System Is Working On Going Upstream In The Linux Kernel

  • #21
    Originally posted by oiaohm View Post
    https://lwn.net/Articles/747633/
    Sorry, XFS is a data copy-on-write filesystem: XFS in its current form includes data copy-on-write. What has generally been recognised as requiring a full CoW filesystem turns out not to: a hybrid of data copy-on-write with update-in-place metadata has been demoed. The hybrid prototype seems to show that it can provide all the functionality of full copy-on-write filesystems without the worst overheads and issues of full copy-on-write filesystems.

    A data copy-on-write filesystem is a lot simpler to create than a full copy-on-write filesystem, and data copy-on-write still allows effective deduplication.
    That's pretty interesting; technically speaking, it's not an issue to have non-CoW metadata if you have journaling for it.

    I also like a lot the analysis of what is actually needed and trying to provide the same functionality in different ways.


    I would really like it if XFS had some kind of data checksumming, though, and its own RAID capabilities (which are required for checksumming to work decently). I suspect that not having those makes their job easier, but it would still be interesting to see this developer's take on adding parity and checksumming to XFS.
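To illustrate what data checksumming buys (a toy sketch in plain Python, assuming nothing about actual XFS internals; `write_blocks`/`read_blocks` are hypothetical names): the filesystem stores a checksum per block and verifies it on every read, so silent corruption becomes a detectable error instead of bad data handed back to the application:

```python
import zlib

BLOCK_SIZE = 4096

def write_blocks(data: bytes) -> list:
    """Split data into blocks, storing a CRC32 next to each block,
    the way a checksumming filesystem keeps a checksum per extent."""
    return [(data[i:i + BLOCK_SIZE], zlib.crc32(data[i:i + BLOCK_SIZE]))
            for i in range(0, len(data), BLOCK_SIZE)]

def read_blocks(blocks) -> bytes:
    """Verify every block against its stored checksum before returning it."""
    out = []
    for idx, (block, csum) in enumerate(blocks):
        if zlib.crc32(block) != csum:
            raise IOError(f"block {idx}: checksum mismatch, silent corruption detected")
        out.append(block)
    return b"".join(out)
```

With parity or mirroring available, `read_blocks` could rebuild the bad block from redundancy instead of raising, which is why checksumming and RAID work best together.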
    Really, we might end up hating ZFS because its method, which btrfs and others attempted to follow, might have been completely the wrong way.
    Unless they knew beforehand that it was the wrong way and did it anyway, there is nothing to hate. Pioneers have to try new stuff that might or might not pan out.


    That XFS developer said they are learning lessons from btrfs and others; this is "building on the shoulders of giants", not "everyone before the new good solution was wrong and should be hated".
    Last edited by starshipeleven; 10 May 2018, 04:55 AM.

    Comment


    • #22
      Originally posted by oiaohm View Post
      I guess the stuff I quoted was written before that changed in XFS, but these changes date back to at least 2016... it seems bad for an FS dev to keep such outdated information around for so long...

      Comment


      • #23
        Originally posted by geearf View Post
        I guess the stuff I quoted was written before that changed in XFS, but these changes date back to at least 2016... it seems bad for an FS dev to keep such outdated information around for so long...
        Or it was written with the intent of deceiving people.

        Comment


        • #24
          BTRFS seems to work well on cached LVM. Even more so when using zstd compression.

          Comment


          • #25
            Originally posted by starshipeleven View Post
            Unless they knew beforehand that it was the wrong way and did it anyway, there is nothing to hate. Pioneers have to try new stuff that might or might not pan out.
            I meant hating it as a solution; as in, it becomes one of the last things on earth you would dare to use.

            Having a filesystem that handles ENOSPC badly is not a particularly good thing, and btrfs and ZFS both have this problem. It comes down to not being able to dependably calculate how much space an operation will consume before completing it, and that comes from the CoW metadata. The history of CoW-metadata filesystems keeps bringing up the same problems. One thing the XFS developers do properly is torture-test their filesystem.

            Originally posted by starshipeleven View Post
            That XFS developer said they are learning lessons from btrfs and others; this is "building on the shoulders of giants", not "everyone before the new good solution was wrong and should be hated".
            Yes, and a lot of the things the XFS developer found in btrfs and ZFS were not particularly good, leading to "let's try another way".

            I do agree it would be nice for XFS to get data checksums; it already has metadata checksums.

            Comment


            • #26
              Originally posted by nazar-pc View Post
              If it ends up being more reliable and/or faster than btrfs
              lol
              /dev/null is already faster, switch to it

              Comment


              • #27
                Originally posted by geearf View Post
                I think the codebase is much smaller than btrfs.
                as is feature set. there are smaller codebases than bcachefs just for you, btw
                Originally posted by geearf View Post
                and if it is better at what btrfs is not good at
                it is better at not working, that is true
                Originally posted by geearf View Post
                I believe he is.
                He has a patreon for that.
                lol
                and patreon even lists how much he is asking and how much he is getting

                Comment


                • #28
                  Originally posted by Vistaus View Post
                  While $1754 may not sound like a lot for a full-time job, it depends on his IRL job.
                  if he does have an irl job, then $1754 surely can't be for a full-time job, it can only be part-time
                  Originally posted by Vistaus View Post
                  I earn less per month than what he gets from Patreon donations
                  do you live in san francisco? because he does. and i remember this story https://www.cnbc.com/2017/04/24/twit...ends-meet.html

                  Comment


                  • #29
                    Originally posted by oiaohm View Post
                    Really, we might end up hating ZFS because its method, which btrfs and others attempted to follow, might have been completely the wrong way.
                    its "method" is integration of fs with device management, which is not wrong. and last time i checked xfs had nothing to compete with it

                    Comment


                    • #30
                      Originally posted by starshipeleven View Post
                      That's pretty interesting; technically speaking, it's not an issue to have non-CoW metadata if you have journaling for it.
                      technically speaking you would have to duplicate the metadata of the whole filesystem atomically during each snapshot. i guess if you could do that atomically, you would already have cow. so they can't. and it would take space
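A caveat on the space argument: with CoW metadata trees, the duplication during a snapshot is only of the paths later modified, not the whole tree. A toy sketch (plain Python, a hypothetical persistent trie, nothing like any real filesystem's on-disk layout) of how a snapshot shares all unmodified metadata:

```python
# Toy persistent binary trie keyed by fixed-width integer keys. A "snapshot"
# is an O(1) copy of the root pointer; a write after a snapshot copies only
# the nodes on the path to the changed leaf (CoW), leaving the rest shared.

class Node:
    __slots__ = ("left", "right", "value")
    def __init__(self, left=None, right=None, value=None):
        self.left, self.right, self.value = left, right, value

DEPTH = 8  # keys are 8-bit ints

def insert(root, key, value, depth=DEPTH):
    """Return a NEW root; shares every subtree not on the key's path."""
    if depth == 0:
        return Node(value=value)
    node = root or Node()
    bit = (key >> (depth - 1)) & 1
    if bit:
        return Node(node.left, insert(node.right, key, value, depth - 1), None)
    return Node(insert(node.left, key, value, depth - 1), node.right, None)

def lookup(root, key, depth=DEPTH):
    if root is None:
        return None
    if depth == 0:
        return root.value
    bit = (key >> (depth - 1)) & 1
    return lookup(root.right if bit else root.left, key, depth - 1)

def count_nodes(root, seen=None):
    """Count distinct nodes reachable from root, skipping already-seen ones."""
    seen = set() if seen is None else seen
    if root is None or id(root) in seen:
        return 0
    seen.add(id(root))
    return 1 + count_nodes(root.left, seen) + count_nodes(root.right, seen)
```

Here a snapshot is just keeping the old root pointer; after a write into the new root, only DEPTH + 1 new nodes exist on top of the shared tree rather than a full duplicate. Update-in-place metadata has no equivalent trick, which is the poster's point about having to copy everything.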

                      Comment
