Bcachefs Merged Into The Linux 6.7 Kernel


  • #51
    Originally posted by blackiwid View Post

    Again, you brought that on yourself with your elitist speech and by promoting the garbage OS that is BSD, which uses that much RAM. It's on you to fight the myth: make a benchmark with a 2-4 GB system and two or four 10 TB drives, compare it to btrfs, and put it on a website. That would be much more effective; then you could point the finger and say that for home users FreeBSD is in most cases a bad choice because it wastes lots of RAM.

    Heck, as somebody who doesn't like ZFS, I have probably done more to educate people about the RAM issue (I figured out it's probably mostly a FreeBSD issue) than most of you elitist ZFS snobs.
    It can scale down pretty well...

    Is this feasible? Yes it is! OpenZFS on Linux, compiled from source for RISC-V, will work with less than 250 MB of RAM for a nearly full 2 TB ZFS volume with 225,000 files, even when stressing the system and running in parallel (via Wi-Fi): a…


    Also, the ARC improving performance by caching data in RAM you aren't otherwise using is a good thing. If the kernel needs that memory for something else, it will take it back.
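    For readers who would rather bound the ARC than trust reclaim, OpenZFS on Linux exposes the `zfs_arc_max` module parameter. A minimal sketch of both the persistent and the runtime approach (the 1 GiB value is an arbitrary example for a low-memory box, not a recommendation):

```shell
# Cap the ZFS ARC at 1 GiB (value is in bytes). This persists the
# limit across reboots via a modprobe options file.
echo "options zfs zfs_arc_max=1073741824" | sudo tee /etc/modprobe.d/zfs.conf

# Or change it at runtime without a reboot (the ARC shrinks gradually):
echo 1073741824 | sudo tee /sys/module/zfs/parameters/zfs_arc_max

# Check the current ARC size ("size") against the limit ("c_max"):
awk '$1 == "size" || $1 == "c_max" { print $1, $3 }' /proc/spl/kstat/zfs/arcstats
```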
    Last edited by pWe00Iri3e7Z9lHOX2Qx; 01 November 2023, 12:38 AM.

    Comment


    • #52
      Originally posted by Britoid View Post

      Apart from having a bus factor of 1 and having no major deployment or testing in production, btrfs doesn't have either of these issues.
      Whining about a new FS not having any major deployments yet?

      Comment


      • #53
        Originally posted by blackiwid View Post

        "A special VDEV for faster metadata" — so some special setup? I don't think that refers to my statement about the "default config".

        How big was it? I mentioned 20 TB hard disks, which according to the rule of thumb would need 20 GB of RAM to run smoothly. Again, a lot is probably possible by tweaking the settings (maybe more so recently than when ZFS started out).
        4x 12 TB HDDs and two mirrored 512 GB SATA SSDs.
        A special vdev is not a special setup, though.
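        For reference, a layout like the one described fits in a single `zpool create` invocation; the special vdev is declared like any other vdev. The pool name and device paths below are hypothetical placeholders:

```shell
# Four 12 TB HDDs in raidz1, plus two SATA SSDs mirrored as a
# 'special' vdev that holds pool metadata.
zpool create tank \
    raidz1 /dev/sda /dev/sdb /dev/sdc /dev/sdd \
    special mirror /dev/sde /dev/sdf

# Optionally also route small file blocks (here <= 64 KiB) to the SSDs:
zfs set special_small_blocks=64K tank
```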

        Comment


        • #54
          It's merged and I'll be running some fresh Bcachefs file-system benchmarks soon on Phoronix.
          I think traditional performance benchmarks for filesystems are a little bit boring. Very few people care, and those who need the best performance metrics will not choose a CoW fs anyway. Of course, it can be nice to learn about certain outlier metrics where the filesystem could improve.

          If we want to learn from some of the disappointments with btrfs: IMHO 'benchmarking' a file system should include loads of controlled and repeatable resiliency tests/scenarios (sudden power loss during a write, self-healing capabilities after corruption, etc.). I understand those results would be a little more esoteric and difficult to quantify in a neat little histogram, but they're much more interesting ("Unplugging a drive with this fs during a write causes an unrecoverable file system 5% of the time").
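          As a toy illustration of the kind of repeatable test meant here (not any real filesystem's on-disk format), a script can write a checksummed block, simulate a torn write by truncating it mid-payload, and check that the corruption is detected:

```python
import hashlib
import os
import tempfile

def write_with_checksum(path, payload: bytes):
    # Store the payload followed by its SHA-256 digest, loosely
    # mimicking the per-block checksums CoW filesystems keep.
    digest = hashlib.sha256(payload).digest()
    with open(path, "wb") as f:
        f.write(payload + digest)
        f.flush()
        os.fsync(f.fileno())  # make sure it actually hit the disk

def verify(path) -> bool:
    # Recompute the digest and compare against the stored one.
    with open(path, "rb") as f:
        blob = f.read()
    payload, digest = blob[:-32], blob[-32:]
    return hashlib.sha256(payload).digest() == digest

path = os.path.join(tempfile.mkdtemp(), "block.bin")
write_with_checksum(path, b"A" * 4096)
assert verify(path)           # an intact block passes verification

# Simulate a torn write: truncate mid-payload, as a power cut might.
with open(path, "r+b") as f:
    f.truncate(2048)
assert not verify(path)       # the corruption is detected
```

          A real harness would run many randomized variants of this (truncation points, bit flips, kill -9 during fsync) and report the detection and recovery rates.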
          Last edited by Deathcrow; 01 November 2023, 02:34 AM.

          Comment


          • #55
            I never thought I'd live long enough to see the day. Hopefully it will treat the data much better than btrfs while having the same features.

            Comment


            • #56
              Originally posted by ptrwis View Post
              But can it over street?
              It kent.

              Comment


              • #57
                Originally posted by skeevy420 View Post
                While I can see the appeal of that for specific use cases, mainly enterprise or niche, it requires a lot of planning, forethought, disks, and ports to have a mirror here, a mirror there, a raidz over yonder...
                A lot of the nice stuff in Linux and open source software comes from enterprise features and high-performance enhancements that trickle down nicely to the desktop use case. Not all of them do, but things scale better, and the chances improve when there is a general mechanism in place rather than everybody having their own niche thing.

                Comment


                • #58
                  Why so much off-topic discussion about Btrfs and ZFS? Is it some kind of conspiracy?

                  Comment


                  • #59
                    Originally posted by Deathcrow View Post

                    I think traditional performance benchmarks for filesystems are a little bit boring. Very few people care, and those who need the best performance metrics will not choose a CoW fs anyway. Of course, it can be nice to learn about certain outlier metrics where the filesystem could improve.

                    If we want to learn from some of the disappointments with btrfs: IMHO 'benchmarking' a file system should include loads of controlled and repeatable resiliency tests/scenarios (sudden power loss during a write, self-healing capabilities after corruption, etc.). I understand those results would be a little more esoteric and difficult to quantify in a neat little histogram, but they're much more interesting ("Unplugging a drive with this fs during a write causes an unrecoverable file system 5% of the time").
                    I agree. CoW filesystems are very interesting, providing reliability and data-integrity advantages, but there are tradeoffs.

                    Would it be possible to do data-integrity and reliability "benchmarking" as well?

                    Comment


                    • #60
                      Originally posted by timofonic View Post
                      Why so much off-topic discussion about Btrfs and ZFS? Is it some kind of conspiracy?
                      Didn't you get the memo about CoW filesystems being one of those "Ride or Die" kinda deals?

                      Comment
