"Project Springfield" Is Red Hat's Effort To Improve Linux File-Systems / Storage


  • #21
    Originally posted by polarathene View Post
    LVM snapshots aren't as good as the BTRFS kind.
    Sure, but AFAIK they are practically similar (both systems get slow with too many snapshots).
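
    For anyone comparing the two, a rough sketch of how each kind of snapshot is taken (the VG, LV, and path names here are made-up examples):
    Code:
    # LVM: a snapshot is a separate LV that reserves space for blocks changed after the snapshot
    lvcreate --size 5G --snapshot --name db_snap /dev/vg0/db
    # Btrfs: a snapshot is just another subvolume, optionally read-only
    btrfs subvolume snapshot -r /mnt/data /mnt/.snapshots/data-2020-07-01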

    Originally posted by polarathene View Post
    Stability? BTRFS is pretty stable these days, are you biasing this based on issues from years back or features that are marked as unstable(RAID 5/6)?
    Just last week there were 12 bug fixes:

    It's fine for most, but a company will want to select a stable default to reduce support load and maintain the image of providing a reliable product.

    Originally posted by polarathene View Post
    Speed when full? What issues does BTRFS have regarding that?
    This might be an old ZFS limitation; maybe Phoronix will do a modern test for us.

    Originally posted by polarathene View Post
    SMR...license
    Yep, again I was referencing ZFS (maybe I should have split the issues per FS).

    Originally posted by polarathene View Post
    It's database friendly afaik, just disable CoW...
    With LVM one can run a database and snapshot the same volume; one should not do that with the alternatives (AFAIK).
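
    For reference on the "just disable CoW" part, the usual approach on btrfs is to set the no-CoW attribute on the database directory before the data files are created (the path below is only an example):
    Code:
    chattr +C /var/lib/mysql       # new files in this directory are created without copy-on-write
    lsattr -d /var/lib/mysql       # should now show the 'C' attribute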

    Originally posted by polarathene View Post
    ..you don't have to assign a single filesystem to all your storage space...
    But if you want to shuffle file systems, LVM is your friend, and once you are already using LVM (and cryptsetup) the advantages of btrfs diminish. When btrfs gets encryption and caching it will make choosing between it and LVM & friends easier...
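
    As a rough illustration of that shuffling, with made-up VG/LV names, LVM lets you take space from one filesystem, hand it to another, or evacuate a disk entirely:
    Code:
    lvreduce --resizefs -L -50G vg0/scratch   # shrink one LV together with its filesystem
    lvextend --resizefs -L +50G vg0/home      # grow another LV and its filesystem
    pvmove /dev/sdb1                          # move all extents off a disk before removing it
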
    Last edited by elatllat; 01 July 2020, 01:35 PM.



    • #22
      That's a frontend for the same kernel's software RAID subsystem just as mdadm is. It's just a different userspace tool.
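
      Assuming the tool being discussed is LVM's built-in RAID, a minimal illustration of two userspace front-ends driving the same kernel RAID code (device and VG names are made up):
      Code:
      mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1   # md front-end
      lvcreate --type raid1 -m 1 -L 100G -n mirror vg0                         # LVM front-end over the same kernel RAID personalities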

      I've had (far) better performance with bcache than with lvmcache, which is why I've mentioned it instead of lvmcache.
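
      For reference, a minimal bcache setup looks roughly like this (device names are examples):
      Code:
      make-bcache -C /dev/nvme0n1p1 -B /dev/sdb1               # register the SSD as cache and the HDD as backing device
      mkfs.ext4 /dev/bcache0                                   # the cached device shows up as /dev/bcache0
      echo writeback > /sys/block/bcache0/bcache/cache_mode    # optional: switch from the default writethrough to writeback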




      • #23
        Originally posted by elatllat View Post
        Just last week there were 12 bug fixes:

        it's fine for most but a company will want to select a stable default to reduce support and maintain the image of providing a reliable product.
        Measuring quality by bug fixes is hilarious. That's why I brought it up here when EXT4 had a pile of bug fixes. You'd think, after all, that if this were an important measure of quality, then EXT4 would not be fit for use.



        • #24
          Originally posted by Zan Lynx View Post

          Measuring quality by bug fixes is hilarious. That's why I brought it up here when EXT4 had a pile of bug fixes. You'd think, after all, that if this were an important measure of quality, then EXT4 would not be fit for use.
          If you count the number of bug fixes for both file systems over the last few years you will find there is a significant difference, which leads to individuals experiencing these bugs on the older kernels shipped with most distributions. So it's quite understandable why many never have problems while many others still consider btrfs unstable.

          EDIT:
          The commit counts are similar:
          Code:
          # git clone --branch linux-5.4.y https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git
          # cd linux
          # git log --since="2019-11-25" --grep="ext4" --pretty=oneline | grep -c ext4
          50
          # git log --since="2019-11-25" --grep="btrfs" --pretty=oneline | grep -c btrfs
          91
          But the severity must differ, because I've had, and read about more people having, data loss due to bugs on btrfs than on ext4.
          Last edited by elatllat; 01 July 2020, 01:26 PM.



          • #25
            It's worth remembering that Red Hat also acquired Gluster and Inktank (Ceph) in recent years, not to mention the enterprise storage portfolio that IBM brings into the mix.



            • #26
              Ceph has sparse documentation but is nice (ideal?) if you are always networked (no single point of failure, unlike btrfs/zfs/stratis).
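
              The no-single-point-of-failure part comes from replicating data across hosts; a rough sketch against an already-deployed cluster (pool name and PG count are just examples):
              Code:
              ceph status                                 # overall cluster health
              ceph osd pool create media 64 replicated    # new replicated pool with 64 placement groups
              ceph osd pool set media size 3              # keep 3 copies so any single host can fail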



              • #27
                Originally posted by elatllat View Post
                If you count the number of bug fixes for both file systems over the last few years you will find there is a significant difference, which leads to individuals experiencing these bugs on the older kernels shipped with most distributions.
                ..severity must differ..
                It's almost as if btrfs was still under significant development while ext4 is well into the maintenance stage at this point.



                • #28
                  Originally posted by elatllat View Post
                  Ceph has sparse documentation but is nice (ideal?) if you are always networked (no single point of failure, unlike btrfs/zfs/stratis).
                  Depends on the project's scale. Ceph is for SANs and clusters where you have a lot of servers anyway, because either you need a lot of CPU/GPU power or you are working with ridiculously huge datasets that will never fit in a single server of normal size.

                  I mean who builds a SAN with 4 different servers to store like 10TB of data?



                  • #29

                    Originally posted by starshipeleven View Post
                    It's almost as if btrfs was still under significant development while ext4 is well into the maintenance stage at this point.
                    The commit log was bug fixes only (no development) on the most recent LTS, but yeah, it's new, and that's why it's slightly more buggy in commonly used features.


                    Originally posted by starshipeleven View Post
                    Depends on the project's scale. Ceph is for SANs and clusters where you have a lot of servers anyway, because either you need a lot of CPU/GPU power or you are working with ridiculously huge datasets that will never fit in a single server of normal size.

                    I mean who builds a SAN with 4 different servers to store like 10TB of data?
                    Well, if you want to apply the weekly kernel bug fix without interrupting your better half watching a vid from your SAN, then spend $222 on four Odroid C4 servers to keep your $1k of drives up... OK, I don't do that, but I can see the advantage of Ceph in small cheap clusters.
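
                    That rolling-update trick is roughly the standard procedure, assuming enough replicas that one node can drop out:
                    Code:
                    ceph osd set noout      # stop the cluster from rebalancing while one node is briefly down
                    # reboot the node, wait for its OSDs to rejoin (ceph -s should return to HEALTH_OK)
                    ceph osd unset noout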



                    • #30
                      Originally posted by elatllat View Post
                      Well, if you want to apply the weekly kernel bug fix without interrupting your better half watching a vid from your SAN, then spend $222 on four Odroid C4 servers to keep your $1k of drives up... OK, I don't do that, but I can see the advantage of Ceph in small cheap clusters.
                      Eh. The issue is CPU power, especially for single threaded operations, also considering that Ceph and other distributed filesystems do require more CPU power to run than a local filesystem.

                      Raspberry Pi and other embedded toy boards can be turned into clusters and Kubernetes swarms and whatnot, but they're almost always complete garbage in the CPU department, so it's little more than a research project.

