Arch Linux Powered CachyOS Now Defaults To Btrfs Rather Than XFS


  • #51
    Originally posted by ptr1337 View Post
    ehansin
    I did reset the repo; it is weird that it didn't work. Maybe some kind of user-hosted mirror is currently problematic. I have checked all of our hosted mirrors and they are completely synced.
    I have made a repo reset anyway, just in case there is anything wrong, but I could not reproduce it locally.
    You could try it again.
    Okay and thanks - I will try again tomorrow and report back here!



    • #52
      Originally posted by Anux View Post
      It's still a Redundant Array of Independent Disks; it just shouldn't be called Raid 10, because that is a somewhat defined arrangement.

      In the end you can't bet on any Raid 10 surviving more than a single disk failure, because of Murphy's law. If you need more redundancy you need Raid 1s with 3 disks (maybe collected in a Raid 0), Raid 6, etc.
      Yes, but the redundancy is defined at the disk level, not the chunk level; hence the old way of handling/treating RAID was under the presumption that if part of a disk fails then the entire disk fails, and you would swap out that entire disk.

      They shouldn't call it RAID whatsoever; it's insanely confusing.



      • #53
        Originally posted by cynic View Post
        in btrfs you can make a "raid 1" with 3 copies of data (and, if I recall correctly, with 4 too, but not sure about this one)
        It would be strange to limit the number of mirrors for Raid 1; there is no technical reason to do so.
        Originally posted by pWe00Iri3e7Z9lHOX2Qx View Post
        ... Maybe something else (RAIC ).
        This might actually be a good one.



        • #54
          ptr1337 Hi Peter - writing to you here from a freshly installed CachyOS! Thanks again for all of the help. I tried again this morning with Bcachefs set as the file system, but that failed. Maybe to be expected; it was more of an experiment anyway (termbin below, if of any value to you). Did that twice just to confirm, then decided to just go with the defaults (systemd-boot, Btrfs, and KDE), and here we are. Time to kick the tires a little on this thing; looks nice so far!

          Termbin.com is a command line pastebin - easy way to share your terminal output.



          • #55
            Originally posted by Anux View Post
            It would be strange to limit the number of mirrors for Raid 1; there is no technical reason to do so.
            I see at least one main reason: you may want to limit the number of mirrors to have more space. If you have 8 disks, I don't think that raid1 over 8 disks (where each disk is a copy of the others) is more robust, from a practical point of view, than a raid1 over 4 disks (where each disk is a copy of the others). So if you do a raid1c4 (what btrfs calls a raid1 with 4 copies) on 8 disks, you get twice the usable space of a classic raid1 over 8 disks.
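
            To make the space arithmetic concrete, here is a minimal Python sketch (my own illustration, not btrfs code; the disk sizes are hypothetical):

            Code:
            def usable_space_gb(disk_sizes_gb, copies):
                # Every chunk is stored `copies` times, so usable space is roughly
                # the raw total divided by the number of copies (exact for equal disks).
                return sum(disk_sizes_gb) / copies

            disks = [4000] * 8                       # eight 4 TB disks
            print(usable_space_gb(disks, copies=8))  # "classic" raid1, 8 mirrors: 4000.0 GB
            print(usable_space_gb(disks, copies=4))  # btrfs raid1c4: 8000.0 GB, twice as much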



            • #56
              Originally posted by mdedetrich View Post

              Yes, but the redundancy is defined at the disk level, not the chunk level; hence the old way of handling/treating RAID was under the presumption that if part of a disk fails then the entire disk fails, and you would swap out that entire disk.
              In what way would the BTRFS raid1 be different from what you wrote?



              • #57
                Originally posted by muncrief View Post

                Thank you for your response, Chugworth. That makes sense. Both my desktop and media server have a large amount of storage, around 6 TB and 20 TB respectively, and I rely on ZFS to detect silent corruption, so ZFS is more appropriate in my case. But if Btrfs is reliable I can see that it makes more sense for the average user. It's built into the kernel as well, so that may have played into the default change. But so long as CachyOS continues to transparently support ZFS I will continue to use it and support it financially.
                Beware: ZFS does not protect you from silent corruption any more than Btrfs does (ask Linus from Linus Media Group about his huge storage loss on ZFS). Both need you to scrub the FS from time to time, because otherwise you detect errors (when all redundant copies are corrupted) way too late (read: when you actually need the files).
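
                For reference, a scrub can be started with the standard CLIs; a minimal sketch (the mount point and pool name are hypothetical, and in practice this would run from cron or a systemd timer):

                Code:
                import subprocess

                def scrub_btrfs(mountpoint="/mnt/data"):
                    # -B keeps the scrub in the foreground and prints statistics at the end
                    subprocess.run(["btrfs", "scrub", "start", "-B", mountpoint], check=True)

                def scrub_zfs(pool="tank"):
                    # runs asynchronously; progress can be checked with `zpool status`
                    subprocess.run(["zpool", "scrub", pool], check=True)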



                • #58
                  Originally posted by kreijack View Post
                  If you have 8 disks, I don't think that raid1 over 8 disks (where each disk is a copy of the others) is more robust, from a practical point of view, than a raid1 over 4 disks (where each disk is a copy of the others).
                  I don't understand; is there some magic happening after the 4th copy? What's the logic behind this?

                  If there are diminishing returns, it should still be left to the user to decide, not artificially limited. What does the dev gain by adding code to prevent the user from using more than 4 copies?
                  So if you do a raid1c4 (what btrfs calls a raid1 with 4 copies) on 8 disks, you get twice the usable space of a classic raid1 over 8 disks.
                  That's Raid 10.



                  • #59
                    Originally posted by Anux View Post
                    I don't understand; is there some magic happening after the 4th copy? What's the logic behind this?
                    In (e.g.) raid1c4, space is allocated in chunks, and each chunk spans 4 disks. So when a 1 GB chunk is allocated, 4 slices of 1 GB each are placed on the 4 disks with the most free space. The next time another 1 GB chunk is needed, the same logic is used. This allows you to have usable space close to sum_of_all_disks_size / number_of_mirrors.
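
                    A toy model of this allocation policy (a simplified sketch of the idea, not the real kernel allocator):

                    Code:
                    def allocate_chunks(disk_free_gb, copies, chunk_gb=1):
                        # Repeatedly place one chunk (stored `copies` times) on the
                        # `copies` disks with the most free space; stop when no such
                        # set of disks has room left and return the usable GB.
                        free = list(disk_free_gb)
                        usable = 0
                        while True:
                            targets = sorted(range(len(free)), key=lambda i: free[i], reverse=True)[:copies]
                            if len(targets) < copies or free[targets[-1]] < chunk_gb:
                                return usable
                            for i in targets:
                                free[i] -= chunk_gb
                            usable += chunk_gb

                    print(allocate_chunks([100] * 8, copies=4))  # 200, i.e. 800 / 4
                    print(allocate_chunks([100] * 8, copies=8))  # 100, i.e. 800 / 8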


                    Originally posted by Anux View Post
                    If there are diminishing returns, it should still be left to the user to decide, not artificially limited. What does the dev gain by adding code to prevent the user from using more than 4 copies?
                    You can set the number of mirrors to 2, 3 or 4. I don't know if there would be a realistic gain in going beyond 4 mirrors.

                    Originally posted by Anux View Post
                    That's Raid 10.
                    Not really. In a raid10 the data is striped across different disks in (e.g.) 64 KB units. This allows you to increase the parallelism of reads. If you have 4 disks in a raid10 (number of mirrors = 2) and you want to read 256 KB, you can theoretically read this data from all 4 disks, and the usable space is sum_of_all_disks_size / 2.

                    If with the same disk set you use raid1 with 4 mirrors, you can still read the data from 4 disks at the same time, but the usable space is sum_of_all_disks_size / 4, half of the previous case.

                    If with the same disk set you use raid1 with 2 mirrors, you can read the data from *only* 2 disks in parallel at the same time, but the usable space is sum_of_all_disks_size / 2 again.
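
                    The tradeoff in numbers, as a back-of-the-envelope Python sketch (my own framing of the figures above, with hypothetical 1 TB disks):

                    Code:
                    profiles = {
                        # name: (number of copies, disks a single large read can hit)
                        "raid10 (2 copies, striped)": (2, 4),
                        "raid1 (2 copies)": (2, 2),
                        "raid1c4 (4 copies)": (4, 4),
                    }

                    total_raw_gb = 4 * 1000  # four 1 TB disks
                    for name, (copies, parallel_reads) in profiles.items():
                        usable = total_raw_gb / copies
                        print(f"{name}: usable = {usable:.0f} GB, parallel read streams = {parallel_reads}")
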
                    Last edited by kreijack; 12 June 2024, 03:37 PM.

