Ubuntu 24.04 Supports Easy Installation Of OpenZFS Root File-System With Encryption

  • #41
    Originally posted by Classical View Post

    Void takes a very long time when it builds ZFS for the kernel.

    Alpine Linux has the fastest and most reliable ZFS implementation in the Linux world; it completely installs a new ZFS version in 3 seconds.

    ZFS also works fine on Calculate Linux and Gentoo.

    ALT Linux has a new ZFS implementation and ROSA Linux has an old ZFS version.

    A few other Linux systems also offer ZFS: https://openzfs.github.io/openzfs-do...ted/index.html

    Btrfs is kind of a failed project, which is why Red Hat abandoned it.
    No matter how many hopes we put in them, some projects are essentially technical failures (it is my belief that the BTRFS RAID5/6 design is broken; but that's not the only problem: failed-disk recovery is still an issue, and performance for COW-unfriendly applications such as databases built on B-trees is bad).

    Yes, I'm well aware that people will be pissed off at me for these statements. To me, acknowledging the problems is critical to fixing them ... Pretending that those are not problems is ultimately a political act, one that more or less encourages not fixing them.
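    For what it's worth, the usual mitigation for the database case on Btrfs is to disable copy-on-write for the data directory. A minimal sketch, assuming a hypothetical PostgreSQL data directory (the +C flag only affects files created after it is set, so existing files would have to be recreated):

        # stop the database before touching its data directory
        sudo systemctl stop postgresql
        # mark the directory NOCOW so new files inside it skip copy-on-write
        sudo chattr +C /var/lib/postgresql
        # verify: 'C' should appear in the attribute list
        lsattr -d /var/lib/postgresql
        sudo systemctl start postgresql

    The trade-off is that NOCOW files also lose Btrfs checksumming and compression, so this works around the B-tree write pattern rather than fixing it.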



    • #42
      Originally posted by kengreen View Post
      .... a matter of copying the partitions from one disk to the other ...
      Originally posted by LinuxNoob View Post
      But this would not allow booting if the first drive failed, correct?
      Oh oh. You mean you also want redundant EFI and boot partitions? That might be a good idea, but you'd also need to keep both ESPs synchronized. Is it worth it? Even if it's a pain, it's rather easy to recover from a boot failure. For example, you can keep a unified kernel image (an EFI executable) on a USB device in case of full drive failure, or boot from a live system and chroot in to recover. The important part is not losing the data in your ZFS pool.
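      If you did want the redundant ESP, a minimal sketch of keeping the second one in sync, assuming hypothetical mount points /boot/efi and /boot/efi2 and placeholder disk/partition values:

          # mirror the primary ESP onto the secondary one after kernel/bootloader updates
          sudo mount /boot/efi2
          sudo rsync -a --delete /boot/efi/ /boot/efi2/
          # add a firmware boot entry for the second ESP so it is tried if the first drive dies
          sudo efibootmgr --create --disk /dev/nvme1n1 --part 1 \
               --label "Ubuntu (backup ESP)" --loader '\EFI\ubuntu\shimx64.efi'

      You could hang the rsync off a kernel post-install hook, but it is one more thing to maintain, which is why I question whether it is worth it.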

      My main point was that you don't need to copy (with dd or whatever) the contents of a ZFS partition to another disk before attaching/resilvering. You just attach the new partition to your vdev, and everything is copied and verified by the resilvering and scrubbing processes.
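      A minimal sketch of that, with placeholder pool and device names (rpool, and partitions referenced by their /dev/disk/by-id paths):

          # turn the existing single-device vdev into a mirror; resilvering starts automatically
          sudo zpool attach rpool existing-zfs-partition new-zfs-partition
          # watch the resilver progress
          zpool status -v rpool
          # optionally verify the whole pool afterwards
          sudo zpool scrub rpool

      ZFS only copies the allocated blocks and checksums them as it goes, so there is nothing for dd to add here.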



      • #43
        I followed Agno's suggestion and added a slightly smaller NVMe to the rpool. I then removed the original rpool drive, forcing a copy of the data onto the new drive. Then I attached the old drive's partition back to the rpool, creating a mirror. The bpool is not backed up, but I will try to create an EFI boot drive on a USB stick for booting. Suggestions welcome.
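        In command terms, that sequence looks roughly like the following (device names are placeholders; it relies on OpenZFS top-level device removal, so the vdevs need compatible ashift):

            # add the new, slightly smaller partition as a second top-level vdev
            sudo zpool add rpool new-nvme-partition
            # evacuate the original vdev; its data gets copied onto the new one
            sudo zpool remove rpool old-partition
            # once removal completes, re-attach the old partition as a mirror of the new one
            sudo zpool attach rpool new-nvme-partition old-partition
            zpool status rpool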



        • #44
          Originally posted by vladpetric View Post

          No matter how many hopes we put in them, some projects are essentially technical failures (it is my belief that the BTRFS RAID5/6 design is broken; but that's not the only problem: failed-disk recovery is still an issue, and performance for COW-unfriendly applications such as databases built on B-trees is bad).

          Yes, I'm well aware that people will be pissed off at me for these statements. To me, acknowledging the problems is critical to fixing them ... Pretending that those are not problems is ultimately a political act, one that more or less encourages not fixing them.
          So are SUSE engineers crazy?
          It's not a question of defending something; it's just a question of seeing things for what they are. No company in its right mind would use a failing file system for its customers, be it SUSE, Facebook, or others.
          Software isn't a failure just because RH doesn't use it; RH isn't god, it just made different choices.



          • #45
            Originally posted by woddy View Post

            So are SUSE engineers crazy?
            It's not a question of defending something; it's just a question of seeing things for what they are. No company in its right mind would use a failing file system for its customers, be it SUSE, Facebook, or others.
            Software isn't a failure just because RH doesn't use it; RH isn't god, it just made different choices.
            Nope, never said anything like that (definitely not calling anybody crazy).

            Those companies adopted BTRFS believing that the bugs and issues could be ironed out (and, based on a presentation from years ago, Facebook fixed many one-time-only issues). You make decisions without knowing whether they will pan out ... as simple as that. I have lost hope that BTRFS can actually be brought to ZFS's level. Obviously, they can totally prove me wrong. And I'm not talking about RAID 5/6 here, which is simply not fixed.



            • #46
              Still no sign of ZFS Boot Environments, even though ZFSBootMenu has been available for quite a long time now. Sad to see so much potential lost.
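              For context, a boot environment is just an independently bootable clone of the root dataset, and you can approximate one by hand even without dedicated tooling. A minimal sketch with placeholder dataset names (rpool/ROOT/ubuntu as the current root):

                  # snapshot the current root before an upgrade
                  sudo zfs snapshot rpool/ROOT/ubuntu@pre-upgrade
                  # clone it into a separate, bootable dataset you can fall back to
                  sudo zfs clone rpool/ROOT/ubuntu@pre-upgrade rpool/ROOT/ubuntu-rollback
                  sudo zfs set mountpoint=/ canmount=noauto rpool/ROOT/ubuntu-rollback

              ZFSBootMenu can then offer both datasets at boot; managers like zectl mostly automate this bookkeeping, and that is the integration still missing from the installer.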



              • #47
                Originally posted by vladpetric View Post

                Nope, never said anything like that (definitely not calling anybody crazy).

                Those companies adopted BTRFS believing that the bugs and issues could be ironed out (and, based on a presentation from years ago, Facebook fixed many one-time-only issues). You make decisions without knowing whether they will pan out ... as simple as that. I have lost hope that BTRFS can actually be brought to ZFS's level. Obviously, they can totally prove me wrong. And I'm not talking about RAID 5/6 here, which is simply not fixed.
                I didn't write that you called someone crazy; the question mark has a meaning.
                I repeat, no sane enterprise distribution uses a broken file system. If, as you say, they made the wrong choice thinking that the bugs would be solved and they weren't, nothing stopped SUSE and its engineers from moving to a different file system in newer versions. They haven't; they've been using it for years and don't seem to want to change.

                This makes me think that it is not as broken as claimed, or at least no more broken than other file systems.



                • #48
                  Originally posted by woddy View Post

                  I didn't write that you called someone crazy; the question mark has a meaning.
                  I repeat, no sane enterprise distribution uses a broken file system. If, as you say, they made the wrong choice thinking that the bugs would be solved and they weren't, nothing stopped SUSE and its engineers from moving to a different file system in newer versions. They haven't; they've been using it for years and don't seem to want to change.

                  This makes me think that it is not as broken as claimed, or at least no more broken than other file systems.
                  OK, maybe I'm too harsh on BTRFS, and it has improved a lot over the last two years or so. I am totally willing to grant you that (while also reserving some judgment). Do we have any reliability metrics these days? Actually, performance metrics would be helpful, too.
                  Last edited by vladpetric; 04 May 2024, 07:01 PM.



                  • #49
                    Originally posted by vladpetric View Post

                    OK, maybe I'm too harsh on BTRFS, and it has improved a lot over the last two years or so. I am totally willing to grant you that (while also reserving some judgment). Do we have any reliability metrics these days? Actually, performance metrics would be helpful, too.
                    Honestly I do not know.

