Ubuntu 23.04 Desktop's New Installer Set To Ship Without OpenZFS Install Support


  • #41
    Originally posted by oiaohm View Post

    Today was update day1. Then the expected unexpected happened: The ZFS module was missing from initramfs. Desktop's dead in the water. I boot up my laptop to quickly flash an Arch live ISO onto a USB drive, and while at it also upgrade that one. Knowing that what went …

    Welcome to the latest round of "oops, it's broken again". Yes, it got fixed, but it keeps on happening.

    Skeevy420, you end up between a rock and a hard place. OpenZFS regularly gets broken by some _GPL symbol-export change, which forces distributions to lag behind on kernel versions.
    That leaves two options:
    1) Delay all kernel updates until OpenZFS fixes their DKMS driver; sometimes that takes a week, sometimes a few months.
    2) Ship a kernel without a ZFS module so updates go out quickly for everyone not using ZFS.

    Distributions that choose option 2 include Ubuntu, Arch....



    No, Ubuntu/Canonical was not exactly shortsighted; there is a bigger picture. Because Ubuntu has commercial customers, it ends up in a very hard place. When a Linux kernel CVE is released, a commercial distribution has to push out the kernel update as soon as possible to as many users as possible. Say that update also breaks ZFS: Canonical's hand is forced, and they will push out the kernel that does not support ZFS so that everyone not using ZFS can close the CVEs.

    In a lot of ways, Ubuntu/Canonical removing root-on-ZFS makes sense. They are a commercial distribution with commercial customers whose support contracts mandate fixing CVE issues inside a fixed time frame. Making ZFS non-functional is not a breach of Canonical/Ubuntu support contracts, and if they did make it one, they would have wedged themselves between the requirement to fix CVEs and the requirement to keep ZFS running.

    Why does it make sense for root not to be ZFS? Same problem. Say the kernel has been updated because of some major, remotely exploitable CVE. If you cannot mount your ZFS home directories but can still boot the system, you can at least read the notes of the last kernel update and see "oops, we have a problem". Better still, even with ZFS not working, you can keep updating to newer kernels so you pick the ZFS module back up once it works again.

    And yes, it could be the wifi driver that the person uses to connect to the public internet to run updates that has the remote CVE, or something equally critical; remember that.

    A ZFS root file system sounds good in theory. It turns out that for commercial Linux distributions it is basically a no-go, because the requirement to fix CVEs in a timely way means that when ZFS breaks they have to push out kernels without it. Those using a community-supported distribution need to be aware that they are taking a slightly higher security risk with a ZFS root file system, even on a distribution that makes sure every kernel update it ships includes the ZFS driver, because of the delays the semi-random breakages cause whenever mainline Linux becomes incompatible with the ZFS module.

    Really, I wish someone could write a ZFS kernel driver that was GPLv2 and submit it mainline, fixing this problem properly.
    CVE/bugfixes to the kernel rarely (never?) involve a new *_GPL flag, especially in stuff ZFS touches. To think otherwise is wildly unrealistic.

    In the years since ZFS gained Linux support, *_GPL changes have held up ZFS working with a new kernel version a number of times I can count on one hand.
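
    For anyone wondering what a "*_GPL flag" actually is: each exported kernel symbol carries an export type, and symbols marked EXPORT_SYMBOL_GPL are refused to modules whose MODULE_LICENSE() the kernel does not treat as GPL-compatible, which is what bites the CDDL-licensed zfs.ko whenever a symbol it relies on gains the _GPL suffix. A rough Python sketch for seeing the scale of this on your own machine; it assumes the distro's kernel headers are installed so Module.symvers exists at the path below, and the column layout can vary a little between kernel versions.

    #!/usr/bin/env python3
    """Count how many of the running kernel's exported symbols are GPL-only.

    Sketch only: assumes kernel headers are installed so Module.symvers is at
    /lib/modules/<release>/build/Module.symvers, and that the export type is
    the fourth whitespace-separated column (CRC, symbol, module, export type).
    """
    import os
    from pathlib import Path

    symvers = Path("/lib/modules") / os.uname().release / "build" / "Module.symvers"

    total = gpl_only = 0
    for line in symvers.read_text().splitlines():
        fields = line.split()
        if len(fields) < 4:
            continue
        total += 1
        # EXPORT_SYMBOL_GPL symbols cannot be resolved by a module whose
        # MODULE_LICENSE() the kernel does not consider GPL-compatible.
        if fields[3] == "EXPORT_SYMBOL_GPL":
            gpl_only += 1

    print(f"{gpl_only} of {total} exported symbols in {os.uname().release} are GPL-only")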

    Most delays occur due to regular code churn when symbols are renamed and code is refactored. ZFS code has to be refactored to match, and right now the OpenZFS developers are doing most of the work upstream. In this regard things would be vastly improved if Canonical invested more in build-testing all their out-of-tree modules and making corrections.

    It's also not like ZFS is the only out-of-tree code they ship, yet proprietary wifi modules see hardly any breakage at all (e.g. my laptop's Broadcom card). Maybe they're given a higher priority because breakage inconveniences more users and prevents installation of updates?

    PS: a GPLv2 OpenZFS isn't all sunshine and roses either. It had better be dual-licensed GPL and MIT like much of the graphics infrastructure and Mesa. There are a LOT more users of OpenZFS than just Linux (e.g. FreeBSD and illumos), and they rely on ZFS's more permissive CDDL licence to be able to track the OpenZFS mainline.
    Last edited by Developer12; 18 April 2023, 12:43 PM.

    Comment


    • #42
      Originally posted by Developer12 View Post
      CVE/bugfixes to the kernel rarely (never?) involve a new *_GPL flag, especially in stuff ZFS touches. To think otherwise is wildly unrealistic.
      It's not the patch that introduces the new _GPL flag. It's the time frame until ZFS works again that matters.

      Originally posted by Developer12 View Post
      Most delays occur due to regular code churn when symbols are renamed and code is refactored. ZFS code has to be refactored to match
      That is also a delay. Remember, support contracts mandate that CVE fixes be deployed inside a fixed time frame.

      Originally posted by Developer12 View Post
      ​In this regard it would be vastly improved if canonical invested more in build-testing all their out-of-tree modules and making corrections.
      Making corrections takes time. The problem is that support contracts with large companies set fixed time frames for getting CVE fixes deployed.

      Originally posted by Developer12 View Post
      ​It's also not like ZFS is the only out-of-tree code they ship, yet proprietary wifi modules see hardly any breakage at all. (eg: my laptop's bcom card) Maybe they're given a higher priority because breakage inconveniences more users and prevents installation of updates?
      Those modules see less breakage because they sit in a less core area of the operating system. They do still break, but for a lot of these extra devices a non-working module is far less disruptive to the user than a broken file system. Remember the old Unix-world saying that everything is a file. File systems are among the most-changed areas of the Linux kernel every year, and it has been that way for two decades.

      Root file systems really do need to be developed mainline in most OS designs, given how active an area of coding file system work is going to be.

      Originally posted by Developer12 View Post
      PS: a GPLv2 OpenZFS isn't all sunshine and roses either. It had better be dual-licensed GPL and MIT like much of the graphics infrastructure and Mesa. There are a LOT more users of OpenZFS than just Linux (e.g. FreeBSD and illumos), and they rely on ZFS's more permissive CDDL licence to be able to track the OpenZFS mainline.
      CDDL has its fair share of problems as well. To be correct, the majority of the graphics stack in the Linux kernel is not dual-licensed but pure MIT; it can ship with the Linux kernel because MIT is a GPL-compatible license. CDDL not being a compatible license is a major headache. The requirement on commercial distributions to deploy CVE fixes means they cannot be stuck waiting around for fixes, as has been happening with ZFS; they need an upstream-first policy.

      Please note it is worse than that. Parties like Canonical/Red Hat-IBM/SUSE can have notice of a CVE 90 days before it goes public, so they cannot walk up to the ZFS developers and say "hey, there is a CVE over in this other area of the kernel and we need you to alter your code". Yes, you do see strange patching in the Linux kernel where an author puts a patch in an area they don't normally work in, and later it turns out to have been a CVE fix.

      Canonical/Ubuntu tried providing root-on-ZFS as an option, and they stayed at it far longer than I thought they would, particularly when you know they are up against the rock and hard place that support contracts mandating CVE deployment create when ZFS does not stay synced with mainline.

      Comment


      • #43
        oiaohm


        You have a lot of stuff to reply to, so forgive me if I skip some of it. In regards to most of what you said, like the _GPL flags, the CVEs, and root file systems needing to be mainline: all of that only matters if you track bleeding-edge upstream software, as when you use the Gentoo or Arch testing repositories. On actual distributions that take OpenZFS into consideration, the only one of those with any weight is the CVE issue and, like all CVEs, we're stuck with it until it is fixed in all the relevant places. If Ubuntu/Canonical wanted to speed up the OpenZFS Linux CVE situation, they'd have hired an OpenZFS dev to handle them and solve the 90-days-behind-closed-doors situation.

        The worst case scenario is that the distribution/end-user might have to revert from a deprecated mainline release to the previous LTS kernel until OpenZFS catches up with upstream Linux. Those situations usually don't take very long or happen very often. Things like that apply to NVIDIA GPUs, WiFi cards, and other proprietary pieces of hardware whenever upstream Linux changes too much between versions.
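
        When that does happen, it helps to know which of the kernels still installed on the machine actually have a ZFS module before picking an entry from the boot menu. A small Python sketch, assuming the usual /lib/modules/<release> layout and a module file named zfs.ko (possibly compressed); a DKMS build normally lands under updates/dkms/ and Ubuntu's prebuilt module under kernel/zfs/, so the whole tree is searched.

        #!/usr/bin/env python3
        """Report which installed kernels have a ZFS module available.

        Sketch only: assumes the /lib/modules/<release> layout and a module
        file named zfs.ko, possibly compressed. Locations differ between a
        DKMS build (updates/dkms/) and Ubuntu's prebuilt module (kernel/zfs/),
        so the whole tree is searched.
        """
        from pathlib import Path

        NAMES = ("zfs.ko", "zfs.ko.zst", "zfs.ko.xz", "zfs.ko.gz")

        for kernel in sorted(p for p in Path("/lib/modules").iterdir() if p.is_dir()):
            found = any(next(kernel.rglob(name), None) for name in NAMES)
            print(f"{kernel.name:<40} {'zfs module present' if found else 'NO zfs module'}")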

        IMHO, the only file system that really, REALLY, needs to be mainline is the one on the boot loader partition, like the EFI system partition or /boot. Everything else can be chainloaded from a ramdisk.

        As far as the CDDL and GPL go, I made a long post about them on the last OpenZFS release thread from a few days ago.

        Comment


        • #44
          Originally posted by skeevy420 View Post
          If Ubuntu/Canonical wanted to speed up the OpenZFS Linux CVE situation, they'd have hired an OpenZFS dev to handle them and solve the 90-days-behind-closed-doors situation.
          That would not help. Under the rules of CVE disclosure, OpenZFS is not listed as an affected party, so they are not to be informed. It's not about being Ubuntu/Canonical staff.

          An OpenZFS developer would be able to help with CVEs against the OpenZFS code base, but they are not going to be helpful in cases where it's a CVE in the kernel that needs to be pushed out right now.

          Originally posted by skeevy420 View Post
          The worst case scenario is that the distribution/end-user might have to revert from a deprecated mainline release to the previous LTS kernel until OpenZFS catches up with upstream Linux. Those situations usually don't take very long or happen very often. Things like that apply to NVIDIA GPUs, WiFi cards, and other proprietary pieces of hardware whenever upstream Linux changes too much between versions.
          The worst case has to happen regularly. Yes, the issue applies to Nvidia GPUs, wifi cards and so on, but for those other parts it does not stop the user from getting into their computer and either reversing the update or running a further update to fix it.

          Originally posted by skeevy420 View Post
          The Linux Foundation could always grant either OpenZFS or CDDL a special exception. That's not unprecedented. That would cover the re-licensing part. No big deal, should be an easy decision to make. However,
          This argument needs to stop. Please find one example where the Linux Foundation has granted a special exception like that. Such exceptions are in fact unprecedented for the Linux Foundation.

          When you find prior examples where a special exception like that was granted, they all date from before the Linux Foundation and before the legal audits. Yes, that is why the Linux kernel firmware files were moved out into their own git repository.

          Remember Debian, Red Hat and Canonical forking cdrecord/cdrtools because it mixed CDDL and GPL code.


          Once a foundation, as in an actual company, was in the mix, Linux kernel development had to play by the proper rules of copyright. The commercial Linux distributions mandate that licensing is done in particular ways. Yes, they disagree on details, but they all agree that you cannot put CDDL and GPL code in the same package without an exception on both licenses: CDDL needs an exception so that the source code can be GPL, and GPL needs an exception so that the source code can be CDDL.

          CDDL allows binaries to be whatever license you like; it does not allow the source code to be whatever license you like. A copyleft rock and hard place.

          Next, putting an exception on the Linux kernel's GPLv2 license requires the agreement of too many parties to be possible. The reason the Linux kernel is still GPLv2 and not something newer is that it is practically impossible to change the license. The Linux Foundation spent about half a million dollars in legal fees trying to work out whether there was any way to do a license change on the Linux kernel if it ever became required. The answer was that you would need to restart the complete Linux kernel from scratch and only take each part back in as each party approved it.

          Comment


          • #45
            Originally posted by basildazz View Post
            lgogdownloader (is this a command line script?): I have used Lutris, and that appears to have problems too. It will download and install correctly here, but any large downloads on the same drive (any drive) seem to alter the checksums and then fail the install, invalidating the point of a backup. Even GOG Galaxy extra downloads behave this way for me. My next step was to attempt to zip them immediately after download, but temperance of frustrations is still buffering.

            If lgogdownloader verifies the downloads initially, it will save me the effort of accruing valid md5s.

            Nevertheless I may try ZFS and see if it is more stable.
            Sounds like a hardware issue to be honest. Memory or IMC or perhaps a SATA cable or failing HDD?

            lgogdownloader is a little application for commandline download of the GOG installers. Github. GOG Forum. It's way more stable and controllable than the old GOG downloader they provided before Galaxy, and I don't use Galaxy because I'm not interested in all of the "value added" features.
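
            If you want to rule the downloader out and catch installers that change on disk after download, re-hashing them later is enough. A minimal Python sketch; the ~/GOG directory and the md5sums.txt file (one "<md5>  <filename>" line per installer, md5sum's own format) are just placeholders for whatever layout you already use.

            #!/usr/bin/env python3
            """Re-hash downloaded installers and compare against recorded MD5s.

            Sketch only: ~/GOG and md5sums.txt ("<md5>  <filename>" per line,
            md5sum's format) are placeholders; point them at your own layout.
            """
            import hashlib
            from pathlib import Path

            download_dir = Path.home() / "GOG"
            checksums = download_dir / "md5sums.txt"

            def md5_of(path: Path) -> str:
                digest = hashlib.md5()
                with path.open("rb") as fh:
                    for chunk in iter(lambda: fh.read(1 << 20), b""):  # 1 MiB chunks
                        digest.update(chunk)
                return digest.hexdigest()

            for line in checksums.read_text().splitlines():
                expected, name = line.split(maxsplit=1)
                actual = md5_of(download_dir / name.strip().lstrip("*"))
                print(f"{'OK ' if actual == expected else 'BAD'} {name.strip()}")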

            Comment


            • #46
              Originally posted by _r00t- View Post
              Time to add BTRFS support as default.
              I would prefer that too.

              But they also made the BTRFS experience worse.

              If you choose BTRFS, it no longer automatically creates the standard subvolumes (like @home), and doing that manually is really not that easy (see the sketch below). If I were going to use Ubuntu, I would install 22.10 and do an upgrade.
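
              For reference, recreating what the old installer did is only a handful of commands. A hedged Python sketch; it must run as root, /dev/sdX2 and /mnt are placeholders, and the @/@home names simply mirror Ubuntu's previous default layout, so adjust to taste.

              #!/usr/bin/env python3
              """Create the @ and @home subvolumes Ubuntu's installer used to set up
              and print matching fstab lines.

              Sketch only: run as root against an already-formatted btrfs partition;
              /dev/sdX2 and /mnt are placeholders.
              """
              import subprocess

              DEVICE = "/dev/sdX2"   # placeholder: the btrfs partition
              MOUNT = "/mnt"

              def run(*cmd: str) -> None:
                  print("+", " ".join(cmd))
                  subprocess.run(cmd, check=True)

              run("mount", DEVICE, MOUNT)                  # mount the top-level subvolume
              for name in ("@", "@home"):
                  run("btrfs", "subvolume", "create", f"{MOUNT}/{name}")
              run("umount", MOUNT)

              print(f"{DEVICE}  /      btrfs  defaults,subvol=@      0  0")
              print(f"{DEVICE}  /home  btrfs  defaults,subvol=@home  0  0")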

              Comment
