Ubuntu Revisiting Its Initramfs Compression Approach

Phoronix: Ubuntu Revisiting Its Initramfs Compression Approach

About a year ago Ubuntu changed its default compression level for initramfs handling down to Zstd level one to deal with slow initramfs creation times on low-end systems and development boards. Since then, that has resulted in larger initramfs sizes and other problems, such as more quickly filling up the /boot partition on Ubuntu systems. Thus the developers have gone back to the drawing board and are trying to figure out a path forward for better initramfs handling that works well for low-end single-board computers while also maximizing space savings and working out well for all Ubuntu use-cases...
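
For context, the knob at issue lives in the initramfs-tools configuration; a minimal excerpt, assuming a release recent enough to support COMPRESSLEVEL:

```
# /etc/initramfs-tools/initramfs.conf (excerpt)
# COMPRESS selects the compressor for the initramfs image and
# COMPRESSLEVEL its level; zstd at level 1 is the fast-but-large
# setting the thread below is debating.
COMPRESS=zstd
COMPRESSLEVEL=1
```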


  • #2
    I guess disabling the inclusion of modules never loaded on the host would help a lot in reducing initrd size.


  • #3
    Originally posted by slalomsk8er:
    I guess disabling the inclusion of modules never loaded on the host would help a lot in reducing initrd size.

    That is what some other distros do (on Ubuntu the equivalent would be setting MODULES=dep, currently most, in initramfs.conf).

    While that works adequately much of the time, one then needs to build a rescue/recovery/emergency kernel and initramfs with all modules so that major hardware changes can be handled, and expect the user to know how to select that alternative when things go bad.

    UKI kernels will change some of the viable trade-offs going forward (and while not everyone will necessarily use them, many distro vendors are likely to deliver them at some point, so they should get some consideration in the solution).
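
    A minimal sketch of that setting, assuming the stock initramfs-tools layout on Ubuntu:

    ```
    # /etc/initramfs-tools/initramfs.conf (excerpt)
    # MODULES=most (the Ubuntu default) bundles a broad set of modules;
    # MODULES=dep bundles only what the running hardware appears to need,
    # which shrinks the image but can break booting after hardware changes.
    MODULES=dep
    ```

    Rebuild with `sudo update-initramfs -u` afterwards; as noted above, keep a fallback image built with MODULES=most around for recovery.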


  • #4
    This is such an unnecessary issue. If your boot partition is at least 800MiB, you can fit at least six kernels with lzo compression and MODULES=most. On 256+GB drives that's still nothing.

    Anyone who can't expand their existing boot partition can create a new one at the end of the drive; it's not ideal, but it's far better than these janky workarounds.
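
    A quick way to sanity-check that arithmetic on your own machine (plain coreutils, nothing Ubuntu-specific assumed):

    ```
    # Show what each kernel/initrd pair actually costs on /boot,
    # and how much headroom the partition has left.
    du -sh /boot/vmlinuz-* /boot/initrd.img-*
    df -h /boot
    ```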


  • #5
    Just have a 500MB/1GB boot partition and stop worrying about this stuff.


  • #6
    I remember writing on Ubuntu Discourse a couple of years ago about a similar issue:

    1. The `initramfs` creation process is flawed. Sometimes during a single `sudo apt upgrade` it runs something like three times instead of once.
    2. Even a single run is very heavy and slow. On my laptop at the time it took around 40 seconds.
    3. I stopped worrying about an extra 100MB on a system drive a decade ago, if not more. I also have an SSD.
    4. Creating a copy of 1GB of the root file system on an SSD should take ~3 seconds; reading it even less.
    5. So I don't see the point of that compression at all on a modern laptop or PC. It only helps if creating and writing the archive is faster than just writing 1GB to the disk (a timing sketch follows this list).
    6. If it's an issue on a Raspberry Pi or older systems, I would like it to be an option.
    7. The same thing happened with Snap and its `xz` compression, which added 10 seconds to an app's start time to save some disk space. They later agreed to introduce `lzo`.
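
    A rough way to measure the trade-off in points 4 and 5, assuming you have an uncompressed initramfs archive at hand (the `initrd.cpio` name is illustrative):

    ```
    # Time a few compressors on an uncompressed initramfs image and
    # compare output sizes; point IMG at a real file on your system.
    IMG=initrd.cpio
    for c in "zstd -1" "zstd -19" "lzop -1" "xz -6"; do
        out="$IMG.${c%% *}${c##* }"   # e.g. initrd.cpio.zstd-1
        echo "== $c =="
        time $c -c "$IMG" > "$out"
        ls -lh "$out"
    done
    ```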


  • #7
    Originally posted by arun54321:
    Just have a 500MB/1GB boot partition and stop worrying about this stuff.

    The problem is that the recommended size used to be too low, and after a couple of upgrades you are stuck with a 256MB /boot and need to reinstall, just because it's too small to hold two kernels.


  • #8
    Do the first compression with a fast compressor, then schedule a minimum-priority task to recompress it better in the background? That way it wouldn't really matter even if that second phase took hours to finish.
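
    A minimal sketch of that two-phase idea, assuming a zstd-compressed image; the path and the whole mechanism are hypothetical, not something Ubuntu currently does:

    ```
    # Recompress the current initrd at a high zstd level with idle CPU and
    # I/O priority, replacing it only if the whole pipeline succeeds.
    IMG=/boot/initrd.img-$(uname -r)   # assumed: a zstd-compressed initrd
    nice -n 19 ionice -c3 sh -c \
        "zstd -dc '$IMG' | zstd -19 -c > '$IMG.tmp' && mv -f '$IMG.tmp' '$IMG'"
    ```

    A real implementation would also have to guard against update-initramfs rewriting the image mid-recompress.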


  • #9
    Originally posted by WereCatf:
    Do the first compression with a fast compressor, then schedule a minimum-priority task to recompress it better in the background? That way it wouldn't really matter even if that second phase took hours to finish.

    As mentioned in the source thread, not only does the time to compress matter: especially on lower-end systems, the time to decompress can matter a lot for the user experience. So can the time it takes to load the file from slow storage such as an SD card, so smaller may be better in those cases, unless decompression time rises too much. And some low-end systems have very limited storage on, say, their bootable eMMC partition(s). Coming up with one solution that works for almost everyone is never as simple as it might first appear.
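
    Decompression cost is just as easy to measure; this sketch assumes the compressed variants produced by the earlier timing loop:

    ```
    # Time decompression of each variant; boot-time cost roughly tracks
    # this plus the time to read the file from storage.
    for f in initrd.cpio.zstd-1 initrd.cpio.zstd-19 initrd.cpio.lzop-1 initrd.cpio.xz-6; do
        echo "== $f =="
        case $f in
            *zstd*) time zstd -dc "$f" > /dev/null ;;
            *lzop*) time lzop -dc "$f" > /dev/null ;;
            *xz*)   time xz   -dc "$f" > /dev/null ;;
        esac
    done
    ```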


  • #10
    Originally posted by spyke:
    I remember writing on Ubuntu Discourse a couple of years ago about a similar issue:

    1. The `initramfs` creation process is flawed. Sometimes during a single `sudo apt upgrade` it runs something like three times instead of once.
    2. Even a single run is very heavy and slow. On my laptop at the time it took around 40 seconds.
    3. I stopped worrying about an extra 100MB on a system drive a decade ago, if not more. I also have an SSD.
    4. Creating a copy of 1GB of the root file system on an SSD should take ~3 seconds; reading it even less.
    5. So I don't see the point of that compression at all on a modern laptop or PC. It only helps if creating and writing the archive is faster than just writing 1GB to the disk.
    6. If it's an issue on a Raspberry Pi or older systems, I would like it to be an option.
    7. The same thing happened with Snap and its `xz` compression, which added 10 seconds to an app's start time to save some disk space. They later agreed to introduce `lzo`.

    Point 1 is so obvious and annoying; every time I see it I wonder why nobody has fixed this stupid behavior. They try to save milliseconds elsewhere, yet this fat cow takes tens of seconds.

    PS: I agree with all the points, btw.
