Support For Compressing The Linux Kernel With LZ4

  • phoronix
    Administrator
    • Jan 2007
    • 67082

    Phoronix: Support For Compressing The Linux Kernel With LZ4

    A set of patches that allows the Linux kernel image to be compressed with the LZ4 lossless compression algorithm has been published. An LZ4-compressed kernel image is larger than an LZO-compressed one, but there's promise that boot times could be better...
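    For anyone curious how that trade-off plays out in code, below is a minimal user-space sketch using the LZ4 block API from liblz4 (LZ4_compressBound, LZ4_compress_default, LZ4_decompress_safe). The kernel patches use the in-tree LZ4 decompressor rather than this library, and the sample buffer and build command are illustrative assumptions, not part of the patch set.

        /* Sketch only: round-trips a buffer through LZ4 to show why the
         * format favours decompression speed over compression ratio.
         * Build (assumed): gcc lz4_demo.c -llz4 -o lz4_demo
         */
        #include <lz4.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        int main(void)
        {
            /* Stand-in for the kernel image bytes; purely illustrative. */
            const char *src = "vmlinux bytes would go here, repeated, repeated, repeated...";
            const int src_size = (int)strlen(src) + 1;

            /* Worst-case compressed size, used to size the destination buffer. */
            const int max_dst = LZ4_compressBound(src_size);
            char *compressed = malloc(max_dst);
            char *restored = malloc(src_size);
            if (!compressed || !restored)
                return 1;

            /* Compression: fast, but the output stays larger than with LZO
             * (the size difference the article mentions). */
            const int comp_size = LZ4_compress_default(src, compressed, src_size, max_dst);
            if (comp_size <= 0)
                return 1;

            /* Decompression: the very cheap step that can help boot time. */
            const int dec_size = LZ4_decompress_safe(compressed, restored, comp_size, src_size);
            if (dec_size != src_size || memcmp(src, restored, src_size) != 0)
                return 1;

            printf("%d bytes -> %d bytes compressed\n", src_size, comp_size);
            free(compressed);
            free(restored);
            return 0;
        }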

  • Ericg
    Senior Member
    • Aug 2012
    • 2585

    #2
    Originally posted by BO$$ View Post
    What is the point of having a compressed kernel? Faster loading times from the HDD?
    A smaller footprint, which means: yes, faster loading times from the HDD, a smaller file size on the HDD (important for embedded), and it can be shoved into RAM if you want the entire live system in RAM for responsiveness.
    All opinions are my own, not those of my employer, if you know who they are.

    • mercutio
      Senior Member
      • Oct 2010
      • 132

      #3
      What happened to Btrfs's LZ4 compression support?

      It seems LZ4 makes a lot of sense for always compressing files when not using a SandForce SSD.

      • AJenbo
        Senior Member
        • Sep 2011
        • 943

        #4
        I wonder what would happen if a lossy algorithm were used.

        • bnolsen
          Senior Member
          • Mar 2008
          • 275

          #5
          Originally posted by AJenbo View Post
          I wonder what would happen if a lossy algorithm were used.
          Then you have a lossy system.

          • Ericg
            Senior Member
            • Aug 2012
            • 2585

            #6
            Originally posted by AJenbo View Post
            I wonder what would happen if a lossy algorithm were used.
            Then I'd hope the decompression algorithm was very good at reconstructing the lost data, which would probably mean a longer decompression time =P
            All opinions are my own, not those of my employer, if you know who they are.

            • mark_
              Junior Member
              • Apr 2011
              • 46

              #7
              Come on... a 100 ms difference, you cannot even measure that with your watch. But you can measure the init time after the kernel is loaded...

              • oliver
                Senior Member
                • Jan 2007
                • 423

                #8
                Originally posted by Ericg View Post
                A smaller footprint, which means: yes, faster loading times from the HDD, a smaller file size on the HDD (important for embedded), and it can be shoved into RAM if you want the entire live system in RAM for responsiveness.
                I don't think you can have a running kernel compressed in RAM. As far as I know, it gets loaded from some form of storage (flash, HDD, NFS) and decompressed into RAM.

                Originally posted by mark_ View Post
                Come on... a 100 ms difference, you cannot even measure that with your watch. But you can measure the init time after the kernel is loaded...
                This was on their test system, I'm sure. I bet on an ARM-m3 it takes quite a lot longer. So we're talking about 100% faster decompression times (150 ms vs. 300 ms).

                • newwen
                  Senior Member
                  • Dec 2012
                  • 287

                  #9
                  I guess Google is also very interested in this getting into the mainline kernel.

                  • ryao
                    Gentoo Developer
                    • Jun 2012
                    • 1196

                    #10
                    Originally posted by mercutio View Post
                    What happened to Btrfs's LZ4 compression support?

                    It seems LZ4 makes a lot of sense for always compressing files when not using a SandForce SSD.
                    Perhaps you are thinking of ZFS. ZFSOnLinux HEAD has LZ4 support. I believe that Btrfs had planned to adopt Snappy. The two compression algorithms are roughly equivalent in benchmarks.
