Google Engineer Experimenting With ZRAM Handling For Multiple Compression Streams


  • Google Engineer Experimenting With ZRAM Handling For Multiple Compression Streams

    Phoronix: Google Engineer Experimenting With ZRAM Handling For Multiple Compression Streams

    There are patches that provide support for ZRAM to be able to handle multiple compression streams on a per-CPU basis. This kernel module for creating compressed block devices could be made more versatile with this proposed patch series...

    https://www.phoronix.com/news/ZRAM-Multiple-Compression
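
    For readers who want to experiment, the existing zram sysfs interface that these patches build on looks roughly like this (device name, algorithm, and sizes are illustrative; requires root):

    ```shell
    # Load the module with a single device (assumes zram is built as a module)
    modprobe zram num_devices=1

    # List the compression algorithms the kernel was built with;
    # the active one is shown in brackets
    cat /sys/block/zram0/comp_algorithm

    # Pick an algorithm before sizing the device
    echo zstd > /sys/block/zram0/comp_algorithm

    # Legacy knob: this historically set the number of compression streams,
    # but kernels since ~4.7 use per-CPU streams internally
    cat /sys/block/zram0/max_comp_streams

    # Set the logical (uncompressed) size of the device
    echo 4G > /sys/block/zram0/disksize
    ```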

  • #2
    And yet, nobody is upgrading the Zstd code in the kernel to be in sync with the latest upstream version...



    • #3
      Originally posted by Danny3 View Post
      And yet, nobody is upgrading the Zstd code in the kernel to be in sync with the latest upstream version...
      https://github.com/facebook/zstd/issues/3275



      • #4
        Finally!

        Thanks for the link!

        Too bad that by the looks of it, it will miss the 6.1 merge window.



        • #5
          A bit OT but still on the subject of compression and RAM.
          Everybody talks about ZRAM and swapping over it. Which is brilliant, no questions there.
          But I'd love to have something like tmpfs do transparent compression instead of hogging RAM and then swapping out, after the fact.

          IIRC, the Linux block layer can do inline encryption nowadays (compression too?).
          Always wondered why similar tmpfs stuff never materialized.



          • #6
            Originally posted by milkylainen View Post
            A bit OT but still on the subject of compression and RAM.
            Everybody talks about ZRAM and swapping over it. Which is brilliant, no questions there.
            But I'd love to have something like tmpfs do transparent compression instead of hogging RAM and then swapping out, after the fact.

            IIRC, the Linux block layer can do inline encryption nowadays (compression too?).
            Always wondered why similar tmpfs stuff never materialized.
            You can always cheat and use a "real" file system.

            Disable the systemd /tmp services, create a ram disk, format it as BTRFS or ZFS, point its mount towards /tmp, and then wrap everything from creating the ram disk onward into a systemd service to mount/unmount it.

            You can go one step farther by doing that same thing as well as creating a pool in a loopback image on an SSD so you can have a mirrored or striped /tmp between a ramdisk and SSD. In addition to the mounting services, you'll also have to create attaching and detaching services for startup and shutdown.

            All of that should be fairly simple to automate with ZFS.
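
            A minimal sketch of that recipe (the brd ram-disk module, sizes, and paths are assumptions, and this would run as root, e.g. from the ExecStart= of a oneshot systemd service):

            ```shell
            # Create a 4 GiB ram-backed block device (brd takes its size in KiB)
            modprobe brd rd_nr=1 rd_size=4194304

            # Format it with a compressing file system and mount it over /tmp
            mkfs.btrfs -f /dev/ram0
            mount -o compress=zstd /dev/ram0 /tmp

            # /tmp needs the sticky world-writable permissions back
            chmod 1777 /tmp
            ```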



            • #7
              Originally posted by milkylainen View Post
              A bit OT but still on the subject of compression and RAM.
              Everybody talks about ZRAM and swapping over it. Which is brilliant, no questions there.
              But I'd love to have something like tmpfs do transparent compression instead of hogging RAM and then swapping out, after the fact.

              IIRC, the Linux block layer can do inline encryption nowadays (compression too?).
              Always wondered why similar tmpfs stuff never materialized.
              That's exactly what zram does. You just have to create a regular file system instead of a swap device (so, for example, use mkfs.ext2 instead of mkswap) on /dev/zramX and then mount it like a regular block device. The only caveat is that you need a separate "backing device" to "swap out" unused or incompressible files, because zram won't use regular swap for that the way tmpfs does.

              Other than that it's pretty much a compressed tmpfs which grows and shrinks depending on what is stored on it. I think the shrinking depends on discard/TRIM, like on SSDs, to actually work, but this seems to be enabled by default at least on ext2/3/4 when available. Not 100% sure it actually works that way, but in any case it does work.

              The zramctl tool can help you with setting the zram device up. Also zram-init can automate the whole dance including file system creation and mounting at boot time: https://github.com/vaeth/zram-init
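
              Assuming a kernel with zram and util-linux's zramctl, the setup described above might look like this (mount point, size, and algorithm are illustrative; requires root):

              ```shell
              # Allocate a free zram device with a 4 GiB logical size and zstd;
              # zramctl prints the device node it picked, e.g. /dev/zram0
              DEV=$(zramctl --find --size 4G --algorithm zstd)

              # Put a plain file system on it (volatile data needs no journal)
              mkfs.ext2 "$DEV"

              # Mount with discard so freed blocks actually shrink the device
              mkdir -p /mnt/ztmp
              mount -o discard "$DEV" /mnt/ztmp
              ```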



              • #8
                Originally posted by skeevy420 View Post

                You can always cheat and use a "real" file system.

                Disable the systemd /tmp services, create a ram disk, format it as BTRFS or ZFS, point its mount towards /tmp, and then wrap everything from creating the ram disk onward into a systemd service to mount/unmount it.

                You can go one step farther by doing that same thing as well as creating a pool in a loopback image on an SSD so you can have a mirrored or striped /tmp between a ramdisk and SSD. In addition to the mounting services, you'll also have to create attaching and detaching services for startup and shutdown.

                All of that should be fairly simple to automate with ZFS.
                It isn't the same thing, though? tmpfs lives in the page cache, while ram disks are block devices, so their allocated usage is static?
                I might be wrong here, so no bashing please.



                • #9
                  Originally posted by binarybanana View Post

                  That's exactly what zram does. You just have to create a regular file system instead of a swap device (so, for example, use mkfs.ext2 instead of mkswap) on /dev/zramX and then mount it like a regular block device. The only caveat is that you need a separate "backing device" to "swap out" unused or incompressible files, because zram won't use regular swap for that the way tmpfs does.

                  Other than that it's pretty much a compressed tmpfs which grows and shrinks depending on what is stored on it. I think the shrinking depends on discard/TRIM, like on SSDs, to actually work, but this seems to be enabled by default at least on ext2/3/4 when available. Not 100% sure it actually works that way, but in any case it does work.

                  The zramctl tool can help you with setting the zram device up. Also zram-init can automate the whole dance including file system creation and mounting at boot time: https://github.com/vaeth/zram-init
                  I'm repeating myself here (see above, sorry about that), but block devices are not the same as something that lives in the page cache, are they?
                  tmpfs doesn't reserve more RAM than is actually used, while a block device in RAM will? Or?
                  Also, tmpfs is used for the initramfs.



                  • #10
                    Originally posted by milkylainen View Post

                    It isn't the same thing, though? tmpfs lives in the page cache, while ram disks are block devices, so their allocated usage is static?
                    I might be wrong here, so no bashing please.
                    I'm not positive, either.

                    I've considered creating a very unsafely set-up ZFS stripe between a ramdisk and images as a way to expand my tmpfs size now that I have a PC with only 32GB of RAM. A 16GB tmpfs isn't enough for some things. ZFS or BTRFS having Zstd was only a side effect. If I recall BTRFS correctly, both file systems have all the necessary tooling to do that without having to pull in LVM.
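
                    A sketch of that (deliberately unsafe) ramdisk-plus-image stripe with ZFS; pool name, sizes, and paths are made up, and losing the pool on power loss is the expected trade-off:

                    ```shell
                    # Ram-backed block device for the fast half (16 GiB, brd size in KiB)
                    modprobe brd rd_nr=1 rd_size=16777216

                    # Sparse image file on the SSD for the other half
                    truncate -s 16G /var/lib/tmppool.img

                    # Stripe the two vdevs into one pool mounted on /tmp
                    zpool create -f -m /tmp tmppool /dev/ram0 /var/lib/tmppool.img
                    zfs set compression=zstd tmppool
                    ```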
