ZRAM Will See Greater Performance On Linux 5.1 - It Changed Its Default Compressor


  • oibaf
    replied
    Originally posted by Mario Junior View Post

    I couldn't understand this test. What did you want to show?
    I reported the times of a read/write test from a custom procedure I use, run on a zram partition compressed with three different algorithms.



  • Mario Junior
    replied
    Originally posted by oibaf View Post
    Out of curiosity I did a test comparing lzo-rle vs. lz4 vs. zstd when setting up an ext4 partition on a zram device on Ubuntu 20.04. [...]
    I couldn't understand this test. What did you want to show?



  • oibaf
    replied
    Out of curiosity I did a test comparing lzo-rle vs. lz4 vs. zstd when setting up an ext4 partition on a zram device on Ubuntu 20.04. Here is how I created it:

    Code:
    ZRAM_SIZE=2048M
    if [ ! -e /tmp/zram ]; then
      # Streams and algorithm must be configured before disksize is set;
      # writing disksize initializes the device.
      modprobe zram num_devices=1 && \
      echo "$(nproc)" > /sys/block/zram0/max_comp_streams && \
      echo zstd > /sys/block/zram0/comp_algorithm && \
      echo "Size: $ZRAM_SIZE" && \
      echo "$ZRAM_SIZE" > /sys/block/zram0/disksize && \
      mkfs.ext4 /dev/zram0 && \
      mkdir -p /tmp/zram && \
      mount /dev/zram0 /tmp/zram && \
      chmod 1777 /tmp/zram  # world-writable with sticky bit, like /tmp
    fi
    I rebooted after every algorithm change and repeated a test procedure I use on that partition three times. I ran the tests in a 2-vCPU VM on a host with other VMs running, so the numbers aren't very reliable, but that's my use case. The results are not very conclusive:
    • with lzo-rle (default): 149s - 141s - 144s
    • with lz4: 160s - 147s - 132s
    • with zstd: 148s - 133s - 139s



  • ipsirc
    replied
    Originally posted by Mario Junior View Post
    So, LZ4 or zstd?
    lz4 on single core, pzstd on multicore
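    For zram itself the compressor is a per-device setting; a quick sketch of checking and switching it, assuming the zram module is loaded and zram0 has not been initialized yet:

    Code:
    # List the compressors this kernel offers; the bracketed one is active,
    # e.g.: lzo lzo-rle lz4 lz4hc 842 [zstd]
    cat /sys/block/zram0/comp_algorithm
    # Switch (this must happen before disksize is written)
    echo lz4 > /sys/block/zram0/comp_algorithm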



  • Mario Junior
    replied
    So, LZ4 or zstd?



  • MasterCATZ
    replied
    thanks for the tip :P



  • lperkins2
    replied
    Originally posted by MasterCATZ View Post
    [...] wish there was a way to force all of web-browsing content to be compressed in memory and leave everything else alone (Chrome eats 30 GB of RAM very quickly)
    There is. Put Chrome in its own cgroup, then limit the cgroup's total memory: memory.soft_limit_in_bytes is where it starts swapping, and memory.limit_in_bytes is the effective memory limit; it won't go over that (the OOM killer is tripped if needed).
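    A minimal sketch of that setup, assuming the cgroup v1 memory controller is mounted and cgexec from the cgroup-tools package is available (the group name and the limits are illustrative):

    Code:
    # Create a dedicated memory cgroup for the browser
    sudo mkdir /sys/fs/cgroup/memory/chrome
    # Soft limit: under memory pressure, pages above 4G get reclaimed
    # (swapped out) first
    echo 4G | sudo tee /sys/fs/cgroup/memory/chrome/memory.soft_limit_in_bytes
    # Hard limit: the group cannot exceed 6G; the OOM killer fires if it tries
    echo 6G | sudo tee /sys/fs/cgroup/memory/chrome/memory.limit_in_bytes
    # Launch the browser inside the cgroup
    cgexec -g memory:chrome google-chrome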



  • MasterCATZ
    replied
    maybe this is why my performance has tanked: once I start transferring files at over 1 GB/sec the PC grinds to a halt, rarely sees 2 GB/sec, and it used to move 4x the data

    2x Avago 9302-16e 12Gb/s PCIe 3.0 with multipathing; with any more than ~20 drives being read/written it becomes very unresponsive now

    wish there was a way to force all of web-browsing content to be compressed in memory and leave everything else alone (Chrome eats 30 GB of RAM very quickly)



  • StuartIanNaylor
    replied
    It would be really great if Phoronix could do a compression and zram test.
    Probably the best way would be to create and mount a zram-backed drive and run various disk benchmarks as well as swap/compression tests, something like the sketch below.
    Compare at least lzo, lz4, zstd, and zlib?
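    A rough sketch of what such a run could look like (the algorithm list, mount point, and test data path are placeholders; all-zero or purely random input would skew zram results, so the test data should be representative):

    Code:
    TESTDATA=/var/tmp/sample-data   # hypothetical representative data set
    mkdir -p /mnt/zram-test
    for algo in lzo lzo-rle lz4 zstd; do
        echo 1 > /sys/block/zram0/reset                 # drop previous size/algorithm
        echo "$algo" > /sys/block/zram0/comp_algorithm  # must precede disksize
        echo 2G > /sys/block/zram0/disksize
        mkfs.ext4 -q /dev/zram0
        mount /dev/zram0 /mnt/zram-test
        echo "== $algo =="
        time cp -r "$TESTDATA" /mnt/zram-test/
        umount /mnt/zram-test
    done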



  • daverodgman
    replied
    Originally posted by andreano View Post

    I'm amazed to see that zstd in its fastest setting almost keeps up with these special-purpose fast compressors, and actually manages to beat regular lzo in decompression! We don't have the numbers for lzo-rle, and it's hard to extrapolate 30% from regular lzo, since we don't know how much comes from compression and decompression, but assuming it's a pure decompression speedup (since that's what you get by making the algorithm more complex), that would be upwards of 60%, and a close race between lzo-rle and zstd on the decompression side. However, nothing that would dethrone lz4 as the bilateral speed king. Of course, the result will depend a bit on your test data.
    Actually the perf benefits were split between compression and decompression (it's much faster to detect a run of zeros than to run through the LZO compression loop). I don't have the data to hand, but round-trip perf ends up being a win over lz4 if I remember rightly, as the benefits of improving the slowest part (compression) have more impact than improving the fastest part (decompression), because we spend more time in the slowest part. Zram does about 2.25x more compression than decompression (some pages are never decompressed again), so this also skews the importance toward compression.
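    To put rough numbers on that (illustrative figures, not measurements): if compressing a page costs 100 ns and decompressing 50 ns, a 2.25:1 compress:decompress mix averages (2.25×100 + 50)/3.25 ≈ 85 ns per operation. A 30% speedup on compression alone brings that to (2.25×70 + 50)/3.25 ≈ 64 ns, roughly 25% better overall, while the same 30% on decompression only reaches (2.25×100 + 35)/3.25 ≈ 80 ns, about 5% better.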

    Dave

