Linux 6.8 Looks To Upgrade Its Zstd Code For Better Compression Performance
- Likes 21
Originally posted by terrelln View Post
Author of the patch set here. To be clear there is a ~1% regression in decompression speed. We regularly see 1-2% differences in decompression speed as we make releases, both slight improvements and slight regressions. These are mostly noise, due to slight differences in compilation, and are impossible to control. If these small differences add up to large regressions over time, we look into them, otherwise they tend to cancel out.
Thanks.
- Likes 5
Originally posted by mat1210 View Post
The article mentions a minor increase in decompression time, but the patch linked below it mentions a fix for it?
Comparison of the two updates:
Note 1: I didn't copy the compression results over to this chart, which was posted in (what I call) patch 1.

Component   Level   D. time delta (patch 1)   "Read time delta" (patch 2)
Btrfs       1       +1.8%                     -7.0%
Btrfs       3       +3.2%                     -3.9%
Btrfs       5       +0.0%                     -4.7%
Btrfs       7       +0.4%                     -5.5%
Btrfs       9       +1.5%                     -2.4%
Squashfs    1       +1.0%                     -9.1%
Note 2: Different wording is used for the benchmarks in patch 1 vs. patch 2; it looks like the same benchmark to me, but I might be mistaken.
Note 3: I assume "patch 1" was benchmarked against the previously current Zstd version in the Linux kernel and "patch 2" was benchmarked against "patch 1".
Source: https://lore.kernel.org/lkml/2023112...gmail.com/t/#u
Originally posted by terrelln View Post
Author of the patch set here. To be clear there is a ~1% regression in decompression speed. We regularly see 1-2% differences in decompression speed as we make releases, both slight improvements and slight regressions. These are mostly noise, due to slight differences in compilation, and are impossible to control. If these small differences add up to large regressions over time, we look into them, otherwise they tend to cancel out.
Also, I see you are testing with an i9-9900K, do you know whether the same results are seen on AMD CPUs or even different architectures?
Last edited by Eudyptula; 21 November 2023, 09:08 PM.
Originally posted by HD7950 View Post
Hi, can you explain why it is not yet possible to choose negative compression levels in the kernel? Things like file compression or zram need faster zstd levels.
Thanks.
- Likes 7
Yes, it should be possible, and I don't think there should be any issues getting Zstd working, as long as you have sufficient memory available. If you run into problems with Zstd on these platforms, please open an issue so we can fix it.
- Likes 6
Originally posted by Eudyptula View Post
This is my interpretation as well; a nice improvement in decompression time.
Initial patch:
Note 1: I didn't copy the compression results over to this chart, which was posted in (what I call) patch 1.

Component   Level   D. time delta (patch 1)   "Read time delta" (patch 2)
Btrfs       1       +1.8%                     -7.0%
Btrfs       3       +3.2%                     -3.9%
Btrfs       5       +0.0%                     -4.7%
Btrfs       7       +0.4%                     -5.5%
Btrfs       9       +1.5%                     -2.4%
Squashfs    1       +1.0%                     -9.1%
Note 2: Different wording is used for the benchmarks in patch 1 vs. patch 2; it looks like the same benchmark to me, but I might be mistaken.
Source: https://lore.kernel.org/lkml/2023112...gmail.com/t/#u
Or am I interpreting the patch notes wrong?
Also, I see you are testing with an i9-9900K, do you know whether the same results are seen on AMD CPUs or even different architectures?
I didn't benchmark the kernel (de)compression on AMD or AArch64. We do benchmark our releases on AArch64 and AMD, and we saw similar results for the v1.5.5 release, but I don't know if I have an exact number. If you do notice speed regressions on any architecture, please open an issue so that we can investigate. We pay the most attention to x86-64 speed, so it is definitely possible that regressions on other architectures slip through the cracks.
- Likes 6
Originally posted by HD7950 View Post
I totally agree, and let me add that negative compression levels shouldn't exist. To avoid confusion, Zstd:1 should be the fastest compression mode and Zstd:64 (for example) the slowest.
Actually, zstd:1 is not the best choice for an NVMe drive using Btrfs compression, as you can see here.
However, this synthetic benchmark doesn't account for some issues that emerge during real-life usage of compressed Btrfs filesystems.
In particular, compressed data extents are limited to 128K (if I'm not wrong), which creates huge amounts of metadata and a high degree of fragmentation.
This might be a minor issue on modern solid-state drives, but it kills performance on HDDs.
- Likes 1
Originally posted by HD7950 View Post
Hi, can you explain why it is not yet possible to choose negative compression levels in the kernel? Things like file compression or zram need faster zstd levels.
- Likes 1
Originally posted by terrelln View Post
Author of the patch set here. To be clear there is a ~1% regression in decompression speed. We regularly see 1-2% differences in decompression speed as we make releases, both slight improvements and slight regressions. These are mostly noise, due to slight differences in compilation, and are impossible to control. If these small differences add up to large regressions over time, we look into them, otherwise they tend to cancel out.
Are you sure it's noise, though? I mean, did you touch the decompression code? If you did, did you try to bisect exactly where the regression happens? If it's completely unrelated, then yeah, it's okay to treat it as noise, since that means it's just unlucky alignment or a compiler issue.
Sometimes it's not very obvious.