Zstd-Compressing The Linux Kernel Has Been Brought Up Again
Originally posted by andreano: Need motivation? Steve Jobs (an expert motivator) used the analogy of saving lives.
Originally posted by phuclv: It is really relevant in a lot of cases. Modern laptops can boot up in 5-10 s, which means a 1 s reduction is a massive improvement.
Originally posted by Zan Lynx: Windows systems can boot from hibernate in about 3 seconds. Why should Linux take longer?
(If I don't intend to use my laptop for long enough that the battery would drain in S3 sleep, I don't mind doing an actual shutdown.)
Originally posted by ms178: I don't get why there was backlash against adding Zstd; the developers showed them the numbers, and it makes sense to use it in several widely used scenarios. Other obsolete algorithms can be dropped after this lands. So why does it take so long to get something this beneficial into the kernel?
So be careful here... However, with this one I agree with you: Zstd or LZ4 sounds more optimal.
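For context, kernel-level zstd support of the kind being argued for here takes the shape of build-time options; a minimal sketch, assuming a kernel tree that carries the zstd patches (option names as used by the proposed support):

```
# .config fragment — select zstd as the kernel compression mode
CONFIG_KERNEL_ZSTD=y
# And for the initramfs, if desired
CONFIG_RD_ZSTD=y
```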
Originally posted by dwagner: If shaving 1 s off of boot time is relevant for you, then you clearly have a severe stability issue with your operating system.
BTW: (De-)compression algorithms need very thorough testing against security vulnerabilities. If you put such an algorithm into the kernel, make very, very sure that you have tested it with all kinds of random and maliciously crafted input.
As for Zstd from a security perspective: you do realize zstd is already in the Linux kernel in several spots as it is. Btrfs uses it, as does squashfs. The algorithms (as opposed to the implementation; don't conflate the two) have been around for decades already; the zstd group just put them together in a unique way to produce better performance than previous incarnations. Other examples of zstd being used in critical places are FreeBSD's version of OpenZFS, which also offers zstd compression similar to btrfs, and Ubuntu's deb packages, which have used zstd by default since 18.10.
Last edited by stormcrow; 10 June 2019, 04:12 PM. Reason: Edited for a bit more clarity for embedded device issues.
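As a concrete illustration of the btrfs usage mentioned above, a sketch of an fstab entry enabling zstd compression on a btrfs volume (the device and mount point are placeholders):

```
# /etc/fstab — mount a btrfs volume with zstd compression (example paths)
/dev/sdb1  /data  btrfs  compress=zstd  0  2
```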
Originally posted by eva2000
Yeah, my testing is more focused on compression speed and compression ratio as they relate to backup speed, and lz4 has neither of those strengths, i.e. for the tar + zstd backup speeds I tested.
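A minimal sketch of a tar + zstd backup of the kind being benchmarked, assuming GNU tar and the zstd binary are installed; the paths and compression level are illustrative:

```shell
# Stage some placeholder data to back up
mkdir -p /tmp/backup-demo/data
echo "example payload" > /tmp/backup-demo/data/file.txt

# Pipe an uncompressed tar stream through zstd:
# -T0 uses all cores, -3 is the default compression level
tar -C /tmp/backup-demo -cf - data | zstd -T0 -3 > /tmp/backup-demo/backup.tar.zst

# List the archive contents to verify the round trip
zstd -dc /tmp/backup-demo/backup.tar.zst | tar -tf -
```

Raising the level (e.g. -19) trades compression speed for ratio, which is exactly the axis this thread is debating.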
Originally posted by LoveRPi: Ubuntu got it right with lz4. Adding support for Zstd isn't a problem. Going forward, IO will continue to scale faster than CPU, which is already the case.
It's the default in, for example, Oracle Databases; it doesn't compromise speed too much, and depending on the database content you can gain a lot with it.
A comparison of several algos:
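The kind of comparison linked above can be sketched with whatever codecs are on hand; here gzip and xz stand in (zstd and lz4 would slot into the same loop if installed), run on a synthetic, highly compressible file:

```shell
# Build a compressible sample file (repeated line of text)
seq 20000 | sed 's/.*/the quick brown fox jumps over the lazy dog/' > /tmp/algo-demo.txt
orig=$(wc -c < /tmp/algo-demo.txt)

# Compress with each tool and report size; smaller output = better ratio
for tool in gzip xz; do
    "$tool" -c /tmp/algo-demo.txt > "/tmp/algo-demo.$tool"
    echo "$tool: $orig -> $(wc -c < "/tmp/algo-demo.$tool") bytes"
done
```

Timing the loop (e.g. with `time`) adds the speed axis that the ratio-vs-speed graphs in this thread plot.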
Originally posted by Raka555
Wow, very nicely done!
It is exactly the graph I wanted to see. Glad you didn't use log scale for x-axis.
Pity lz4 wasn't in your test.
I am very surprised at how well pigz did at high speed compression.
Would have been awesome if you had the same graph for decompression speed vs compression ratio. (Edit: It's hard to get a good picture from the data alone.)