
Linux 6.8 Looks To Upgrade Its Zstd Code For Better Compression Performance


  • #21
    Originally posted by Anux View Post
    While that is probably the most intuitive way, it also limits your ability to add faster compression methods without losing backwards compatibility.

    My idea would be to use float. Zstd 0 would equal no compression at all and 1 would be the standard (the best compromise between size and speed with the most basic algorithm). That way you can add 0.1, 0.13, etc. to extend it on the faster side and 5, 64, 22222 for slower modes. And that still gives you the possibility to sneak in newer, finer-grained options like 5.1234567, 5.1234568, ... while you can be sure that Zstd 1 or 5 will always give the same results.

    Otherwise you regularly have to recreate the number system.
    True, but you need feedback that tells you what the real compression level is. Not all algorithms scale in float: for example, 0.0001-0.1111 may equal level 1 and 0.1112-0.2222 may equal level 2, so if you select 0.0555 you should know whether it has any effect at all.
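
    One way to address that feedback problem would be for the API to round the requested float level to the nearest effective discrete level and report back which one it actually used. A minimal hypothetical sketch in C (the level_map table, its thresholds, and effective_level() are all invented for illustration; nothing like this exists in zstd):

        #include <stdio.h>

        /* Hypothetical thresholds at which each discrete internal level
         * becomes effective; the values mirror the example above. */
        static const struct { double threshold; int level; } level_map[] = {
            { 0.0000, 0 },  /* no compression */
            { 0.0001, 1 },
            { 0.1112, 2 },
            { 0.2223, 3 },
        };

        /* Map a requested float level to the discrete level actually
         * applied, so the caller learns whether a finer-grained request
         * had any effect at all. */
        static int effective_level(double requested)
        {
            size_t i;
            int eff = level_map[0].level;
            for (i = 0; i < sizeof level_map / sizeof level_map[0]; i++)
                if (requested >= level_map[i].threshold)
                    eff = level_map[i].level;
            return eff;
        }

        int main(void)
        {
            double requested = 0.0555;
            printf("requested %.4f -> effective level %d\n",
                   requested, effective_level(requested));
            return 0;
        }

    With that kind of round trip, selecting 0.0555 would visibly collapse to level 1 instead of doing so silently.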

    http://www.dirtcellar.net



    • #22
      Originally posted by terrelln View Post

      Our team was just discussing this, as this thread shows that there is demand. The version of zstd in the kernel does support negative compression levels. Now it is up to the users of zstd to support selecting negative compression levels. I'd be happy to review patches to do that. We plan on chatting with the btrfs folks, and working on exposing negative compression levels there.
      Slightly related to that, I was having a discussion in another thread about Zstd and Arch's mkinitcpio, and that made me wonder:

      Is there a reason why there's an --ultra flag; why can't the higher levels be enabled by default and --ultra deprecated?

      I get why there's a --fast flag, the negative numbers of Zstd-mas past, since --fast changes the meaning of all the levels, but --ultra only enables the higher levels, 20-22, and doesn't change how Zstd behaves on 19 and lower, so it seems like something that doesn't have to be there to accomplish what it does.
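
      For what it's worth, the library itself already treats negative levels as ordinary levels; the gating is purely a CLI affair. A small userspace sketch that queries the supported range and compresses at a few levels, negative ones included (assumes libzstd is installed; build with -lzstd; illustrative only, not the kernel API):

          #include <stdio.h>
          #include <stdlib.h>
          #include <string.h>
          #include <zstd.h>

          int main(void)
          {
              const char *src = "The quick brown fox jumps over the lazy dog. "
                                "The quick brown fox jumps over the lazy dog.";
              size_t srcSize = strlen(src);
              size_t bound = ZSTD_compressBound(srcSize);
              void *dst = malloc(bound);
              int levels[] = { -5, -1, 1, 3, 19 };  /* negative = "fast" modes */
              size_t i;

              if (!dst)
                  return 1;

              /* The library reports its own level range; no special flag needed. */
              printf("supported levels: %d .. %d\n",
                     ZSTD_minCLevel(), ZSTD_maxCLevel());

              for (i = 0; i < sizeof levels / sizeof levels[0]; i++) {
                  size_t csize = ZSTD_compress(dst, bound, src, srcSize, levels[i]);
                  if (ZSTD_isError(csize))
                      printf("level %3d: %s\n", levels[i], ZSTD_getErrorName(csize));
                  else
                      printf("level %3d: %zu -> %zu bytes\n",
                             levels[i], srcSize, csize);
              }
              free(dst);
              return 0;
          }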



      • #23
        Originally posted by skeevy420 View Post
        Is there a reason why there's an --ultra flag; why can't the higher levels be enabled by default and --ultra deprecated?
        It's much more RAM intensive, and it could lead to a non-starting system if booted with low RAM.
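
        The growth is easy to see from the library's own estimates. A sketch (ZSTD_estimateCCtxSize() lives behind ZSTD_STATIC_LINKING_ONLY, so availability and the exact numbers depend on your libzstd version; treat this as illustrative):

            /* Build with -lzstd; the estimation helper is part of zstd's
             * static-linking-only API surface. */
            #define ZSTD_STATIC_LINKING_ONLY
            #include <stdio.h>
            #include <zstd.h>

            int main(void)
            {
                int level;

                /* Working memory for a compression context grows sharply
                 * toward the --ultra levels (20-22). */
                for (level = 1; level <= ZSTD_maxCLevel(); level++)
                    printf("level %2d: ~%.1f MiB\n", level,
                           (double)ZSTD_estimateCCtxSize(level) / (1024.0 * 1024.0));
                return 0;
            }

        Decompression memory scales with the window size too, which is why output produced at the --ultra levels can also bite on the reading side.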



        • #24
          Originally posted by terrelln View Post
          The "initial patch" results are actually the overall results for the entire series. The first patch's table shows a regression in decompression speed, which the second patch mitigates.
          Late reply, but I just wanted to thank you for clearing this up for those who were confused about the commit post. Since I saw at least one other person who was uncertain about it, I hoped that someone, hopefully you, would correct me if I was indeed mistaken. Which I was.



          • #25
            Originally posted by Anux View Post
            It's much more RAM intensive, and it could lead to a non-starting system if booted with low RAM.
            There are basically only two types of systems with so little RAM that they might run into that problem:
            • embedded systems
            • ancient systems from the '90s and early 2000s
            If you're not dealing with either of those, gating the highest compression levels behind --ultra seems pretty silly.

