Facebook Looking To Add Zstd Support To The Linux Kernel, Btrfs


  • #11
    Originally posted by boxie View Post
    you guys and your standard STD jokes, why don't you think outside the box and get infected with different strains of thought :P
    And put effort into a bad joke? No way man



    • #12
      Originally posted by SaucyJack View Post
      Great, Linux is getting the zuckerberg std.
      Zstd. Oh, now I get the joke!



      • #13
        It has really great compression ratios, especially for the speed. Still, I prefer lz4 for decompression speed; it's roughly 3x faster than zstd.

        That said, zstd is better in virtually every way than zlib, brotli, lzo, snappy, and lzf. I'd like to see it in the kernel.
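The trade-off is easy to measure yourself. Here's a minimal Python sketch (my own, not zstd code) that times a compress/decompress round trip and reports the ratio. It uses only the stdlib's zlib and lzma; zstd and lz4 need third-party bindings (e.g. the `zstandard` package, assumed installed if you uncomment those lines):

```python
import time
import zlib
import lzma

def bench(name, compress, decompress, data):
    """Time one compress/decompress round trip and report the ratio."""
    t0 = time.perf_counter()
    blob = compress(data)
    t1 = time.perf_counter()
    out = decompress(blob)
    t2 = time.perf_counter()
    assert out == data  # round trip must be lossless
    ratio = len(data) / len(blob)
    print(f"{name}: ratio {ratio:.1f}, "
          f"compress {t1 - t0:.4f}s, decompress {t2 - t1:.4f}s")
    return ratio

# Compressible sample: repetitive text, like logs or filesystem metadata.
data = b"the quick brown fox jumps over the lazy dog\n" * 20_000

bench("zlib", lambda d: zlib.compress(d, 6), zlib.decompress, data)
bench("lzma", lzma.compress, lzma.decompress, data)
# With the third-party 'zstandard' package installed, zstd slots in the same way:
# import zstandard
# bench("zstd", zstandard.ZstdCompressor(level=3).compress,
#       zstandard.ZstdDecompressor().decompress, data)
```

Numbers will vary by machine and data set, of course; run it on data that looks like yours before picking a codec.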



        • #14
          One big problem with zstd, that I just can't see the Linux kernel team being happy with, is that over-reaching PATENT agreement. It's not really accurate to call zstd BSD licensed. It's BSD + a whole big "You can't claim patent infringement against us, ever"

          Zstandard - Fast real-time compression algorithm. Contribute to facebook/zstd development by creating an account on GitHub.


          The license granted hereunder will terminate, automatically and without notice, if you (or any of your subsidiaries, corporate affiliates or agents) initiate directly or indirectly, or take a direct financial interest in, any Patent Assertion: (i) against Facebook or any of its subsidiaries or corporate affiliates, (ii) against any party if such Patent Assertion arises in whole or in part from any software, technology, product or service of Facebook or any of its subsidiaries or corporate affiliates, or (iii) against any party relating to the Software
          I was really interested in leveraging zstd at work, we've got some places it would be very useful and provide some strong advantages. The patent clause makes it pretty much a no-go.



          • #15
            Originally posted by Garp View Post
            One big problem with zstd, that I just can't see the Linux kernel team being happy with, is that over-reaching PATENT agreement. It's not really accurate to call zstd BSD licensed. It's BSD + a whole big "You can't claim patent infringement against us, ever"

            I was really interested in leveraging zstd at work, we've got some places it would be very useful and provide some strong advantages. The patent clause makes it pretty much a no-go.

            Yeah, I think they'll have to get an exception from Facebook legal to have any hope of integrating it.



            • #16
              Originally posted by microcode View Post


              Yeah, I think they'll have to get an exception from Facebook legal to have any hope of integrating it.
              If they do, everyone else probably gets one too, because of the GPL. This seems like exactly the same reason ZFS isn't in the kernel.



              • #17
                Originally posted by Garp View Post
                One big problem with zstd, that I just can't see the Linux kernel team being happy with, is that over-reaching PATENT agreement. It's not really accurate to call zstd BSD licensed. It's BSD + a whole big "You can't claim patent infringement against us, ever"

                I was really interested in leveraging zstd at work, we've got some places it would be very useful and provide some strong advantages. The patent clause makes it pretty much a no-go.
                Of course, when someone comes up with a great new algorithm, fucking patents ruin everything. So will it be possible to use this in 2041?



                • #18
                  I've been off Facebook for years, for my own reasons. But these days they are into heavy censorship and thought control. I can't think of a more manipulative company; Apple looks like a saint in comparison.

                  Maybe they should read Google's simple three words of wisdom: "Don't be evil".

                  Remember how they walked back their absolute commitment to HTML5?

                  I need to investigate this. Faster than zlib at both compression AND decompression? And approaching lzma? This is math; there is no free lunch. Unless they figured out something fundamental, I can't believe it. To my knowledge, most gains have come either from parallelizing compression and decompression (e.g., lzma) or from optimizing for particular data sets whose statistics lend themselves to Bayesian inference.



                  • #19
                    Originally posted by caligula View Post

                    Of course when someone comes up with a great new algorithms, fucking patents ruin everything. So it will be possible to use this in 2041?
                    Skimming it, the licence allows unlimited internal use and commercial use, as long as it's not used against them. In lawyer-retainer terms, it's clearly a "don't sue us over patents". Of course, if you pay your lawyers by the hour, it's not clear and they need to do some research.

                    It's a very nice licence, actually. Relatively speaking, that is. RMS would not approve.



                    • #20
                      Originally posted by AndyChow View Post
                      I've been off Facebook for years, for my own reasons. But these days they are into heavy censorship and thought control. I can't think of a more manipulative company; Apple looks like a saint in comparison.

                      Maybe they should read Google's simple three words of wisdom: "Don't be evil".

                      Remember how they walked back their absolute commitment to HTML5?

                      I need to investigate this. Faster than zlib at both compression AND decompression? And approaching lzma? This is math; there is no free lunch. Unless they figured out something fundamental, I can't believe it. To my knowledge, most gains have come either from parallelizing compression and decompression (e.g., lzma) or from optimizing for particular data sets whose statistics lend themselves to Bayesian inference.
                      They are using ANS (asymmetric numeral system) entropy coding, which is better than Huffman and approaches arithmetic coding's compression ratios while being faster.

                      Also, the implementation is written with modern CPUs in mind and exploits instruction-level parallelism, etc.
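To see why Huffman is the limiting factor, here's a small Python sketch (my own illustration, not zstd code) comparing the Shannon entropy of a skewed symbol distribution against the average length of an optimal Huffman code. Huffman must spend a whole number of bits per symbol, so it can never go below 1 bit on its most frequent symbol, while arithmetic coding and ANS get arbitrarily close to the entropy:

```python
import heapq
import math

def entropy_bits(probs):
    """Shannon entropy in bits/symbol: the floor any entropy coder can approach."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def huffman_avg_bits(probs):
    """Average code length (bits/symbol) of an optimal Huffman code.

    Uses the identity: average length = sum of the weights of the internal
    nodes created while repeatedly merging the two lightest subtrees.
    """
    heap = list(probs)
    heapq.heapify(heap)
    total = 0.0
    while len(heap) > 1:
        a = heapq.heappop(heap)
        b = heapq.heappop(heap)
        total += a + b  # every merge pushes its leaves one bit deeper
        heapq.heappush(heap, a + b)
    return total

# Heavily skewed distribution, the kind LZ-stage output often has.
probs = [0.9, 0.05, 0.05]
print(f"entropy: {entropy_bits(probs):.3f} bits/symbol")   # ~0.569
print(f"huffman: {huffman_avg_bits(probs):.3f} bits/symbol")  # 1.100
```

Huffman spends nearly twice the entropy here; an ANS coder's average length lands within a small fraction of a bit of the entropy on the same distribution, which is where the ratio gain over zlib's Huffman stage comes from.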

