Google Is Already Experimenting With WebP2 As Successor To WebP Image Format

  • #21
    Hmm, this is a bit odd given AVIF, but I wonder if it's aiming more at being a stopgap: a way of getting small images that decode faster on the CPU than AVIF, which would make sense I guess...

    • #22
      Originally posted by davidbepo View Post
      why even do this? WebP is already good, for the best u want AVIF
      For lossy, JPEG XL is looking like the champion, with AVIF only beating it in extremely bit-starved scenarios. JPEG XL also has the perfect upgrade path for existing JPEGs: it losslessly recompresses them into JPEG XL with roughly 20% better compression.
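
      A minimal sketch of what that round trip could look like, assuming the cjxl and djxl command-line tools from libjxl are installed and that lossless JPEG transcoding is cjxl's default behaviour for JPEG input; the file names are placeholders:

        import filecmp
        import subprocess

        # Losslessly transcode an existing JPEG into JPEG XL (keeps JPEG reconstruction data).
        subprocess.run(["cjxl", "photo.jpg", "photo.jxl"], check=True)

        # Reconstruct the original JPEG from the .jxl file.
        subprocess.run(["djxl", "photo.jxl", "roundtrip.jpg"], check=True)

        # If the transcode was lossless, the reconstructed JPEG is bit-identical to the original,
        # while photo.jxl is typically around 20% smaller than photo.jpg.
        assert filecmp.cmp("photo.jpg", "roundtrip.jpg", shallow=False)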

      As for WebP, its crowning achievement is lossless compression, where it outshines the de facto lossless standard, PNG, in every metric, and by the sound of it they are improving on that in WebP2.
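
      If you want to check the lossless size claim on your own images, here is a quick sketch, assuming Pillow is installed with WebP support; the paths are placeholders:

        import os
        from PIL import Image

        # Re-encode a PNG losslessly as WebP and compare file sizes.
        Image.open("input.png").save("output.webp", lossless=True)
        png_size = os.path.getsize("input.png")
        webp_size = os.path.getsize("output.webp")
        print(f"PNG: {png_size} bytes, lossless WebP: {webp_size} bytes "
              f"({100 * webp_size / png_size:.1f}% of the PNG size)")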

      • #23
        Originally posted by cl333r View Post
        Netflix doesn't seem fond of JPEG XL [1], in their huge blog they only mention it as a one short paragraph. Do you think both Netflix and Google engineers are stupid?
        Google has developers paid to work on JPEG XL, like Jyrki Alakuijala, who made WebP and Brotli for Google.

        • #24
          Originally posted by davidbepo View Post
          why even do this? WebP is already good, for the best u want AVIF
          AFAIK WebP doesn't support more than 8 bits per color channel.

          • #25
            I see a lot of comments here talking about using different formats, but I think they miss the obvious use case for WebP (and WebP2). If you want to extract an image from a WebM media file, you have a frame that is already encoded with a lossy encoder. Throwing that frame into another, unrelated lossy image format will cause additional artifacts in the image, which you want to avoid. So, by having an image format that works more in line with the WebM media, it might be possible to reuse the first encoding used for the media and hence avoid any additional encoding artifacts. WebP and WebP2 may never replace any of the big image formats, but maybe that was never the idea.
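
            For today's lossy WebP, which is essentially a single VP8 key frame in a RIFF container, that reuse can be as simple as rewrapping the bytes. A minimal sketch in Python; the function name and the keyframe variable are illustrative placeholders, and whether WebP2 keeps a similarly thin wrapper around its video-derived frames is speculation on my part:

              import struct

              def wrap_vp8_keyframe_as_webp(vp8_keyframe: bytes) -> bytes:
                  # Simple lossy WebP layout: RIFF header, "WEBP" form type, one "VP8 " chunk.
                  pad = b"\x00" if len(vp8_keyframe) % 2 else b""  # RIFF chunks are 2-byte aligned
                  chunk = b"VP8 " + struct.pack("<I", len(vp8_keyframe)) + vp8_keyframe + pad
                  riff_size = 4 + len(chunk)  # "WEBP" form type plus the VP8 chunk
                  return b"RIFF" + struct.pack("<I", riff_size) + b"WEBP" + chunk

              # Hypothetical usage: keyframe is a raw VP8 key-frame bitstream pulled out of a
              # WebM (Matroska) file by a demuxer; no pixels are decoded or re-encoded here.
              # with open("frame.webp", "wb") as f:
              #     f.write(wrap_vp8_keyframe_as_webp(keyframe))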

            • #26
              Originally posted by cl333r View Post
              Netflix doesn't seem fond of JPEG XL [1], in their huge blog they only mention it as a one short paragraph. Do you think both Netflix and Google engineers are stupid?
              Maybe.

              Maybe they are politically or otherwise biased. Naturally, as both Google and Netflix are part of AOM, I could imagine a legal bias towards a technology that is protected by AOM — or just a bias towards a technology they have already sunk some manpower into.

              Originally posted by cl333r View Post
              I suspect they're not, thus there must be something we don't know about or misappreciate.
              This is an appeal to authority, so your conclusion doesn't follow. There may indeed be something I don't know about or misappreciate, but you will need to show what it is.

              • #27
                Originally posted by wswartzendruber View Post

                JPEG XL's royalty status is murky at best. The reference implementation is available under an Apache-2.0 license, but going by the license text, only that implementation is covered as royalty-free.
                It is not murky at all. Besides the Apache-2.0 license, which is indeed not just a FOSS copyright license but also contains a perpetual and irrevocable patent grant, the contributors to JPEG XL (Google and Cloudinary) have repeatedly emphasized that they are aiming for a royalty-free codec, and both of them have filed Type-1 declarations under the ISO/IEC/ITU Common Patent Policy: "The Patent Holder is prepared to grant a Free of Charge license to an unrestricted number of applicants on a worldwide, non-discriminatory basis and under other reasonable terms and conditions to make, use, and sell implementations of the above document."

                Anyone is free to make alternative royalty-free implementations of JPEG XL, either by forking the reference implementation or by starting from scratch.

                Of course it is impossible to guarantee that there will never be a patent troll who claims to hold patents relevant to JPEG XL. That is true for all royalty-free technology, including e.g. AVIF or WebP. But the actual contributors to the JPEG XL standard have made a strong commitment to a royalty-free codec; in fact, the Apache-2.0 license helps fight patent trolls, since it contains defensive patent termination clauses exactly for that purpose.

                • #28
                  Originally posted by cl333r View Post
                  Netflix doesn't seem fond of JPEG XL [1], in their huge blog they only mention it as a one short paragraph. Do you think both Netflix and Google engineers are stupid?
                  I suspect they're not, thus there must be something we don't know about or misappreciate.


                  [1] https://netflixtechblog.com/avif-for...ng-b1d75675fe4
                  Netflix engineers certainly aren't stupid. I suspect the main reason they did not pay much attention to JPEG XL in their blog post of 9 months ago is that JPEG XL was still in development at that point – the first "format release candidate", version 0.1, was just released this weekend.

                  We'll have to see whether Netflix becomes fond of JPEG XL once it is available as an option, i.e. once the bitstream is completely frozen and the library API and other tooling aspects are sufficiently stable. AVIF has the advantage of being available right now, but it also has its disadvantages.
                  I wrote a blog post half a year ago with a preliminary comparison of JPEG XL, AVIF and HEIC: https://cloudinary.com/blog/how_jpeg...r_image_codecs

                  • #29
                    An issue with Google/Alphabet, or anyone else, designing an image format to satisfy their own use case is that it imposes significant costs on everyone else. While a 'better' still-image format could well benefit Google/Alphabet by decreasing their storage requirements (and maybe CPU requirements), imposing it on the world via market control of web browsers also means that everyone else's image-processing software needs to support it. This is not cost-free. A benefit of a standards-based approach is that it is also a 'level playing field' approach.
                    Of course, one can have a philosophical argument about the freedom to not follow standards, allowing market-based competition to choose the winner (like, for example, the battle between the VHS and Betamax video standards), and in a free market such arguments have merit. However, it is not a free market. I have started seeing sites that are 'best viewed with Chrome', or indeed work only with Chrome as a browser, which pushes minor participants into being required to support whatever Chrome does.
                    The international standards process is by no means perfect, and it can be gamed and subverted (witness RAND licensing and the shenanigans around the acceptance of ISO/IEC 29500:2008); however, pushing 'standards' that benefit market-dominant players is not compatible with a free market.

                    Basing still-image formats on video compressors will not give you the best still-image format.

                    • #30
                      Originally posted by hellomoto View Post

                      Can you explain why?
                      It's completely useless, not even beating JPEG, while lacking features like incremental loading. From https://en.wikipedia.org/wiki/WebP:

                      In September 2010, Fiona Glaser, a developer of the x264 encoder, wrote a very early critique of WebP.[19] Comparing different encodings (JPEG, x264, and WebP) of a reference image, she stated that the quality of the WebP-encoded result was the worst of the three, mostly because of blurriness on the image. Her main remark was that "libvpx, a much more powerful encoder than ffmpeg's jpeg encoder, loses because it tries too hard to optimize for PSNR" (peak signal-to-noise ratio), arguing instead that "good psycho-visual optimizations are more important than anything else for compression."[19]

                      In October 2013, Josh Aas from Mozilla Research published a comprehensive study of current lossy encoding techniques[60] and was not able to conclude WebP outperformed JPEG by any significant margin.
