Google Is Already Experimenting With WebP2 As Successor To WebP Image Format


  • #71
    Lossy WebP is based on VP8, which is limited to 16k by 16k because at the time that seemed a reasonable limit for video. I don't think there's any other reason.
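    A quick way to see how this limit bites in practice is to gate the output format on the source dimensions. A minimal sketch in Python, assuming Pillow is installed; the 16383 value is the ceiling implied by VP8's 14-bit size fields:

    from PIL import Image

    WEBP_MAX_DIM = 16383  # lossy WebP inherits VP8's 14-bit width/height fields

    def pick_format(path: str) -> str:
        """Return 'webp' when the image fits, else fall back to 'jpeg'."""
        with Image.open(path) as img:
            w, h = img.size
        return "webp" if w <= WEBP_MAX_DIM and h <= WEBP_MAX_DIM else "jpeg"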

    As for previous JPEG standards: Safari has supported JPEG 2000 for a long time now. (btw it does not support HEIC afaik)
    Outside of Apple, JPEG 2000 didn't really catch on (except in niches like medical imaging and digital cinema), probably for several reasons:
    - compression is not THAT much better than old JPEG
    - computational complexity is/was an issue
    - open source implementations didn't exist initially, and are still not as good as proprietary implementations

    Then there is JPEG XT, which is a backwards-compatible way to add extensions to JPEG like alpha and HDR. It also didn't catch on. The problem with this approach is that compression cannot get better (the file still needs to be decodable by legacy decoders), and you can't really rely on the new features like HDR because existing decoders will just ignore them.

    Then there is JPEG XR, which is the same as WDP. It was pushed by Microsoft, but nobody else was really interested. Compression was between JPEG and JPEG 2000, so meh. Tooling was poor and was abandoned by Microsoft, so while the codec shipped in IE and Edge, nobody seems to be interested in it anymore.

    Finally there is JPEG XS, which is not really meant as an image compression file format but rather a lightweight ultra-low-latency compression method to replace uncompressed transmission in e.g. video cables. It's not intended for the web.

    JPEG XL is different/better than previous attempts in various ways:
    - significantly better compression
    - good open source reference software from the start
    - computational complexity is good (single-core it is somewhat slower than JPEG, but it parallelizes well, so in practice on current CPUs it is faster than JPEG)
    - not backwards-compatible like JPEG XT, but legacy-friendly: it can transcode existing JPEGs without any additional loss, something no other codec can do (see the sketch after this list)
    - also great at lossless / non-photographic (something that traditionally did not get that much attention in JPEG)
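
    The lossless transcode is easy to verify yourself. A minimal sketch, assuming the libjxl reference tools (cjxl/djxl) are on your PATH and current defaults apply (cjxl transcodes JPEG input losslessly by default); file names are illustrative:

    import hashlib
    import subprocess

    subprocess.run(["cjxl", "photo.jpg", "photo.jxl"], check=True)     # JPEG -> JPEG XL
    subprocess.run(["djxl", "photo.jxl", "restored.jpg"], check=True)  # reconstruct the JPEG

    digest = lambda p: hashlib.sha256(open(p, "rb").read()).hexdigest()
    # The reconstruction should be bit-exact when JPEG reconstruction data is stored:
    assert digest("photo.jpg") == digest("restored.jpg")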



    • #72
      Originally posted by curfew View Post
      Why even do this? WebP has worse performance than JPEG, so WebP2 might only barely catch up with it.
      your parallel universe is weird



      • #73
        Originally posted by davidbepo View Post

        your parallel universe is weird
        He's not that wrong. While I do think that WebP is not worse than JPEG, it's also not that much better. There are images for which WebP is worse than JPEG. Typically you can get a similar-perceptual-quality WebP with 20-30% fewer bytes than with JPEG, but there are certainly cases where WebP is larger than JPEG at the same perceptual quality, or, even worse, where WebP simply cannot reach the desired quality because of its obligatory 4:2:0 chroma subsampling and its obligatory, relatively harsh chroma quantization. I've seen images where even cwebp -q 100 still produces problematic visible color banding.
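
        This is straightforward to probe on your own images: encode the same source at each codec's top quality settings and compare sizes (and look for banding in smooth gradients). A rough sketch using Pillow; quality scales aren't directly comparable across codecs, so treat this as a probe rather than a fidelity measurement, and the file name is illustrative:

        import io
        from PIL import Image

        img = Image.open("gradient.png").convert("RGB")

        def encoded_size(fmt: str, **kwargs) -> int:
            buf = io.BytesIO()
            img.save(buf, format=fmt, **kwargs)
            return buf.tell()

        print("JPEG q95:     ", encoded_size("JPEG", quality=95))
        print("WebP q100:    ", encoded_size("WEBP", quality=100))  # still lossy 4:2:0
        print("WebP lossless:", encoded_size("WEBP", lossless=True))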

        Of course there are also some areas where WebP is better than JPEG, e.g. alpha support. But it's not the case that WebP is strictly better than JPEG on all fronts:
        • WebP does not support lossy 4:4:4
        • WebP does not support progressive decoding
        • Highest quality in WebP can in some cases still be too low to reach desirable visual quality
        • Compression density is ON AVERAGE better than JPEG, but not for every image
        • Generation loss is way more problematic for WebP than for JPEG (and it is already a real problem for JPEG)
        WebP 2 promises to improve some of these aspects: it will support 4:4:4, and compression density will probably improve enough to beat JPEG on all images. But we'll have to see whether it improves on all the aspects where WebP 1 was lagging behind. My feeling, though, is that progressive decoding and high-fidelity encoding are not design goals for WebP 2.



        • #74
          Originally posted by Jon Sneyers View Post
          [...]
          • WebP does not support lossy 4:4:4
          • WebP does not support progressive decoding
          • Highest quality in WebP can in some cases still be too low to reach desirable visual quality
          • Compression density is ON AVERAGE better than JPEG, but not for every image
          • Generation loss is way more problematic for WebP than for JPEG (and it is already a real problem for JPEG)
          ... none of which has ever been a showstopper for the users and use-cases targeted by WebP.
          This list may be what you perceive as important (data? numbers?), but in practice WebP users haven't complained about these gaps to the point of deeming the format unusable for them.

          Reminds me of the "you absolutely need 24bit/sample audio!" argument.



          • #75
            Originally posted by polarathene View Post
            Most of those issues aren't a problem for web display, although the 16k limitation is an interesting one I wasn't aware of. There's nothing wrong with the format being tailored to a specific use. I don't think it's that well supported by GUI apps? It's fantastic for web content, however.
            There is no GUI use or web use. It is just a file format. The wider the applicability, the better the format. Surely the creators of GIF did not know in 1987 that their inclusion of an animation capability would be a success on 2020s social media.

            The 16k size limitation is a problem for large textures. Maybe it sounded large 10 years ago, but it is not particularly large today, and this limitation will look ridiculous in a couple of years. Good file formats did not impose pointless limitations, which is why they are still in use 30 years after their creation.

            Originally posted by polarathene View Post
            DPI is useful how, btw? Images just store the pixel data; DPI is irrelevant there. A display's physical size and resolution are what defines the DPI (technically I think the OS also has a notion of DPI too; I remember that being relevant when "printing" PDFs of web pages). Browsers also manage DPI with CSS (96px == 1 inch); a CSS px isn't necessarily 1 pixel on a display, and there's usually a meta tag in the HTML to indicate how to scale the content to a device ratio that the DPI can inform.

            300 pixels of image data in both width/height may use 300 pixels each way on a 100 DPI display, and still 300 pixels each way on a 300 DPI display; however, it'd physically be a third of the size on the screen. What is the DPI metadata going to do here? Notice that the content was created with an intent representing 100 DPI and upscale the image to match physical size by duplicating pixels?

            In the web you'd get such anyway, and in something like photoshop I believe you just set what DPI you want to scale that 300 px width/height to, all it does is adjust the pixel count in the same manner by duplicating pixel data, just like if you zoomed an image. Other software like image viewers would need to support this "feature", doing so via metadata rather than per image format support seems more likely to be the way that'd be supported.
            Do you have any experience in the real world? Out there, things have dimensions in units. An image needs an embedded DPI so that it can be printed at 1:1 size without any further settings. A texture can be mapped onto an object without having to set any zoom factor. A 3D object has a real-world size; a texture with DPI has a real size, so it can be displayed correctly automatically.
            There are also use cases where the horizontal and vertical DPI differ: the image is stretched in one direction by a non-integer factor.
            Like I said, the creators of JPEG, PNG, TIFF, BMP and PCX all understood the importance of DPI, while the creators of WebP did not.
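
            To make the 1:1-printing point concrete: the embedded DPI is exactly what turns a pixel count into a physical size. A minimal sketch using Pillow (JPEG carries DPI in JFIF/EXIF, PNG in its pHYs chunk; Pillow exposes both as img.info["dpi"] when present; the file name and 96 DPI fallback are illustrative):

            from PIL import Image

            with Image.open("scan.png") as img:
                w_px, h_px = img.size
                dpi_x, dpi_y = img.info.get("dpi", (96, 96))  # guess when absent

            # Physical size at a 1:1 print, no further settings needed:
            print(f"prints at {w_px / dpi_x:.2f} x {h_px / dpi_y:.2f} inches")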

            Originally posted by polarathene View Post
            Colorspace is again not going to matter much, especially on the web. You can generally assume sRGB and convert from there (like DPI, yay for metadata when this may be relevant?). This is required for some software like Blender: textures may be in sRGB, but normal maps have to be set not to be treated as sRGB. Then you have the render settings, which IIRC this stuff applies to as well; I remember having to handle it when writing some GLSL / three.js code. I'd rather not have to think, "has this image potentially been encoded in some different colorspace?", as that sort of code doesn't play well with such and expects a specific input and output colorspace, else shit looks messed up and calculations are wrong. For the niche cases where it's relevant, use metadata and have software that uses it.
            You do not appear to have experience in developing color-accurate applications, so you fail to understand the needs of developers.

            Originally posted by polarathene View Post
            CMYK isn't all that useful beyond print is it? I don't see why anyone would be using webp for that? Main advantage of webp is file size for network transfer isn't?
            I also would not use CMYK, but people who prepare documents for print still do. This is one of the main reasons why they do not use GIMP. They have a mental model of, and experience with, what will happen in CMYK, so they continue to use it. WebP could be used outside its narrow scope, just like we use PNG, JPEG or TIFF. If WebP compressed better than other file formats, it would make sense to use it there as well.

            Finally, so I don't sound too negative about WebP: it supports an alpha channel (transparency) with lossy compression, and this is its biggest advantage over JPEG or PNG.
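
            That advantage is easy to demonstrate: JPEG cannot carry alpha at all, and PNG can only store it losslessly, usually at a much larger size. A small Pillow sketch (file name and quality value are illustrative):

            import io
            from PIL import Image

            img = Image.open("logo.png").convert("RGBA")

            webp, png = io.BytesIO(), io.BytesIO()
            img.save(webp, format="WEBP", quality=80)  # lossy colour, alpha kept
            img.save(png, format="PNG")                # lossless baseline

            print("lossy WebP with alpha:", webp.tell(), "bytes")
            print("PNG:", png.tell(), "bytes")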



            • #76
              Originally posted by skal View Post

              ... none of which has ever been a showstopper for the users and use-cases targeted by WebP.
              This list may be what you perceive as important (data? numbers?), but in practice WebP users haven't complained about these gaps to the point of deeming the format unusable for them.

              Reminds me of the "you absolutely need 24bit/sample audio!" argument.
              OK, let's first of all distinguish image authoring and image delivery.

              For authoring, you need lossless and sufficient bit depth to have room for manipulation. At least 16-bit is needed for that, preferably more.

              For delivery, you don't need lossless, and you don't need more bit depth than what the display devices can actually render.

              There are other use cases like archival, medical, printing but in terms of limits, precision and fidelity they tend to be somewhere between authoring and delivery.
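
              The authoring requirement is easy to demonstrate numerically: push an exposure edit down and back up, and count how many distinct tones survive at each working depth. A small numpy sketch (the 5-stop edit is an illustrative choice):

              import numpy as np

              gradient = np.linspace(0.0, 1.0, 4096)  # a smooth synthetic ramp

              def roundtrip(x, bits):
                  scale = 2**bits - 1
                  q = np.round(x * scale) / scale          # quantize to working depth
                  dark = np.round(q / 32 * scale) / scale  # -5 stops, re-quantized
                  return dark * 32                         # push exposure back up

              print("8-bit tones left: ", np.unique(roundtrip(gradient, 8)).size)   # ~9: heavy banding
              print("16-bit tones left:", np.unique(roundtrip(gradient, 16)).size)  # ~2049: intact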


              Obviously WebP is not designed for authoring, only for delivery. Which is OK, but it does limit the scope of the codec quite a bit and some people might be interested in a single codec that can do both.

              But even within delivery, there are various fidelity targets. For some use cases, reducing bandwidth is the biggest concern, and compression artifacts are not a big deal, especially if they are not annoying (i.e. smoothing is fine, blockiness isn't). For other use cases though, fidelity matters too. In particular high-end brands do want the images representing their products to look great.

              In practice, WebP is actually unusable for some web delivery use cases.
              • There are images where 4:2:0 is just too destructive, and we do need to fall back to 4:4:4 JPEG (or J2K on iOS/Safari) because otherwise we just cannot reach the desired visual quality.
              • There are other images where the minimum chroma DC quantization of WebP is too aggressive to avoid visible color banding (i.e. cwebp -q 100 is too low quality, and cwebp -lossless produces a file that is too large), and again we do need to fall back to JPEG (4:2:0 might be OK, we just need to avoid the harsh chroma quantization).
              • Chroma quantization in general is an issue in WebP: we regularly get complaints about the shade of subtle off-whites not being accurate enough in WebP. This can often be resolved by bumping up the quality enough, but then compression tends to suffer to the point that JPEG becomes the better choice.
              • Use cases / apps targeting users with typically rural, on-the-road network conditions can make progressive decoding (getting the DC as a preview after 10-15% of the bytes) a must-have.
              • The dimension limit of 16k x 16k is occasionally a reason to fall back to JPEG. Sometimes web devs do want very wide or tall images, for whatever reason. It's obviously rare, but it does happen.
              It is of course a good choice in 97% of the cases, but there certainly are cases where some of the limits of WebP actually have been a showstopper to using it as a codec for web delivery (a sketch of such fall-back logic follows below).
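
              A hedged sketch of what that fall-back can look like in an encoding pipeline, using Pillow; the threshold, names and size-only comparison are illustrative, and a real pipeline would also compare a perceptual metric such as SSIM or butteraugli:

              import io
              from PIL import Image

              def encode_for_delivery(img: Image.Image, quality: int = 85):
                  webp, jpeg = io.BytesIO(), io.BytesIO()
                  img.convert("RGB").save(webp, format="WEBP", quality=quality)
                  img.convert("RGB").save(jpeg, format="JPEG", quality=quality, subsampling=0)  # 4:4:4
                  if webp.tell() < jpeg.tell():
                      return webp.getvalue(), "image/webp"
                  return jpeg.getvalue(), "image/jpeg"  # fall back when WebP doesn't win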



              • #77
                Originally posted by dpeterc View Post
                There is no GUI use or web use. It is just a file format. The wider the applicability, the better the format.
                It's kind of evident in the format name/extension, no? "WebP": its intent to be designed primarily for the web is right there... You can use it for other use cases if you like, but if it is lacking for any requirements of those, it's not the fault of WebP; it clearly wasn't developed with the intent of being the best format choice for everything everywhere. That's not a bad thing, since it allows the format to be tailored/optimized for what it wants to specialize in.

                Certain limitations might be unfortunate, but they may also exist for a reason as a trade-off. A new iteration, WebP2, kind of validates that it wasn't developed with the intention of being around for 30 years or so as a dominant format, but in many cases it's still a terrific choice for the web vs the existing options, so still a win. Perhaps AVIF will make WebP/WebP2 irrelevant and achieve wider adoption beyond the web; that'd be great.

                Originally posted by dpeterc View Post
                Surely the creators of GIF did not know in 1987 that their inclusion of an animation capability would be a success on 2020s social media.
                You know that the majority of such platforms don't treat GIF as actual `.gif`? It's `.gifv` (pretty much MP4) or `.mp4`. Video has proven to be a much better substitute for the large file sizes of GIF; when MP4/WebM aren't sufficient, there's animated WebP where supported (but it's not as good as the video codecs for this).

                Originally posted by dpeterc View Post
                The 16k size limitation is a problem for large textures. Maybe it sounded large 10 years ago, but it is not particularly large today, and this limitation will look ridiculous in a couple of years. Good file formats did not impose pointless limitations, which is why they are still in use 30 years after their creation.
                This wasn't a limitation specific to image formats, to my knowledge. It affected GPUs in general, specifically on mobile phones/tablets. Generations from a few years back would fail on any texture beyond 8k, just rendering black. Current hardware fails beyond 16k, IIRC. I haven't personally tested beyond 16k, and I haven't done much 3D work since 2017.

                Clearly, though, if browsers are limited with Canvas, and that's been a problem for quite some time, it's not difficult to see why Google went with a 16k limitation too. I don't know the technical details of the specification; I think I attributed this to a 32-bit int being the limitation, as 64-bit ints aren't as widely available/supported (especially within JS?). Perhaps that had something to do with the decision, or it helped with performance/optimizations.
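
                The memory arithmetic alone makes a case for caps somewhere in this range: a decoded 16k x 16k RGBA bitmap costs a gigabyte of RAM before the GPU ever sees it. A back-of-the-envelope check:

                side = 16384
                bytes_needed = side * side * 4      # 4 bytes per RGBA pixel
                print(bytes_needed / 2**30, "GiB")  # -> 1.0 GiB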

                Originally posted by dpeterc View Post
                Do you have any experience in the real world? Out there, things have dimensions in units. An image needs an embedded DPI so that it can be printed at 1:1 size without any further settings. A texture can be mapped onto an object without having to set any zoom factor. A 3D object has a real-world size; a texture with DPI has a real size, so it can be displayed correctly automatically.
                There are also use cases where the horizontal and vertical DPI differ: the image is stretched in one direction by a non-integer factor.
                Like I said, the creators of JPEG, PNG, TIFF, BMP and PCX all understood the importance of DPI, while the creators of WebP did not.
                My real-world experience is mostly with the web and 3D, not print. I have helped some devs troubleshoot issues, some regarding pictures and their display size on the web. They were Mac users confused about embedded DPI having no relevance to how their website displayed.

                They insisted that the information was important, but it just gave them more hassle than value: the tooling that was optimizing images didn't give a shit, it just processed pixels. If it were to take the embedded DPI into account, it'd also need additional context that wasn't all that portable for processing (e.g. CI, or some third-party user if open source). The web didn't care, since it already normalizes displays, regardless of their DPI, to 96 DPI (px units are representative of actual pixels).

                Given these concerns, for a web-focused format it's not something that needs to matter to WebP; the format works and displays correctly as it should without it. As you state, it should be a smooth process without any further settings, and embedded DPI complicates that when it has no relevance. Pixels are pixels; what changes their physical representation is the DPI of the display, and having an input DPI that differs from that is just inviting trouble and confusing scenarios, imo.

                With 3D, colour space can likewise mess with shaders. It should all be normalized; otherwise you need your shaders to account for all those variations, which is a mess compared to normalizing it. This is different from the output colour space the display renders with.

                I have no idea what you're on about regarding textures mapping to an object without setting a zoom factor. I've not had to do this before in any of my 3D work regarding textures and displays/resolutions. Perhaps it's a misunderstanding; could you please clarify?

                3D objects can have a world size (yes.. kinda?). I was involved in photogrammetry projects where we reconstructed physical real-world objects as digital replicas for clients to use in VR. Texture DPI having a real size doesn't make much sense. All that matters in these cases is that the quality of the mesh and textures is sufficient for the viewport camera that is rendering the scene, such as something round not looking blocky; this is better handled with multiple levels of detail (LoD) based on distance to a mesh, or displacement maps and the like if you want to refine details further.

                For textures, you've got the equivalent with mipmapping. Some scenes I worked with had 100 8k UDIMs (1 UDIM tile is an 8k texture over the 0-1 UV space, with its own colour/normal/ORM maps); we used a virtual texture atlas with that to be efficient. What mattered for textures looking right was having a consistent texel density. That means the texels stay roughly the same size across the rendered surface (minimal to no distortion). Texels differ from pixels (the 2D pixels you have in the texture or the rendered frame); they're the mapping of the texture's pixels across the 3D mesh. If some UV islands were sized to different proportions, you'd get differently scaled texels, especially noticeable side by side across a seam. DPI has no relevance here, and the colour space is already decided; you don't mix random colour spaces for the inputs, relying on embedded metadata to fix things for you.
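
                A rough sketch of that texel-density check, assuming numpy; the function, sample triangles and 8k resolution are all illustrative. Density for a triangle is the square root of its UV area (in texels squared) divided by its world-space area:

                import numpy as np

                def texel_density(world_tri, uv_tri, tex_res=8192):
                    """Texels per world unit for one triangle (3x3 positions, 3x2 UVs)."""
                    w = np.asarray(world_tri, dtype=float)
                    world_area = 0.5 * np.linalg.norm(np.cross(w[1] - w[0], w[2] - w[0]))
                    uv = np.asarray(uv_tri, dtype=float) * tex_res   # UVs -> texel coordinates
                    e1, e2 = uv[1] - uv[0], uv[2] - uv[0]
                    uv_area = 0.5 * abs(e1[0] * e2[1] - e1[1] * e2[0])
                    return (uv_area / world_area) ** 0.5

                # Two islands mapped at different scales show a visible jump across the seam:
                a = texel_density([(0, 0, 0), (1, 0, 0), (0, 1, 0)], [(0, 0), (0.1, 0), (0, 0.1)])
                b = texel_density([(0, 0, 0), (1, 0, 0), (0, 1, 0)], [(0, 0), (0.3, 0), (0, 0.3)])
                print(f"{a:.0f} vs {b:.0f} texels per unit")  # e.g. 819 vs 2458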

                I have heard of non-square pixels before, but I've never had a situation where that mattered and required me to accommodate something like it.

                Originally posted by dpeterc View Post
                You do not appear to have experience in developing color-accurate applications, so you fail to understand the needs of developers.
                I am a developer? (and partly a 3D graphics artist)

                I'm not experienced with professional graphic design and print, though. I've developed colour applications, but those were cases where 8-bit RGB was the only concern. Beyond that, I've only had to deal with an 8-bit RGB colour picker that more accurately matches the colour of lights such as a Philips Hue product, where you have to deal with a different colour space and account for gamma or something along those lines (that was 4-5 years ago). I've worked with HSL and a few variants of it in the past as well, but it's definitely not an area I'm heavily experienced or confident in.

                I'm sure there are those who need this additional info, but I don't see how it's relevant to the web with WebP; it reads like a wishlist of what WebP should have done to be more widely adopted outside of the web. It's not like any of this would make colour accuracy better in the web browser across my multiple displays, which don't all accurately reproduce their own supported gamuts anyway (I have rather cheap and old displays).

