
FFmpeg Lands JPEG-XL Support


  • intelfx
    replied
    It would seem that JPEG-XL truly is an advanced and well-thought-out codec and container. I really, really hope it takes off (and hopefully reaches the same level of ubiquity that JPEG enjoys today).
    Last edited by intelfx; 26 April 2022, 09:24 AM.



  • arun54321
    replied
    Originally posted by coder View Post
    you didn't tell us enough information to derive the compression ratio of the .jxl file, over the .jpg.
    The point of the post is to show that the JPEG transcode process is lossless.



  • arun54321
    replied
    Originally posted by coder View Post
    Okay, you missed my point. What I meant is that you should then decode both the .jpg and .jxl versions to something like BMP and compare those.

    Also, either I'm misreading that output or you didn't tell us enough information to derive the compression ratio of the .jxl file, over the .jpg.
    Why should it be the same? JPEG transcode mode uses a JPG as input, while regular JXL files (both lossy and lossless) are encoded from a lossless source.



  • coder
    replied
    Originally posted by arun54321 View Post
    Code:
    /tmp ❯ cjxl john-french-SNW4DWZEy8I-unsplash.jpg test.jxl
    JPEG XL encoder v0.6.1 [AVX2,SSE4,SSSE3,Scalar]
    Read 1920x1280 image, 71.7 MP/s
    Encoding [Container | JPEG, lossless transcode, squirrel | JPEG reconstruction data], 2 threads.
    Compressed to 236958 bytes (0.771 bpp).
    1920 x 1280, 27.71 MP/s [27.71, 27.71], 1 reps, 2 threads.
    Including container: 237445 bytes (0.773 bpp).
    
    /tmp ❯ djxl test.jxl decoded.jpg
    JPEG XL decoder v0.6.1 [AVX2,SSE4,SSSE3,Scalar]
    Read 237445 compressed bytes.
    Reconstructed to JPEG.
    1920 x 1280, 21.17 MP/s [21.17, 21.17], 2.34 MB/s [2.34, 2.34], 1 reps, 2 threads.
    Allocations: 349 (max bytes in use: 4.304357E+07)
    
    /tmp ❯ md5sum decoded.jpg
    9d2f4ca592f572678a7442fbb2b7617f decoded.jpg
    
    /tmp ❯ md5sum john-french-SNW4DWZEy8I-unsplash.jpg
    9d2f4ca592f572678a7442fbb2b7617f john-french-SNW4DWZEy8I-unsplash.jpg
    Okay, you missed my point. What I meant is that you should then decode both the .jpg and .jxl versions to something like BMP and compare those.

    Also, either I'm misreading that output or you didn't tell us enough information to derive the compression ratio of the .jxl file, over the .jpg.
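    For what it's worth, deriving the ratio only takes the two file sizes; a rough sketch using the filenames from the output above (the original .jpg size wasn't shown, so this is just how one would compute it):
    Code:
    # file sizes in bytes (GNU coreutils stat)
    jpg=$(stat -c %s john-french-SNW4DWZEy8I-unsplash.jpg)
    jxl=$(stat -c %s test.jxl)
    # .jxl size as a fraction of the .jpg size
    echo "scale=3; $jxl / $jpg" | bc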
    Last edited by coder; 26 April 2022, 03:56 AM.



  • coder
    replied
    Originally posted by Quackdoc View Post
    who knows, not me for sure. but the potential is there. 5 years from now, 10 years. how old is jpeg now, 20 years old? I think this is a smart "future proofing" step.
    Most likely, it's there for corner cases where someone wants to encode like 1Bx1. There are datasets where you might do something like that, and perhaps being able to encode multiple channels in that way could make the format more attractive than other options.

    1Bx1B would be enough to hold an image of the Earth's hemisphere at about 53 pixel/m, which I think is significantly higher-res than the best quality data you'd find on Google Earth. And outside heavily-populated areas or other points of interest, most data on Google Earth is far lower-res.

    It would also occupy about 3 Exabytes of memory, if decoded to RGB @ 8bpc. For it to be a practical on-disk representation, the file format would need some support for tiles and indexing. Otherwise, you're better off storing tiles as separate files.
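    For reference, that figure is just the pixel count times 3 bytes per pixel (RGB @ 8bpc):
    $$10^9 \times 10^9\ \text{px} \times 3\ \text{B/px} = 3 \times 10^{18}\ \text{B} = 3\ \text{EB} \approx 2.6\ \text{EiB}$$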

    Originally posted by Quackdoc View Post
    for instance kickstarter had to override a change from their CDN to use gifs instead of avifs since it broke firefox compat.
    So, they were ready to break everyone on an older browser or hose the CPU on any client with a lower spec device? Nice.
    Last edited by coder; 26 April 2022, 03:59 AM.



  • arun54321
    replied
    Originally posted by coder View Post

    That's merely a "computationally cheap" conversion. Not doing a full decode doesn't make it inherently lossless. Even if it's reversible, that still doesn't make it lossless. For it to be truly lossless, it would have to decode to the exact same pixel values, which isn't going to happen if you're changing the DCT size and colorspace.
    Code:
    /tmp ❯ cjxl john-french-SNW4DWZEy8I-unsplash.jpg test.jxl
    JPEG XL encoder v0.6.1 [AVX2,SSE4,SSSE3,Scalar]
    Read 1920x1280 image, 71.7 MP/s
    Encoding [Container | JPEG, lossless transcode, squirrel | JPEG reconstruction data], 2 threads.
    Compressed to 236958 bytes (0.771 bpp).
    1920 x 1280, 27.71 MP/s [27.71, 27.71], 1 reps, 2 threads.
    Including container: 237445 bytes (0.773 bpp).
    
    /tmp ❯ djxl test.jxl decoded.jpg
    JPEG XL decoder v0.6.1 [AVX2,SSE4,SSSE3,Scalar]
    Read 237445 compressed bytes.
    Reconstructed to JPEG.
    1920 x 1280, 21.17 MP/s [21.17, 21.17], 2.34 MB/s [2.34, 2.34], 1 reps, 2 threads.
    Allocations: 349 (max bytes in use: 4.304357E+07)
    
    /tmp ❯ md5sum decoded.jpg
    9d2f4ca592f572678a7442fbb2b7617f decoded.jpg
    
    /tmp ❯ md5sum john-french-SNW4DWZEy8I-unsplash.jpg
    9d2f4ca592f572678a7442fbb2b7617f john-french-SNW4DWZEy8I-unsplash.jpg



  • coder
    replied
    Originally posted by Joe2021 View Post
    I have to admit that I made the very same prediction, but reality proved me wrong. I converted a huge JPG archive to JPEGXL and it's now reduced to 82% of the former size.
    That's interesting, but not as useful as if we knew more about those images. Given their estimate of a 20% size reduction, I'd expect your files are higher-resolution or higher-quality than what's needed or justified by their content (i.e. they have comparatively low entropy). How much detail do they have, at 1:1 scale? Are they a little blurry, when zoomed in that far? Are there large parts of the images with relatively little variation?

    If an 82% reduction were typical, you'd think they'd quote that in their "JPEG lossless conversion" FAQ. So, I'm just trying to figure out what makes your case special.

    Originally posted by Joe2021 View Post
    The point is: You can convert it back with the same tool and you will get the bit-identical original JPEG! Same sha256 fingerprint.
    That's cool. Thanks for the datapoint. Doesn't mean it's lossless, though.

    Originally posted by Joe2021 View Post
    So, as this is a bidirectional lossless conversion
    For it to be lossless, the decoded JPEG and JPEG-XL version would have to be bit-identical. Please try that, on some of your 82%-reduced files, and let us know.
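    Something along these lines would do it (a rough sketch, assuming ImageMagick for the JPEG side; the filenames are placeholders):
    Code:
    djxl orig.jxl from_jxl.png                          # decode the JPEG XL version to pixels
    convert orig.jpg from_jpg.png                       # decode the original JPEG to pixels
    compare -metric AE from_jpg.png from_jxl.png null:  # 0 differing pixels => pixel-identical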

    Originally posted by Joe2021 View Post
    Big kudos to the JPEG-XL-People!
    Yeah, sounds like it.



  • coder
    replied
    Originally posted by curfew View Post
    Instead of spewing technical jargon and meaningless crap,
    Accusing me of "spewing technical jargon" disqualifies your subsequent accusation of "meaningless crap".

    What I said is technically accurate, and based on having spent more time on the inside of IJG & libjpeg-derived source code than probably anyone else in this thread.

    Originally posted by curfew View Post
    you could just read the JPEG XL FAQ and see how they spin it in there:

    "The JPEG image is based on the discrete cosine transform (DCT) of 8x8 with fixed quantization tables. JPEG XL offers a much more robust approach, including variable DCT sizes ranging from 2x2 to 256x256 as well as adaptive quantization, of which the simple JPEG DCT is merely a particular case."
    Many of the operations described on that page won't result in a truly lossless conversion. It might be an inexpensive conversion and one that's mostly reversible, but as soon as you start changing the transform size or colorspace, you've sacrificed true losslessness.

    Originally posted by curfew View Post
    "As a result, you do not need to decode JPEGs to pixels to convert them to JPEG XLs. Rather than relying on the JPEG internal representation (DCT coefficients), utilize JPEG XL directly.
    That's merely a "computationally cheap" conversion. Not doing a full decode doesn't make it inherently lossless. Even if it's reversible, that still doesn't make it lossless. For it to be truly lossless, it would have to decode to the exact same pixel values, which isn't going to happen if you're changing the DCT size and colorspace.

    Originally posted by curfew View Post
    Even though only the subset of JPEG XL that corresponds to JPEG is used, the converted images would be 20 percent smaller."
    20% certainly isn't bad, but it's not on par with the advantages you could get from a full re-encode. And that was my main point.

    The way Skeevy420 was talking about it seemed to give the impression that lossless conversion from JPEG would give the full advantages advertised about the format.
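    To make the distinction concrete, the two modes look roughly like this (flag names are from cjxl releases newer than the v0.6.1 output above, and input.jpg is a placeholder):
    Code:
    # lossless transcode of the existing JPEG data (the ~20% case, bit-exact round trip)
    cjxl input.jpg transcoded.jxl
    # full re-encode from decoded pixels (bigger savings, but no bit-exact round trip)
    cjxl input.jpg reencoded.jxl --lossless_jpeg=0 -d 1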



  • coder
    replied
    Originally posted by arzeth View Post
    https://github.com/archlinux/svntogi...s/imlib2/trunk Since yesterday, imlib2 in [testing] now supports JPEG-XL too! Now I am able to use feh (image viewer) to view .jxl!

    I've just done a lossless-compression comparison on a very highly detailed 1280x720 2D image (non-overclocked Ryzen 5 2600; all these packages were recompiled with my CFLAGS):
    Code:
    optipng -o7 a.png # → 1 170 748 bytes ~2 minutes IIRC.
    avifenc -j 12 -s 0 --lossless a.png losles.avif # → 1 072 339 bytes, 13 327 ms.
    cwebp -mt -m 6 -q 100 -lossless a.png -o losles.webp # → 937 850 bytes, 6432 ms.
    cwp2 -q 100 -effort 9 a.png -o losles.wp2 # → 869 486 bytes. 10 minutes, only 1 thread was used. Just compiled from git.
    cjxl a.png -q 100 -e 8 losles8.jxl # → 851 817 bytes, 5300 ms, only 2 threads?
    cjxl a.png -q 100 -e 9 losles9.jxl # → 828 897 bytes, 50 748 ms (9.5x slower).
    Thanks for the data, though I think lossless compression performance probably doesn't extrapolate to lossy.



  • coder
    replied
    Originally posted by arun54321 View Post
    What do you mean? jxl is not computationally heavy to encode.
    Wouldn't that depend heavily on what features and parameters you're using, as well as your target quality level?
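    A quick way to see the effect, assuming cjxl's -e (effort, 1-9) and -d (Butteraugli distance) options and a placeholder input.png:
    Code:
    time cjxl input.png fast.jxl -e 3 -d 1    # low effort: fast encode
    time cjxl input.png slow.jxl -e 9 -d 1    # maximum effort: much slower, usually a smaller file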

