FFmpeg Lands JPEG-XL Support


  • coder
    replied
    Originally posted by arun54321 View Post
    Are you aware of the fact that jpg output differs between jpg decoders?
    Sure, there are differing DCT/IDCT implementations, as well as how chroma interpolation is performed, how much intermediate precision is carried forward to the colorspace transform, and how it rounds the results. Perhaps I was naive in assuming the reference JPEG-XL tools would treat each of these in a manner consistent with the original libjpeg.

    One could control for some of that by using a monochrome or 4:4:4 JPEG file and potentially even skipping the colorspace transform - either by outputting native YUV or encoding in RGB (i.e. if the image isn't monochrome).

    However, that would tell one less about what we actually want to know, which is the nature and significance of the differences between the original JPEG file and the JPEG-XL version. For that, it would be instructive to stick to more "typical" YUV 4:2:0 files and either analyze the differences between the two decoded outputs or ideally to compute PSNR vs. a raw original.

    The potential benefit of this investigation would be to inform users who are interested in maximizing the fidelity of their image collections that their best option would, in fact, be a full transcode.
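    A rough sketch of what that comparison could look like in Python, assuming both files have already been decoded to PNG (e.g. with djxl for the .jxl and any libjpeg-based tool for the .jpg); the filenames and the Pillow/NumPy usage are purely illustrative, not part of any of the tools discussed here:

```python
# Rough sketch: measure the difference between a decoded JPEG and a decoded
# JPEG-XL via PSNR. Assumes both images were decoded to PNG externally
# beforehand (e.g. `djxl photo.jxl photo_from_jxl.png` for the JXL side) and
# have identical dimensions. Filenames are hypothetical; requires Pillow and NumPy.
import numpy as np
from PIL import Image

def psnr(a: np.ndarray, b: np.ndarray) -> float:
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)

a = np.asarray(Image.open("photo_from_jpg.png").convert("RGB"))
b = np.asarray(Image.open("photo_from_jxl.png").convert("RGB"))
print(f"PSNR: {psnr(a, b):.2f} dB")
```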



  • arun54321
    replied
    Originally posted by coder View Post
    Yes, I'm aware of that. That's not how people view them, however.


    Since the FAQ page on lossless JPEG conversion mentions changing the DCT size & colorspace, it's quite possible they won't decode to the same result. Either way, it'd be good to know.
    Are you aware of the fact that jpg output differs between jpg decoders?



  • Quackdoc
    replied
    Originally posted by coder View Post
    You quoted improvements in the range of 5% to 50%. What was the total before/after? From that, we can estimate the average (perhaps a bit skewed by the PNGs).
    It's hard to say, not only because of the PNGs but also because of GIFs that got encoded to AVIF, and since I no longer have the source folder... The original source folder was about 28 GB; IIRC the JXL images are 12 GB. Exactly how much of that came from removing the GIFs from the pool versus actual compression is up in the air.

    I can probably just scrape a bunch of sites for JPGs and compress them, though; it probably wouldn't be hard to do.



  • coder
    replied
    Originally posted by LinAGKar View Post
    If it's reversible you could convert the JXL back to a JPEG and then decode that to get the same result as decoding the original JPEG.
    Yes, I'm aware of that. That's not how people view them, however.

    Originally posted by LinAGKar View Post
    It would be really weird if decoding the JXL directly gave a different result.
    Since the FAQ page on lossless JPEG conversion mentions changing the DCT size & colorspace, it's quite possible they won't decode to the same result. Either way, it'd be good to know.



  • LinAGKar
    replied
    Originally posted by coder View Post
    It doesn't do that. As I explained, reversible isn't the same as lossless. Lossless would mean the decoded output of .jxl is the same as that of the original .jpg file.

    The reason I asked about file sizes is that, in the event it turns out to be lossless, it would be nice to know how much further compression you're seeing.
    If it's reversible, you could convert the JXL back to a JPEG and then decode that to get the same result as decoding the original JPEG. It would be really weird if decoding the JXL directly gave a different result. So if it's reversible, it would be weird if it wasn't lossless.



  • Joe2021
    replied
    Originally posted by coder View Post
    That's interesting, but not as useful as if we knew more about those images. Given their estimate of a 20% size reduction, I'd expect your files are higher-resolution or higher-quality than what's needed or justified by their content (i.e. they have comparatively low entropy). How much detail do they have, at 1:1 scale? Are they a little blurry, when zoomed in that far? Are there large parts of the images with relatively little variation?
    It is an archive of 12 years of DSLR photography, containing tens of thousands of JPEGs.

    Originally posted by coder View Post
    If an 82% reduction were typical, you'd think they'd quote that in their "JPEG lossless conversion" FAQ. So, I'm just trying to figure out what makes your case special.
    I am not talking about what's typical, but about my own experience, which is not a synthetic example but very real-world, at least to me.

    Originally posted by coder View Post
    That's cool. Thanks for the datapoint. Doesn't mean it's lossless, though.

    For it to be lossless, the decoded JPEG and JPEG-XL version would have to be bit-identical. Please try that, on some of your 82%-reduced files, and let us know.
    Again, the back-converted files have SHA-256 fingerprints identical to the originals. Admittedly, that is not a proof, but it is convincing enough for me.
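    For anyone who wants to repeat that check, here is a minimal sketch of the round trip, assuming the cjxl/djxl reference tools are installed, that cjxl's default for JPEG input is the reversible JPEG recompression mode, and that djxl reconstructs the original JPEG bitstream when asked for a .jpg output; the filenames are placeholders:

```python
# Rough sketch of the round-trip (reversibility) check described above.
# Assumes cjxl and djxl from libjxl are on PATH, that cjxl defaults to
# lossless JPEG recompression for JPEG input, and that djxl reconstructs the
# original JPEG bitstream for a .jpg output. Filenames are placeholders.
import hashlib
import subprocess

def sha256(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

subprocess.run(["cjxl", "photo.jpg", "photo.jxl"], check=True)
subprocess.run(["djxl", "photo.jxl", "restored.jpg"], check=True)
print("reversible:", sha256("photo.jpg") == sha256("restored.jpg"))
```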




  • coder
    replied
    Originally posted by Quackdoc View Post
    This is the result of re-encoding a 20-30 GB library of photos and anime art. Some of the JPEGs were of fine quality, some were trash; some were encoded with trash settings, some optimized. And of course some were PNGs.
    You quoted improvements in the range of 5% to 50%. What was the total before/after? From that, we can estimate the average (perhaps a bit skewed by the PNGs).



  • coder
    replied
    Originally posted by arun54321 View Post
    The point of the post is to show that the JPEG transcode process is lossless.
    It doesn't do that. As I explained, reversible isn't the same as lossless. Lossless would mean the decoded output of .jxl is the same as that of the original .jpg file.

    The reason I asked about file sizes is that, in the event it turns out to be lossless, it would be nice to know how much further compression you're seeing.
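    To make the distinction concrete, a minimal sketch of that lossless check (as opposed to the reversibility check), again assuming both files were decoded to PNG beforehand; filenames are placeholders:

```python
# Rough sketch: "lossless" in the sense used above means the decoded pixels of
# the .jxl are identical to the decoded pixels of the original .jpg.
# Assumes both were decoded to PNG beforehand (e.g. with djxl for the .jxl side)
# and share the same dimensions. Filenames are placeholders; requires Pillow and NumPy.
import numpy as np
from PIL import Image

jpg_pixels = np.asarray(Image.open("decoded_from_jpg.png").convert("RGB"))
jxl_pixels = np.asarray(Image.open("decoded_from_jxl.png").convert("RGB"))
print("bit-identical decoded output:", np.array_equal(jpg_pixels, jxl_pixels))
```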
    Last edited by coder; 27 April 2022, 12:07 AM.



  • Quackdoc
    replied
    Originally posted by coder View Post
    Thanks for the datapoint. Would you tell us more about it? Is it huge in resolution? Does it have large areas with relatively little variation? What was the base JPEG quality level?
    This is the result of re-encoding a 20-30 GB library of photos and anime art. Some of the JPEGs were of fine quality, some were trash; some were encoded with trash settings, some optimized. And of course some were PNGs.

    FWIW, JPEG and PNG can both do progressive.
    and there are image codecs that can't.



  • Quackdoc
    replied
    Originally posted by coder View Post
    So, they were ready to break everyone on an older browser or hose the CPU on any client with a lower spec device? Nice.
    AVIF has been around in Chrome for a while, and I can play back AVIFs on a dual-core Celeron N3050. Lower-spec devices are a non-issue. As for older browsers... just update lol.

