Mozilla Comes Out Neutral On JPEG-XL Image Format Support


  • Originally posted by billyswong View Post

    It could be done on the server side by intelligently feeding different amounts of the same file for different file requests: a baseline for "100%/96dpi", an extra data chunk for "150%/144dpi", another for "200%/192dpi", and so on. The infrastructure for the browser client to fetch a different "file" for different monitor DPIs is already there.
    Of course anything can be done; it just has to be standardized first. There is already a standard mechanism for intelligently fetching images at different sizes, and it works regardless of the image format used. This JPEG-XL-specific hack therefore has no chance of becoming a standard.
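
    For concreteness, the standard mechanism being referred to is responsive images (srcset/sizes). A minimal sketch in TypeScript, with purely hypothetical URLs and widths, showing how the browser picks one appropriately sized file on its own:

    // Sketch of the existing responsive-images mechanism; the URLs and widths
    // are hypothetical, and any image format works the same way.
    const img = document.createElement("img");
    img.src = "/photos/example-800.jpg"; // fallback for clients without srcset support
    img.srcset = [
      "/photos/example-800.jpg 800w",
      "/photos/example-1200.jpg 1200w",
      "/photos/example-1600.jpg 1600w",
    ].join(", ");
    // The sizes attribute tells the browser the layout width, which it combines
    // with the device pixel ratio to request exactly one suitable file.
    img.sizes = "(max-width: 600px) 100vw, 50vw";
    img.alt = "Example photo";
    document.body.appendChild(img);

    The server only ever sees a request for one concrete file, which is why this already works regardless of the image format used.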



    • Originally posted by pkese View Post
      Sad.
      JPEG-XL is currently the only format that's actually friendly for developers.

      When somebody pastes a bunch of pixels into your software, the current procedure is to first test whether they should be compressed with a lossy (JPEG) or lossless (PNG) format before proceeding (otherwise you either get a screenshot with fuzzy text or a huge photo crop).

      JPEG-XL is currently the only format that does a decent job at all of lossy, lossless and near-lossless image compression.
      In addition it can produce a single file for multiple resolutions.
      Can it also produce image variations other than resolutions?
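
      The "test first" step described above has no single canonical form; a hypothetical heuristic, sketched in TypeScript with a made-up threshold, might look like this:

      // Screenshots and UI graphics tend to have few distinct colors, photos many.
      // The 10% threshold is invented purely for illustration.
      function shouldUseLossless(rgba: Uint8ClampedArray): boolean {
        const colors = new Set<number>();
        const pixelCount = rgba.length / 4;
        for (let i = 0; i < rgba.length; i += 4) {
          // Pack RGB into a single integer key; alpha is ignored for this rough test.
          colors.add((rgba[i] << 16) | (rgba[i + 1] << 8) | rgba[i + 2]);
          if (colors.size > pixelCount * 0.1) {
            return false; // many distinct colors: likely a photo, compress lossy (JPEG)
          }
        }
        return true; // few distinct colors: likely a screenshot, compress lossless (PNG)
      }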



      • Originally posted by curfew View Post
        Any new code is a security risk. It isn't all that surprising to see image decoding libraries contain serious vulnerabilities that allow an attacker to execute arbitrary code on the victim's computer.

        Adding a big chunk of code will also increase compilation time and execution time for test suites and so on. All these things cost Mozilla money and human resources, so it isn't just about pouring some new code into the app and forgetting about it.
        Maybe they can waste their time rewriting it in Rust instead of putting image processing into a separate process among the other 100 sandboxed browser processes.



        • Originally posted by Quackdoc View Post

          I wouldn't say that Mozilla doesn't have the balls; it's more that they can't be bothered to spend any more resources that they couldn't pocket for themselves. I wouldn't really blame Google either, just blame Mozilla for not being the same Mozilla it once was. Google can take the blame for Google's crap, but not for Mozilla's.
          Well yes, they are struggling and have lost to the parasitic marketing practices Google has used to push Chrome on people (years ago, several people I gave PC support to had it installed 'by accident'; now it is the default Android browser). It is a logical and sane step for Mozilla to push forward for now.

          But why is it like this? Maybe because the major player on the market nuked its implementation while it was still hidden behind an advanced feature flag. And I also agree with what people said earlier in this thread: in the end, the majority of users do not care. They do not understand the underlying technologies. But this is a highly stupid basis for technological development… the web came this far because it was pushed by people with deep understanding and technological ideals, not pure monetary incentives. And among all the post-JPEG 2000 image formats, JPEG-XL kind of united most of the best ideas, even on licensing. It had so much potential, if only it had been adopted. Now image formats will remain a mess for many more years.



          • Originally posted by cj.wijtmans View Post
            Can it also produce image variations other than resolutions?
            JXL supports lots of arbitrary layers, like alpha, depth, thermal, etc., so I guess so.
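
            To put those "arbitrary layers" a bit more concretely (this is from memory of the JPEG XL spec, not from this thread, so treat it as a hedged sketch): besides the color channels, a JXL image can carry extra channels of roughly these kinds, written here as a TypeScript type:

            // Hedged sketch of the extra-channel kinds defined by the JPEG XL spec;
            // the names are paraphrased, not an actual library API.
            type JxlExtraChannelKind =
              | "alpha"          // transparency
              | "depth"          // depth map
              | "spot_color"     // e.g. an additional printing ink
              | "selection_mask" // editor selection
              | "black"          // the K channel of CMYK
              | "cfa"            // raw color-filter-array data
              | "thermal"        // thermal imaging data
              | "unknown";       // application-defined

            interface JxlExtraChannel {
              kind: JxlExtraChannelKind;
              name?: string;    // optional human-readable label
              bitDepth: number; // each extra channel has its own bit depth
            }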

            Originally posted by Draget View Post

            Well yes, they are struggling and have lost to the parasitic marketing practices Google has used to push Chrome on people (years ago, several people I gave PC support to had it installed 'by accident'; now it is the default Android browser). It is a logical and sane step for Mozilla to push forward for now.

            But why is it like this? Maybe because the major player on the market nuked its implementation while it was still hidden behind an advanced feature flag. And I also agree with what people said earlier in this thread: in the end, the majority of users do not care. They do not understand the underlying technologies. But this is a highly stupid basis for technological development… the web came this far because it was pushed by people with deep understanding and technological ideals, not pure monetary incentives. And among all the post-JPEG 2000 image formats, JPEG-XL kind of united most of the best ideas, even on licensing. It had so much potential, if only it had been adopted. Now image formats will remain a mess for many more years.
            Mozilla has no one to blame but themselves. They have an extremely loyal userbase that somehow manages to fund their CEO to nearly $3 million in salary, in the same year they lost 250 employees to "coronavirus" (despite the pay raise).

            Don't blame Google. Yes, Google did some shady shit, but Mozilla has to take the majority of the blame. They are simply not the same company that they used to be.



            • Originally posted by billyswong View Post

              There won't be any data quota or bandwidth saving if the browser doesn't tell servers how much data it wants. Servers can't send less data to lower-DPI devices without knowing that beforehand. Even if you only cut the transmission off partway, either you save no data quota or you leak your resolution/DPI.
              If I were to send my exact DPI or resolution, that fingerprint would be much more accurate. If I only load X percent of an image file, the server can only estimate those values, and a simple zoom or window size change would already produce a different fingerprint.
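
              A sketch of the "only load X percent" idea in TypeScript, using an HTTP range request against a hypothetical URL; the server only learns how many bytes were asked for, not the client's exact DPI or resolution:

              // Fetch only the first `bytes` bytes of a progressively encoded image.
              async function fetchImagePrefix(url: string, bytes: number): Promise<Uint8Array> {
                const response = await fetch(url, {
                  headers: { Range: `bytes=0-${bytes - 1}` },
                });
                // 206 Partial Content if the server honors ranges, otherwise 200 with the full body.
                const buffer = await response.arrayBuffer();
                return new Uint8Array(buffer).slice(0, bytes);
              }

              // Example: roughly the first 40 KiB, enough for an early progressive pass.
              fetchImagePrefix("/photos/example.jxl", 40 * 1024).then((data) => {
                console.log(`got ${data.length} bytes of the progressive stream`);
              });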



              • Originally posted by Anux View Post

                If I were to send my exact DPI or resolution, that fingerprint would be much more accurate. If I only load X percent of an image file, the server can only estimate those values, and a simple zoom or window size change would already produce a different fingerprint.
                These sentences sound like an argument in support of progressive JXL. Without progressive image encoding, the client always has to expose its exact DPI, unless one is willing to accept degraded imagery or waste bandwidth on the biggest image files.



                • Yes, progressive, isn't that what I was talking about? That's what makes this possible. With JXL, progressive image quality has become really good, no comparison to GIF, PNG or JPG. And it needs very few bytes to display the first preview; one could even prioritize certain areas to be decoded first.



                  • The ideal solution would be for the browser to download just enough of the image to get the resolution that fits the window. If the viewport changes in size, the browser can then just download more of the image to get the higher resolution, or fall back to the lower one. This makes JXL ideal for galleries, as you can progressively download one image for the preview, the resized view and the full view, instead of downloading multiple copies of it. That makes it great for both CDNs and people on metered and slow networks.

                    The server never has to be aware, since the browser can arbitrarily decide to stop downloading whenever it wants. Of course a feature like this would take a good amount of effort to build, but it isn't necessary, just a very nice feature to have.
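
                    As a sketch of that "stop whenever it wants" behaviour, a client could stream the image body and cancel the transfer once enough bytes for the target resolution have arrived. In TypeScript, with bytesNeededFor() as a hypothetical stand-in (a real client would track decoded progressive passes rather than a raw byte count):

                    // Made-up estimate of how many bytes a given display width needs.
                    function bytesNeededFor(targetWidth: number): number {
                      return targetWidth * 64; // purely for illustration
                    }

                    async function fetchEnoughFor(url: string, targetWidth: number): Promise<Uint8Array> {
                      const response = await fetch(url);
                      const reader = response.body!.getReader();
                      const chunks: Uint8Array[] = [];
                      let received = 0;
                      const needed = bytesNeededFor(targetWidth);

                      while (received < needed) {
                        const { done, value } = await reader.read();
                        if (done || !value) break; // the whole file was smaller than the estimate
                        chunks.push(value);
                        received += value.length;
                      }
                      await reader.cancel(); // drop the rest of the transfer

                      // Concatenate what was received; a progressive decoder renders the best pass it finds.
                      const out = new Uint8Array(received);
                      let offset = 0;
                      for (const chunk of chunks) {
                        out.set(chunk, offset);
                        offset += chunk.length;
                      }
                      return out;
                    }

                    With HTTP/2 or HTTP/3 the cancellation only resets that one stream, so the connection can keep being reused for other resources.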



                    • Originally posted by Quackdoc View Post
                      The ideal solution would be for the browser to download just enough of the image to get the resolution that fits the window. If the viewport changes in size, the browser can then just download more of the image to get the higher resolution, or fall back to the lower one. This makes JXL ideal for galleries, as you can progressively download one image for the preview, the resized view and the full view, instead of downloading multiple copies of it. That makes it great for both CDNs and people on metered and slow networks.
                      Choosing appropriate image sizes is already standardized in HTML; it supports every image file format and is purely a client-side solution. Progressive decoding isn't appropriate for thumbnailing.

                      Originally posted by Quackdoc View Post
                      The server never has to be aware, since the browser can arbitrarily decide to stop downloading whenever it wants. Of course a feature like this would take a good amount of effort to build, but it isn't necessary, just a very nice feature to have.
                      The server will always be aware of what data the client has requested.

