Mozilla Comes Out Neutral On JPEG-XL Image Format Support

  • #61
    I think there are some very bad hombres on both sides.

    • #62
      Originally posted by curfew View Post
      Any new code is a security risk. It isn't all that surprising to see image decoding libraries contain proper vulnerabilities that allow an attacker to execute arbitrary code on the compromised computer.

      Adding a big chunk of code will also increase compilation time and execution time for test suites and so on. All these things cost Mozilla money and human resources, so it isn't just about pouring some new code into the app and forgetting about it.
      They're a software company. It's what they do. It's a single image codec. If adding one is really that much of a security risk, they need to shut down, because then their browser can't be trusted at all. Pathetic excuse. They should be good at integrating codecs and have people capable of making sure a new one isn't a security risk. That's not too much to ask of the maker of a web browser used by hundreds of millions of people.

      • #63
        Originally posted by curfew View Post
        Let's also consider the end-user: he sees a nice image on the web and downloads it. The image software on his computer doesn't necessarily support all those quirks of JPEG-XL, so the downloaded image might appear as lower resolution or lower quality than what the user saw on the web. It might also be unclear how to extract the best quality image for archival purposes. Dragging and dropping the image file into, say, a spreadsheet can yield varying results.
        The software on the computer either supports the JPEG-XL standard or it doesn't - there is no such thing as a partial implementation, so I have no idea what you're talking about. And unless the download is interrupted for some reason (the internet connection drops, say), the image is always transferred in full; the same goes for drag-and-drop.

        Originally posted by curfew View Post
        I don't see what the real-life benefit is for a single format supporting both lossy and lossless compression. It probably requires twice the amount of code, e.g. amounts to an equal burden of maintenance for the developers as having JPEG and PNG separately...
        There are situations where either kind of encoding is useful. For example, it is often best to encode the transparency (alpha) channel losslessly while encoding the RGB channels lossily. Certain kinds of content also compress better with lossless techniques (screen grabs with monotone colors, for instance). Having both in one codec means the encoder can decide, depending on the content, which approach to use - JPEG XL does this with its modular mode (see the small sketch at the end of this post). From the user's perspective it is a raster bitmap image either way; there is no need to care which kind of compression was used. Since JPEG 2000, pretty much every lossy image codec has had a lossless mode as well.

        Originally posted by curfew View Post
        While the technical aspects of JPEG-XL might seem exciting for some HC programmers or techjunkies, they most likely pose a challenge for regular users to be able to understand what is going on under the hood. These features also might even require custom UI implementations to e.g. support switching between the embedded image files.
        No idea what you're talking about. From the user's perspective it is just a bitmap image, exactly like a progressive JPEG is.

        Originally posted by curfew View Post
        ​Embedding multiple versions of an image into a single file is also really bad for the web. Web wants to minimize the amount of downloaded data, not bundle unnecessary junk together.
        There are no multiple versions of the image inside a single file, and the amount of downloaded data isn't any bigger either.
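
        As a rough illustration of the lossy/lossless point above: with the cjxl encoder from libjxl, the same tool and the same file format cover both cases, and the only knob that changes is the distance setting (0 means mathematically lossless, higher values mean lossy). A minimal sketch, assuming cjxl is installed and on PATH; the file names are made up:

        ```python
        import subprocess

        SOURCE = "screenshot.png"  # hypothetical input file

        # Lossless: distance 0 tells cjxl to preserve every pixel exactly
        # (internally this uses JPEG XL's modular mode).
        subprocess.run(["cjxl", SOURCE, "screenshot_lossless.jxl", "-d", "0"], check=True)

        # Lossy: a small non-zero distance (1.0 is roughly "visually lossless")
        # trades exactness for a much smaller file.
        subprocess.run(["cjxl", SOURCE, "screenshot_lossy.jxl", "-d", "1.0"], check=True)
        ```

        Which of the two comes out smaller for a given image depends entirely on the content, which is exactly why having both behind one setting is convenient.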

        • #64
          The author of JPEG-XL, Jon Sneyers, has a habit of writing horrifically slow compression algorithms, just like he did with FLIF.

          • #65
            abott

            “… format notched off once it's added. In reality, a few fixes over a few years is what it'll need.”

            I disagree that it's trivial. All of these projects reimplement the same things in different ways with different APIs; the amount of redundancy is ridiculous.

            To say that various pieces of software can just be fitted together like puzzle pieces is not how this stuff works at all.

            • #66
              Originally posted by bumblebritches57 View Post
              The author of JPEG-XL, Jon Sneyers has a habit of writing horrifically slow compression algorithms, just like he did with FLIF.
              and even he couldn't make it anywhere near as slow as AVIF.

              • #67
                Originally posted by OneTimeShot View Post
                JPEG-XL lives in the same world as gold-plated HDMI cables. No different to any other image format, but lots of people claiming they can tell the difference!
                This is BS. Anyone can look at their file picker and see the difference: if you have more than a couple hundred images stored, the difference can be greater than a gigabyte, which may seem like peanuts to some but is a lot to others (a quick way to measure it yourself is sketched at the end of this post). And you can sure as shit see the difference when it comes to progressive decoding, since one image is showing and one isn't.

                Originally posted by Artim View Post

                In theory, yes. But then every website would have started serving WebP at least back in 2021 when Safari started supporting it, and would already be switching to AVIF. But my guess is, that couldn't be farther from the truth. And with your arguments, they would have switched to WebP even years earlier, with JPEG being only a Fallback for Apple users. But even that's not the case.
                WebP actually loses to mozjpeg anyway, so no. Most people probably looked at WebP and told themselves, "this is terrible". WebP is literally only good for specific lossless files; it's a terrible format that should die.

                Originally posted by Artim View Post

                In this context anything that requires the use of JXL. Whatever that might be. That's simply not a thing.
                Not sure what you are getting at here.
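
                As promised above, here is a minimal sketch of what "see the difference" means in practice: it just sums the sizes of the original JPEGs in one folder against their JXL re-encodes in another and prints the savings. The folder names and extensions are made up; adjust them to whatever you actually have:

                ```python
                from pathlib import Path

                # Hypothetical layout: originals as .jpg, re-encodes as .jxl.
                jpg_dir = Path("photos_jpeg")
                jxl_dir = Path("photos_jxl")

                jpg_total = sum(p.stat().st_size for p in jpg_dir.glob("*.jpg"))
                jxl_total = sum(p.stat().st_size for p in jxl_dir.glob("*.jxl"))

                print(f"JPEG total: {jpg_total / 1e9:.2f} GB")
                print(f"JXL total:  {jxl_total / 1e9:.2f} GB")
                print(f"Saved:      {(jpg_total - jxl_total) / 1e9:.2f} GB")
                ```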

                • #68
                  Originally posted by curfew View Post

                  I have no idea what you mean by "friendly for developers" in this context. Supporting all those highly unique features requires huge amounts of work. Let's also consider the end-user: he sees a nice image on the web and downloads it. The image software on his computer doesn't necessarily support all those quirks of JPEG-XL, so the downloaded image might appear as lower resolution or lower quality than what the user saw on the web. It might also be unclear how to extract the best quality image for archival purposes. Dragging and dropping the image file into, say, a spreadsheet can yield varying results.

                  I don't see what the real-life benefit is for a single format supporting both lossy and lossless compression. It probably requires twice the amount of code, e.g. amounts to an equal burden of maintenance for the developers as having JPEG and PNG separately...

                  While the technical aspects of JPEG-XL might seem exciting for some HC programmers or techjunkies, they most likely pose a challenge for regular users to be able to understand what is going on under the hood. These features also might even require custom UI implementations to e.g. support switching between the embedded image files.

                  Embedding multiple versions of an image into a single file is also really bad for the web. Web wants to minimize the amount of downloaded data, not bundle unnecessary junk together.
                  It's a single format that supports a large variety of features that are great for scientific, artistic, and technical purposes. With JXL you don't need to serve multiple sizes of an image, for instance: a single image can support multiple resolutions, meaning you can use one file on everything from a smartwatch to a large-screen billboard (a rough sketch of how that works over HTTP is at the end of this post). It's also good for applications (web apps included), since you don't need multiple sets of icons for varying resolutions.

                  It has good support for HDR and alpha, support for very high resolutions for extremely large photos, and so on. It means you don't need to implement a bunch of different image formats; you can implement one in your app and there's a good chance it will cover many use cases. It's a lot easier to support a single, slightly complex library than many smaller ones, IMO. But that's just me personally.
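
                  And here is the rough sketch of the one-file-many-resolutions idea mentioned above: because a JPEG XL file can be encoded progressively, a client only needs a prefix of the file to get a lower-resolution rendition, for example via an HTTP Range request, while downloading the whole file gives the full-quality image. The URL and byte budget below are placeholders, and turning the prefix into pixels still needs a decoder that accepts truncated input (djxl has an option for partial files; check your libjxl build):

                  ```python
                  import urllib.request

                  # Hypothetical URL and byte budget; both are placeholders.
                  URL = "https://example.com/hero-image.jxl"
                  PREVIEW_BYTES = 64 * 1024  # a small screen might stop after ~64 KiB

                  # Ask the server for only the first chunk of the progressive file.
                  req = urllib.request.Request(
                      URL, headers={"Range": f"bytes=0-{PREVIEW_BYTES - 1}"}
                  )
                  with urllib.request.urlopen(req) as resp:
                      prefix = resp.read()

                  with open("hero-preview.jxl", "wb") as f:
                      f.write(prefix)

                  # A decoder that tolerates truncated input can render a lower-resolution
                  # preview from this prefix; fetching the whole file from the same URL
                  # yields the full-quality image.
                  ```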

                  • #69
                    Originally posted by Quackdoc View Post
                    webp actually losses to mozjpeg anyways so no, most people probably looked at webp and told themselves, "this is terrible" webp is literally only good for specific lossless files. webp is a terrible format that should die.​​
                    Thanks for proving you are just a blind fanboy. I compress all images I serve with WebP (as long as the browser supports it) at a much lower quality setting than their JPEG counterparts, yet they are visually indistinguishable - except maybe when you zoom in as far as your software allows, but that's not real-world usage.

                    • #70
                      Originally posted by Artim View Post
                      Thanks for proving you are just a blind fanboy. I compress all images I serve with WebP (as long as the browser supports it), with much lower quality setting than their JPEG counterpart, yet they are visually indistinguishable - maybe except when you zoom in the maximum your software allows, but that's not real world usage.
                      If it's lower quality, then it's lower quality. You can easily check using metrics like ssimulacra2, which does a good job of representing perceptual quality. mozjpeg and cwebp at -m 6 score similarly on ssimu2 when compared at the same bpp, until the very low end where WebP wins - but then you lose features like progressive decoding and near-universal support, and it takes longer to decode.
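
                      For anyone who wants to run that comparison themselves, here is a minimal sketch of the bookkeeping: bits per pixel is just the compressed file size in bits divided by the pixel count, and the perceptual score comes from the ssimulacra2 tool that ships with libjxl (assumed here to be built and on PATH, and to print the score as the last token of its output; the file names are placeholders):

                      ```python
                      import subprocess
                      from pathlib import Path
                      from PIL import Image  # pip install pillow

                      def bits_per_pixel(encoded: Path, reference_png: Path) -> float:
                          # bpp = compressed size in bits / pixel count of the source image
                          width, height = Image.open(reference_png).size
                          return encoded.stat().st_size * 8 / (width * height)

                      def ssimulacra2_score(reference_png: Path, decoded_png: Path) -> float:
                          # The ssimulacra2 CLI prints a perceptual score (higher is better).
                          out = subprocess.run(
                              ["ssimulacra2", str(reference_png), str(decoded_png)],
                              capture_output=True, text=True, check=True,
                          )
                          return float(out.stdout.split()[-1])

                      # Placeholder files: the original, plus decoded PNGs of each encode.
                      ref = Path("original.png")
                      for name, encoded, decoded in [
                          ("mozjpeg", Path("test_mozjpeg.jpg"), Path("test_mozjpeg.png")),
                          ("webp", Path("test_webp.webp"), Path("test_webp.png")),
                      ]:
                          print(name,
                                f"bpp={bits_per_pixel(encoded, ref):.3f}",
                                f"ssimu2={ssimulacra2_score(ref, decoded):.2f}")
                      ```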
