Updated Basis Universal Yields High Quality Compression, 3~4x Smaller Than JPEG/PNG

  • Updated Basis Universal Yields High Quality Compression, 3~4x Smaller Than JPEG/PNG

    Phoronix: Updated Basis Universal Yields High Quality Compression, 3~4x Smaller Than JPEG/PNG

    For those wondering what the buzz was about earlier this month when there was word of a high quality GPU compression codec going open-source, details on that have now been revealed...


  • #2
    So, the point is, from what I get from the linked article, that storing images in this format on the GPU uses 3-4x less memory than storing a JPG/PNG on the GPU. So it's not about compressing images in general, but about storage and usage in GPU memory.



    • #3
      3-4 times smaller than a decompressed jpeg/png, way way bigger than the source jpeg in most circumstances. That's some higher level BS marketing.

      This seems to be aimed at textures-like use cases, not your standard webpage.



      • #4
        That announcement is confusing, to say the least. I am not a native English speaker, but I still think the fault is not on my side here. He first writes this:

        "The original Basis Universal codec created images that were 6-8 times smaller than JPEG on the GPU while maintaining a similar storage size."

        and then writes this:

        "Today we release a high quality Basis Universal codec that utilizes the highest quality formats modern GPUs support, finally bringing the web up to modern GPU texture standards—with cross platform support. The textures are larger in storage size and GPU compressed size, but are still 3-4 times smaller than sending a JPEG or PNG file to be processed on the GPU, and can transcode to a lower quality format for older GPUs."

        So the new, updated codec provides only 3-4 times smaller size than a JPEG or PNG file on the GPU, while the original codec produced 6-8 times smaller size? So they made it less effective? Does not compute; could someone clarify what he meant?



        • #5
          Their PR statement could have explained to laymen like us what all of this means. I guess it's about saving memory bandwidth in the end while providing higher quality. The compression/decompression also seems to be faster than traditional methods. But since this would be beneficial for desktop rendering too, I don't quite get why they focus on mobile use cases in particular. Sure, the gains are more meaningful there, but as the technology is broader in scope I'd like to see it used in AAA games on the desktop as well. They could have mentioned this, too.



          • #6
            There are different versions: a "basic" quality one which is 6-8 times smaller, and a "high" quality one that does not replace the previous one.

            Quality is never quantified, though, so it's all still rather vague.



            • #7
              Before JPG/PNG textures (on WebGL or any platform, including desktop GL) are uploaded to the GPU, the images are always decompressed first. This is because PNG and JPG compression are simply not supported by GPUs (nor by the APIs); JPEG/PNG is only used for storage. WebGL adopted those formats because they were the only formats browsers supported at the time.
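              As a rough TypeScript/WebGL sketch of that traditional path (assuming an already-decoded HTMLImageElement; the helper name is mine, not anything from the Basis release):

              ```typescript
              // Traditional WebGL path: the browser first decodes the JPEG/PNG to raw
              // RGBA pixels, and only those uncompressed pixels ever reach the GPU.
              function uploadDecodedImage(gl: WebGLRenderingContext,
                                          image: HTMLImageElement): WebGLTexture {
                const tex = gl.createTexture()!;
                gl.bindTexture(gl.TEXTURE_2D, tex);
                // `image` is already decoded at this point; this call pushes roughly
                // 4 bytes per pixel of uncompressed data into GPU memory.
                gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, image);
                gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
                return tex;
              }
              ```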

              What GPUs do support is a small variety of compression schemes that are optimized for sampling on GPUs. These are all block-based compression schemes, which allow efficient partial on-the-fly decompression (random access based on XY coordinates). JPEG and PNG do not allow that; you have to decompress the entire image before you can sample even a few pixels. So PNG/JPEG are fundamentally incompatible with GPUs.
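              To illustrate that random-access property with a rough sketch (DXT1/BC1 as the example, which packs each 4x4 pixel block into 8 bytes; the function is purely illustrative):

              ```typescript
              // In a block-compressed format like DXT1/BC1, the byte offset of the data
              // needed to decode pixel (x, y) can be computed directly; nothing before it
              // has to be decompressed.
              function dxt1BlockOffset(x: number, y: number, widthInPixels: number): number {
                const BYTES_PER_BLOCK = 8;                          // DXT1: 4x4 pixels in 8 bytes
                const blocksPerRow = Math.ceil(widthInPixels / 4);  // blocks are 4 pixels wide
                const blockX = Math.floor(x / 4);
                const blockY = Math.floor(y / 4);
                return (blockY * blocksPerRow + blockX) * BYTES_PER_BLOCK;
              }
              // JPEG offers no such mapping: entropy coding means the stream has to be
              // decoded sequentially before arbitrary pixels can be sampled.
              ```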

              Binomial's Basis is a pre-compressed intermediate image/texture format that is not based on PNG or JPEG but on block-based compression, like most GPU compression schemes (ETC/S3TC/PVRTC/ASTC, etc.), which allows browsers to support it as an image format for WebGL. The browser only has to transcode (i.e. translate, without re-compressing) from Basis to ETC, or DXT, or whatever, which is really fast and prevents quality loss due to re-compression (as would occur with JPEGs).

              And on top of that, Basis applies some further compression to shrink the already-compressed texture data (kinda like how RAR-ing a .zip can sometimes produce a still smaller file).
              So yes, the claim is correct. Compression ratios are always relative to the uncompressed image size. JPEG and PNG were never uploaded directly while still compressed*, so using Basis does reduce GPU memory usage (and improve texture upload speed). And because Basis files are doubly compressed, they can be smaller than JPG and PNG files, and thus also save download bandwidth.
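              For contrast with the decode-and-upload path above, here is a rough sketch of uploading already-GPU-compressed data (e.g. Basis transcoded to DXT1) in WebGL; `data` is assumed to hold the transcoder's output, and the helper name is made up:

              ```typescript
              // Upload pre-compressed (DXT1/S3TC) data directly: no decode to raw RGBA,
              // and the GPU keeps the texture compressed in memory (~4 bits per pixel).
              function uploadDxt1(gl: WebGLRenderingContext, data: Uint8Array,
                                  width: number, height: number): WebGLTexture | null {
                const ext = gl.getExtension('WEBGL_compressed_texture_s3tc');
                if (!ext) return null; // no S3TC; a transcoder would target ETC/ASTC/etc. instead
                const tex = gl.createTexture()!;
                gl.bindTexture(gl.TEXTURE_2D, tex);
                gl.compressedTexImage2D(gl.TEXTURE_2D, 0, ext.COMPRESSED_RGB_S3TC_DXT1_EXT,
                                        width, height, 0, data);
                gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
                return tex;
              }
              ```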

              Do keep in mind that Basis compression will be lossy compared to PNG, but it likely offers better quality than a JPEG of similar file size (I think that's part of the claim).

              The "6 to 8 times smaller" figure simply comes from the block compression schemes. For DXT1, the compression ratio is 6:1, at worst is DXT5 4:1, and its easy to get the ~8:1 figure if you properly encode (DXT is pretty inefficient) and further compress the compressed data.

              I guess the PR is a bit fuzzy, since it's mostly aimed at browser and (game engine) toolkit developers.

              * Technically, some browsers may have transcoded JPEG to DXT, but most do not AFAIK.
              Last edited by Remdul; 21 March 2020, 12:40 PM.



              • #8
                Thanks Remdul for the great explanation.



                • #9
                  I am not exactly sure why you need the graphics card to decode a single image in 2020. Thousands of images in a very short time, like a movie or a game, sure, but individual images? I will keep an open mind, but so far it feels like a solution looking for a problem.



                  • #10
                    Originally posted by TemplarGR
                    So the new, updated codec provides only 3-4 times smaller size than a JPEG or PNG file on the GPU, while the original codec produced 6-8 times smaller size? So they made it less effective?
                    My understanding is that they are now offering a high quality version, so it is bigger but looks better.

