Google Comes Up With Its Own Image Format: WebP

  • #16
    The moment someone suggests any type of change, it gets shot down. This is why Linux is going nowhere: every time someone tries to innovate, everyone else complains, but when someone makes a decision that makes little sense, nobody says a word.

    • #17
      Originally posted by karasu View Post
      Because jpg2000 is patent encumbered.
      ...as is WebM. Google never proved that VP8/WebM is not encumbered by patents, but other people proved that VP8/WebM uses the EXACT same technologies as H.264. And the MPEG LA holds a huge bunch of patents on H.264.

      Originally posted by NoEffex View Post
      The moment someone suggests any type of change, it gets shot down.
      If you suggest changes, please prove that they make sense. Google's "proof" is downloading a million COMPRESSED still images from the web, compressing them again with VP8, and then claiming the result was smaller.

      Well, it HAS to be.

      On the other hand the x264 devs delivered some real results (http://x264dev.multimedia.cx/?p=541), and it seems VP8 cannot even beat JPEG at the moment - a 20-year-old codec.

      Sure, Google promises to deliver improvements and new features like transparency, but they also promised to improve WebM, and nothing has happened in the last four weeks. I will believe them when I see results.

      A new image container without lossless compression (you know, there ARE people who use it) and without animations is just plain stupid anyway. The gain over JPEG and GIF is too small.

      Originally posted by NoEffex View Post
      This is why Linux is going nowhere.
      This has nothing to do with Linux, nice try.

      • #18
        Originally posted by sturmflut View Post
        other people proved that VP8/WebM uses the EXACT same Technologies as H.264.
        Not exactly, no.
        A lot of what is in H.264 is perfectly free. The vast majority of it, in fact, is made up of techniques that NOBODY has a claim to.

        If you suggest changes, please prove that they make sense. Google's "proof" is downloading a million COMPRESSED still images from the web, compressing them again with VP8, and then claiming the result was smaller.

        Well, it HAS to be.
        Actually, no again, that is not how compression works.
        The "recompression" actually begins with a DECOMPRESSION. The RAW image (i.e. AFTER decompression) is then compressed with the new system. The second compression doesn't gain anything from the first one -- they are NOT cumulative. In some cases a "zip of a zip" might come out smaller than the first zip, but that is not what happens here, since an image is always re-compressed from the raw pixels.

        Now here's the funny part of this;
        Lossy compression can actually have some seriously bad effects that crop up with multiple recompressions -- especially if you change the encoding scheme. You know how a photocopy of a photocopy degrades in appearance? Well, it degrades even MORE when you change the encoding scheme... like taking a PICTURE of a PHOTOCOPY, getting a print, and photocopying it. That would end up REAL ugly.

        If you are starting with a degraded image and want to keep it from getting SIGNIFICANTLY WORSE, you need to compress VERY VERY LIGHTLY! Again, if you're changing encoding schemes, the effects become more pronounced, which means the files get BIGGER STILL!!!
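
        That generation-loss effect can be mimicked with a toy quantizer (a sketch only -- real codecs are far more complex): re-quantizing on the same grid changes nothing, but switching to a different grid introduces fresh error on top of the old one.

```python
def quantize(samples, step):
    # Round each sample to the nearest multiple of `step` -- a
    # stand-in for the lossy quantization stage of an image codec.
    return [round(s / step) * step for s in samples]

def max_error(a, b):
    return max(abs(x - y) for x, y in zip(a, b))

original = list(range(100))

# Re-encoding with the SAME scheme: the second pass is a no-op.
gen1 = quantize(original, 7)
assert quantize(gen1, 7) == gen1

# Switching to a DIFFERENT scheme: the combined error exceeds
# what either scheme would have caused on its own.
mixed = quantize(gen1, 10)
print(max_error(original, gen1), max_error(original, mixed))
```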

        Here's the worst part of it... that x264 page is STARTING with a DEGRADED IMAGE, and subjecting it to three encoding schemes, two of which are the same as the one that initially degraded it!


        And then, of course.... this guy cheated with jpeg by applying jpegcrush! Sorry, but NO -- that is not allowed! You need to compress that jpeg more to get the file size down, not apply cheats to one but not the other! Similar cheats are equally possible for VPX, but this guy isn't offering that advantage. You want fair results? Perform a fair test!

        • #19
          Why don't they use PGF? It has very nice features, such as better compression and faster encode/decode speed than JPEG 2000.
          (This is NOT meant as an advertisement for it.)

          http://www.libpgf.org/

          • #20
            Originally posted by sturmflut View Post
            This has nothing to do with Linux, nice try.
            By Linux I meant open source community.

            • #21
              Originally posted by NoEffex View Post
              By Linux I meant open source community.
              Though Google does tend to play ok with open source, they aren't exactly the epitome of open source, nor should their behavior be considered particularly relevant within the context of open source.

              It is nice that they are into this brainstorming and have come out with WebM for video, BUT, unlike with video, there wasn't/isn't anywhere near as great a need for more image formats.

              And quite frankly, this didn't come out with nearly as much media hype as WebM did, so I tend to interpret this more along the lines of... "hey cool, you guys know there's this other neat thing that we can do with VPX."

              • #22
                Originally posted by droidhacker View Post
                Though google does tend to play ok with open source, they aren't exactly the epitome of open source, nor should their behavior be considered particularly relevant within the context of open source.
                Open source is in reference to open standards as well, which Google wants to be a part of (probably tired of paying license fees, tbh).

                The problem is that people like Apple refuse to implement open codecs (Theora) due to "unknown" patents. Google wants to eliminate this by putting its own patents on the technology and then releasing said patents.

                • #23
                  Originally posted by droidhacker View Post
                  And then, of course.... this guy cheated with jpeg by applying jpegcrush! Sorry, but NO -- that is not allowed!
                  Why not? The ultimate goal was to reduce web traffic by making our pictures smaller.
                  If jpegs can be saved more efficiently without reducing picture quality, then that's the way to go. It's easier to build an open source jpeg exporter and get it into popular image editors than it is to get support for a new picture format altogether.

                  When comparing formats, you must use the best encoder available for each format. They obviously used the best available for WebP, and if they released results before applying all needed optimizations, they have only themselves to blame.

                  Originally posted by droidhacker View Post
                  Here's the worst part of it... that x264 page is STARTING with a DEGRADED IMAGE, and subjecting it to three encoding schemes, two of which are the same as the one that initially degraded it!
                  Are you sure their source image isn't lossless? I have trouble spotting any artefacts in it. The blog post makes it sound like the image came straight from the camera, before being exported to a lossy format.

                  • #24
                    Originally posted by NoEffex View Post
                    By Linux I meant open source community.
                    This has gotten shot down because it's not winning enough on technical merits. And it will fail because non-open software will refuse to implement support.

                    • #25
                      Originally posted by Micket View Post
                      This has gotten shot down because it's not winning enough on technical merits. And it will fail because non-open software will refuse to implement support.
                      If Chrome, Firefox and WebKit browsers (Safari!) implement it, then it will probably succeed regardless of what Microsoft does. And that's due to Macs having a big market share, and the Windows users who use browsers other than IE.

                      • #26
                        Originally posted by RealNC View Post
                        If Chrome, Firefox and WebKit browsers (Safari!) implement it, then it will probably succeed regardless of what Microsoft does. And that's due to Macs having a big market share, and the Windows users who use browsers other than IE.
                        I very much doubt it. Shit like IE6 is still around and haunting the web.
                        And I didn't specifically say Microsoft; I doubt Adobe will be quick to pick this up either, and where do people prepare images for the web?
                        What about hardware, i.e. cameras?

                        Who is going to disregard over half of the visitors?
                        If it had some technical merits to boast about, sure, but this is very mediocre, as many have pointed out already. It's just not that good. It smooths images out a lot, and it's a lot more complicated to decode, which is an actual concern for weaker devices.

                        Ogg has support here and there, typically in all the open source software you find, and can hardly be called a success so far.

                        • #27
                          Originally posted by Micket View Post
                          I very much doubt it. Shit like IE6 is still around and haunting the web.
                          Tons of websites already serve up different pages based on what browser is viewing them. It wouldn't be difficult to serve WebP images to some browsers and JPEG to the rest. Is it worth the effort? Probably not for the desktop, but if it's significantly faster to download and render on Android, that could be enough to drive adoption.
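
                          For what it's worth, the serving side is simple. A minimal sketch (hypothetical helper, not any particular framework's API): modern browsers advertise image-format support in the HTTP Accept header, which avoids User-Agent sniffing entirely.

```python
def pick_image_format(accept_header: str) -> str:
    """Return "webp" if the client advertises WebP support in its
    Accept header, otherwise fall back to "jpeg". Simplified: a real
    server would also honor the q= quality weights."""
    accepted = [part.split(";")[0].strip() for part in accept_header.split(",")]
    return "webp" if "image/webp" in accepted else "jpeg"

print(pick_image_format("image/webp,image/apng,image/*;q=0.8"))  # webp
print(pick_image_format("image/png,image/*;q=0.8"))              # jpeg
```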

                          • #28
                            It's not faster; the x264 dev estimated WebP is about 3x as intensive to decode as JPEG.

                            • #29
                              Originally posted by smitty3268 View Post
                              Tons of websites already serve up different pages based on what browser is viewing them. It wouldn't be difficult to serve WebP images to some browsers and JPEG to the rest. Is it worth the effort? Probably not for the desktop, but if it's significantly faster to download and render on Android, that could be enough to drive adoption.
                              But it is NOT significantly faster to download, and especially not to render.
                              It might be marginally faster to download.


                              Everyone is speaking like JPEG is really bad, but I don't see it.
                              Lack of an alpha channel, maybe? That's about it.

                              Someone also mentioned PGF in this thread, and I desperately searched for some examples, but I just couldn't find any. I managed to download a test set of images, so I compiled the library myself and decoded the images. The results were pretty shitty; I compared by encoding the originals to an equal size and visually comparing the results. PGF blurred out some details but was otherwise roughly the same. JPG clearly won.

                              I never found JPEG 2000 to be particularly impressive either. JPG is simply very good. Very few people care about transparency, as it's typically photographs.

                              • #30
                                Originally posted by Micket View Post
                                I never found JPEG 2000 to be particularly impressive either. JPG is simply very good. Very few people care about transparency, as it's typically photographs.
                                JP2 allows higher bit depths; it's not just transparency.

                                Compare the file size of a 48bpp TIFF with the same image saved as a JP2. That is why JP2 is interesting.
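
                                To put rough numbers on that (a hypothetical 12-megapixel example): at 48 bpp every pixel costs 6 bytes uncompressed, so the raw payload doubles compared to ordinary 24 bpp RGB, which is exactly where a format with real high-bit-depth compression pays off.

```python
def uncompressed_bytes(width, height, bits_per_pixel):
    # Raw pixel payload only; a real TIFF adds headers and metadata.
    return width * height * bits_per_pixel // 8

# A hypothetical 4000x3000 (12-megapixel) photo.
size_24bpp = uncompressed_bytes(4000, 3000, 24)  # 8 bits per RGB channel
size_48bpp = uncompressed_bytes(4000, 3000, 48)  # 16 bits per RGB channel

print(size_24bpp, size_48bpp)  # 36000000 72000000
```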

                                Anyway, I think the obvious point everyone missed is that someone at Google needs to justify their continued employment, and they are doing so with pointless crap that doesn't really fill a niche but impresses their boss.
