AFAIK, while ETC2 must be supported by the OpenGL implementation, most major desktop graphics vendors implement it in software. (I know for a fact that nVidia does it this way and plans to continue doing so on future hardware as well.) This lets them be "OpenGL compliant" without actually making ETC2 a viable choice.
If we're going to wait for a texture compression scheme to become widespread and commonly implemented in hardware, we might as well wait for ASTC; from what I know, it's better than ETC2.

This quote from the article is wrong:

Graphics texture compression reduces memory usage and avoids congesting the bus, thereby trying to avoid a performance bottleneck.

It should say:

Hardware-accelerated graphics texture compression reduces memory usage and avoids congesting the bus, thereby trying to avoid a performance bottleneck. When implemented in software, all it does is add another decoding step that the OpenGL driver must perform, wasting memory and doing nothing at all for bus congestion. This is similar to the reason you should always use GL_RGBA instead of GL_RGB, even when you're not using the alpha channel.
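To put rough numbers on the memory claim: here's a back-of-envelope sketch using the ETC2 block sizes from the format spec (8 bytes per 4x4 pixel block for RGB ETC2, 16 bytes for RGBA ETC2_EAC) and a hypothetical 1024x1024 texture. These savings only materialize in VRAM and on the bus when the GPU samples the compressed blocks directly; a driver that decodes in software has to expand the data back to full size anyway.

```python
# Rough comparison of GPU memory for a 1024x1024 texture stored
# uncompressed (RGBA8, 4 bytes/pixel) versus compressed with ETC2.
# ETC2 block sizes per the spec: 8 bytes per 4x4 block (RGB),
# 16 bytes per 4x4 block (RGBA ETC2_EAC).

def texture_bytes_rgba8(width, height):
    """Uncompressed RGBA8: 4 bytes per pixel."""
    return width * height * 4

def texture_bytes_etc2(width, height, bytes_per_block=8):
    """ETC2: fixed-size 4x4 blocks; dimensions round up to block multiples."""
    blocks_x = (width + 3) // 4
    blocks_y = (height + 3) // 4
    return blocks_x * blocks_y * bytes_per_block

w, h = 1024, 1024  # hypothetical texture size for illustration
uncompressed = texture_bytes_rgba8(w, h)    # 4 MiB
etc2_rgb = texture_bytes_etc2(w, h)         # 0.5 MiB, 8x smaller
etc2_rgba = texture_bytes_etc2(w, h, 16)    # 1 MiB, 4x smaller

print(uncompressed, etc2_rgb, etc2_rgba)
```

With hardware support, that 8x (or 4x with alpha) reduction applies both at rest and on every texture fetch; with a software fallback, the full uncompressed size travels over the bus and sits in memory regardless.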