Code has already been written to accelerate OpenGL in QEMU with x86 software on both the guest and host, followed by support for an ARM guest on an x86 host, and then a library was also written for translating OpenGL ES calls to desktop OpenGL.
This Ubuntu-developed OpenGL acceleration method for QEMU works by providing a fake OpenGL ES library in the guest that implements the EGL, OpenGL ES 1.1, and OpenGL ES 2.0 APIs. This fake library passes the calls on to a kernel module via iomem. Some hacked-up QEMU code then reads these OpenGL ES calls from registers and copies the buffers from guest user-space memory to host user-space. This component, though, is slated to be rewritten so that it's much cleaner, which would also be needed for potential upstream QEMU acceptance.
There's also another library in development that translates the EGL / GLES 1.1 / GLES 2.0 calls to GLX, Windows GL, and Apple GL. This part does not depend upon QEMU, only upon a host with OpenGL 2.1+ support. It's working with the proprietary NVIDIA/AMD drivers and the Intel classic Mesa driver, but not with many of the other Mesa / Gallium3D drivers.
This is roughly the solution Canonical is after, but they haven't explored what Red Hat may be doing for OpenGL acceleration with KVM and SPICE. It would also be interesting to leverage a virtual Gallium3D driver, as VMware does with its virtualization stack, but that's not a target for these Ubuntu / Linaro developers.
More details in the UDS notes.