It seems to me (being uneducated in the inner workings of the hardware, of course) that this is less like a program that just crashes itself and more like an arbitrary program that can hard-lock the kernel through some combination of syscalls. Or even worse, like a regular process that could hard-lock the entire CPU with some sequence of instructions. That would be total crap.
If you told me a program could send a series of GPU commands that would crap out its own rendering context, but in a way the kernel could recover from, I wouldn't be too terribly worried. Only indirect rendering would suffer a serious security flaw in that case, and then only if the GL library exposed such a weakness directly through the GLX API. However, if every process can access the DRM through a library without going through another process (which is what we have here), and some particular stream of commands completely hard-locks the GPU, then we have the situation where any process can hard-lock the GPU, by accident or by intent. That blows.
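To make the direct-access point concrete, here's a rough sketch (in C, against the generic DRM ioctl interface) of what "access the DRM through a library without going through another process" means: any process that can open the device node is talking to the driver itself. DRM_IOCTL_VERSION is a real generic ioctl; the submission ioctl named in the final comment is hypothetical, since the actual command-submission interfaces are driver-specific. Header path may vary by distro (<drm/drm.h> from kernel headers vs. <libdrm/drm.h>).

    /* Sketch: any process that can open the DRM node talks to the
     * hardware driver directly; no X server or other privileged
     * process sits in between. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <drm/drm.h>   /* or <libdrm/drm.h>, depending on distro */

    int main(void)
    {
        int fd = open("/dev/dri/card0", O_RDWR);
        if (fd < 0) {
            perror("open /dev/dri/card0");
            return 1;
        }

        /* DRM_IOCTL_VERSION is a real, generic DRM ioctl: ask the
         * driver its name to show we have a direct line to it. */
        char name[64];
        struct drm_version ver;
        memset(&ver, 0, sizeof(ver));
        ver.name = name;
        ver.name_len = sizeof(name) - 1;
        if (ioctl(fd, DRM_IOCTL_VERSION, &ver) == 0) {
            size_t n = ver.name_len < sizeof(name) - 1
                       ? ver.name_len : sizeof(name) - 1;
            name[n] = '\0';
            printf("talking straight to driver: %s\n", name);
        }

        /* The command-submission ioctl is driver-specific and the
         * name below is made up, but the point stands: the same fd
         * that answered above would also accept raw command buffers,
         *
         *     ioctl(fd, DRM_IOCTL_XXX_SUBMIT_CS, &cmdbuf);
         *
         * and if the kernel doesn't validate cmdbuf, a hostile or
         * buggy stream can wedge the GPU for everyone. */
        close(fd);
        return 0;
    }

Whether a given stream can wedge the hardware then depends entirely on what validation, if any, the kernel does on that buffer before passing it along, which is the next point.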
If it can't be fixed with the command stream checker in the DRM, then in all honesty this may be a very strong argument for graphics drivers in kernel space, or at least for pushing toward a higher-performance pure-indirect-rendering approach. If the only capability user processes have is to tell the graphics framework which high-level operations to perform, and only privileged code can actually construct command sequences, then the hardware security holes are not exploitable by user code. All of the arguments for keeping graphics code out of kernel space pretty much come down to the idea that having that code in user space is more secure and more stable; if that's blatantly not true, then the argument against in-kernel graphics drivers mostly evaporates. That's a big "if", of course, dependent on the (unknown to me) specifics of what exactly can cause these hard locks.
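For what a command stream checker might look like, here's a minimal sketch. The packet format is entirely invented for illustration (real formats differ per chip, and the opcodes and register range below are hypothetical), but the shape of the approach matches how kernel-side checkers generally work: walk the user-submitted buffer, reject unknown opcodes, and bounds-check every register the stream touches before the hardware sees a single word.

    /* Sketch of a kernel-side command stream checker.  Assumed
     * format: each command is one 32-bit header (opcode in the top
     * 8 bits, payload word count in the bottom 24) followed by its
     * payload words. */
    #include <stdint.h>
    #include <stddef.h>
    #include <stdbool.h>

    #define OP_NOP        0x00
    #define OP_REG_WRITE  0x01  /* payload: register offset, value */
    #define OP_DRAW       0x02  /* payload: vertex count */

    /* Registers user streams may touch; everything else is rejected.
     * The range here is a stand-in for a per-chip whitelist. */
    static bool reg_allowed(uint32_t reg)
    {
        return reg >= 0x2000 && reg < 0x3000;
    }

    /* Returns true if the stream is safe to hand to the hardware. */
    bool check_stream(const uint32_t *buf, size_t nwords)
    {
        size_t i = 0;
        while (i < nwords) {
            uint32_t hdr    = buf[i];
            uint32_t opcode = hdr >> 24;
            uint32_t count  = hdr & 0x00ffffff;

            if (count > nwords - i - 1)
                return false;       /* payload runs off the end */

            switch (opcode) {
            case OP_NOP:
                break;
            case OP_REG_WRITE:
                if (count != 2 || !reg_allowed(buf[i + 1]))
                    return false;   /* forbidden register */
                break;
            case OP_DRAW:
                if (count != 1)
                    return false;
                break;
            default:
                return false;       /* unknown opcode: reject, don't guess */
            }
            i += 1 + count;
        }
        return true;
    }

The limitation is exactly the "if" above: a whitelist like this stops streams that touch forbidden registers or malformed packets, but if the hard lock comes from a sequence of individually legal commands, the checker passes it anyway, and that's when moving command construction into privileged code starts looking attractive.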