Aside from the engineering challenge of porting the code to a stricter compiler like GCC 4.x (let alone a less popular compiler like Intel's ICC, AMD's Open64, or PathScale's EKOPath), the other question is: would it provide a measurable performance benefit?
The answer really depends on how CPU-limited most 3D rendering is within fglrx. From the benchmarks I've seen, fglrx is rather heavily GPU-limited in most cases, which is what it should be. Don't get me wrong, it eats CPU intensively while rendering -- but it's not like the code is so egregiously inefficient that the GPU sits there waiting for a command while the CPU can't chew through the code fast enough to feed it. If that were happening, you could detect it by plugging in a faster GPU and seeing little or no gain: if a more capable GPU doesn't provide a corresponding increase in performance, then the workload isn't GPU-bound, so it must be either memory-bound or CPU-bound. But I've seen enough fglrx benchmarks on Phoronix that it's pretty clear to me that bigger card == better FPS.
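If you want to check that kind of thing yourself, here's a rough sketch (my own, not anything from AMD) of the complementary measurement: compare the process's CPU time against wall-clock time over a batch of frames. The draw_frame() below is a hypothetical placeholder for real GL draw calls plus a buffer swap; a cpu/wall ratio near 100% suggests you're CPU-bound, while a ratio well below that suggests the process is mostly waiting on the GPU.

```c
/* Sketch: is a render loop CPU-bound or GPU-bound?
 * Compare process CPU time to wall-clock time over N frames.
 * draw_frame() is a placeholder -- in a real test it would issue
 * GL commands and swap buffers.
 */
#include <stdio.h>
#include <time.h>

static void draw_frame(void)
{
    /* placeholder for real rendering work */
}

static double elapsed(struct timespec a, struct timespec b)
{
    return (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9;
}

int main(void)
{
    const int frames = 1000;
    struct timespec w0, w1, c0, c1;

    clock_gettime(CLOCK_MONOTONIC, &w0);
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &c0);

    for (int i = 0; i < frames; i++)
        draw_frame();

    clock_gettime(CLOCK_MONOTONIC, &w1);
    clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &c1);

    double wall = elapsed(w0, w1), cpu = elapsed(c0, c1);
    printf("wall: %.3fs  cpu: %.3fs  cpu/wall: %.0f%%\n",
           wall, cpu, 100.0 * cpu / wall);
    /* near 100%: likely CPU-bound; well below: likely GPU-bound */
    return 0;
}
```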
PathScale EKOPath, for its part, doesn't seem to have anything to do with a GPU; it's just a very efficient C/C++ compiler for the CPU. So if fglrx isn't CPU-bound, then increasing the efficiency of the parts of fglrx that run on the CPU is not going to result in a noticeable performance increase -- especially with less-capable graphics cards, where more than likely the CPU will sit there waiting for the GPU to finish processing, rather than the reverse.
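To put rough numbers on why a better compiler alone can't buy much here, a quick Amdahl's-law sketch (the 20% CPU share and 30% compiler speedup are made-up illustrative figures, not fglrx measurements):

```c
/* Back-of-envelope Amdahl's law: if only cpu_fraction of a frame is
 * CPU work, and a better compiler speeds that part up by `speedup`x,
 * the overall frame-time win is bounded accordingly.
 */
#include <stdio.h>

int main(void)
{
    double cpu_fraction = 0.20; /* assume 20% of frame time is CPU work */
    double speedup = 1.3;       /* assume compiled code gets 30% faster */

    double overall = 1.0 / ((1.0 - cpu_fraction) + cpu_fraction / speedup);
    printf("overall speedup: %.2fx (~%.1f%% faster frames)\n",
           overall, (overall - 1.0) * 100.0);
    return 0;
}
```

With those assumed numbers you get only about a 5% overall improvement, which is why the CPU-bound question dominates the compiler question.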
Still, it's good information. I find it intriguing, but not surprising, that they use GCC 3.2. And I don't think there will be a whole lot of pressure to use something different.
The Gallium3D drivers, on the other hand, tear through CPU like nobody's business. Reducing the CPU-boundedness of the open source graphics stack would be a huge win.