Seriously, GPU instructions have always been much closer to RISC than CISC, especially on ATI/AMD GPUs. They're all single-clock, and there are relatively few instructions -- lots of opcodes, but those are mostly subtle variants of the same basic function (e.g. 6.02*10^23 different compare operations).
One could argue that VLIW RISC and CISC are both "complex" from a sufficiently abstract point of view but I don't think they are generally regarded as interchangeable. The transition really is from VLIW RISC to non-VLIW RISC.
I think it's fair to say that the instruction sets for both scalar and vector units can be considered RISC, just like the instruction set for the VLIW core, but the RISC vs CISC topic is almost as dangerous and open to debate as religion or coding standards.
The whole discussion is made more complicated because we used to talk about "vector operations" (e.g. the RGBA components of a pixel) being handled in a single vector instruction on 3xx-5xx GPUs, or by using 4 of the VLIW slots on a 6xx-Cayman GPU. With GCN and beyond, "vector" is used the other way around, referring to the 16 elements of the SIMD as the vector.
That's why the SIMD aspect seems new -- VLIW was visible to the programmer while SIMD was not, so VLIW was what got talked about the most. Now that VLIW is out of the picture, SIMD is the most visible thing, and we have to talk about it because a CU contains SIMDs *and* a scalar engine, so the natural terminology is "vector" for the SIMDs and "scalar" for the... um... scalar engine.
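If it helps, here's a purely schematic C sketch of that scalar-vs-vector split (nothing to do with the real ISA encoding, and the 16-lane width is just there to match one SIMD): a scalar op runs once per wavefront, a vector op runs once per lane, and a scalar operand gets broadcast when a vector op uses it.

/* Schematic only -- not real hardware behaviour or real GCN instructions. */
#include <stdio.h>

#define LANES 16   /* one SIMD is 16 lanes wide in this sketch */

int main(void)
{
    /* scalar register: one value shared by the whole wavefront,
     * e.g. a base address or loop counter */
    int s0 = 100;

    /* vector registers: one value per lane */
    int v0[LANES], v1[LANES];
    for (int lane = 0; lane < LANES; lane++)
        v0[lane] = lane;

    /* "scalar add": the scalar engine does this once for the wavefront */
    int s1 = s0 + 8;

    /* "vector add": the SIMD does this once per lane,
     * with the scalar operand broadcast to every lane */
    for (int lane = 0; lane < LANES; lane++)
        v1[lane] = v0[lane] + s1;

    for (int lane = 0; lane < LANES; lane++)
        printf("lane %2d: %d\n", lane, v1[lane]);
    return 0;
}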
What we call a SIMD used to work on a 16x4 or 16x5 array of data; now it works on a 1D vector of data.
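Another made-up sketch of the terminology shift, again just illustrative C: the arithmetic is identical, the only thing that changes is which axis a single "vector" instruction spans -- the 4 channels of one pixel (old usage / VLIW slots) versus one channel of many work-items, one per lane (GCN usage).

#include <stdio.h>

#define PIXELS   16   /* work-items in flight on one SIMD in this sketch */
#define CHANNELS 4    /* R, G, B, A */

int main(void)
{
    float a[PIXELS][CHANNELS], b[PIXELS][CHANNELS], out[PIXELS][CHANNELS];

    for (int p = 0; p < PIXELS; p++)
        for (int c = 0; c < CHANNELS; c++) {
            a[p][c] = (float)p;
            b[p][c] = (float)c;
        }

    /* Old-style "vector op": one instruction covers all 4 channels
     * of one pixel (the inner loop). */
    for (int p = 0; p < PIXELS; p++)
        for (int c = 0; c < CHANNELS; c++)
            out[p][c] = a[p][c] + b[p][c];

    /* GCN-style "vector op": one instruction covers one channel of every
     * work-item, one per SIMD lane (the inner loop); the shader loops
     * over channels instead. */
    for (int c = 0; c < CHANNELS; c++)
        for (int lane = 0; lane < PIXELS; lane++)
            out[lane][c] = a[lane][c] + b[lane][c];

    printf("out[3][2] = %g\n", out[3][2]);
    return 0;
}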
But it's still RISC