The challenge here is that it's not a question of "code both and see which runs best"; the question is whether the benefits (from being able to leverage ongoing work by the LLVM community) will outweigh the costs (from replacing the current GPU-centric-ish IRs with an arguably CPU-centric IR plus GPU extensions and GPU-aware middleware) over time.
It's a very timely question, but even an initial implementation is only likely to demonstrate that LLVM IR can work "OK" with GPUs. The big argument in favor of this proposal is that CPUs and GPUs are becoming more alike over time. I hadn't really thought of GPU architecture in terms of AoS or SoA, so I'll probably have to read the proposal a few times to get those terms mapped onto SIMD and superscalar/VLIW.
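For anyone else who hasn't mapped those terms before: AoS (array of structures) and SoA (structure of arrays) are two layouts for the same data, and SoA is the one that lines a single field up contiguously so that lane i of a SIMD or GPU vector operation picks up element i. A minimal C sketch (the particle fields and the count N are just illustrative, not anything from the proposal):

    #include <stddef.h>

    #define N 1024

    /* AoS: each element's fields are interleaved in memory.
     * Cache-friendly when you touch all fields of one element at a time. */
    struct particle { float x, y, z, mass; };
    struct particle aos[N];

    /* SoA: each field is its own contiguous array.
     * SIMD/GPU-friendly: a vector load over mass[] reads one
     * contiguous stream, one element per lane. */
    struct particles {
        float x[N], y[N], z[N], mass[N];
    } soa;

    /* The same update written over the SoA layout vectorizes trivially:
     * every iteration is independent and accesses contiguous floats. */
    void scale_masses(struct particles *p, float k)
    {
        for (size_t i = 0; i < N; i++)
            p->mass[i] *= k;
    }

The same loop over the AoS layout would stride by sizeof(struct particle) between mass fields, which is the kind of access pattern a GPU-centric IR is built to avoid and a CPU-centric IR has to be taught about.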
Keith W summed the situation up pretty well:
So basically I think it's necessary to figure out what would
constitute evidence that LLVM is capable of doing the job, and make
getting to that point a priority.
If it can't be done, we'll find out quickly; if it can, then we can
stop debating whether or not it's possible.