I'm trying to get at least one of our developers up/down (depending on the developer) to XDC to talk about this as well. I think it's fair to say that we are leaning towards using LLVM IR for compute but staying with TGSI for graphics (or potentially GLSL IR if the community moves that way).
Obviously having two different IRs implies either some stacking or some duplicated work, so it is a good topic for discussion.
LLVM IR didn't seem like a great fit for graphics on GPUs with vector or VLIW shader hardware since so much of the workload was naturally 3- or 4-component vector operations, but for compute that isn't necessarily such an issue.
Great to know; I already proposed the talk. Alex is already coming, are you planning on sending someone else too?
Ha, yes, understatement of the year. I only meant to say that while we can likely expect further improvements to the code here, I don't think you can expect this developer to finish a full GPU implementation. That's going to take at LEAST another GSoC, and possibly more. I wasn't aware of everything that needed to be done; modifying Gallium drivers to accept LLVM directly seems like a good idea, but that alone is probably multiple GSoCs' worth of work.
And wasn't the developers' consensus largely that they weren't that hot on LLVM? IIRC, the least controversial option for dropping TGSI was to replace it with the GLSL IR, and some devs in particular were pretty anti-LLVM. I have no idea whether that supports the sort of operations compute would need.
Well, compute != graphics. The developer consensus on LLVM is that it's not well-suited for shaders, i.e. graphics. Compute is a different matter.
If we're lucky, we can get Gallium driver developers to add LLVM IR support to their drivers, if someone comes up with a good way to code-generate for GPUs using LLVM. But I would guess that the LunarGLASS developers have already figured that part out. Or at least, I think they've figured out a way to code generate for targets that require structured control flow. I think they said a few months ago that they had gotten it working when targeting Mesa IR, which requires structured control flow like GPUs. So maybe some of their work can be adapted to Clover.
The API is complete: any application can now use Clover and will not fail due to missing or unimplemented symbols.
The implementation itself is also complete: there are no stubs, and every API entry point actually does something.
The interesting part: Clover can launch native kernels (Phoronix covered that two months ago) as well as compiled kernels, so it is really feature-complete.
The only things missing are the built-in functions. This means that even though we can create memory objects, images, events, command queues and all the other OpenCL objects, and can compile and launch OpenCL C kernels, those kernels cannot yet use functions like clamp(), smoothstep(), etc.
The most complex built-ins are already implemented, though, such as image reading and writing, and barrier() (a built-in that will be described in detail in the documentation, as it uses things like POSIX contexts; see man setcontext).
I'll write the documentation in the following days (I've already begun). It will be in Doxygen format, and I was pleased yesterday to see that Doxygen now produces remarkably good-looking documentation compared with what it produced one or two years ago. Then I'll rework parts of the image functions (they are currently tied to the x86 architecture through SSE2; I will reimplement them in a more architecture-neutral way).
The documentation will be available in the source code and also on my people.freedesktop.org page, so anybody will be able to view it.
Clover implements the complete OpenCL API and can execute OpenCL programs, but at the moment all of it runs on the CPU. Getting it to actually run on GPUs will be a huge amount of work, and nobody really knows yet how it should be done.
And this is due to a few factors:
- the situation regarding IRs in Gallium is muddy: Mesa IR seems to be on the way out, with TGSI being generated directly, but there are attempts to use LLVM too, and GLSL IR is also in the mix
- TGSI is good for graphics (shaders), but not for compute workloads like OpenCL, which is why people are proposing LLVM
- LLVM sucks at graphics
So we're likely looking at (at least) two IRs coexisting: one for shaders, one for OpenCL stuff?
Just to say that the documentation I'm writing is available at http://people.freedesktop.org/~steck...ver/index.html . Sorry for the bad English (it's already a bit rough in short messages, so you can imagine whole documentation pages); comments and corrections are welcome.
This documentation will be expanded over the following days. I recommend cloning Clover's git repository (http://cgit.freedesktop.org/~steckdenis/clover) and building the documentation yourself, as freedesktop.org uses very outdated versions of Doxygen and Dot. With a modern distribution you'll get much nicer output (smoothed dot graphs, a nicer page layout, a logo, etc.).