Intel Linux Graphics Developers No Longer Like LLVM
-
Originally posted by Tobu View Post
This feels like Unladen Swallow, another project that had high hopes and a team of Googlers working on it, only to be given up in disgust. It was meant to speed up the Python interpreter (CPython) with an LLVM-based JIT. The conclusion was that LLVM is too inflexible to perform well outside of ahead-of-time compilers.
-
Originally posted by the303 View Post
That makes sense.
My comments are, of course, biased by an anti-LLVM attitude.
I am an electrical engineer, so I have always been irritated by the idea of JIT/AOT instead of static compiling. It makes me want to puke because it is pointless and a waste of resources. The machines may be powerful enough that the end user does not notice the overhead, but at large scale (all the computers in the world) it will cause considerable energy loss, and for what? Just because a developer was too lazy to pre-compile his/her program for platform X, and would rather have every PC compile the same thing on the fly each and every time the program runs (in the case of JIT), or compile it once on the user's machine (in the case of AOT), as if there were a thousand platforms to target (and even that could be automated).
On the subject:
Well, if they also had to write special optimization passes (if that is even possible), they would not need LLVM at all, I guess; those passes are almost everything they would have to create anyway if they did not use LLVM.
Still, IMHO this is an inflexibility that LLVM should not have had if the project is to be worth something.
I mean, as far as I know Intel has an army of open-source developers compared to other companies, and they still find it not worthwhile to integrate LLVM. That is a bad sign.
There may be CPU types in the future that differ hugely from today's. Won't the LLVM IR that they like to refer to as "the universal IR" require an enormous amount of work (on the optimization passes or whatever) to make it optimal for those? Intel decided to create its own compiler instead of integrating LLVM after all; how can creating something from scratch be a better option than using such a "modular and great" platform as LLVM?
-
Originally posted by the303 View Post
He said that about general-purpose executables, referring to my post. You are right that the frequent changes in GPU architectures and the need to get the maximum out of the GPU mean shaders have to be JIT-compiled.
You are right that shaders work that way, LLVM or not. My statement was more general. People seem to advocate LLVM because it allows JIT usage, as if that were such a great and needed thing (although there are better alternatives).
You are also right about the one real advantage of JIT (it applies to AOT as well): processor-specific optimizations. As you said, one alternative is an automated build of per-CPU variants. There is a better way though: compiling all variants into one single binary.
Code execution then takes different paths by checking the CPU instruction-set extensions at runtime. (AFAIK glibc does that with strlen and SSE2, and x264 (the H.264 encoder) likewise checks the supported CPU instruction-set extensions at program launch and acts accordingly.) The increase in binary size if variants are generated for every function is debatable, though. But processor speeds aren't really increasing anymore, the small increases are expensive, and HDDs and RAM are relatively cheap. I would rather maximize processor performance without JIT, AOT and other overheads, and pay the cost in executable footprint, any day.
I would like a gcc flag, something like -minstructionsetext=SSE2,AVX,..., that generates such variants, instead of -march or LLVM's IR JIT. That is where all this manpower should have gone, instead of creating LLVM, which does not even have a genuine way to represent logical operations that could easily be mapped onto any processor's instruction set / architecture. I mean, this is the one area where LLVM should have shined, but instead I see it sucks horribly.
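To illustrate the runtime-dispatch idea above, here is a minimal C sketch. It assumes GCC's __builtin_cpu_init() / __builtin_cpu_supports() builtins and the target function attribute; the process_* functions and select_variant() are made-up names for illustration, not anything taken from glibc or x264.

```c
#include <stddef.h>

/* Hypothetical scalar and AVX2 variants of the same routine; in a real
 * project the bodies would contain the actual (possibly intrinsics-based)
 * implementations. */
static void process_scalar(float *data, size_t n)
{
    for (size_t i = 0; i < n; i++)
        data[i] *= 2.0f;
}

__attribute__((target("avx2")))            /* compiled as if with -mavx2 */
static void process_avx2(float *data, size_t n)
{
    for (size_t i = 0; i < n; i++)
        data[i] *= 2.0f;
}

/* Function pointer chosen once at startup, the way x264 picks its
 * optimized routines after probing the CPU at launch. */
static void (*process)(float *, size_t);

static void select_variant(void)
{
    __builtin_cpu_init();                  /* initialize GCC's CPU feature detection */
    if (__builtin_cpu_supports("avx2"))
        process = process_avx2;
    else
        process = process_scalar;
}

int main(void)
{
    float data[16] = {1, 2, 3, 4};
    select_variant();
    process(data, 16);                     /* dispatches to the chosen variant */
    return 0;
}
```

For what it's worth, GCC's target_clones function attribute (function multi-versioning) automates roughly this pattern from a single function definition, which is about as close as gcc currently gets to the flag wished for above.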
I never understood the other thing used to advertise LLVM, the "having a second compiler is good as a sanity check" argument. The C++ standard is out there, and if a compiler does not behave accordingly it will be obvious.
If code that is not supposed to work happens to work with compiler X, it will most likely behave the same with compiler Y (like accessing a stack variable that has already been destroyed, by returning its address). Were all these man-hours really worth it? Aside from this sanity-check idea, LLVM/Clang is basically the same thing as GCC, since the JIT should not be used at all when there is already a better option, as I said above. It's true that LLVM enabled things like emscripten, but JavaScript is another horror story for another time, one that is hopelessly being kept alive with asm.js and the like.
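To make that example concrete, here is a minimal sketch of the kind of code meant above (returning the address of a stack variable); it is undefined behaviour and may "work" by accident with any compiler:

```c
#include <stdio.h>

/* Undefined behaviour: 'local' stops existing when the function returns,
 * so the returned pointer is dangling. Both GCC and Clang warn about this
 * (-Wreturn-local-addr / -Wreturn-stack-address), yet the program can still
 * appear to "work" if nothing has overwritten the dead stack slot yet. */
static int *dangling(void)
{
    int local = 42;
    return &local;
}

int main(void)
{
    int *p = dangling();
    printf("%d\n", *p);   /* may print 42, print garbage, or crash */
    return 0;
}
```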
-
Originally posted by wizard69 View Post
If it is so horrible, why has Apple switched over to LLVM in Xcode?
In a nutshell: if you aren't willing to be part of the development process, you have no reason to be using LLVM.
-
Originally posted by e8hffff View Post
When someone describes their coding as table-flipping moments, you know they really have done their best to like it.
It would be nice to hear their specific gripes so others can be cautious about LLVM.
-
Originally posted by brent View Post
AMD has been working on LLVM backends for r600 and radeonsi hardware for a *long* time, since late 2011, with various contributors working on it. If we look at the current situation, it's unfortunately still not that good or usable: code generation quality is mediocre, there are many bugs, and there are some fundamental issues yet to be solved (performance, error handling). r600-class hardware support isn't even feature-complete. LLVM seems like an incredibly hard beast to tame, at least for typical GPU architectures and for use as part of a graphics API.
LLVM is working well for radeonsi.
-
I mean, you have build scripts for mesa in your distribution; couldn't you just clone your favorite llvm revision in that build script and compile it into mesa, and so get an updated radeon driver on previous versions of Fedora? Sure, that makes for a much longer build, but it should at least be easy...