Intel Linux Graphics Developers No Longer Like LLVM
-
Originally posted by brent:
Sorry, but it's not that simple, unfortunately. They are not a simple subset, but very different and complex beasts altogether. E.g. GPUs inherently use predicated SIMD (and/or VLIW, as in R600) processing and have various distinct memory spaces, while CPUs typically have a single memory space and use SISD processing. Please read the radeonsi ISA manual and see if you can still claim that it's a subset of a general-purpose CPU architecture.
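The predicated SIMD execution brent mentions can be illustrated with a hypothetical Python sketch (not real GPU code, and the function name is made up for illustration): instead of branching, every lane evaluates both sides of an if/else, and a per-lane mask selects which result survives.

```python
# Hypothetical illustration of predicated SIMD: the hardware has no
# per-lane branch, so both sides of the "if" run for ALL lanes and a
# per-lane predicate mask merges the results.

def predicated_select(values):
    # per-lane condition: is the value negative?
    mask = [v < 0 for v in values]
    # "then" side, computed for every lane regardless of the mask
    then_side = [-v for v in values]
    # "else" side, also computed for every lane
    else_side = [v * 2 for v in values]
    # the predicate decides, lane by lane, which result is kept
    return [t if m else e for m, t, e in zip(mask, then_side, else_side)]

print(predicated_select([-3, 1, -2, 4]))  # [3, 2, 2, 8]
```

This is why divergent branches are expensive on GPUs: both paths cost execution time even for lanes whose results are thrown away, which is quite unlike the single-flow SISD model a general-purpose compiler usually assumes.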
My comments are of course biased by an anti-LLVM attitude.
I am an electrical engineer, so I have always been irritated by the idea of JIT/AOT instead of static compiling. It makes me want to puke because it is pointless and a waste of resources. The machines may be powerful enough for the end user not to notice the overhead, but at large scale (all the computers in the world) it causes considerable energy loss, and for what? Just because a developer was too lazy to pre-compile his/her program for platform X, and would rather have every PC compile the same thing on the fly: every time the program runs in the case of JIT, and once per machine in the case of AOT. As if there were a thousand platforms to target (and even that could be automated).
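The compile-once versus compile-every-run distinction above can be sketched in plain Python (a hypothetical illustration, not real driver code; `compile_program` is a stand-in for an expensive compilation step):

```python
# Hypothetical sketch of the JIT-vs-AOT cost model: a JIT-style runner
# recompiles the same program on every run, while an AOT/cache-style
# runner compiles once and reuses the artifact.

compile_count = 0

def compile_program(source):
    """Stand-in for an expensive compilation step."""
    global compile_count
    compile_count += 1
    return compile(source, "<demo>", "exec")

def run_jit(source):
    # pays the compile cost on every single invocation
    exec(compile_program(source), {})

_cache = {}

def run_cached(source):
    # pays the compile cost at most once per distinct program
    if source not in _cache:
        _cache[source] = compile_program(source)
    exec(_cache[source], {})

prog = "x = 1 + 1"
for _ in range(3):
    run_jit(prog)
jit_compiles = compile_count  # 3 compiles for 3 runs

compile_count = 0
for _ in range(3):
    run_cached(prog)          # only the first run compiles
print(jit_compiles, compile_count)  # 3 1
```

In practice the line is blurrier: a JIT that caches its output across runs (as many shader compilers do) behaves like the cached case after the first run.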
On the subject:
Well, if they also had to write special optimization passes (if that is even possible), they would not need to use LLVM at all, I guess; that is almost everything they would have needed to create without LLVM anyway.
Still, IMHO this is an inflexibility that LLVM should not have if the project is to be worth something. I mean, as far as I know Intel has an army of open-source developers compared to other companies, and it still finds it not worthwhile to integrate LLVM. That is a bad sign.
There may be CPU types in the future that differ hugely from today's. Won't the IR of LLVM, which they like to refer to as "the Universal IR", require an enormous amount of work (on the optimization passes or whatever) to make it optimal for those? Intel decided to create its own instead of integrating LLVM after all; how can creating something from scratch be a better option than using such a "modular and great" platform called LLVM?
Last edited by the303; 08 February 2014, 08:35 AM.
-
Originally posted by the303:
GPUs can be seen as a subset of CPUs that are specialized in floating point operations.
-
Originally posted by Tobu:
This feels like Unladen Swallow, another project that had high hopes and a team of Googlers working on it, only to be given up in disgust. It was meant to speed up the Python interpreter (CPython) with an LLVM-based JIT. The conclusion was that LLVM is too inflexible to perform well outside of ahead-of-time compilers.
Since then I have been very sceptical of LLVM's claims.
-
Originally posted by the303:
Even if it required more than a little work, it shouldn't take years and still perform badly (see AMD).
Like they said, apparently it is not designed for that. But then again, what is the point of a design based on an IR model if it cannot be used in a very flexible way?
-
Originally posted by brent:
However, LLVM was designed for general-purpose CPUs. GPUs have very different architectures. So there are more than "a little" modifications required.
Even if it required more than a little work, it shouldn't take years and still perform badly (see AMD).
Like they said, apparently it is not designed for that. But then again, what is the point of a design based on an IR model if it cannot be used in a very flexible way?
Last edited by the303; 08 February 2014, 07:40 AM.
-
Originally posted by the303:
To the actual point, LLVM is a compiler backend and it should be the correct tool for the job. At least it should become the correct tool with little modification.
-
This feels like Unladen Swallow, another project that had high hopes and a team of Googlers working on it, only to be given up in disgust. It was meant to speed up the Python interpreter (CPython) with an LLVM-based JIT. The conclusion was that LLVM is too inflexible to perform well outside of ahead-of-time compilers.
-
Originally posted by Temar:
Did you consider the possibility that LLVM is just not the right tool for the job?
To the actual point, LLVM is a compiler backend and it should be the correct tool for the job. At least it should become the correct tool with little modification. People say LLVM is not as good as advertised, so I guess this is what they are talking about: LLVM is not as flexible as it should have been. Clang seems to have passed GCC in benchmarks, though.