Intel Linux Graphics Developers No Longer Like LLVM


  • #11
    Originally posted by Temar View Post
    Did you consider the possibility that LLVM is just not the right tool for the job?
    Nah. The mystic almighty god LLVM is the best in all the worlds of the universe; it can do anything from compiling code to making you coffee.

    To the actual point: LLVM is a compiler backend, so it should be the correct tool for the job, or at least it should become the correct tool with little modification. People say LLVM is not as good as advertised, so I guess this is what they are talking about: LLVM is not as flexible as it should have been. Clang seems to have passed GCC in benchmarks, though.

    Comment


    • #12
      This feels like Unladen Swallow, another project that had high hopes and a team of Googlers working on it, only to be given up in disgust. It was meant to speed up the Python interpreter (CPython) with an LLVM-based JIT. The conclusion was that LLVM is too inflexible to perform well outside of ahead-of-time compilers.

      Comment


      • #13
        Originally posted by the303 View Post
        To the actual point: LLVM is a compiler backend, so it should be the correct tool for the job, or at least it should become the correct tool with little modification.
        However, LLVM was designed for general-purpose CPUs. GPUs have very different architectures, so more than "a little" modification is required.

        Comment


        • #14
          Originally posted by brent View Post
          However, LLVM was designed for general-purpose CPUs. GPUs have very different architectures, so more than "a little" modification is required.
          GPUs can be seen as a subset of CPUs that is specialized in floating-point operations. So only a little modification should be needed to make it easier to work with, if LLVM were a properly designed platform. It's not like GPUs use quantum computing techniques or anything; a GPU is just a processor.

          Even if it required more than a little work, it shouldn't take years and still perform badly. (See AMD.)

          Like they said, apparently it is not designed for that. But then again, what is the point of a design based on an IR model if it cannot be used in a very flexible way?
          Last edited by the303; 08 February 2014, 07:40 AM.

          Comment


          • #15
            Originally posted by the303 View Post
            Even if it required more than a little work, it shouldn't take years and still perform badly. (See AMD.)

            Like they said, apparently it is not designed for that. But then again, what is the point of a design based on an IR model if it cannot be used in a very flexible way?
            The IR is fine, but the optimization passes apparently aren't very suitable for GPUs.
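
            To make that concrete, here is a rough, purely hypothetical CUDA sketch (the kernels are made up, not taken from any driver): a transform like aggressive unrolling, which CPU-oriented passes treat as nearly free, keeps more values live per thread, and on a GPU the extra register pressure can lower occupancy and make the code slower overall.

            Code:
            // Hypothetical example, not output from any real pass.
            // Straightforward version: one accumulator per thread.
            __global__ void sum_rows(const float *m, float *out, int rows, int cols)
            {
                int row = blockIdx.x * blockDim.x + threadIdx.x;
                if (row >= rows)
                    return;

                float acc = 0.0f;
                for (int c = 0; c < cols; ++c)
                    acc += m[row * cols + c];
                out[row] = acc;
            }

            // After 4x unrolling, the kind of rewrite a CPU-tuned pass likes:
            // four live accumulators plus extra address arithmetic need more
            // registers per thread, which can reduce how many warps the GPU
            // keeps in flight at once.
            __global__ void sum_rows_unrolled(const float *m, float *out, int rows, int cols)
            {
                int row = blockIdx.x * blockDim.x + threadIdx.x;
                if (row >= rows)
                    return;

                float a0 = 0.0f, a1 = 0.0f, a2 = 0.0f, a3 = 0.0f;
                int c = 0;
                for (; c + 3 < cols; c += 4) {
                    a0 += m[row * cols + c];
                    a1 += m[row * cols + c + 1];
                    a2 += m[row * cols + c + 2];
                    a3 += m[row * cols + c + 3];
                }
                for (; c < cols; ++c)
                    a0 += m[row * cols + c];
                out[row] = a0 + a1 + a2 + a3;
            }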

            Comment


            • #16
              Originally posted by e8hffff View Post
              It would be nice to hear their specific gripes so others can be cautious about LLVM.
              I'd rather have them talk about it so people in the LLVM camp can fix whatever is wrong.

              Comment


              • #17
                Originally posted by Tobu View Post
                This feels like Unladen Swallow, another project that had high hopes and a team of Googlers working on it, only to be given up in disgust. It was meant to speed up the Python interpreter (CPython) with an LLVM-based JIT. The conclusion was that LLVM is too inflexible to perform well outside of ahead-of-time compilers.
                I was so disappointed when Unladen Swallow bowed out at the time, but after investigating, I essentially came to the conclusion that LLVM was misrepresenting its own abilities.
                Since then I have been very sceptical of LLVM's claims.

                Comment


                • #18
                  Originally posted by the303 View Post
                  GPUs can be seen as a subset of CPUs that is specialized in floating-point operations.
                  Sorry, but it's not that simple, unfortunately. They are not a simple subset, but very different and complex beasts altogether. For example, GPUs inherently use predicated SIMD (and/or VLIW, as in R600) processing and have several distinct memory spaces, while CPUs typically have a single memory space and use SISD processing. Please read the radeonsi ISA manual and see if you can still claim that it's a subset of a general-purpose CPU architecture.
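
                  To make the predication and memory-space points concrete, here is a tiny, hypothetical CUDA sketch (a made-up kernel, assuming blocks of 256 threads), not anything an actual driver ships:

                  Code:
                  // Hypothetical kernel illustrating two CPU/GPU differences.
                  __global__ void scale_positive(const float *in, float *out, int n)
                  {
                      // Shared memory is a separate, explicitly addressed space,
                      // unlike a CPU's single flat address space.
                      __shared__ float tile[256];

                      int i = blockIdx.x * blockDim.x + threadIdx.x;
                      if (i < n)
                          tile[threadIdx.x] = in[i];   // copy global -> shared
                      __syncthreads();

                      if (i < n) {
                          // On a CPU this would be a scalar branch. Here the whole
                          // warp steps through both sides in lockstep, and threads
                          // whose condition is false are simply masked (predicated) off.
                          if (tile[threadIdx.x] > 0.0f)
                              out[i] = tile[threadIdx.x] * 2.0f;
                          else
                              out[i] = 0.0f;
                      }
                  }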

                  Comment


                  • #19
                    Originally posted by brent View Post
                    Sorry, but it's not that simple, unfortunately. They are not a simple subset, but very different and complex beasts altogether. For example, GPUs inherently use predicated SIMD (and/or VLIW, as in R600) processing and have several distinct memory spaces, while CPUs typically have a single memory space and use SISD processing. Please read the radeonsi ISA manual and see if you can still claim that it's a subset of a general-purpose CPU architecture.
                    That makes sense.

                    My comments are of course biased by an anti-LLVM attitude.

                    I am an electrical engineer, so I have always been irritated by the idea of JIT/AOT instead of static compiling. It makes me wanna puke, because it is pointless and a waste of resources. The machines may be powerful enough that the end user doesn't notice the overhead, but on a large scale (all the computers in the world) it causes considerable energy loss, and for what? Just so a developer doesn't have to pre-compile his/her program for platform X: instead, every PC compiles the same thing on the fly each and every time the program runs (in the case of JIT), or compiles it once (in the case of AOT), as if there were a thousand platforms to target (and even that could be automated).

                    On the subject:
                    Well, if they also had to write special optimization passes (if that is even possible), they would hardly need LLVM at all, I guess; I mean, that is almost everything they would have to create anyway if they were not using LLVM, right?

                    Still, IMHO this is an inflexibility that LLVM should not have if the project is to be worth something. I mean, as far as I know Intel has an army of open-source developers compared to other companies, and it still finds it not worthwhile to integrate LLVM. That is a bad sign.

                    There may be CPU types in the future that are hugely different from today's. Will the IR of LLVM, which they like to refer to as "the Universal IR", require an enormous amount of work (on the optimization passes or whatever) to become optimal for those? Intel decided to create its own thing instead of integrating LLVM after all; how can creating something from scratch be a better option than using such a "modular and great" platform as LLVM?
                    Last edited by the303; 08 February 2014, 08:35 AM.

                    Comment


                    • #20
                      Originally posted by Marc Driftmeyer View Post
                      Seriously, piss poor code behavior is a product of the developer, not a metric of the tools used.
                      As if there weren't serious issues with tools... >.>

                      Comment
