ALLVM: Forthcoming Project to Ship All Software As LLVM IR

  • #31
    Originally posted by peppercats View Post
    Steam hardware survey for Linux says 84% of surveyed computers have SSE 4.2. Instead of targeting the bare minimum x86-64 CPU when compiling packages, move the minimum up a few years from 2002. SSE4.2 is almost a decade old.
    While I agree with the general idea, I'd like to point out that the Steam survey has roughly the reliability of a gossip magazine.

    I think the easiest way would be to recompile packages on the target device, or have a slightly smarter package management system that can figure out the supported instructions (horribly complex, I know) and fetch packages from a specific repo, like an "amd64-sse4.2" repo or whatever (a rough sketch of that detection step is below).

    Hell, this would also work for the odd 32-bit systems still around.
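
    A rough sketch of that detection step, assuming a Linux host where /proc/cpuinfo lists the feature flags; the repo names ("amd64-sse4.2", "amd64-avx2") are made up for illustration:

    Code:
    # Hypothetical: map the CPU flags the kernel reports to an ISA-specific repo suffix.
    def cpu_flags(path="/proc/cpuinfo"):
        """Collect the CPU feature flags reported by the kernel (Linux only)."""
        flags = set()
        with open(path) as f:
            for line in f:
                if line.startswith("flags"):
                    flags.update(line.split(":", 1)[1].split())
        return flags

    def pick_repo(flags):
        """Return the most specific (made-up) repo name this machine could use."""
        if "avx2" in flags:
            return "amd64-avx2"
        if "sse4_2" in flags:
            return "amd64-sse4.2"
        return "amd64"  # baseline x86-64

    print(pick_repo(cpu_flags()))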

    Comment


    • #32
      I'm not totally familiar with LLVM IR, but does this representation completely eliminate the need for compile-time dependencies? Would it be the IR format closest to the hardware ISA, i.e. the IR after all passes except target code generation?

      Comment


      • #33
        I have to wonder if this is an attempt to bring some of Apple's software-distribution initiatives to Linux. For example, you can build a number of optimized executables for your app and have the correct one installed for the device a user is updating. This allows executables matched to the capabilities of the processor. On the Mac side of things, I thought they were requiring developers to deliver apps in an intermediate representation. Hopefully a Mac developer can chime in with the current App Store requirements, but I suspect the same idea is at play here. With an IR, Apple can generate machine-specific code for the user to download; in this case it isn't about the user downloading the IR and generating the executable locally.

        At least that is my understanding; with no apps on the Mac App Store, I'm a bit out of the loop. I do believe, though, that Apple is after methods of delivering optimized but compact apps to the user. A great deal of bandwidth can be saved, in some cases, if apps are pared of unneeded architecture-support code. I still don't grasp why someone would want to have everything compiled locally; that seems like a waste. However, if the distribution site keeps optimized builds available, and updates them to match the latest processor improvements, then users directly benefit from a developer's use of an IR.

        One thing that is popping up is that Intel has leaked that future processors will have hardware support in AVX to accelerate AI computations. This is a perfectly good example of where the distribution site could have a significant impact on an app's performance with little input from the developer. The use of an IR means that an app has the potential to be optimized even if the developer doesn't know about new processor features.
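
        A rough sketch of what that distribution-site step could look like: from one uploaded LLVM bitcode file, produce a build per supported micro-architecture level, so a new CPU feature only means adding a target on the server rather than asking the developer to rebuild. The file name, target list, and clang invocation are illustrative, not Apple's actual pipeline.

        Code:
        # Hypothetical server-side fan-out: one bitcode upload, several ISA levels.
        import subprocess

        TARGETS = ["x86-64", "x86-64-v2", "x86-64-v3"]  # baseline, SSE4.2-era, AVX2-era

        def build_variants(bitcode="app.bc"):
            for march in TARGETS:
                # Each pass through the LLVM backend may use that target's extensions.
                subprocess.run(
                    ["clang", "-O3", f"-march={march}", bitcode, "-o", f"app.{march}"],
                    check=True,
                )

        build_variants()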

        Comment


        • #34
          Originally posted by Xelix View Post

          There are many language implementations other than Java and C# that already target the Java and .NET VMs.
          I think the key difference is that Java and C# do JIT compilation. JIT (just-in-time) compilation happens every single time you run the code.

          The real potential here is to do AOT (ahead-of-time) compilation. When you install the software, the LLVM IR can be optimized for your particular processor and use all available extensions. There is no need to run the JIT on every execution; just run the AOT compiler once at install time. AOT also has the luxury of time: it is fine for the install to take a bit longer so the AOT compiler can apply slower, more thorough optimizations.

          There is precedent for this. On Android, they are switching from Dalvik (a JIT-based runtime) to ART (an AOT compiler). Both receive the program as Dex bytecode, but ART compiles it to native code at install time, while Dalvik compiles it to native code every time the binary is executed.

          I don't know if that is what ALLVM is doing, but it would seem to me like the logical way to go.
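
          Here is roughly what that install-time step could look like, assuming the package ships its program as a bitcode file ("app.bc" is a placeholder) and that clang is available on the target machine; this is only a sketch of the idea, not necessarily what ALLVM will actually do.

          Code:
          # Hypothetical install hook: AOT-compile shipped LLVM bitcode for this exact machine.
          import subprocess

          def aot_compile(bitcode="app.bc", output="app"):
              # -march=native lets the code generator use every extension this CPU has
              # (SSE4.2, AVX2, ...), which a generic prebuilt binary could not assume.
              subprocess.run(
                  ["clang", "-O3", "-march=native", bitcode, "-o", output],
                  check=True,
              )

          aot_compile()  # run once at install time; every later launch is plain native code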

          Comment


          • #35
            Originally posted by paulpach View Post

            I think the key difference is that Java and C# do JIT compilation. JIT (just-in-time) compilation happens every single time you run the code.

            The real potential here is to do AOT (ahead-of-time) compilation. When you install the software, the LLVM IR can be optimized for your particular processor and use all available extensions. There is no need to run the JIT on every execution; just run the AOT compiler once at install time. AOT also has the luxury of time: it is fine for the install to take a bit longer so the AOT compiler can apply slower, more thorough optimizations.

            There is precedent for this. On Android, they are switching from Dalvik (a JIT-based runtime) to ART (an AOT compiler). Both receive the program as Dex bytecode, but ART compiles it to native code at install time, while Dalvik compiles it to native code every time the binary is executed.

            I don't know if that is what ALLVM is doing, but it would seem to me like the logical way to go.
            If I remember correctly, some .NET binaries do JIT on first run on a given system, but as they do, they leave a compiled copy of the binary on the system as well, making subsequent runs of that same binary on that same system faster. I'd have to double-check, though.

            Comment


            • #36
              Originally posted by paulpach View Post

              I think the key difference is that Java and C# do JIT compilation. JIT (just-in-time) compilation happens every single time you run the code.

              The real potential here is to do AOT (ahead-of-time) compilation. When you install the software, the LLVM IR can be optimized for your particular processor and use all available extensions. There is no need to run the JIT on every execution; just run the AOT compiler once at install time. AOT also has the luxury of time: it is fine for the install to take a bit longer so the AOT compiler can apply slower, more thorough optimizations.

              There is precedent for this. On Android, they are switching from Dalvik (a JIT-based runtime) to ART (an AOT compiler). Both receive the program as Dex bytecode, but ART compiles it to native code at install time, while Dalvik compiles it to native code every time the binary is executed.

              I don't know if that is what ALLVM is doing, but it would seem to me like the logical way to go.
              I agree with everything you say. My answer was for Kushan, who said "[...] but using LLVM's IR, you can use any language you want, you're not constrained to Java, C# or whatever." All my answer says is that we are already not constrained to Java/C#.

              Comment


              • #37
                Shouldn't all software be distributed as HSAIL?

                Comment


                • #38
                  Originally posted by GrayShade View Post
                  Yes, that's why the WebAssembly project for bytecode in browsers rejected it in favor of a more abstract representation, even though Mozilla's compilers and tooling are built around LLVM.

                  Comment


                  • #39
                    Originally posted by Niarbeht View Post
                    If I remember correctly, some .NET binaries do JIT on first run on a given system, but as they do, they leave a compiled copy of the binary on the system as well, making subsequent runs of that same binary on that same system faster. I'd have to double-check, though.
                    Well, not quite. There is a way to run the .NET compiler ahead of time (a tool called NGEN) after installing the program, and if you do, that precompiled version will be run instead. However, it won't produce code specific to the processor in use. If you don't do this step, the JIT-generated code will be tuned to the CPU (at least up to the instruction set, I suppose).
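
                    For what it's worth, that step looks roughly like this on a .NET Framework machine; "MyApp.exe" is a placeholder and the ngen.exe path varies with the installed framework version.

                    Code:
                    # Hypothetical post-install step on Windows: pre-compile an assembly with NGEN
                    # so later launches skip the JIT. As noted above, the native image is not
                    # tuned to the specific CPU it runs on.
                    import subprocess

                    NGEN = r"C:\Windows\Microsoft.NET\Framework64\v4.0.30319\ngen.exe"  # version-dependent path

                    def precompile(assembly=r"C:\Program Files\MyApp\MyApp.exe"):
                        subprocess.run([NGEN, "install", assembly], check=True)

                    precompile()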

                    Comment


                    • #40
                      Originally posted by GrayShade View Post

                      Well, not quite. There is a way to run the .NET compiler ahead of time (a tool called NGEN) after installing the program, and if you do, that precompiled version will be run instead. However, it won't produce code specific to the processor in use. If you don't do this step, the JIT-generated code will be tuned to the CPU (at least up to the instruction set, I suppose).
                      Once again proving that all you have to do is post something that's only partially correct on the Internet, and someone will come along to fix things!

                      Comment
