
ALLVM: Forthcoming Project to Ship All Software As LLVM IR


  • karolherbst
    replied
    Originally posted by name99 View Post

    Apple and Linaro: #45

    Intel is an obvious extension of the Linaro case. (More emphasis on ISA extensions, less on very different micro-architectures.)
    And in any case, Apple has the APIs in use under its full control. It would be the same as developing for Java or .NET, which is by far an easier situation.



  • karolherbst
    replied
    Originally posted by name99 View Post

    Apple and Linaro: #45

    Intel is an obvious extension of the Linaro case. (More emphasis on ISA extensions, less on very different micro-architectures.)
    You are guessing there. There is not a single source for this. => invalid argument



  • name99
    replied
    Originally posted by karolherbst View Post

    It has nothing to do with the actual architecture. Also, it would only work for a really small subset of stuff, where you don't need something like this anyway.

    And this relates to this topic how? And no, they haven't. They have plain x86 or arm64 binaries, no bytecode stuff.

    Why would they care? It's ARM; they need at most 2 or 3 versions. To cover only this is very short-sighted. If you want to do this right, you would compile bytecode binaries that would run on any ARM, x86, MIPS, RISC-V, whatever, without modifications. That's the goal, not to run the same thing on different ARM architectures.

    How so?

    It isn't a Linux/GCC issue I am talking about; you see the same thing on every Unix-based system and even on Windows (less so these days). It's a really hard problem to solve, and I don't think we will be ready for it within the next 10 years. And yes, that is how big a problem this is.
    Apple and Linaro: #45

    Intel is an obvious extension of the Linaro case. (More emphasis on ISA extensions, less on very different micro-architectures.)



  • karolherbst
    replied
    Originally posted by name99 View Post
    Read the entire sequence of comments I wrote. Just because LLVM IR cannot solve the distribution problem for ALL architectures and ALL possible binaries does not mean it is useless...
    It has nothing to do with the actual architecture. Also, it would only work for a really small subset of stuff, where you don't need something like this anyway.

    Originally posted by name99 View Post
    Apple has a problem they want to solve, namely improved and somewhat future-proof binary distribution, and they have something (which is probably LLVM IR-based) which solves that problem.
    And this relates to this topic how? And no, they haven't. They have plain x86 or arm64 binaries, no bytecode stuff.

    Originally posted by name99 View Post
    Linaro has a similar problem for the range of expected ARM servers.
    Why would they care? It's ARM; they need at most 2 or 3 versions. To cover only this is very short-sighted. If you want to do this right, you would compile bytecode binaries that would run on any ARM, x86, MIPS, RISC-V, whatever, without modifications. That's the goal, not to run the same thing on different ARM architectures.

    Originally posted by name99 View Post
    Intel has a similar problem; they just seem uninterested, right now, in solving it.
    How so?


    Originally posted by name99 View Post
    You're taking the problems of the Linux/GCC variety to be universal, and that's not useful. The fact that LLVM may not solve the Linux/GCC problem doesn't mean it can't solve other people's problems...
    It isn't a Linux/GCC issue I am talking about; you see the same thing on every Unix-based system and even on Windows (less so these days). It's a really hard problem to solve, and I don't think we will be ready for it within the next 10 years. And yes, that is how big a problem this is.
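
    A minimal sketch of the "compile once to bytecode, lower per target" flow described here, assuming a stock clang/LLVM toolchain; the file name and the targets shown are only examples:

        /* hello.c -- compiled once to LLVM bitcode, lowered per target:
         *
         *   clang -O2 -emit-llvm -c hello.c -o hello.bc
         *   llc -march=x86-64  hello.bc -o hello-x86.s
         *   llc -march=aarch64 hello.bc -o hello-arm64.s
         */
        #include <stdio.h>

        int main(void)
        {
            printf("same bitcode, many back ends\n");
            return 0;
        }

    Even this trivial .bc records the target triple and data layout of the machine that produced it, which is the crux of the glibc/ABI objection made elsewhere in this thread.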



  • name99
    replied
    Originally posted by karolherbst View Post

    It's not just about this, though. Between different OSes you even have different functions, not compiled for each kernel/OS combination. Then you have macros doing different things on different architectures. Then you have different ABIs for the same kernel on different architectures, and then you have different ABIs for no reason at all, because some library was compiled with different optional features. It isn't about some weirdo x86 features. You would have to add so many constraints to the development process that it wouldn't be fun anymore.

    Anyway, the idea might be good, but in practice it won't work. It's a nice research project that totally ignores current development practices.



    And this is the reason it is useless. Nobody writes neutral C/C++, not least because most code targets glibc, and glibc itself isn't neutral C/C++.

    So before you could even use this, glibc would have to be rewritten in many places.
    Read the entire sequence of comments I wrote. Just because LLVM IR cannot solve the distribution problem for ALL architectures and ALL possible binaries does not mean it is useless...
    Apple has a problem they want to solve, namely improved and somewhat future-proof binary distribution, and they have something (which is probably LLVM IR-based) which solves that problem. Linaro has a similar problem for the range of expected ARM servers. Intel has a similar problem; they just seem uninterested, right now, in solving it.

    You're taking the problems of the Linux/GCC variety to be universal, and that's not useful. The fact that LLVM may not solve the Linux/GCC problem doesn't mean it can't solve other people's problems...
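
    On the Apple point: clang can embed the IR alongside the native code with -fembed-bitcode, which is roughly the mechanism App Store bitcode submission was built on (a sketch; toolchain details vary by version):

        /* thin.c -- a native object that also carries its own IR:
         *
         *   clang -O2 -fembed-bitcode -c thin.c -o thin.o
         *
         * A store or OS can later re-lower the embedded bitcode for a
         * newer CPU without ever seeing the original source. */
        int square(int x)
        {
            return x * x;
        }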



  • karolherbst
    replied
    Originally posted by name99 View Post

    As I said, people were explaining how this wouldn't work eighteen months ago...
    I'm not claiming that this CAN handle x86 and ARM blindly today (I honestly don't know), but your specific claim is not necessarily true. LLVM already knows the semantics of, and handles specially, a large number of infrastructure functions, including memory-manipulation functions, functions that have particular security problems, and functions that require special parsing to warn users that they might be screwing up (e.g. printf and suchlike). There's no reason weirdo x86 special functions would be any different.

    What IS relevant is that you are coding to the C/C++/Objective-C/Swift/whatever + IEEE FP virtual machine. Obviously, if you're (deliberately, or because you don't know how to write C properly) using something that is x86-specific (segments? 80-bit doubles? inline assembly? and, of course, any "undefined" behavior), this is not going to work.
    It's not magic.
    It's not just about this, though. Between different OSes you even have different functions, not compiled for each kernel/OS combination. Then you have macros doing different things on different architectures. Then you have different ABIs for the same kernel on different architectures, and then you have different ABIs for no reason at all, because some library was compiled with different optional features. It isn't about some weirdo x86 features. You would have to add so many constraints to the development process that it wouldn't be fun anymore.

    Anyway, the idea might be good, but in practice it won't work. It's a nice research project that totally ignores current development practices.

    Originally posted by name99 View Post
    It promises, more or less, to transport NEUTRAL C/C++/whatever between CPUs, not to reverse-engineer your x86 code to figure out how it should work equivalently on ARM.
    And this is the reason it is useless. Nobody writes neutral C/C++, not least because most code targets glibc, and glibc itself isn't neutral C/C++.

    So before you could even use this, glibc would have to be rewritten in many places.
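
    To make the macro/ABI point concrete, a small sketch: the preprocessor has already made target-specific choices before any LLVM IR exists, so the resulting bitcode is tied to the target it was produced for.

        /* abi_leak.c -- target decisions are baked in before codegen.
         * The #if is resolved by the preprocessor and sizeof(long) is
         * folded to a constant, so the LLVM IR generated from this is
         * already specific to one target. */
        #include <stdio.h>

        int main(void)
        {
        #if defined(__x86_64__)
            const char *arch = "x86-64";
        #elif defined(__aarch64__)
            const char *arch = "arm64";
        #else
            const char *arch = "something else";
        #endif
            /* LP64 vs. ILP32: prints 8 on 64-bit Linux, 4 on i386. */
            printf("built for %s, sizeof(long) = %zu\n", arch, sizeof(long));
            return 0;
        }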



  • name99
    replied
    Originally posted by wizard69 View Post
    I have to wonder if this is an attempt to bring some of Apple's initiatives in software distribution to Linux. For example, you can build a number of optimized executables for your app and have the correct one installed for the device a user is updating. This allows executables matched to the capabilities of the processor. On the Mac side of things I thought they were requiring developers to deliver apps in an intermediate representation. Hopefully a Mac developer can chime in with current App Store requirements, but I suspect the same idea is at play here. With an IR, Apple can generate machine-specific code for the user to download; in this case it isn't about the user downloading the IR and generating the executable locally.

    At least that is my understanding; with no apps on the Mac App Store, I'm a bit out of the loop. I do believe, though, that Apple is after methods of delivering optimized but compact apps to the user. A great deal of bandwidth can be saved, in some cases, if apps are stripped of the architecture-support code they don't need. I still don't grasp why someone would want to have everything compiled locally; that seems like a waste. However, if the distribution site keeps optimized builds available, and updates them to match the latest processor improvements, then users directly benefit from a developer's use of an IR.

    One thing that is popping up is that Intel has leaked that future processors will have hardware support in AVX to accelerate AI computations. This is a perfectly good example of where the distribution site could have a significant impact on an app's performance with little input from the developer. The use of an IR means that an app has the potential to be optimized even if the developer doesn't know about new processor features.
    In principle you are correct in all this.
    In practice, as far as APPLE is concerned, I don't think we'll see it on Intel. Apple hasn't offered the app-thinning technologies for macOS, and IMHO they won't bother, because their agenda is to move Macs to ARM "soon" (before 2020, perhaps announced at WWDC 2017, maybe requiring WWDC 2018); either way, there's no point in creating the infrastructure for a target ISA that's going away in the next few years.
    Maybe I'm wrong, but that's the way it looks to me --- the A10X on 10nm is probably going to be good enough to match Intel when given the same thermal headroom, and if not the A10X in 2017, then the A11X on 7nm in 2018.



  • name99
    replied
    Originally posted by dragorth View Post
    How would this handle moving a hard disk from one CPU to another? What benefit does this have for the user?
    Like everything, it relies on the competence of the OS developers...
    There are already multiple solutions to this, used in different scenarios. You realize GPUs do this all the time? An intermediate representation of shader code ships in the binaries, and is finalized to the target as necessary. A SANE OS would do the same thing --- retain the IR in the binary, compare the available machine code to the target CPU+runtime, and if they don't match, run the finalizer to build new machine code.
    Alternatively, if you don't DIRECTLY move binaries between devices (like iOS; I don't know about Android), then what happens when the equivalent of this occurs (e.g. when you restore an iPhone 6 backup onto an iPhone 7) is that the GUID for each app is sent to the App Store, and a new build of the appropriate app is sent to the device, rather than the iPhone 6 binary from the backup being copied over directly.
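
    The loader behaviour sketched above, as hypothetical C: every function below is invented for illustration (no real OS exposes this); it just names the steps of comparing the cached machine code's target against the running CPU and re-running the finalizer from the embedded IR on a mismatch.

        /* loader_sketch.c -- hypothetical app-launch flow. */
        #include <stdio.h>
        #include <string.h>

        struct binary {
            char built_for[32]; /* target the cached code was lowered for */
            const void *ir;     /* embedded IR, always kept alongside */
        };

        /* Stub: pretend to ask the kernel what we are running on. */
        static const char *current_target(void) { return "arm64"; }

        /* Stub: pretend to lower the embedded IR for the given target. */
        static void refinalize(struct binary *b, const char *target)
        {
            (void)b->ir;
            snprintf(b->built_for, sizeof b->built_for, "%s", target);
        }

        static void load(struct binary *b)
        {
            const char *target = current_target();
            if (strcmp(b->built_for, target) != 0) /* disk moved to a new CPU? */
                refinalize(b, target);             /* rebuild, keep the IR */
            printf("running %s machine code\n", b->built_for);
        }

        int main(void)
        {
            struct binary app = { .built_for = "x86-64", .ir = "" };
            load(&app); /* first launch after moving the disk */
            return 0;
        }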



  • name99
    replied
    Originally posted by dagger View Post
    Will it work on z/OS? I can keep dreaming...
    Why not? LLVM exists for z/Systems and is actively worked on.



  • name99
    replied
    Originally posted by karolherbst View Post

    Only problem: it won't work, because you already have symbol references and stuff like that. So if your code calls _fpow_x86_opt, it calls exactly that, and it would fail to compile on ARM, because on ARM you wouldn't have that symbol anywhere. memcpy also has multiple optimized variants for every thinkable platform, even multiple ones for x86.

    In the end you would have to abstract all those architecture-specific optimizations away, and I didn't even mention OS-specific things. In short: LLVM isn't really suited for this.

    Anyway, this idea is far from new, and I already played with LLVM IR binaries some 6 years ago.
    As I said, people were explaining how this wouldn't work eighteen months ago...
    I'm not claiming that this CAN handle x86 and ARM blindly today (I honestly don't know), but your specific claim is not necessarily true. LLVM already knows the semantics of, and handles specially, a large number of infrastructure functions, including memory-manipulation functions, functions that have particular security problems, and functions that require special parsing to warn users that they might be screwing up (e.g. printf and suchlike). There's no reason weirdo x86 special functions would be any different.

    What IS relevant is that you are coding to the C/C++/Objective-C/Swift/whatever + IEEE FP virtual machine. Obviously, if you're (deliberately, or because you don't know how to write C properly) using something that is x86-specific (segments? 80-bit doubles? inline assembly? and, of course, any "undefined" behavior), this is not going to work.
    It's not magic. It promises, more or less, to transport NEUTRAL C/C++/whatever between CPUs, not to reverse-engineer your x86 code to figure out how it should work equivalently on ARM.
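
    The arch-specific symbol machinery described in the quote above is real: glibc selects a memcpy variant per CPU at load time through ifunc resolvers. A minimal sketch of the mechanism for GNU toolchains on x86 Linux, with invented names (my_memcpy and friends are not glibc's):

        /* ifunc_sketch.c -- one exported symbol fanning out into
         * CPU-specific implementations. */
        #include <stddef.h>
        #include <string.h>

        static void *my_memcpy_generic(void *d, const void *s, size_t n)
        {
            return memcpy(d, s, n); /* stand-in for a plain C loop */
        }

        static void *my_memcpy_avx2(void *d, const void *s, size_t n)
        {
            return memcpy(d, s, n); /* stand-in for an AVX2 version */
        }

        /* Runs once at load time; the dynamic linker binds my_memcpy
         * to whichever implementation the resolver returns. */
        static void *(*resolve_my_memcpy(void))(void *, const void *, size_t)
        {
            __builtin_cpu_init();
            return __builtin_cpu_supports("avx2") ? my_memcpy_avx2
                                                  : my_memcpy_generic;
        }

        void *my_memcpy(void *, const void *, size_t)
            __attribute__((ifunc("resolve_my_memcpy")));

    IR that already names the AVX2 variant directly gives an ARM back end nothing to bind to, which is the _fpow_x86_opt point.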
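
    And a short illustration of the "not magic" caveat: each construct below is legal C that pins the program to x86, so no amount of IR shipping makes it portable.

        /* x86_only.c -- builds with clang on x86-64 Linux; fails or
         * changes meaning on every other target. */
        #include <stdio.h>

        int main(void)
        {
            /* 80-bit extended precision is an x86 format; on AArch64
             * long double is a different type entirely, so even the IR
             * type differs between the two targets. */
            long double x = 1.0L / 3.0L;
            printf("sizeof(long double) = %zu, x = %Lf\n",
                   sizeof(long double), x);

            /* Inline assembly passes through the IR verbatim as an
             * x86 instruction; no ARM back end can consume it. */
            unsigned lo, hi;
            __asm__ volatile ("rdtsc" : "=a"(lo), "=d"(hi));
            printf("tsc = %llu\n", ((unsigned long long)hi << 32) | lo);
            return 0;
        }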

