
Facebook Is Aiming To Make Compilers Faster Using Machine Learning With CompilerGym


  • commodore256
    replied
    Neural networks are revolutionary, but overrated. Still, the idea looks promising as long as there's natural language processing piped into it to document its methods, so the result can be turned into a hand-crafted algorithm and written up in a scientific paper. Otherwise, when you trust magic whose workings you don't understand, you get results like this.



  • sinepgib
    replied
    Originally posted by bregma
    (3) how faster compilers equate with faster programs is beyond my understanding.
    This is just speculation, but if stronger optimization transforms can be found faster, then you can apply them in a reasonable amount of time. Say you have a phase that takes O(n^2) or O(n^3); that optimization is unusable for large programs. If you can make something equivalent in O(n) somehow, then you can further optimize the output with it.
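    To make that point concrete, here's a rough back-of-the-envelope sketch in Python; the module sizes and the assumed operation throughput are illustrative numbers, not measurements from any real compiler:

        # Illustrative estimate of how an optimization pass's cost scales with
        # program size. The sizes and the ~1e8 ops/sec throughput are assumptions.
        ops_per_second = 1e8  # assumed analysis throughput, roughly 10 ns per step

        for n in (1e4, 1e6, 1e8):  # IR instructions in a small, medium, huge module
            linear = n / ops_per_second
            quadratic = n ** 2 / ops_per_second
            print(f"n={n:.0e}: O(n) ~ {linear:.4f}s, O(n^2) ~ {quadratic:,.0f}s")

    Even with these generous assumptions, a quadratic pass goes from about a second on a small module to years on a huge one, while the linear version stays around a second or less, which is why expensive transforms only become practical if a cheaper equivalent can be found.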



  • licovilc
    replied
    >Our goal is to be a catalyst for using ML to make compilers faster, which is important as poorly optimized programs are slow and consume too many computing resources as well as too much energy, limiting applications of energy-efficient edge devices and making data centers less environmentally friendly

    Are they going to work on deleting Electron?



  • Ironmask
    replied
    Pretty cool. I once read a blog post about GitHub Copilot that hypothesized that AI integration will be to the next big language what safety is to Rust. I think there's some credence to that.

    Originally posted by bregma
    This seems very confusing. Their goal is to have the compiler produce more "optimized" (for what?) programs by calling out to some Python programs that will iterate over combinatoric solutions using inscrutable and unknown heuristics, and that this will somehow speed up the compiler. It strikes me that (1) adding a whole lot of out-of-process processing is unlikely to make compilation faster, (2) it is unclear how to prove the correctness of the output of an ML-generated algorithm laden with intellectual debt, and (3) how faster compilers equate with faster programs is beyond my understanding.

    I guess the proof will be in the pudding. Or at least, the results will be in the pudding. I wouldn't trust my life or my liability to ML-generated software. Asimov's laws are fiction.
    Sadly, yes. Before any real work in AI can be done, we have to get out of what I call the "AI Zombie Apocalypse": first we had the AI Winter, when everyone was uninterested; now we have a horde of cargo cultists mindlessly using the same tooling and algorithms, never trying or pursuing anything else. I very rarely see papers that think "hmm, maybe we could try symbolics instead of bashing our heads against the wall of connectionist purism".
    At least a lot of the AI world is using embeddings now, which is probably as close to symbolics as you can get in a connectionist algorithm.



  • rmfx
    replied
    The best way to optimize code is to stop using shit like Electron… and come back to a reasonable use of low-level coding.



  • bregma
    replied
    This seems very confusing. Their goal is to have the compiler produce more "optimized" (for what?) programs by calling out to some Python programs that will iterate over combinatoric solutions using inscrutable and unknown heuristics, and that this will somehow speed up the compiler. It strikes me that (1) adding a whole lot of out-of-process processing is unlikely to make compilation faster, (2) it is unclear how to prove the correctness of the output of an ML-generated algorithm laden with intellectual debt, and (3) how faster compilers equate with faster programs is beyond my understanding.

    I guess the proof will be in the pudding. Or at least, the results will be in the pudding. I wouldn't trust my life or my liability to ML-generated software. Asimov's laws are fiction.



  • lamka02sk
    replied
    Originally posted by ddriver
    Great, maybe they can use AI to figure out how not to be such a sh1tty company.

    Also, having used React, I somehow doubt performance and efficiency are true priorities.
    React is a whole different story, and its performance is quite similar to other frameworks like Vue or Angular, so I wouldn't criticise it too much. The problem with these frameworks is the overhead/glue code, and you can't really do anything about that. Just a short time ago, everyone thought the virtual DOM was the only way forward, but now Svelte is proof that you don't need a virtual DOM to make dynamic web apps fast. I think the compiler-based approach is a much better solution.



  • david-nk
    replied
    GCC also needs to get configurable phase ordering so it can benefit from such work. The problem with GCC right now is that the optimization phases break each other or are simply not applied because they happen in the wrong order. Often it would be necessary to run an optimization phase multiple times throughout the optimization process, and with reinforcement learning you could figure out the best pipeline setup for your application, and find a better default setting as well.
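    As a rough illustration of the kind of pipeline search described here, below is a minimal sketch using CompilerGym's gym-style loop, applied to LLVM rather than GCC since that is what CompilerGym ships. The "llvm-v0" environment id, the benchmark name, and the reward-space name are assumptions based on the project's published examples and may differ between releases:

        # Minimal sketch: random search over LLVM pass orderings with CompilerGym.
        # The environment id, benchmark, and reward space below are assumptions
        # taken from CompilerGym's published examples, not a verified API listing.
        import compiler_gym

        env = compiler_gym.make(
            "llvm-v0",
            benchmark="cbench-v1/qsort",
            reward_space="IrInstructionCountOz",  # reward relative to -Oz code size
        )

        best_total, best_actions = float("-inf"), []
        for episode in range(10):              # try ten random pass orderings
            env.reset()
            total, actions = 0.0, []
            for _ in range(20):                # apply up to twenty passes per episode
                action = env.action_space.sample()
                _, reward, done, _ = env.step(action)
                total += reward
                actions.append(action)
                if done:
                    break
            if total > best_total:
                best_total, best_actions = total, actions

        print(f"best cumulative reward: {best_total:.3f}")
        print("pass indices:", best_actions)
        env.close()

    A real reinforcement-learning setup would replace the random sampling with a learned policy, but the loop structure (reset, step through passes, collect reward) is the same, and the same idea would apply to GCC if its pass pipeline were exposed this way.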



  • timofonic
    replied
    I hope these efforts get mainlined directly into compilers in some way, so we can see much better optimization from them.



  • ddriver
    replied
    Great, maybe they can use AI to figure out how not to be such a sh1tty company.

    Also, having used React, I somehow doubt performance and efficiency are true priorities.
    Last edited by ddriver; 02 October 2021, 08:02 AM.

