MLIR Is A New IR For Machine Learning That Might Become Part Of LLVM
Earlier this month the developers behind TensorFlow open-sourced MLIR, the Multi-Level Intermediate Representation. They hope this IR can serve as a common format across machine learning models and frameworks, and as part of that effort it may end up becoming an LLVM sub-project.
The Multi-Level Intermediate Representation is designed to provide a unified compiler infrastructure: it can represent all TensorFlow graphs, supports the optimizations and transformations applied to them, and should ideally be usable by any high-performance machine learning framework. Some key elements of MLIR:
MLIR is a common IR that also supports hardware-specific operations. Thus, any investment in the infrastructure surrounding MLIR (e.g. the compiler passes that work on it) should yield good returns; many targets can use that infrastructure and will benefit from it.
MLIR is a powerful representation, but it also has non-goals. We do not try to support low-level machine code generation algorithms (like register allocation and instruction scheduling). These are a better fit for lower-level optimizers (such as LLVM). Nor do we intend MLIR to be a source language that end-users would themselves write kernels in (analogous to CUDA C++). While we would love to see a kernel language happen someday, that will be an independent project that compiles down to MLIR.
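To make the idea of a multi-level IR concrete, here is a hypothetical sketch of what a function could look like in MLIR's generic textual form. The operation name (`tf.Add`), function name, and tensor shapes are illustrative assumptions, not taken from the announcement:

```mlir
// Hypothetical example: a function in MLIR's generic textual syntax.
// "tf.Add" is a dialect-qualified operation name; lowering passes could
// progressively rewrite such operations toward hardware-specific dialects.
func @add_tensors(%a: tensor<4xf32>, %b: tensor<4xf32>) -> tensor<4xf32> {
  %sum = "tf.Add"(%a, %b) : (tensor<4xf32>, tensor<4xf32>) -> tensor<4xf32>
  return %sum : tensor<4xf32>
}
```

Because every operation carries a dialect prefix, framework-level and hardware-specific operations can coexist in one module at different levels of abstraction, which is what lets a single pass infrastructure serve many targets.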
For now the MLIR framework itself is hosted in this TensorFlow project Git repository, while TensorFlow will soon begin bringing up its own MLIR-based compilers. MLIR might become an LLVM sub-project itself to help foster adoption by different machine learning communities and to benefit from tight integration with the LLVM infrastructure.
LLVM founder Chris Lattner, who is currently employed by Google, is part of the team developing MLIR. More details in this PDF slide deck from the EuroLLVM conference earlier this month.