Originally posted by kiffmet
Why not contribute to Apache TVM? It already has all sorts of exotic hardware as functional targets.
Why not build on top of IREE or MLIR, or contribute to the PyTorch MLIR effort?
And there are a gazillion other smaller efforts: https://github.com/merrymercy/awesome-tensor-compilers
And there are other not-yet-public efforts like HippoML, plus others I can't even remember or find again because there have been so many. One really promising one was built as a kind of alternative Python interpreter operating on a lightly restricted subset of Python.
In other words, aside from Hotz I guess, I don't see what tinygrad is doing differently. Bigger projects like TVM and MLIR already have extremely fast, real demos of transformer models and such on all sorts of hardware (including AMD). More focused efforts on the scale of tinygrad (like GGML) are SOTA in specific niches. Triton+PyTorch is vendor-neutral and explicitly endorsed by AMD. Tinygrad is... good on AMD at some point in the future, and on other vendors later?