AMD Contributing MIGraphX/ROCm Back-End To Microsoft's ONNX Runtime For Machine Learning
Originally posted by Jabberwocky:
I am not an expert, but this is my understanding: TensorFlow is a de facto standard defined by the Google Brain team. TensorFlow does not want to support other backends. Now Facebook/Microsoft, AMD, Red Hat and others are working in silos to come up with their own solutions to run TensorFlow, and all are developing forks that are not being upstreamed.

https://github.com/tensorflow/tensorflow (scroll down to Community Supported builds)
Originally posted by boboviz:
I'm not so negative. Project ONNX started officially in December 2017 and has been open since December 2018, so it's a very young project (for comparison, TensorFlow is about five years old).

I am not an expert, but this is my understanding: TensorFlow is a de facto standard defined by the Google Brain team. TensorFlow does not want to support other backends. Now Facebook/Microsoft, AMD, Red Hat and others are working in silos to come up with their own solutions to run TensorFlow, and all are developing forks that are not being upstreamed.
There are other good frameworks, but it's too expensive to move models at this point in time. Hence TensorFlow is currently a gatekeeper (this is where my negativity comes from) for anyone wanting to enter the industry. Once you have spent an insane amount of time learning how everything works, you're okay and can probably convert models and do really awesome things. My emphasis is on newcomers or people from other industries who don't have or want a CS degree.
I don't know why we always have to depend on Dave for sorting out industry-wide problems: https://youtu.be/KfDQb6xOkXg?t=244 I'm not saying he does 100% of the work, but he drives it and puts it all together.
Originally posted by Jabberwocky:
This looks like a can of worms: https://github.com/onnx/tensorflow-onnx

Last edited by boboviz; 07 February 2020, 06:53 AM.
TensorFlow only supports exporting many functions for ONNX opsets 10 and 11, while DirectML supports 7-8...
Having used TensorRT, I can say it's quite well supported and gives very significant speedups over the training engines (TensorFlow, PyTorch, etc.).
No idea whether DirectML gives such speedups, but it doesn't support the ONNX opset I need.
The MIGraphX backend seems very early. The documentation doesn't even mention which ONNX opsets are supported.
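Since opset mismatches like this are the usual failure mode, it can help to check what a given .onnx file actually declares before picking a backend. A minimal sketch using the onnx Python package; the model file name is just a placeholder:

```python
import onnx  # pip install onnx

# "model.onnx" is a placeholder path for whatever model you exported.
model = onnx.load("model.onnx")

# Each opset_import entry names an operator domain and the opset version the
# model was exported against; a backend must support at least that version.
for entry in model.opset_import:
    domain = entry.domain or "ai.onnx"  # an empty string means the default ONNX domain
    print(f"{domain}: opset {entry.version}")
```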
This is a bummer: https://github.com/tensorflow/tensorflow/issues/18307
This looks like a can of worms: https://github.com/onnx/tensorflow-onnx
I'm trying to hold my middle finger down, but it's difficult!
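For anyone who does have to go down that road, the tensorflow-onnx (tf2onnx) project linked above is the usual route. A rough sketch, assuming a recent tf2onnx that exposes the convert.from_keras API; the toy model and the chosen opset are purely illustrative:

```python
import tensorflow as tf
import tf2onnx  # pip install tf2onnx

# A trivial Keras model standing in for a real network (illustrative only).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(10, activation="relu"),
])

# Pin the target opset explicitly; too old an opset may be missing ops the model uses.
onnx_model, _ = tf2onnx.convert.from_keras(model, opset=11)

with open("model.onnx", "wb") as f:
    f.write(onnx_model.SerializeToString())
```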
One small point - I believe AMD GPUs have been supported under ONNX Runtime for some time via the DirectML back-end on Windows - the more recent change is plumbing it into the ROCm stack on Linux.
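A quick way to see which of those back-ends a given onnxruntime build actually ships is to ask it for its execution providers. A small sketch; the provider names in the comment are the ones ONNX Runtime uses for DirectML, MIGraphX and ROCm, assuming a build compiled with them:

```python
import onnxruntime as ort

# Prints the execution providers compiled into this onnxruntime build, e.g.
# "DmlExecutionProvider" on a Windows/DirectML build, or
# "MIGraphXExecutionProvider" / "ROCMExecutionProvider" on a Linux/ROCm build,
# always alongside the default "CPUExecutionProvider".
print(ort.get_available_providers())
```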
Phoronix: AMD Contributing MIGraphX/ROCm Back-End To Microsoft's ONNX Runtime For Machine Learning
AMD is adding a MIGraphX/ROCm back-end to Microsoft's ONNX Runtime for machine learning inferencing to allow for Radeon GPU acceleration...
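In practice, once this back-end lands, selecting it from Python should look like choosing any other ONNX Runtime execution provider. A hedged sketch assuming an onnxruntime build with the MIGraphX provider enabled; the model path and input shape are placeholders:

```python
import numpy as np
import onnxruntime as ort

# "model.onnx" and the (1, 3, 224, 224) input shape are placeholders for a real model.
session = ort.InferenceSession(
    "model.onnx",
    providers=["MIGraphXExecutionProvider", "CPUExecutionProvider"],  # fall back to CPU
)

input_name = session.get_inputs()[0].name
dummy_input = np.zeros((1, 3, 224, 224), dtype=np.float32)

# Run inference; passing None asks for all model outputs.
outputs = session.run(None, {input_name: dummy_input})
print(outputs[0].shape)
```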