AMD Contributing MIGraphX/ROCm Back-End To Microsoft's ONNX Runtime For Machine Learning

    Phoronix: AMD Contributing MIGraphX/ROCm Back-End To Microsoft's ONNX Runtime For Machine Learning

    AMD is adding a MIGraphX/ROCm back-end to Microsoft's ONNX run-time for machine learning inferencing to allow for Radeon GPU acceleration...


  • #2
    One small point: I believe AMD GPUs have been supported under ONNX Runtime for some time via the DirectML back-end on Windows; the more recent change is plumbing it into the ROCm stack on Linux.
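For readers wanting to try this: ONNX Runtime selects among "execution providers" at session creation, and the DirectML and MIGraphX back-ends discussed here are exposed that way. Below is a minimal sketch of the preference logic; the provider names are the real ONNX Runtime identifiers, but the `pick_providers` helper itself is illustrative, not part of the library:

```python
def pick_providers(available):
    """Order ONNX Runtime execution providers by preference:
    MIGraphX (ROCm) first, then DirectML, then the CPU fallback.
    In real code, `available` would come from
    onnxruntime.get_available_providers()."""
    preferred = ["MIGraphXExecutionProvider",
                 "DmlExecutionProvider",
                 "CPUExecutionProvider"]
    return [p for p in preferred if p in available]

# On a Windows/DirectML build, MIGraphX is absent:
print(pick_providers(["DmlExecutionProvider", "CPUExecutionProvider"]))
# → ['DmlExecutionProvider', 'CPUExecutionProvider']
```

The resulting list would then be passed as the `providers` argument to `onnxruntime.InferenceSession`, which falls back down the list if a preferred provider cannot initialize.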


    • #3
      This is a bummer: https://github.com/tensorflow/tensorflow/issues/18307

      This looks like a can of worms: https://github.com/onnx/tensorflow-onnx

      I'm trying to hold my middle finger down, but it's difficult!



      • #4
        TensorFlow only supports exporting many functions at ONNX opsets 10 and 11, while DirectML supports 7-8...
        Having used TensorRT, I can say it's quite well supported and gives very significant speedups over the training engines (TensorFlow, PyTorch, etc.).
        No idea whether DirectML gives such speedups, but it doesn't support the ONNX opset I need.

        The MIGraphX back-end seems very early. The documentation doesn't even mention which ONNX opset is supported.
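The opset mismatch described above reduces to a range check: a back-end can only run a model if the opset the model was exported at falls inside the window the back-end supports. A small sketch, using the numbers quoted in this thread; the helper is purely illustrative (in real code a model's opset comes from `onnx.load(path).opset_import`):

```python
def opset_supported(model_opset, backend_min, backend_max):
    """True if a model exported at `model_opset` fits inside the
    opset window a back-end claims to support."""
    return backend_min <= model_opset <= backend_max

# Figures quoted above: tf2onnx exports many ops only at opset 10-11,
# while DirectML supported roughly 7-8 -- so the export falls outside
# the window:
print(opset_supported(11, 7, 8))    # → False
print(opset_supported(11, 10, 11))  # → True
```

This is why the exporter's minimum opset and the runtime's maximum opset both matter: there has to be at least one version in the intersection.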



        • #5
          Originally posted by Jabberwocky View Post
          This looks like a can of worms: https://github.com/onnx/tensorflow-onnx
          I'm not so negative. The ONNX project started officially in December 2017 and has been open since December 2018, so it's a very young project (TensorFlow, for example, is five years old).
          Last edited by boboviz; 07 February 2020, 06:53 AM.



          • #6
            Originally posted by boboviz View Post
            I'm not so negative. The ONNX project started officially in December 2017 and has been open since December 2018, so it's a very young project (TensorFlow, for example, is five years old).
            There is an insane amount of wasted electricity, processing power, development time, and frustration for anyone trying to run popular models on non-CUDA GPUs.

            I am not an expert, but this is my understanding: TensorFlow is a de facto standard defined by Google's Brain team, and TensorFlow does not want to support other back-ends. Now Facebook/Microsoft, AMD, Red Hat and others are working in silos to come up with their own solutions to run TensorFlow, and all are developing forks that are not being upstreamed.

            There are other good frameworks, but it's too expensive to move models at this point in time. Hence TensorFlow is currently a gatekeeper (this is where my negativity comes from) for anyone wanting to enter the industry. Once you have spent an insane amount of time learning how everything works, you're okay and can probably convert models and do really awesome things. My emphasis is on newcomers or people from other industries who don't have or want a CS degree.

            I don't know why we always have to depend on Dave for sorting out industry wide problems: https://youtu.be/KfDQb6xOkXg?t=244 I'm not saying he does 100% of the work, but he drives it and puts it all together.



            • #7
              Originally posted by Jabberwocky View Post
              I am not an expert, but this is my understanding: TensorFlow is a de facto standard defined by Google's Brain team, and TensorFlow does not want to support other back-ends. Now Facebook/Microsoft, AMD, Red Hat and others are working in silos to come up with their own solutions to run TensorFlow, and all are developing forks that are not being upstreamed.
              Not sure about the others, but AFAIK all of our work related to TensorFlow support on HIP / ROCm has been upstreamed, and we have "community supported" ROCm builds coming out of the main Tensorflow source repo. HIP/ROCm support has been upstream for about 5 months now IIRC.

              https://github.com/tensorflow/tensorflow (scroll down to Community Supported builds)


              • #8
                Originally posted by bridgman View Post

                Not sure about the others, but AFAIK all of our work related to TensorFlow support on HIP / ROCm has been upstreamed, and we have "community supported" ROCm builds coming out of the main Tensorflow source repo. HIP/ROCm support has been upstream for about 5 months now IIRC.

                https://github.com/tensorflow/tensorflow (scroll down to Community Supported builds)
                I stand corrected. HIP is indeed listed under community supported builds. Thanks for letting me know.
