They absolutely still need to groom the full software ecosystem. If I cannot install a ROCm-enabled build of PyTorch from conda-forge, I won't be able to use it and won't switch away from CUDA or NVIDIA GPUs. Supporting only pip, or something like a Fedora base install, is not enough. It would be very low effort for AMD to support this.
If they want people to flock to their MI300s for ML on Azure soon, they should make the ecosystem easy enough that devs can switch without friction. For our company, only price/performance matters, provided there is no migration barrier.
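To be clear about how small the code-level barrier already is: ROCm builds of PyTorch reuse the torch.cuda API, so existing device code typically runs unchanged once the right build is installed. The friction is in getting that build, not in porting. A minimal sketch, assuming a reasonably recent PyTorch (the version attributes are standard; the rest is just illustrative):

    import torch

    def describe_backend() -> str:
        """Report which GPU backend this torch build was compiled against."""
        if torch.version.hip is not None:    # set only on ROCm/HIP builds
            return f"ROCm/HIP {torch.version.hip}"
        if torch.version.cuda is not None:   # set only on CUDA builds
            return f"CUDA {torch.version.cuda}"
        return "CPU-only build"

    if __name__ == "__main__":
        print(describe_backend())
        if torch.cuda.is_available():        # True on both CUDA and ROCm builds
            x = torch.randn(1024, 1024, device="cuda")
            print((x @ x).sum().item())      # same code path on either vendor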