Being able to offload PyTorch onto Ryzen AI with just a "pip install" would be a great feature. It would make laptops much faster at the kinds of data analysis I do.
AMD Talking Up Open-Source & Open Standards Ahead Of "Advancing AI" Event
Ryzen AI is mostly impossible to use at the moment. The only thing you can do with it is accelerate ONNX graphs, not due to lack of interest but because AMD has thus far not provided any other way to execute anything on it.
Even if it were to work, it would only help a bit with power consumption in most cases, as AI is very much memory-bandwidth limited.
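For reference, this is roughly what that ONNX-only path looks like: a minimal sketch using onnxruntime with the Vitis AI execution provider, which is how AMD currently exposes the Ryzen AI NPU. The model path and input name here are placeholders, and AMD's setup guide may require extra provider options beyond this.

    # Minimal sketch: running an ONNX graph on Ryzen AI via onnxruntime.
    # "model.onnx" and the input name "input" are placeholders. The CPU
    # provider is a fallback; the fallback is silent, so verify that the
    # Vitis AI EP actually loaded on your install.
    import numpy as np
    import onnxruntime as ort

    session = ort.InferenceSession(
        "model.onnx",
        providers=["VitisAIExecutionProvider", "CPUExecutionProvider"],
    )
    x = np.random.rand(1, 3, 224, 224).astype(np.float32)
    outputs = session.run(None, {"input": x})
    print(outputs[0].shape)

Anything that can't be expressed as an ONNX graph simply has no route to the NPU right now, which is the whole problem.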
Originally posted by Jabberwocky:
I hope they will announce something that is already working, not some theoretical "in the future we will do this and that" BS. Compute/AI has been in need of practical open standards for more than a decade. Until now it has been mostly talk and not much walk.
I find tiny corp is producing the most practical open-source route for LLaMA and Stable Diffusion. It's ironic that OpenAI, Google, Microsoft, Facebook, Apple, Amazon, IBM, Oracle, Nvidia, Intel*, AMD*, or any other big company hasn't been able to achieve what a tiny company can: a practical open AI framework that runs on all devices.
GitHub - tinygrad/tinygrad: You like pytorch? You like micrograd? You love tinygrad! ❤️
I hope they consider it as part of their standardization.
Tinygrad supports the following backends:
- CPU
- GPU (OpenCL)
- C Code (Clang)
- LLVM
- METAL
- CUDA
- Triton
- PyTorch
- HIP
- WebGPU
*Intel and AMD are at least trying, but without enough focus on supporting different backends. Nvidia has been doing the exact opposite: the usual vendor lock-in, all the way.
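To show how little ceremony tinygrad needs, here is a minimal sketch; the backend is picked with an environment variable (e.g. GPU=1 for OpenCL, CUDA=1, CLANG=1) instead of code changes. Since tinygrad is alpha software, the exact import path and method names may have shifted.

    # Minimal tinygrad sketch: the same script runs on any backend in the
    # list above, selected via environment variable (GPU=1, CUDA=1, ...).
    # The API is alpha and moves quickly, so details may differ.
    from tinygrad.tensor import Tensor

    x = Tensor.randn(4, 8)
    w = Tensor.randn(8, 2)
    y = x.matmul(w).relu()
    print(y.numpy())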
Originally posted by ET3D:
I'm totally not sure why you're bundling AI, a useful technology which helps in a wide variety of use cases, together with blockchain stuff.
I commented on it because Nvidia, AMD, and Intel can't say anything this year without inserting 'AI' into it, which is annoying.
Originally posted by oleid:
How do you like Apache TVM?
I decided not to use it for 2 reasons:
1) The docs are out of date.
2) It seems like there's too much integration required to get started (this might be wrong, given the out-of-date docs).
I don't have a lot of time for research on this hobbyist project.
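To give a sense of the integration overhead: even TVM's minimal flow involves a frontend import, a compile step, and a separate graph-executor stage. A rough sketch using the Relay API (which may itself be the out-of-date part; the model path and input name/shape are placeholders):

    # Rough sketch of a minimal Apache TVM compile-and-run flow.
    # "model.onnx" and the input name/shape are placeholders.
    import numpy as np
    import onnx
    import tvm
    from tvm import relay
    from tvm.contrib import graph_executor

    onnx_model = onnx.load("model.onnx")
    shape_dict = {"input": (1, 3, 224, 224)}
    mod, params = relay.frontend.from_onnx(onnx_model, shape=shape_dict)

    # Compile for the local CPU; real use means choosing and tuning a target.
    lib = relay.build(mod, target="llvm", params=params)

    dev = tvm.cpu()
    module = graph_executor.GraphModule(lib["default"](dev))
    module.set_input("input", np.random.rand(1, 3, 224, 224).astype("float32"))
    module.run()
    out = module.get_output(0).numpy()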
Originally posted by boboviz:
That's great, but... "tinygrad is still alpha software" (not even "beta", only "alpha")
DirectML has so far proved the most useful, but even that depends on Microsoft keeping the software up to date, and they have started to slow down; we are currently seeing generic responses like: "While I can't provide a roadmap at the moment, please know that your feedback is valuable to us. We will follow up once we can review this request." No newer versions of PyTorch or Python are being supported.
It would be great to have some standardization.
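For comparison, the torch-directml path is roughly the following; the catch is that the package wraps a pinned PyTorch build, which is exactly why stalled updates leave you stuck on old PyTorch and Python versions.

    # Minimal torch-directml sketch: DirectML is exposed as an explicit
    # device object rather than a device string like "cuda".
    import torch
    import torch_directml

    dml = torch_directml.device()  # first DirectML-capable GPU
    a = torch.randn(1024, 1024, device=dml)
    b = torch.randn(1024, 1024, device=dml)
    c = (a @ b).cpu()  # compute on the DML device, copy back to CPU
    print(c.shape)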
Originally posted by Jabberwocky:
I decided not to use it for 2 reasons:
1) The docs are out of date.
2) It seems like there's too much integration required to get started (this might be wrong, given the out-of-date docs).
I don't have a lot of time for research on this hobbyist project.
Yeah, for a hobbyist project that's definitely overkill. I'm going to investigate huggingface/candle next. Just heard about it yesterday. It seems to address most grievances I have with TensorFlow.
Originally posted by oleid:
Yeah, for a hobbyist project that's definitely overkill. I'm going to investigate huggingface/candle next. Just heard about it yesterday. It seems to address most grievances I have with TensorFlow.
For my system, Candle won't be as fast as PyTorch-DirectML or tinygrad, but it sure can do a lot. There's some interesting development going on in the issues and PRs, and in some related projects like llama.cpp.
I think Candle would be a good test for rusticl.
I'm going to be following this closely. Thanks for sharing!