AMD Talking Up Open-Source & Open Standards Ahead Of "Advancing AI" Event


  • #11
    Being able to offload PyTorch onto Ryzen AI, with just a "pip install," would be a great feature. It would make laptops much faster at the kinds of data analysis I do.
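    Something like this is what I'm imagining; to be clear, the torch_ryzenai package below is entirely made up to illustrate the wish, nothing like it ships today:

    import torch
    import torch_ryzenai  # hypothetical package: pip install torch-ryzenai

    # Hypothetical: grab the Ryzen AI NPU as a torch device and offload to it.
    npu = torch_ryzenai.device()
    x = torch.randn(1024, 1024).to(npu)
    print((x @ x).relu().cpu())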


    • #12
      Ryzen AI is mostly impossible to use at the moment: the only thing you can do with it is accelerate ONNX graphs. That's not for lack of interest, but because AMD has thus far not provided any other way to execute anything on it.

      Even if it were to work, in most cases it would only help a bit with power consumption, since AI workloads are very much memory-bandwidth limited.
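      For reference, that ONNX path looks roughly like this. A minimal sketch, assuming an onnxruntime build that ships the Vitis AI execution provider (the model.onnx file is a placeholder):

      import onnxruntime as ort
      import numpy as np

      # Prefer the Vitis AI execution provider (what Ryzen AI exposes) if the
      # installed onnxruntime build has it; otherwise fall back to the CPU.
      providers = [p for p in ("VitisAIExecutionProvider", "CPUExecutionProvider")
                   if p in ort.get_available_providers()]
      session = ort.InferenceSession("model.onnx", providers=providers)

      # Run with a dummy input shaped like the model's first input.
      inp = session.get_inputs()[0]
      shape = [d if isinstance(d, int) else 1 for d in inp.shape]
      x = np.random.rand(*shape).astype(np.float32)
      print(session.run(None, {inp.name: x}))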


      • #13
        Originally posted by Jabberwocky View Post
        I hope they will announce something that already works, not some theoretical "in the future we will do this and that" BS. Compute / AI has been in need of practical open standards for more than a decade. Until now it has been mostly talk and not much walk.

        I find that tiny corp is producing the most practical open-source route for LLaMA and Stable Diffusion ("SD"). It's ironic that OpenAI, Google, Microsoft, Facebook, Apple, Amazon, IBM, Oracle, Nvidia, Intel*, AMD*, or any other big company hasn't been able to achieve what a tiny company can: a practical open AI framework that runs on all devices.

        tinygrad on GitHub: https://github.com/tinygrad/tinygrad ("You like pytorch? You like micrograd? You love tinygrad! ❤️")


        I hope they consider it as part of their standardization.

        Tinygrad supports the following:
        • CPU
        • GPU (OpenCL)
        • C Code (Clang)
        • LLVM
        • Metal
        • CUDA
        • Triton
        • PyTorch
        • HIP
        • WebGPU
        Not only that, but it's also relatively easy to add your own device support.
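        As a taste of the PyTorch-like API, a minimal sketch (tinygrad is alpha software, so details may shift between releases; the backend is picked at launch with an environment variable such as GPU=1 or CUDA=1):

        from tinygrad.tensor import Tensor

        # PyTorch-style tensor ops; the backend (CPU, OpenCL, CUDA, ...) is
        # selected via environment variable when the script is launched.
        x = Tensor.randn(3, 3)
        y = x.matmul(x).relu()
        print(y.numpy())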



        *Intel and AMD are at least trying, but without enough focus on supporting different backends. Nvidia has been doing the exact opposite, as usual: vendor lock-in all the way.
        How do you like Apache TVM?


        • #14
          Originally posted by ET3D View Post
          I'm totally not sure why you're bundling AI, a useful technology which helps in a wide variety of use cases, together with blockchain stuff.
          Because a lot of idiots with little to no understanding of the technology are hyping it to the moon, just like the previous darlings. The usefulness of the actual thing being pumped up (machine learning: high, NFTs: zero) has no bearing on that. Large amounts of wealth have been moved on the basis of belief in all of these things, and I expect that to continue with whatever the next craze is.

          I commented on it because Nvidia, AMD, and Intel can't say anything this year without inserting 'AI' into it, which is annoying.


          • #15
            Originally posted by Teggs View Post
            I commented on it because Nvidia, AMD, and Intel can't say anything this year without inserting 'AI' into it, which is annoying.
            I understand that the AI hype can be annoying, but bundling it with the non-useful stuff is also annoying. Yes, it's a bubble, but it's like the internet bubble: a lot of companies that had to do with the internet weren't successful, yet online interaction became a backbone of modern computing. I think AI is in the same boat.


            • #16
              Originally posted by oleid View Post

              How do you like Apache TVM?
              I decided not to use it for 2 reasons.

              1) The docs are out of date.
              2) It seems like there's too much integration required to get started (though this impression might be wrong, given the out-of-date docs).

              I don't have a lot of time for research on this hobbyist project.


              • #17
                Originally posted by boboviz View Post

                That's great, but... "tinygrad is still alpha software" (not even "beta", only "alpha")
                My problem is that on consumer hardware, everything besides Nvidia + CUDA is still in the alpha phase.

                DirectML has so far proved the most useful, but even that depends on Microsoft keeping the software up to date, and they have started to slow down. We are currently seeing generic responses like: "While I can't provide a roadmap at the moment, please know that your feedback is valuable to us. We will follow up once we can review this request." No newer versions of PyTorch or Python are being supported.
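                For what it's worth, the torch-directml side itself is simple to set up. A minimal sketch, assuming pip install torch-directml and a DirectX 12 capable GPU:

                import torch
                import torch_directml

                # torch_directml.device() returns the default DirectML device.
                dml = torch_directml.device()

                # Move tensors over and compute as usual; fetch results via .cpu().
                x = torch.randn(4, 4).to(dml)
                y = (x @ x).relu()
                print(y.cpu())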

                It would be great to have some standardization.


                • #18
                  Originally posted by Jabberwocky View Post

                  I decided not to use it for 2 reasons.

                  1) The docs are out of date.
                  2) It seems like there's too much integration required to get started (though this impression might be wrong, given the out-of-date docs).

                  I don't have a lot of time for research on this hobbyist project.
                  Yeah, for a hobbyist project that's definitely overkill. I'm going to investigate huggingface/candle next; I just heard about it yesterday. It seems to address most of the grievances I have with TensorFlow.


                  • #19
                    Originally posted by oleid View Post

                    Yeah, for a hobbyist project that's definitely overkill. I'm going to investigate huggingface/candle next; I just heard about it yesterday. It seems to address most of the grievances I have with TensorFlow.
                    I was surprised that I wasn't aware of this and then impressed after scanning over the code and discussions.

                    For my system, Candle won't be as fast as PyTorch-DirectML or tinygrad, but it sure can do a lot. There's some interesting development going on in the issues and PRs, and in some related projects like llama.cpp.

                    I think Candle would be a good test for rusticl.

                    I'm going to be following this closely. Thanks for sharing!


                    • #20
                      Originally posted by Jabberwocky View Post

                      For my system, Candle won't be as fast as PyTorch-DirectML or tinygrad, but it sure can do a lot. [...] Thanks for sharing!

                      You're welcome!
                      Out of curiosity: why will it be slower? Is it the lack of DirectML support?
