AMD's Newest Open-Source Surprise: "Peano" - An LLVM Compiler For Ryzen AI NPUs

  • #1

    Phoronix: AMD's Newest Open-Source Surprise: "Peano" - An LLVM Compiler For Ryzen AI NPUs

    There was a very exciting Friday evening code drop out of AMD... They announced a new project called Peano that serves as an open-source LLVM compiler back-end for AMD/Xilinx AI engine processors with a particular focus on the Ryzen AI SOCs with existing Phoenix and Hawk Point hardware as well as the upcoming XDNA2 found with the forthcoming Ryzen AI 300 series...

  • #2
    Good start, AMD! Although this has been said a million times, it's worth saying again: we need every college kid to not only have access to the new hardware, but also be able to program it easily. That was the key to CUDA's success. It looks like AMD has learnt a thing or two.

    The other thing that needs to be said is that even though Nvidia wants you to think you need tensor cores in your GPU for basic sustenance of life itself, they are only really useful if you're training your own neural network. For the majority of users, an inference engine that works on a low power budget is all that's needed to experience the benefits of ML/AI. Also, neural processing is its own thing, different from the traditional CPU and GPU paradigms. That's why I wouldn't count out AMD or Intel yet, despite Nvidia's dominance in the AI boom.
    Last edited by arunbupathy; 08 June 2024, 07:29 AM.

  • #3
    Reached out to AMD to try to get any more information on their Linux plans...
    Michael Larabel
    https://www.michaellarabel.com/

  • #4
    I hope they make it a lot better than their subpar GPGPU support at least...

  • #5
    Noob question, and I should probably rtfm, but I guess this is using a special dialect of C to target the NPU? From my little experience with C-CUDA there was quite a learning curve there. Or is this different altogether?

  • #6
    Any interesting software that uses it?

  • #7
    Originally posted by Laughing1
    Any interesting software that uses it?

    It was just made public as open-source yesterday.
    Michael Larabel
    https://www.michaellarabel.com/

  • #8
    Originally posted by Michael

    It was just made public as open-source yesterday.

    Okay, thanks.

  • #9
    Still waiting for something like SR-IOV for my Vega 8. Intel provides that even with their quite weak iGPUs, but AMD only offers something like it with their external GPUs.

  • #10
    Originally posted by JanW
    Noob question, and I should probably rtfm, but I guess this is using a special dialect of C to target the NPU? [...] Or is this different altogether?

    The article says it is an LLVM back-end, so it can be used with any compiler front-end - in theory.
    I figure it will be used with the ROCm stack so that the existing ROCm support for TensorFlow, PyTorch and so on can be reused.
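
    To make that concrete: an LLVM back-end means any existing front-end, clang included, could in principle emit code for the AI engine from ordinary C - no new language dialect required, unlike CUDA. A minimal sketch of what that might look like is below; the target triple and the clang invocation in the comment are my assumptions about how such a back-end would typically be wired up, not something taken from the Peano repository.

      /* saxpy_kernel.c - plain C99; nothing NPU-specific appears in the source.
       * Targeting the NPU would happen entirely in the compiler invocation,
       * e.g. something along the lines of (triple name is a guess, check the
       * Peano docs for the real one):
       *
       *   clang --target=aie2-none-unknown-elf -O2 -c saxpy_kernel.c
       */
      void saxpy(int n, float a, const float *x, float *y)
      {
          /* Simple scalar loop; a capable back-end could map this onto the
           * AI engine's vector units without the source saying anything
           * NPU-specific. */
          for (int i = 0; i < n; ++i)
              y[i] = a * x[i] + y[i];
      }

    In practice most people will probably never write such kernels by hand and will instead reach the back-end indirectly through the higher-level ML stacks, which is exactly why reusing the existing ROCm plumbing would make sense.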
