AMD Talking Up Open-Source & Open Standards Ahead Of "Advancing AI" Event

    Phoronix: AMD Talking Up Open-Source & Open Standards Ahead Of "Advancing AI" Event

    The previously-announced launch of 5th Gen Xeon Scalable "Emerald Rapids" and Intel Core Ultra "Meteor Lake" processors takes place on 14 December via a webcast from NASDAQ that Intel is promoting as the "AI Everywhere" event. AMD, meanwhile, recently announced an "Advancing AI" event for the week prior. While details on the AMD Advancing AI event are light, it's all the more interesting now with AMD teasing open standards and open-source around the event...


  • #2
    "Advancing AI
    to be open
    to anything."

    No means No unless you're an AI-powered sex bot. I'd also prefer it if it weren't so open to anything that it could consider going Terminator and picking a Final Solution to all the problems we ask it about.

    Y'all might wanna add an asterisk to "anything".

    I hope it means ROCm will be available on all of their own and their competitors' hardware, or that they're coming out with some sort of CUDA translation layer, or copying oneAPI, or something along those lines.



    • #3
      Remember, remember the 6th of December



      • #4
        Cryptomining --> blockchain --> NFTs --> 'AI' --> (insert next term to hype here)

        And after that, probably --> cryptomining again.

        P. T. Barnum was right. (Whether he actually said it or not.)



        • #5
          Open source and open standards?

          Pure garbage!

          Dear Leader Jensen, please give me more proprietary, locked crap to protect me from AMD’s evil openness!



          • #6
            Originally posted by skeevy420 View Post
            No means No unless you're an AI-powered sex bot.
            I don't have experience with sex bots, but I can't imagine they even know the word 'no'. It's probably all "yes, yes, yesss!!!"


            Originally posted by Teggs View Post
            Cryptomining --> blockchain --> NFTs --> 'AI' --> (insert next term to hype here)
            I'm honestly not sure why you're bundling AI, a useful technology that helps in a wide variety of use cases, together with the blockchain stuff.



            • #7
              I hope they will announce something that is already working, not some theoretical "in the future we will do this and that" BS. Compute/AI has been in need of practical open standards for more than a decade. Until now it has been mostly talk and not much walk.

              I find that tiny corp is producing the most practical open-source route for LLaMA and Stable Diffusion ("SD"). It's ironic that OpenAI, Google, Microsoft, Facebook, Apple, Amazon, IBM, Oracle, Nvidia, Intel*, AMD*, or any other big company isn't able to achieve what a tiny company can do for practical open AI frameworks that run on all devices.

              GitHub: tinygrad/tinygrad ("You like pytorch? You like micrograd? You love tinygrad! ❤️")


              I hope they consider it as part of their standardization.

              Tinygrad supports the following:
              • CPU
              • GPU (OpenCL)
              • C Code (Clang)
              • LLVM
              • METAL
              • CUDA
              • Triton
              • PyTorch
              • HIP
              • WebGPU
              Not only that, but it's also relatively easy to add your own device support:

              GitHub: tinygrad/tinygrad (the originally linked file has since moved; GitHub now returns "File not found")
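
              To give a taste of why it's practical, here's a minimal sketch of the tinygrad API (hedged: tinygrad is alpha software, so the import path and the requires_grad keyword are assumptions based on the 2023-era releases; the backend is chosen with an environment variable, not in code):

                  import os
                  # Pick a backend before importing tinygrad; swap for CUDA=1, GPU=1 (OpenCL), METAL=1, ...
                  os.environ.setdefault("CLANG", "1")  # plain C backend

                  from tinygrad.tensor import Tensor   # import path as of the 2023 releases

                  x = Tensor.randn(4, 4)
                  w = Tensor.randn(4, 4, requires_grad=True)
                  loss = x.matmul(w).relu().sum()      # the same graph runs on every backend listed above
                  loss.backward()                      # micrograd-style autograd
                  print(loss.numpy(), w.grad.numpy())

              The device-specific part of the stack is deliberately tiny, which is why the backend list above is so long.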


              *Intel and AMD are at least trying, but without enough focus on supporting different backends. Nvidia has been doing the exact opposite, as usual: vendor lock-in all the way.



              • #8
                Still a way to go: too little GPU support, and NV is still holding its paw down on a lot of computing tasks. People are still too bound to CUDA.

                In other AMD-related news:

                :/
                Stop TCPA, stupid software patents and corrupt politicians!



                • #9
                  Originally posted by Jabberwocky View Post
                  I find that tiny corp is producing the most practical open-source route for LLaMA and Stable Diffusion ("SD"). It's ironic that OpenAI, Google, Microsoft, Facebook, Apple, Amazon, IBM, Oracle, Nvidia, Intel*, AMD*, or any other big company isn't able to achieve what a tiny company can do for practical open AI frameworks that run on all devices.
                  That's great, but... "tinygrad is still alpha software" (not even "beta", only "alpha")



                  • #10
                    Originally posted by skeevy420 View Post
                    I hope it means ROCm will be available on all of their own and their competitors' hardware, or that they're coming out with some sort of CUDA translation layer, or copying oneAPI, or something along those lines.
                    I really find it funny how much ROCm gets criticized by people who don't understand what it even is. ROCm is AMD's implementation of HIP; HIP is an open standard backed by several companies that is literally just CUDA with cu_whatever_function replaced by hip_whatever_function, and of course there is a tool (hipify) that does exactly that replacement. Yes, ROCm is a pretty much 99%-compatible implementation of CUDA, and really almost all CUDA code works on ROCm with no modification. The only issues you will face are that CUDA devs sometimes assume things like a warp (wavefront) size of 32, which isn't valid CUDA either but happens to work on current Nvidia GPUs, and that AMD GPUs besides CDNA don't have matrix instructions, though barely any CUDA code in the wild uses those.

                    Even extremely complicated projects like PyTorch are about 90% common code across the CUDA and ROCm targets, with most of the ROCm-specific ifdefs related to performance optimization; see the sketch below.
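
                    To make that concrete, a minimal sketch (my own illustration, not from the post): the ROCm builds of PyTorch deliberately keep the "cuda" device name, so the same script runs unmodified on NVIDIA and AMD GPUs, and only the build metadata tells you which stack you're on.

                        import torch

                        # True on both a CUDA build (NVIDIA) and a ROCm build (AMD), given a working GPU
                        print(torch.cuda.is_available())

                        # The visible difference is the build metadata:
                        print(torch.version.cuda)  # e.g. "12.1" on a CUDA build, None on a ROCm build
                        print(torch.version.hip)   # e.g. "5.7.x" on a ROCm build, None on a CUDA build

                        # "cuda" is the device name on both; under ROCm it maps to the AMD GPU via HIP
                        x = torch.randn(1024, 1024, device="cuda")
                        y = (x @ x).relu().sum()
                        print(y.item())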

