Intel Habana Labs SynapseAI Core Updated With Gaudi2 Support


  • Intel Habana Labs SynapseAI Core Updated With Gaudi2 Support

    Phoronix: Intel Habana Labs SynapseAI Core Updated With Gaudi2 Support

    While the Intel-owned Habana Labs Linux software stack is these days a shining example of an open-source AI accelerator solution, with a mainline kernel driver and a role in bringing together the new compute accelerator subsystem, it wasn't always so blessed. Initially there were closed-source user-space bits that, fortunately, were opened up last year as SynapseAI Core...

    https://www.phoronix.com/news/Intel-...AI-Core-Gaudi2

  • #2
    >Habana Labs Linux software stack is a shining example of an open-source AI accelerator solution with mainline kernel driver support

    this reads like sarcasm. have you looked at the userspace they published to get their driver in? check https://github.com/HabanaAI/SynapseAI_Core#limitations

    what kind of neural network can you build with single-node graphs? especially when you have to implement all the operations yourself.
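    To make the complaint concrete, here is a hypothetical sketch (this is NOT the real SynapseAI API, just an illustration of the limitation the README describes): when a "graph" may contain only one node, every multi-op network has to be chained together by hand in host code, with each op implemented by the user.

    ```python
    # Hypothetical sketch, not SynapseAI code: a graph restricted to one node.
    class SingleNodeGraph:
        def __init__(self):
            self.node = None

        def add_node(self, op):
            # The whole point of a graph compiler is fusing many ops;
            # a single-node graph forbids exactly that.
            if self.node is not None:
                raise ValueError("single-node graph: only one op allowed")
            self.node = op

        def run(self, *inputs):
            return self.node(*inputs)

    # A two-op "network" can't live in one graph; the user chains separate
    # graphs and hand-writes each op themselves.
    relu = SingleNodeGraph()
    relu.add_node(lambda x: [max(0.0, v) for v in x])

    scale = SingleNodeGraph()
    scale.add_node(lambda x: [2.0 * v for v in x])

    out = scale.run(relu.run([-1.0, 0.5]))  # host code acts as the compiler
    ```

    The hand-written chaining in the last line is the work a real graph compiler would normally do for you.
    
    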

    this is malicious compliance at its finest. the kernel community bears the burden of maintaining their driver, but to do anything with the hardware, you must use a 2000-line install.sh script to pull in dozens of blobs. of course this only works on a few tested distros of the right version. https://docs.habana.ai/en/latest/Ins...tallation-bare

    there is nothing open about any ML accelerators, they all have their very special fork of LLVM under lock and key. the OEMs are run by boomers who are afraid of disclosing their unique and wonderful hardware details (some form of a systolic array). some even run the compiler as a service, so you have to convert your models through their REST API before uploading to the device. many require NDAs before giving you access to the toolchain. if anyone knows otherwise, please let me know. so far the most open stack is Intel's oneAPI when combined with their GPUs.



    • #3
      Originally posted by ziguana
      this is malicious compliance at its finest. the kernel community bears the burden of maintaining their driver, but to do anything with the hardware, you must use a 2000-line install.sh script to pull in dozens of blobs. of course this only works on a few tested distros of the right version. https://docs.habana.ai/en/latest/Ins...tallation-bare

      there is nothing open about any ML accelerators, they all have their very special fork of LLVM under lock and key. the OEMs are run by boomers who are afraid of disclosing their unique and wonderful hardware details (some form of a systolic array). some even run the compiler as a service, so you have to convert your models through their REST API before uploading to the device. many require NDAs before giving you access to the toolchain. if anyone knows otherwise, please let me know. so far the most open stack is Intel's oneAPI when combined with their GPUs.
      Couldn't agree more. We've already had these types of discussions: the userspace is a bunch of BS, yet they want the kernel community to maintain a driver for hardware that isn't even common. Even vendors of more common hardware have caught flak for this.

      If everything else is closed, they can maintain their kernel code on their own.
