Tachyum Gets FreeBSD Running On Their Prodigy ISA Emulation Platform For AI / HPC


  • Tachyum Gets FreeBSD Running On Their Prodigy ISA Emulation Platform For AI / HPC

    Phoronix: Tachyum Gets FreeBSD Running On Their Prodigy ISA Emulation Platform For AI / HPC

    Tachyum is a startup working on "the world's first universal processor" that can be used for everything from AI to HPC to hyperscale computing needs. The Tachyum processor aims to consolidate discrete TPUs / GPUs / XPUs into a single homogeneous processor architecture. While still running as an emulated platform, Tachyum has announced that in addition to Linux they have managed to boot and run FreeBSD on their ISA...


  • #2
    They still don't have silicon, it seems, despite claiming back in 2018 that tape-out would happen in 2019. Only FPGA and QEMU.

    faster, 10x lower power, and 1/3 the cost of competing products
    When something sounds too good to be true, it usually isn't.

    I want to be proven wrong, but I'm skeptical.

    Comment


    • #3
      Originally posted by ldesnogu View Post
      They still don't have silicon, it seems, despite claiming back in 2018 that tape-out would happen in 2019. Only FPGA and QEMU.


      When something sounds too good to be true, it usually isn't.

      I want to be proven wrong, but I'm skeptical.
      Exactly my thought. Best case this is a disappointment, second worst it's vaporware, and worst case it's a scam.

      Comment


      • #4
        Sounds like another libreSOC scam. All of those CPU/TPU/GPU/XPU things are different silicon for a reason: you get better performance with hardware designed for its purpose.

        They might as well add "we've built a quantum computer" because they can simulate it on their simulated chips. Vaporware inception!

        Comment


        • #5
          What I'm missing here are tech details on the hardware.

          Comment


          • #6
          Too good to be true, unfortunately. The world needs this ASAP (the current architecture mess is a total disaster, prices are skyrocketing, etc.), but it's very unlikely to happen. The duopoly hell is going to last quite a long while; maybe a bit of oligopoly will emerge with ARM soon, but not much more than that.

            Comment


            • #7
              Originally posted by timofonic View Post
              Too good to be true, unfortunately. The world needs this ASAP (the current architecture mess is a total disaster, prices are skyrocketing, etc.), but it's very unlikely to happen.
              It's dubious that you can have all three. Look at general-purpose CPUs. You know why they don't excel at AI, graphics, HPC, etc., across all of their uses? Because specialization leads to better thermals, silicon use, power consumption, and ISA properties; that's why the current trend is adding more specialized units. GPUs excel at what they do, which is transforming floating-point values in a highly parallel, dependency-free (or close to it) setting. The price is that they're completely useless for driving a system. You can't have your cake and eat it too; whoever says you can is either delusional or a liar.
              Now, you can make things cheaper. If you assume you have specialized hardware, you can simplify your general-purpose CPUs, for example. For the tasks that do remain on your CPU, you could also probably make it somewhat faster. But 10x lower power consumption and 1/3 of the price while being faster? Nope, really unlikely to come true unless we're talking about a major breakthrough. I guess they lean on not needing as many chips for the 1/3-cost claim, but scale alone will probably not make that assertion true.
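
              For what it's worth, the marketing claim can be sanity-checked with some back-of-the-envelope arithmetic. All numbers below are invented for illustration; "1.1x faster" is just a conservative reading of "faster", not anything Tachyum has stated:

              ```python
              # Back-of-the-envelope check of what "faster, 10x lower power,
              # 1/3 the cost" would imply relative to a competing product.

              def efficiency_gain(perf_ratio, power_ratio, cost_ratio):
                  """Return the (perf/W, perf/$) gains implied by relative claims."""
                  return perf_ratio / power_ratio, perf_ratio / cost_ratio

              # Hypothetical reading of the claim: 1.1x perf, 0.1x power, 1/3 cost.
              perf_per_watt, perf_per_dollar = efficiency_gain(1.1, 1 / 10, 1 / 3)

              # Even with a modest 1.1x speedup, the claim implies roughly an
              # 11x perf/W lead and a 3.3x perf/$ lead over the competition.
              print(round(perf_per_watt, 2), round(perf_per_dollar, 2))
              ```

              That implied 11x perf-per-watt lead over incumbents on comparable process nodes is exactly the part that usually needs the "major breakthrough".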

              Comment


              • #8
                There are already some potential customers lining up for Tachyum's products. So while I understand the scepticism, there could be something disruptive to the market from them later in the year, or early next year. There is a cost in power and performance inherent in the classical offload model, and maybe they did find a way to better integrate different kinds of accelerators on their silicon, or to serve these workloads in a different way.

                A German supercomputing center has already signed a memorandum of understanding with them. They might have liked what they saw. On the other hand, the company needs to improve its execution: its timetable has slipped considerably, and the software infrastructure still seems to be at an early stage. But delays are nothing unheard of in the HPC sector; just look at Intel's and AMD's recent shortcomings in that regard.

                Comment


                • #9
                  Originally posted by sinepgib View Post

                  It's dubious that you can have all three. Look at general-purpose CPUs. You know why they don't excel at AI, graphics, HPC, etc., across all of their uses? Because specialization leads to better thermals, silicon use, power consumption, and ISA properties; that's why the current trend is adding more specialized units. GPUs excel at what they do, which is transforming floating-point values in a highly parallel, dependency-free (or close to it) setting. The price is that they're completely useless for driving a system. You can't have your cake and eat it too; whoever says you can is either delusional or a liar.
                  Now, you can make things cheaper. If you assume you have specialized hardware, you can simplify your general-purpose CPUs, for example. For the tasks that do remain on your CPU, you could also probably make it somewhat faster. But 10x lower power consumption and 1/3 of the price while being faster? Nope, really unlikely to come true unless we're talking about a major breakthrough. I guess they lean on not needing as many chips for the 1/3-cost claim, but scale alone will probably not make that assertion true.
                  There's definitely room for a more unified ISA like the one this is going for.

                  Right now you shuffle data to the CPU, or GPU, or tensor processor or whatever with its own instructions... but it would be advantageous to have those various components as part of the same core, using the same ISA.

                  AMD pushed this but kinda fumbled it, particularly in the AI space. Nvidia and Intel are going there as well (with Tegra and AMX, respectively), albeit very slowly, as they are heavily invested in their own discrete niches.
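
                  The data-shuffling cost being described can be sketched with a toy model. Every rate and overhead below is a made-up placeholder, not a measurement of any real hardware; the point is only the shape of the trade-off:

                  ```python
                  # Toy cost model (times in nanoseconds) for the classical offload
                  # model vs. an on-core unit sharing the CPU's ISA and caches.
                  # All rates below are invented purely for illustration.

                  def offload_time(n, transfer_per_elem, kernel_per_elem, launch_overhead):
                      """Discrete accelerator: copy in, launch, compute, copy back."""
                      return launch_overhead + 2 * n * transfer_per_elem + n * kernel_per_elem

                  def on_core_time(n, kernel_per_elem):
                      """Integrated unit: no copies or launch, but a slower compute rate."""
                      return n * kernel_per_elem

                  # Hypothetical rates: 4 ns/elem transfer, 10 ns/elem discrete kernel,
                  # 10 us launch overhead, 40 ns/elem on the slower integrated unit.
                  small, big = 100, 1_000_000

                  # Small batch: copies and launch overhead dominate, integrated wins.
                  print(on_core_time(small, 40) < offload_time(small, 4, 10, 10_000))   # True

                  # Big batch: the faster discrete kernel amortizes the copies and wins.
                  print(offload_time(big, 4, 10, 10_000) < on_core_time(big, 40))       # True
                  ```

                  Under these assumptions, a unified ISA mostly pays off for small or latency-sensitive batches, which is arguably where today's offload model hurts the most.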

                  Comment


                  • #10
                    Originally posted by brucethemoose View Post

                    but it would be advantageous to have those various components as part of the same core, using the same ISA.
                    Why, exactly?

                    Comment
