Teflon Merged To Mesa 24.1 As Gallium3D Frontend For TensorFlow Lite


    Phoronix: Teflon Merged To Mesa 24.1 As Gallium3D Frontend For TensorFlow Lite

    Teflon has been merged into Mesa 24.1 as a Gallium3D front-end that TensorFlow Lite can load for delegating the execution of operations in a neural network model. Teflon was initially created for the Etnaviv Gallium3D driver, to be able to run AI inferencing on Vivante NPUs...

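For context, the way TensorFlow Lite "loads" Teflon is through TFLite's external-delegate mechanism. Below is a minimal sketch of what that looks like from Python, assuming the delegate is built as `libteflon.so` (the name used in Mesa's build) and that `tflite_runtime` is installed; the `try_load_delegate` helper is my own illustration, not part of either project:

```python
# Hedged sketch: loading Mesa's Teflon delegate via TensorFlow Lite's
# external-delegate mechanism. "libteflon.so" is assumed to be the shared
# object Mesa produces; adjust the path for your install prefix.

def try_load_delegate(path="libteflon.so"):
    """Return a loaded TFLite delegate, or None if unavailable."""
    try:
        from tflite_runtime.interpreter import load_delegate
    except ImportError:
        return None  # tflite_runtime is not installed
    try:
        # load_delegate() dlopens the shared object; ops the delegate
        # claims are then offloaded instead of running on the CPU.
        return load_delegate(path)
    except (ValueError, OSError):
        return None  # delegate library missing or failed to initialize

delegate = try_load_delegate()
print("Teflon delegate available:", delegate is not None)
```

If the delegate does load, it would be passed to the interpreter via `Interpreter(model_path=..., experimental_delegates=[delegate])`, at which point supported operations run on the NPU.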

  • #2
    I don't personally use these kinds of things, but cheers: a new Gallium state tracker is always neat. If only they merged grover, T.T



    • #3
      Instructions to try it: https://hub.libre.computer/t/libre-c...tv1-guide/3095



      • #4
        This is cool and all, but I want to see a backend for running full-fat TensorFlow. Inference AND training.

        You could already run the inference-only mode on the Vulkan backend on top of any of the Vulkan drivers out there.

        Someone is working on an OpenCL backend for TF that could run on rusticl, but it's progressing at a snail's pace because it's only one dude working on it.



        • #5
          This class of NPU is not really made for training, since it's found in edge devices. There are dedicated GPUs and accelerators for that which cost 1000x more. You can do inferencing on Vulkan, but the cost, performance, and power will exceed your edge device's budget.



          • #6
            Originally posted by LoveRPi View Post
            This class of NPU is not really made for training since it's found in edge devices. There's dedicated GPUs and accelerators for this that cost 1000x. You can do inferencing on Vulkan but the cost, performance, and power will exceed your edge device's budget.

            I'm not well versed in this at all, but I had been under the impression that what you say is mainly true for large models like LLMs. I thought you could still train small models (e.g. computer vision) well on consumer GPUs, and do limited training like fine-tunes or LoRAs on the smaller LLMs with "okay" results on consumer hardware.

            That's just what I recall reading tangentially; I haven't really looked into it, but I'd hoped that some useful parts of training would be accessible with modest-sized models.



            • #7
              Loved the driver name
