Linux Developers Talk Again About An Accelerator Subsystem - Or Moving Them Into The GPU/DRM Area


    Phoronix: Linux Developers Talk Again About An Accelerator Subsystem - Or Moving Them Into The GPU/DRM Area

    On and off for years has been talk of an accelerator subsystem for the Linux kernel considering that for now most AI training/inference accelerator drivers end up lodged within the "char/misc" area of the kernel. That accelerator subsystem discussion has been restarted with talks of having such a subsystem or moving those drivers within the GPU/DRM subsystem space...


  • #2
It seems a logical choice for the DRM maintainers to oversee it. On the technical side there's a lot of shared infrastructure (DMA-BUF), and for good reason: GPUs have been the archetypal accelerators outside of graphics for years, via OpenCL/CUDA/ROCm and now compute-over-Vulkan (à la PyTorch on Android). The maintainers also have established processes for most of the things an accelerator needs: compilers, libraries, drivers, etc., because GPUs have been accelerators from the start, even within graphical workloads.

All of these startups looking to cash in on machine learning might want to be considered special in order to evade these rules, but I don't think the technical arguments stack up, and I don't think it's in anyone's best interest either. Asking to be treated as a generic char device after running into the DRM requirements should have been a big red flag. The requirements for *graphical* accelerators didn't come about from nothing; they exist because the proprietary alternative fucking sucked, for developers and users alike. People still loathe the GPU vendors (looking at you, PowerVR) who don't play ball. Materially, nothing has changed between the early days of GPU accelerators and the early days now of ML (etc.) ones. The latter promise to be just as much of a clusterfudge without a firm stance.

Make accelerators a subfolder of GPUs for all I care. It doesn't matter whether they sit there or under their own folder, so long as the process and governance are in place. You can always move it later anyway.



    • #3
Should be a no-brainer. AI accelerators will become an ever more integral part of modern hardware. Not trying to integrate them into a Linux subsystem would limit their future usefulness a lot. A nice vendor-independent API, like Mesa, would be marvelous.



      • #4
        I know it would be a lot more work, but wouldn't GPUs be a subset of accelerators and not the other way around?

        Probably not feasible. I'm just being picky and asking whether or not it makes sense.



        • #5
Oh yeah, badly mash this into DRM as well. The lack of separation between infrastructure (memory, IO, interrupt handling), display, and 3D rendering (command queues) has done us wonders these past 14 years. But don't dare touch media or you'll piss off the V4L guys. And 2D should never be talked about; only hot new buzzwords go in.

          -- the guy who trailblazed separation in display driver development
          Last edited by libv; 14 September 2021, 05:36 PM.



          • #6
Well, I see computing evolving toward a more accelerator-centric model on the hardware side, where the CPU does less and less of the actual computing and more of the controlling (RISC-V was actually designed with such a future in mind).
So having some kind of subsystem in place for all of this BEFORE it happens would prevent A LOT of headaches.
And even if it doesn't end up like that, it would probably still be better from an architectural POV than the status quo.
