There Is Another Debate Over An AI Accelerator Subsystem For Linux

  • There Is Another Debate Over An AI Accelerator Subsystem For Linux

    Phoronix: There Is Another Debate Over An AI Accelerator Subsystem For Linux

    With a number of new driver proposals recently for various AI-focused accelerators for the Linux kernel, those drivers currently go either into "char/misc" as the random catch-all area of the kernel or into the Direct Rendering Manager (DRM) subsystem traditionally used for GPU drivers. There's been yet another discussion this week over introducing a formal "accelerator" subsystem in the kernel for the growing number of AI devices whose vendors may be seeking to provide open-source drivers...
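    As a rough sketch of what "going into char/misc" means in practice, here is a minimal, hypothetical accelerator driver registering a character device through the misc subsystem. The "toy_npu" name and its empty ioctl handler are illustrative only and not taken from any real proposal:

        /* Hypothetical skeleton of an accelerator driver living in char/misc;
         * "toy_npu" and its ioctl handler are made-up names, not from any
         * real driver submission. */
        #include <linux/module.h>
        #include <linux/miscdevice.h>
        #include <linux/fs.h>
        #include <linux/errno.h>

        static long toy_npu_ioctl(struct file *f, unsigned int cmd, unsigned long arg)
        {
                /* Command submission, memory mapping, etc. would be dispatched here. */
                return -ENOTTY;
        }

        static const struct file_operations toy_npu_fops = {
                .owner          = THIS_MODULE,
                .unlocked_ioctl = toy_npu_ioctl,
        };

        static struct miscdevice toy_npu_dev = {
                .minor = MISC_DYNAMIC_MINOR,
                .name  = "toy_npu",        /* shows up as /dev/toy_npu */
                .fops  = &toy_npu_fops,
        };

        module_misc_device(toy_npu_dev);
        MODULE_LICENSE("GPL");

    A dedicated "accel" subsystem would give such drivers a common home and shared infrastructure instead of each one rolling its own character-device interface like this.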


  • #2
    Aren't we headed toward a world where accelerators are all handled by the CPU, bypassing the kernel? You tell the CPU what to do, but the CPU decides how and where to do it.

    • #3
      You can smell vendor lock-in when the hurdle they object to is "an open source consumer of the interface" that developers can use to test the code.

      The DRM maintainers, having had to deal with these kinds of shenanigans in the past, are probably more aware of what happens and are thus fighting harder over the rules than others.
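      For anyone wondering what "an open source consumer of the interface" looks like in practice, it can be as small as a userspace test program that opens the device node and exercises the driver's uAPI. A minimal sketch, assuming a hypothetical /dev/toy_npu device with a made-up submit ioctl and struct layout:

          /* Hypothetical userspace consumer exercising a made-up accelerator uAPI;
           * the device path, ioctl number and struct layout are illustrative only. */
          #include <stdio.h>
          #include <stdint.h>
          #include <fcntl.h>
          #include <unistd.h>
          #include <sys/ioctl.h>

          struct toy_npu_submit {
                  uint64_t cmd_buf_addr;   /* device-visible address of the command buffer */
                  uint32_t cmd_buf_size;
                  uint32_t flags;
          };

          #define TOY_NPU_IOCTL_SUBMIT _IOWR('N', 0x01, struct toy_npu_submit)

          int main(void)
          {
                  int fd = open("/dev/toy_npu", O_RDWR);
                  if (fd < 0) {
                          perror("open");
                          return 1;
                  }

                  struct toy_npu_submit req = { 0 };
                  if (ioctl(fd, TOY_NPU_IOCTL_SUBMIT, &req) < 0)
                          perror("ioctl");

                  close(fd);
                  return 0;
          }

      Without something like this in the open, reviewers have no way to verify that the kernel interface is actually usable outside the vendor's proprietary stack.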

      • #4
        I feel like we're going round in circles a little bit - we moved from CPU + optional FPU to all-in-one and now seem to be headed back to external (optional) accelerators.

        When will we reach all-integrated again, before further divergence...?

        • #5
          Originally posted by Paradigm Shifter View Post
          I feel like we're going round in circles a little bit - we moved from CPU + optional FPU to all-in-one and now seem to be headed back to external (optional) accelerators.

          When will we reach all-integrated again, before further divergence...?
          Pretty soon now — I've heard Apple Silicon's doing just the thing.

          • #6
            I'm surprised how quickly AI arrived on our desktop. Today digiKam, LibreOffice, and EasyEffects use AI.

            • #7
              Originally posted by promeneur View Post
              I'm surprised how quickly AI arrived on our desktop. Today digiKam, LibreOffice, and EasyEffects use AI.
              AI accelerators are for deep learning, which isn't used by desktop software. The stuff you're seeing in image/video processing is the execution of existing models (the resulting heuristics).

              The only exception is convolutional networks for video transcoding, since they outperform the dedicated video-compression hardware accelerators Intel has baked into its CPUs (for codecs like H.264 and HEVC). That is, you'll be able to stream video using slightly less power, or convert a DVD to MP4 with HandBrake a bit faster... Oh, and there are some rendering algorithms using deep-learning hardware, but I don't think they're really practical for gaming.
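              To make the training-vs-execution distinction concrete, here's a toy sketch with made-up numbers: "executing an existing model" just means applying weights that were already learned somewhere else, and the only thing an accelerator speeds up is the pile of multiply-accumulates.

                  /* Toy illustration of "executing an existing model": the weights below
                   * stand in for parameters learned elsewhere; no training happens here. */
                  #include <stdio.h>

                  #define IN  3
                  #define OUT 2

                  /* Pretend these were produced by training on a big machine. */
                  static const float W[OUT][IN] = { { 0.2f, -0.5f,  0.1f },
                                                    { 0.7f,  0.3f, -0.2f } };
                  static const float b[OUT] = { 0.05f, -0.1f };

                  static void infer(const float x[IN], float y[OUT])
                  {
                          for (int o = 0; o < OUT; o++) {
                                  y[o] = b[o];
                                  for (int i = 0; i < IN; i++)
                                          y[o] += W[o][i] * x[i];   /* multiply-accumulate: what accelerators speed up */
                          }
                  }

                  int main(void)
                  {
                          const float x[IN] = { 1.0f, 2.0f, 3.0f };
                          float y[OUT];

                          infer(x, y);
                          printf("%f %f\n", y[0], y[1]);
                          return 0;
                  }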

              • #8
                Originally posted by promeneur View Post
                I'm surprised how quickly AI arrived on our desktop. Today digiKam, LibreOffice, and EasyEffects use AI.
                I mean, it's been ten years since the resurgence of AI. It's about time some end-user products started using it.

                • #9
                  Originally posted by intelfx View Post
                  Pretty soon now — I've heard Apple Silicon's doing just the thing.
                  You're just picking Apple as the trend-setter, because sexy.

                  Another trend happening in the domain where most of these accelerators are primarily used is disaggregation. We're seeing memory move farther away from CPUs via CXL, which also supports accelerators. Nvidia and AMD are both moving in the direction of more disaggregation, in general.

                  The reason I think the disaggregation trend is the more relevant one is that people want to scale CPU cores, accelerated-compute power, and memory resources according to their needs. If you integrate all three, then you cannot scale one without scaling the others.
                  Last edited by coder; 06 August 2022, 10:47 PM.

                  • #10
                    Originally posted by c117152 View Post
                    AI accelerators are for deep learning, which isn't used by desktop software.
                    I can't comment on those other examples, but both AMD and Nvidia (maybe also Intel?) have deep-learning models for audio noise suppression. Another common example is Nvidia's DLSS and Intel's XeSS for superior interpolation of realtime-rendered images.

                    Voice recognition systems have also transitioned to deep-learning models, and this is one of the things I believe Intel is using its GNA block for.
