Intel Posts Initial Code For x86 User Interrupts On Linux - Shows Great Performance Potential

    Phoronix: Intel Posts Initial Code For x86 User Interrupts On Linux - Shows Great Performance Potential

    In addition to the big Advanced Matrix Extensions support still being in flux and the kernel-side AMX code not yet being merged, another feature of next year's Xeon "Sapphire Rapids" that we are only now seeing in early published form for the Linux kernel is handling of x86 user interrupts...

    https://www.phoronix.com/scan.php?pa...ser-Interrupts
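For a sense of what the series adds: the RFC describes a receiver that registers a ring-3 handler and a sender that fires a user IPI with the new SENDUIPI instruction, skipping the kernel on the delivery fast path. A rough C-flavored pseudocode sketch of that flow — the syscall names are taken from the RFC posting and may change before any merge, `UINTR_VECTOR` and `uintr_received` are placeholders, and none of this runs on shipping hardware:

```
/* Receiver: register a user-level handler (proposed syscalls, subject
 * to change). The handler runs in ring 3, with no kernel transition. */
void __attribute__((interrupt)) ui_handler(struct __uintr_frame *frame,
                                           unsigned long long vector)
{
    uintr_received = 1;
}

uintr_register_handler(ui_handler, 0);
int uintr_fd = uintr_create_fd(UINTR_VECTOR, 0);  /* one fd per vector */
_stui();                           /* enable receiving user interrupts */

/* Sender: connect to the receiver's fd (shared via fork, SCM_RIGHTS,
 * etc.), then send a user IPI directly from user space. */
int uipi_index = uintr_register_sender(uintr_fd, 0);
_senduipi(uipi_index);             /* SENDUIPI: no syscall on this path */
```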

  • #2
    Looks good, but...
    Linux is a monolithic kernel and hence most (if not all) interrupts have to go through the kernel first by design (unless I am wrong)



    • #3
      Originally posted by tildearrow View Post
      Looks good, but...
      Linux is a monolithic kernel and hence most (if not all) interrupts have to go through the kernel first by design (unless I am wrong)
      From what I understand, there's already some very limited user space handling of interrupts in the kernel. The UIO driver (Userspace I/O) does that. Peter Chubb has also been working on user space interrupts for a while to help move drivers out of the kernel proper. The theory is similar to microkernels, but I look at it as less about reliability and more about security. It would also allow drivers to continue development at their own pace, somewhat separately from kernel internals. That means you don't have to wait a year for a new driver for whichever piece of hardware broke in the last distro kernel update cycle. Yes, I know there are ways around that, but we shouldn't HAVE to deal with it in the first place.
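For comparison, the existing UIO model delivers an interrupt to user space as a blocking read on a character device. A minimal sketch — it assumes a real /dev/uioN device already bound to a UIO driver (e.g. uio_pci_generic), so it won't run as-is:

```
/* UIO interrupt loop: read() blocks until the next interrupt and
 * returns the running interrupt count; for some drivers a write()
 * of 1 re-enables the masked interrupt. Sketch only. */
int fd = open("/dev/uio0", O_RDWR);
uint32_t enable = 1, irq_count;
for (;;) {
    write(fd, &enable, sizeof(enable));       /* unmask/re-enable */
    read(fd, &irq_count, sizeof(irq_count));  /* block until IRQ */
    /* service the device via its mmap()ed registers here */
}
```

Note the kernel is still in the loop for every interrupt here — the read() is a syscall — which is exactly what the new hardware path avoids.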



      • #4
        Seems pretty cool! I guess it's a bit like the old DOS days, when userspace DPMI apps could register their own ring-3 interrupt handlers. Only better managed, I'm sure!



        • #5
          Food for thought:

          - Imagine if Google paid a Rust dev to work on Linux driver infrastructure.
          - Imagine if Google paid a Linux Kernel dev to add a Linux userspace driver framework.
          - Imagine if Google developed a micro-kernel-based operating system with userspace drivers to replace their Linux-based Android OS.
          - Imagine if Google designed and built its own Intel whiteboxes for its datacenters running Linux.
          - Imagine if Google and Microsoft had each independently reached the conclusion that non-memory-safe languages (C and C++) account for about 70 percent of all registered Google and MS CVEs respectively.

          Now imagine all of the above wasn't just in your imagination. Now swallow the red pill.

          What if within a few years, Google manages to migrate large parts of their hardware to userspace Rust drivers in a way that makes them seamlessly compatible with Zircon/Fuchsia? And what if they replace parts in Zircon/Fuchsia with Rust code piecemeal?

          I find this potential future rather exciting, I must admit.



          • #6
            This might not work as one would expect (likely due to the software implementation of that patch, but I wouldn't know if it could also be the hardware design):

            It sounds like the handler only gets invoked immediately if the targeted "task" is already in the running state — that is, not blocking and not preempted. However, I'm not sure exactly how that works; this seems to require more extensive reading.

            As far as I can tell from a quick read, the benchmark seems to always use spinning for the new user interrupts ("Keep spinning until the interrupt is received"). For eventfd, on the other hand, it not only uses a read in blocking mode, but also the strange situation that the same eventfd is used in both directions. That means the writer can read its own written data (its read directly follows the write), and the benchmark checks for that and has a code path that re-writes it. And apparently there is no comment on what effect that might have on the performance numbers.

            If the benchmark uses an always-spinning read, I'd think the comparison should also include an implementation that uses spinning atomic operations on shared memory, which has a good chance of being faster. And maybe in that scenario eventfd would do better with a loop of non-blocking reads (and perhaps a version that uses a different eventfd in each direction, so that writers can't read their own data).

            I think the usefulness in performance terms will depend on the latency of waking a designated thread that does a kernel-blocking-wait for the next interrupt (since that seems to be how a non-spinning scenario would work), compared to the latency of a thread that does a kernel futex wait.

            Nevertheless, I'd be cautiously optimistic about a possible future for this feature, hoping that these concerns are either unfounded (due to a lack of information or reading on my part) or that they will be resolved.
            Last edited by indepe; 13 September 2021, 07:42 PM.



            • #7
              Originally posted by tildearrow View Post
              Looks good, but...
              Linux is a monolithic kernel and hence most (if not all) interrupts have to go through the kernel first by design (unless I am wrong)
              This is not tied to the fact that the kernel is monolithic, but to the fact that the interrupt vector (the list of entry points) is only available at ring 0, because the whole operation has always been supposed to run at ring 0 (or EL1/EL2 on arm64).

              User-space interrupts are a novelty, but I'm waiting to understand both the security model and the threat model. If any application can force an interrupt to occur in another application, this might lead to a new kind of DoS. I can see the benefits in many other situations though, especially for IPC mechanisms.

              I haven't looked at the patch yet, and I hope this is not going to be an Intel-only mechanism (read: that it might one day be implemented on other architectures); otherwise it would have little to no interest for the broader community (most of my work is on ARM64-based hardware, so...).



              • #8
                Originally posted by ermo View Post
                Food for thought:

                - Imagine if Google
                just made their own kernel.



                • #9
                  Originally posted by Paul Frederick View Post
                  just made their own kernel.
                  You mean Fuchsia, the one referred to by ermo in the post you quoted?



                  • #10
                    Wonder if this sort of thing can be used in the context of seL4.
