UMD Direct Submission "Proof Of Concept" For The Intel Xe Linux Driver

  • phoronix
    Administrator
    • Jan 2007
    • 67050

    Phoronix: UMD Direct Submission "Proof Of Concept" For The Intel Xe Linux Driver

    One of the interesting Intel Xe Linux kernel graphics driver patches that was volleyed for discussion last month is working on user-mode driver (UMD) direct submission support for allowing work to be directly submitted from user-space to the GPU hardware and avoiding some of the overhead of the kernel driver interactions...

  • Daktyl198
    Senior Member
    • Jul 2013
    • 1528

    #2
    This sounds like a possible security hole. Is it simply "removing a step" that's giving the speedup, or is there something specific in kernel GPU drivers that's slowing things down? If it's just removing a step, I doubt it would speed things up enough to be worth all the work. And if it's something about how the kernel drivers are written, shouldn't you just fix the problem there?

    Comment

    • mobadboy
      Senior Member
      • Jul 2024
      • 160

      #3
      I'm curious how optical media has anything to do with a graphics driver. Surely someone could just write some invalid packets to the disk to trigger undefined behavior?

      Comment

      • geerge
        Senior Member
        • Aug 2023
        • 324

        #4
        Is io_uring in any way applicable? Memory, userspace, reducing kernel overhead, hell even a ring buffer is mentioned.

        Comment

        • ultimA
          Senior Member
          • Jul 2011
          • 286

          #5
          Originally posted by Daktyl198 View Post
          This sounds like a possible security hole. Is it simply "removing a step" that's giving the speedup, or is there something specific in kernel GPU drivers that's slowing things down? If it's just removing a step, I doubt it would speed things up enough to be worth all the work. And if it's something about how the kernel drivers are written, shouldn't you just fix the problem there?
The win is not coming from removing the work that's done on the kernel side. The win is removing the transition itself to the kernel. This transition from user-space to kernel is relatively expensive. Most programs do not care because they do not make enough calls to the kernel for this to be noticeable, but 3D games do. If all rendering calls need to go through the kernel, then each call's switch into the kernel adds up due to the extremely large number of render calls.

          Comment

          • Daktyl198
            Senior Member
            • Jul 2013
            • 1528

            #6
            Originally posted by ultimA View Post

The win is not coming from removing the work that's done on the kernel side. The win is removing the transition itself to the kernel. This transition from user-space to kernel is relatively expensive. Most programs do not care because they do not make enough calls to the kernel for this to be noticeable, but 3D games do. If all rendering calls need to go through the kernel, then each call's switch into the kernel adds up due to the extremely large number of render calls.
So... it's removing a step. How many ms per frame does it add, really? And then the question becomes: why does calling a kernel API from userspace take so much time (relatively)? Is it not a possible security hole to run arbitrary code on the GPU? Is the kernel not in place for a reason?

            Comment

            • ultimA
              Senior Member
              • Jul 2011
              • 286

              #7
              Originally posted by Daktyl198 View Post
              Is this not a possible security hole to run arbitrary code on the GPU? Is the kernel not in place for a reason?
              I wasn't commenting on the security aspect. I am pretty sure people working on this have considered possible security implications. I was commenting on your "I doubt it would speed things up enough to be worth all the work" from earlier. It is worth it.

              Originally posted by Daktyl198 View Post
              How many MS per frame does it add, truly? And then the question becomes why does calling a kernel API from userspace take so much time (relatively)?
The savings depend on the nature of the 3D application / game. Transitioning to kernel mode on modern machines takes at most a couple of hundred nanoseconds. So for any normal application, the question "why does it take so long" doesn't have much validity. The problem is only with games, as they can make tens of thousands of such calls in each and every frame, so the nanoseconds accumulate to single-digit percentages of your total time budget for a frame. A 60fps game only has ~16ms to render a frame, so if you can save only 0.5-1ms, that is already 3-6% extra performance.

              Comment
