Intel Continues Bringing Up DMA-BUF Support For RDMA


  • Intel Continues Bringing Up DMA-BUF Support For RDMA

    Phoronix: Intel Continues Bringing Up DMA-BUF Support For RDMA

    Presumably with Xe-HP in mind, Intel engineers continue working on adding DMA-BUF support to the Linux kernel's RDMA code...

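    For readers curious what this looks like from userspace: the kernel series pairs with a proposed rdma-core verb, ibv_reg_dmabuf_mr(), which registers a dma-buf file descriptor (for example, one exported by a GPU driver via DRM PRIME) as an RDMA memory region. A minimal sketch, assuming that verb lands with the signature from the patches; the fd value and sizes here are placeholders:

    /* Sketch: registering a dma-buf as an RDMA memory region.
     * Assumes the ibv_reg_dmabuf_mr() verb proposed alongside the
     * kernel series; the dma-buf fd here is a placeholder. */
    #include <stdio.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        struct ibv_device **devs = ibv_get_device_list(NULL);
        if (!devs || !devs[0])
            return 1;

        struct ibv_context *ctx = ibv_open_device(devs[0]);
        if (!ctx)
            return 1;
        struct ibv_pd *pd = ibv_alloc_pd(ctx);
        if (!pd)
            return 1;

        /* In real code this fd would come from the GPU driver's
         * dma-buf export path (e.g. a DRM PRIME handle-to-fd ioctl). */
        int dmabuf_fd = -1;           /* placeholder */
        size_t len = 1 << 20;         /* 1 MiB region */

        struct ibv_mr *mr = ibv_reg_dmabuf_mr(pd, 0 /* offset */, len,
                                              0 /* iova */, dmabuf_fd,
                                              IBV_ACCESS_LOCAL_WRITE |
                                              IBV_ACCESS_REMOTE_READ);
        if (!mr)
            perror("ibv_reg_dmabuf_mr");
        else
            ibv_dereg_mr(mr);

        ibv_dealloc_pd(pd);
        ibv_close_device(ctx);
        ibv_free_device_list(devs);
        return 0;
    }

    Once registered, the MR works like any other region for RDMA reads and writes, letting the NIC DMA directly to and from device memory without a bounce buffer in system RAM.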

  • #2
    I see that the DMA-BUF driver mods are associated with InfiniBand. Are the similarities between InfiniBand and RoCE, which Intel's Habana NNPs use, close enough to expect the Habana NNP drivers to use DMA-BUF as well?



    • #3
      Is there any info available saying that Xe-HP is supposed to also support CXL? I saw notebookcheck leaks this weekend claiming Alder Lake-S will include PCIe 5.0 I/O, and am wondering whether the presence of PCIe 5.0 hints that Xe GPUs other than Ponte Vecchio might support CXL.



      • #4
        Originally posted by jayN
        Is there any info available saying that Xe-HP is supposed to also support CXL? I saw notebookcheck leaks this weekend claiming Alder Lake-S will include PCIe 5.0 I/O, and am wondering whether the presence of PCIe 5.0 hints that Xe GPUs other than Ponte Vecchio might support CXL.
        I was under the impression that CXL would require some "glue" chip to work over/with PCIe. Was I wrong?

        - Gilboa
        oVirt-HV1: Intel S2600C0, 2xE5-2658V2, 128GB, 8x2TB, 4x480GB SSD, GTX1080 (to-VM), Dell U3219Q, U2415, U2412M.
        oVirt-HV2: Intel S2400GP2, 2xE5-2448L, 120GB, 8x2TB, 4x480GB SSD, GTX730 (to-VM).
        oVirt-HV3: Gigabyte B85M-HD3, E3-1245V3, 32GB, 4x1TB, 2x480GB SSD, GTX980 (to-VM).
        Devel-2: Asus H110M-K, i5-6500, 16GB, 3x1TB + 128GB-SSD, F33.



        • #5
          Originally posted by gilboa

          I was under the impression that CXL would require some "glue" chip to work over/with PCIe. Was I wrong?

          - Gilboa
          I believe the PCIe 5 physical connection is unmodified, but a negotiation is required to determine whether the CXL protocol is implemented on both ends. Then there is extra handling of the coherency bias required on both sides, with the accelerator side having a much simpler design. The CXL Consortium has an intro video on its YouTube channel.
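          To make the bias idea concrete, here is a toy model in C. It is purely illustrative, with no relation to any real driver or hardware interface: a page in host bias stays coherent through the host, while a page flipped to device bias can be served from the accelerator's local memory without a host snoop, which is why the accelerator-side logic can stay simple.

          /* Toy model of the CXL coherency-bias idea -- illustrative
           * only, not any real driver or hardware interface. */
          #include <stdio.h>

          enum bias { HOST_BIAS, DEVICE_BIAS };

          /* Host access: if the page is device-biased, the host must
           * first reclaim it (a coherency round trip). */
          static void host_access(enum bias *b)
          {
              if (*b == DEVICE_BIAS) {
                  printf("host: flip page to host bias\n");
                  *b = HOST_BIAS;
              }
              printf("host: coherent access\n");
          }

          /* Device access: flip to device bias, then hit local memory
           * without snooping the host -- the accelerator stays simple
           * because it only deals with its own attached memory. */
          static void device_access(enum bias *b)
          {
              if (*b == HOST_BIAS) {
                  printf("device: flip page to device bias\n");
                  *b = DEVICE_BIAS;
              }
              printf("device: local access, no host snoop\n");
          }

          int main(void)
          {
              enum bias page = HOST_BIAS;
              device_access(&page);   /* accelerator claims the page */
              host_access(&page);     /* host reclaims ownership */
              return 0;
          }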





          • #6
            You are correct, of course.
            I mixed up Gen-Z and CXL.

            My bad.
