Linux 5.2 To Allow P2P DMA Between Any Devices On AMD Zen Systems

    Phoronix: Linux 5.2 To Allow P2P DMA Between Any Devices On AMD Zen Systems

    With the Linux 5.2 kernel an AMD-supplied change by AMDGPU developer Christian König allows for supporting peer-to-peer DMA between any devices on AMD Zen systems...

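    For context, the kernel already ships generic helpers for this in drivers/pci/p2pdma.c; roughly speaking, what the Zen change affects is whether the P2P distance/whitelist check between two devices under different root ports succeeds. Below is a minimal sketch of how a driver might use those helpers. The helper names follow the upstream pci_p2pdma API as I understand it (exact signatures can vary by kernel version), and the provider/client devices and error handling are simplified for illustration, so treat this as a sketch rather than a working driver:

```c
/*
 * Illustrative driver fragment (not from the actual patch): check whether
 * a P2P path exists between a provider and a client device, then allocate
 * some of the provider's published P2P memory and obtain a bus address
 * the client could DMA to directly, bypassing system RAM.
 */
#include <linux/errno.h>
#include <linux/pci.h>
#include <linux/pci-p2pdma.h>
#include <linux/sizes.h>

static int p2p_sketch(struct pci_dev *provider, struct pci_dev *client)
{
	void *buf;
	pci_bus_addr_t bus_addr;

	/* A negative distance means no usable P2P path between the devices. */
	if (pci_p2pdma_distance(provider, &client->dev, true) < 0)
		return -ENODEV;

	/* Allocate from P2P memory the provider exposes from one of its BARs. */
	buf = pci_alloc_p2pmem(provider, SZ_4K);
	if (!buf)
		return -ENOMEM;

	/* Bus address the client device can be programmed to read/write. */
	bus_addr = pci_p2pmem_virt_to_bus(provider, buf);

	/* ... set up the client's DMA engine to target bus_addr here ... */

	pci_free_p2pmem(provider, buf, SZ_4K);
	return 0;
}
```

    The point of the whole exercise is the last step: the client device transfers straight to or from the provider's BAR memory instead of bouncing the data through system RAM.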

  • #2
    I wonder if there's a way for LookingGlass to make use of this?

    • #3
      That's a serious enhancement. I wonder if it's possible to bounce data without the DMA controller, though? I.e., inbound, bounce, outbound. The bounce could happen at the CPU cache-coherency level, which would also make it possible to use bus snooping for automagic invalidation. Then you could do some seriously fast data rewriting in flight without resorting to DMA.

      • #4
        Originally posted by Djhg2000
        I wonder if there's a way for LookingGlass to make use of this?
        What do you mean? An application? What kind of application?

        • #5
          Originally posted by milkylainen

          What do you mean? An application? What kind of application?
          LookingGlass is sort of like VNC, except it's only for internal use on the same system, between host and guest or guest to guest. Shared memory is used to quickly move captured frames from the VM guest to the LookingGlass client view.

          If this is at all useful for that, it'll probably need some virtualization support, which I assume isn't available yet?
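
          To make the shared-memory part concrete, here's a rough userspace sketch of mapping a shared-memory file and pulling a frame header out of it. The path, region size, and header layout below are placeholders made up for illustration, not LookingGlass's actual protocol:

```c
/*
 * Illustrative only: map a shared-memory file the way a LookingGlass-style
 * client could, then read a (made-up) frame header from it.
 */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

struct frame_header {            /* hypothetical layout, not the real format */
	uint32_t width;
	uint32_t height;
	uint32_t frame_counter;
};

int main(void)
{
	const size_t shm_size = 64 * 1024 * 1024;         /* assumed region size */
	int fd = open("/dev/shm/looking-glass", O_RDWR);  /* typical path, may differ */
	if (fd < 0) {
		perror("open");
		return 1;
	}

	void *shm = mmap(NULL, shm_size, PROT_READ | PROT_WRITE,
			 MAP_SHARED, fd, 0);
	if (shm == MAP_FAILED) {
		perror("mmap");
		close(fd);
		return 1;
	}

	/* Host and guest both see this memory, so no copy over the network. */
	struct frame_header hdr;
	memcpy(&hdr, shm, sizeof(hdr));
	printf("frame %u: %ux%u\n", hdr.frame_counter, hdr.width, hdr.height);

	munmap(shm, shm_size);
	close(fd);
	return 0;
}
```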

          • #6
            I assume this would have the greatest impact on multi-socket Epyc systems?

            • #7
              Originally posted by schmidtbag
              I assume this would have the greatest impact on multi-socket Epyc systems?
              Without being familiar enough with the low-level details I don't know for sure, but Epyc currently has the same number of PCIe lanes whether using 1 CPU or 2 CPUs. Intel is the server platform that reduces the PCIe lane count with fewer processors on board, at least in the newest server parts.

              This sounds more like it's about the root domains that show up when splitting devices with ACS for virtualization: devices that don't share the same 00:00.0 root would be able to DMA to each other with this. That could be quite useful for disk controllers and network cards in file servers, and network cards sitting on multiple roots could get better throughput for switching.
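
              One quick way to see which isolation domain two devices actually land in is to compare their IOMMU groups in sysfs. A rough userspace sketch (the PCI addresses are placeholders; substitute your own from lspci -D):

```c
/*
 * Rough sketch: check whether two PCI devices land in the same IOMMU group
 * by resolving the iommu_group symlink under sysfs. The symlink is absent
 * when no IOMMU is active.
 */
#include <limits.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static int iommu_group_of(const char *bdf, char *out, size_t outlen)
{
	char path[PATH_MAX];
	char target[PATH_MAX];
	ssize_t n;

	snprintf(path, sizeof(path), "/sys/bus/pci/devices/%s/iommu_group", bdf);
	n = readlink(path, target, sizeof(target) - 1);
	if (n < 0)
		return -1;
	target[n] = '\0';

	/* The link points at /sys/kernel/iommu_groups/<N>; keep only <N>. */
	const char *group = strrchr(target, '/');
	snprintf(out, outlen, "%s", group ? group + 1 : target);
	return 0;
}

int main(void)
{
	char a[32], b[32];

	if (iommu_group_of("0000:01:00.0", a, sizeof(a)) ||   /* e.g. a disk controller */
	    iommu_group_of("0000:02:00.0", b, sizeof(b))) {   /* e.g. a NIC */
		fprintf(stderr, "could not resolve an iommu_group link\n");
		return 1;
	}

	printf("groups: %s vs %s -> %s\n", a, b,
	       strcmp(a, b) == 0 ? "same group" : "separate groups");
	return 0;
}
```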

              • #8
                This is Zen 3 prep: 4 GPUs + 1 CPU merged into one Infinity Fabric computational complex. In other words, this is where NVIDIA and Intel should start to worry.

                • #9
                  Originally posted by polarathene

                  LookingGlass is sort of like VNC, except it's only for internal use on the same system between host and guest or guest to guest. Shared Memory is used to quickly move captured frames of the VM guest to the LookingGlass client view.

                  If this is at all useful for such, it'll probably need some virtualization support I guess, which I assume isn't available?
                  Ah, thank you. Never used it. I guess it would depend on the hardware, but I don't see why it couldn't be done. Passthrough etc. doesn't really depend on much besides the basic IOMMU for isolation. You could grant a VM instance exclusive rights to PCI resources without an IOMMU, I guess... but that creates a glaring security hole.
                  Either way, I don't see why you couldn't route data this way even if you're using passthrough.
