PCI Peer-To-Peer Memory Support Queued Ahead Of Linux 4.20~5.0

  • #11
    Originally posted by mulenmar View Post
    This sounds like yet another method to enable nefarious operations which the host OS doesn't have a way to monitor. Is there any proof that this standard and an implementation of it are secure?
    Um. Any PCI or PCIe device made in the last 20+ years has had the ability to "bus master" and exchange commands and data with other cards, without the central CPU knowing about it. If you are worried about this just now, you're far out of date.
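
    To make that concrete, here is a minimal, hypothetical sketch of how an ordinary Linux PCI driver sets this up. pci_set_master() and dma_alloc_coherent() are the real kernel APIs; the "exdev" device and function names are invented for illustration.

    #include <linux/pci.h>
    #include <linux/dma-mapping.h>

    /* Hypothetical driver for an imaginary "exdev" card. */
    static int exdev_enable_dma(struct pci_dev *pdev)
    {
            dma_addr_t bus_addr;
            void *cpu_addr;

            /* Let the card initiate its own bus cycles (bus mastering). */
            pci_set_master(pdev);

            /* Give the card a buffer and the bus address to reach it. */
            cpu_addr = dma_alloc_coherent(&pdev->dev, PAGE_SIZE,
                                          &bus_addr, GFP_KERNEL);
            if (!cpu_addr)
                    return -ENOMEM;

            /*
             * From here the device can read and write bus_addr entirely on
             * its own; the CPU only hears about it again if the device
             * raises an interrupt.
             */
            return 0;
    }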

    There have even been "CPU accelerator cards" which were entire CPUs and RAM on a card, which would take over the whole system and replace the original CPU as the main controller.



    • #12
      Originally posted by mulenmar View Post
      This sounds like yet another method to enable nefarious operations which the host OS doesn't have a way to monitor. Is there any proof that this standard and an implementation of it are secure?
      The implementation builds on the kernel's existing dma-buf mechanism, although it extends it to support new usage scenarios. All P2P mapping is done under the control of kernel drivers. Those drivers will generally be in the upstream kernel tree in our case, but may be out-of-tree open source or even closed source for other vendors. Depends on whether you count kernel drivers as part of the host OS in each of those cases.
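
      To illustrate what "under control of kernel drivers" means in practice, here is a rough sketch of the importing side of a dma-buf exchange. The dma_buf_* calls are the real upstream API; p2p_import_buffer() and the surrounding logic are invented for illustration, not taken from the actual patch set.

      #include <linux/dma-buf.h>
      #include <linux/err.h>
      #include <linux/pci.h>

      /* Hypothetical importer: map a peer's exported buffer for DMA. */
      static int p2p_import_buffer(struct pci_dev *pdev, int fd)
      {
              struct dma_buf *buf;
              struct dma_buf_attachment *attach;
              struct sg_table *sgt;

              /* Look up the dma-buf the exporting driver handed out. */
              buf = dma_buf_get(fd);
              if (IS_ERR(buf))
                      return PTR_ERR(buf);

              /* Attach our device; the exporter can refuse peers it cannot reach. */
              attach = dma_buf_attach(buf, &pdev->dev);
              if (IS_ERR(attach)) {
                      dma_buf_put(buf);
                      return PTR_ERR(attach);
              }

              /*
               * Map the buffer for DMA. With a P2P-capable exporter the
               * scatterlist can point at the peer device's memory (e.g. a
               * GPU BAR), so the transfer bypasses system RAM entirely.
               */
              sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
              if (IS_ERR(sgt)) {
                      dma_buf_detach(buf, attach);
                      dma_buf_put(buf);
                      return PTR_ERR(sgt);
              }

              /* ... program this device's DMA engine from sgt here ... */

              dma_buf_unmap_attachment(attach, sgt, DMA_BIDIRECTIONAL);
              dma_buf_detach(buf, attach);
              dma_buf_put(buf);
              return 0;
      }

      The point is that none of this happens behind the kernel's back: every attach and map goes through driver code, which for upstream drivers anyone can audit.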



      • #13
        Originally posted by Zan Lynx View Post

        Um. Any PCI or PCIe device made in the last 20+ years has had the ability to "bus master" and exchange commands and data with other cards, without the central CPU knowing about it. If you are worried about this just now, you're far out of date.
        I'm not "just now" worried about it; I'm worried about an easier, and itself potentially vulnerable, way of doing things being added on top of a system I already looked at with a gimlet eye.

        There are multiple reasons why I only buy motherboards and cards that run with completely open drivers.


        Originally posted by Zan Lynx View Post
        There have even been "CPU accelerator cards" which were entire CPUs and RAM on a card, which would take over the whole system and replace the original CPU as the main controller.
        Yeah, but I don't own an Amiga or a 68k-era Mac. More seriously, I wasn't aware of PCI or PCIe cards for that; interesting!

        Originally posted by bridgman View Post

        The implementation builds on the kernel's existing dma-buf mechanism, although it extends it to support new usage scenarios. All P2P mapping is done under the control of kernel drivers.
        Thank you; that clarifies things. I was under the mistaken impression that the cards themselves could do this.

        Originally posted by bridgman View Post
        Those drivers will generally be in the upstream kernel tree in our case, but may be out-of-tree open source or even closed source for other vendors. Depends on whether you count kernel drivers as part of the host OS in each of those cases.
        If the source is available and freely licensed, yes, I count that as part of the host OS. In the case of blobs... well, I consider those the software equivalent of a particularly ugly dongle. That's not particularly relevant to this discussion, though, heh.



        • #14
          Why don't we have some programmable, trimmed-down cores acting as DMA engines to do this? This sort of asymmetric multiprocessing would be very useful beyond DMA.

