ASpeed AST2500/AST2600 XDMA Engine Support Pending For Linux


  • ASpeed AST2500/AST2600 XDMA Engine Support Pending For Linux

    Phoronix: ASpeed AST2500/AST2600 XDMA Engine Support Pending For Linux

    Kernel patches pending that might see mainlining for the upcoming Linux 5.7 window provide ASpeed XDMA engine support for the plethora of AST2500 BMCs found on server platforms and the forthcoming AST2600-based platforms...

    http://www.phoronix.com/scan.php?pag...MA-BMC-Pending

  • #2
    why would a BMC need DMA in the first place?



    • #3
      Originally posted by starshipeleven View Post
      why would a BMC need DMA in the first place?
      Good question. A review of the mailing list postings on this subject does not help much.

      Honestly, this feature looks like a solution in search of a problem.

      Perhaps someone can find a good link or two describing why this feature needs to exist.



      • #4
        Originally posted by NotMine999 View Post
        Perhaps someone can find a good link or two describing why this feature needs to exist.
        I did some digging.

The feature obviously means there is something shared between BMC and host, and this mail on the LKML also hints that yes, there are one or two PCIe devices that can appear if the BMC wants to share them: https://lkml.org/lkml/2020/2/10/1481
The AST2500 and AST2600 have two PCIe devices on them, so these will show up on the host if the BMC enables both of them. Either or both can also be disabled and therefore will not show up. On the host side, in order to receive DMA transfers, it's simply a matter of registering a PCI device driver and allocating some coherent DMA.... Not sure about the details of the endpoint/DMA client driver?
        For example, the AST2500 has 2 Gbit eth controllers, and AST2600 has 4, plus a bunch of other stuff.
        https://www.aspeedtech.com/products....ath=20&rId=440
        https://www.aspeedtech.com/products....ath=20&rId=633
        Note the "secure boot engine" in the latter, it seems this is only for BMC firmware, https://www.servethehome.com/aspeed-...-next-gen-bmc/ but who knows.

        As if we didn't have enough security already.

        A possible reason to have more gigabit ethernet on the BMC is to use that for server management (connecting directly to the OS), instead of wasting PCIe lanes for gigabit ports connected to the system.

I would say this is bs feature creep that breaks isolation between something that is NOT secure (BMCs aren't particularly secure at all) and your system.
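The host-side pattern that LKML mail describes (register a PCI device driver, allocate some coherent DMA) would look roughly like the sketch below. This is a hedged, untested illustration, not the actual aspeed-xdma patches: the device ID, driver name, buffer size, and the mailbox step are all placeholders.

```c
/* Sketch only: bind a PCI driver to the PCIe function the BMC exposes
 * and allocate a coherent buffer the XDMA engine could target.
 * 0x1a03 is ASPEED's PCI vendor ID; the device ID here is a guess. */
#include <linux/module.h>
#include <linux/pci.h>
#include <linux/dma-mapping.h>

#define XDMA_BUF_SIZE	(64 * 1024)

struct xdma_host {
	void *vaddr;          /* CPU-visible side of the shared buffer */
	dma_addr_t dma_addr;  /* bus address the BMC's engine would write to */
};

static int xdma_probe(struct pci_dev *pdev, const struct pci_device_id *id)
{
	struct xdma_host *xh;
	int ret;

	ret = pcim_enable_device(pdev);
	if (ret)
		return ret;

	xh = devm_kzalloc(&pdev->dev, sizeof(*xh), GFP_KERNEL);
	if (!xh)
		return -ENOMEM;

	/* Coherent allocation: no per-transfer cache maintenance needed */
	xh->vaddr = dma_alloc_coherent(&pdev->dev, XDMA_BUF_SIZE,
				       &xh->dma_addr, GFP_KERNEL);
	if (!xh->vaddr)
		return -ENOMEM;

	pci_set_drvdata(pdev, xh);
	/* Next step (not shown): tell the BMC about xh->dma_addr,
	 * e.g. over a mailbox or shared register. */
	return 0;
}

static void xdma_remove(struct pci_dev *pdev)
{
	struct xdma_host *xh = pci_get_drvdata(pdev);

	dma_free_coherent(&pdev->dev, XDMA_BUF_SIZE, xh->vaddr, xh->dma_addr);
}

static const struct pci_device_id xdma_ids[] = {
	{ PCI_DEVICE(0x1a03, 0x2000) },  /* device ID is illustrative */
	{ }
};
MODULE_DEVICE_TABLE(pci, xdma_ids);

static struct pci_driver xdma_driver = {
	.name     = "xdma-host-sketch",
	.id_table = xdma_ids,
	.probe    = xdma_probe,
	.remove   = xdma_remove,
};
module_pci_driver(xdma_driver);
MODULE_LICENSE("GPL");
```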



        • #5
          Originally posted by starshipeleven View Post
          why would a BMC need DMA in the first place?
Doesn't the BMC include a display controller, an eth management port and its own CPU with RAM (ARM-based AFAIK)? Lotsa uses for DMA.



          • #6
            Originally posted by discordian View Post
Doesn't the BMC include a display controller, an eth management port and its own CPU with RAM (ARM-based AFAIK)? Lotsa uses for DMA.
That's their own hardware connected to their own interfaces; this driver is for DMA into the server's RAM.

            These are the first BMCs that need to do this.



            • #7
              Yes, makes sense for framebuffers, and whatever other functionality the BMC provides (likely some crypto services). There is a gap between the BMC and the server, and you want to move data around somehow.

No one "needs" DMA, it's just a huge efficiency boost.



              • #8
                Originally posted by discordian View Post
                Yes, makes sense for framebuffers,
the BMC already shares its own GPU in a more physical way (in the sense that the GPU can be initialized and run by either the BMC or the host, even if the other is absent or turned off). And this isn't going to change, because it guarantees that even if the BMC breaks, or someone disables it (jumpers on some boards allow this), you can connect a screen and have "console access" as normal.
                and whatever other functionality the BMC provides (likely some crypto services).
There is very little the BMC's crypto accelerators (if any) can do better than the AES-NI crypto acceleration in the server CPU.

There is a gap between the BMC and the server, and you want to move data around somehow. No one "needs" DMA, it's just a huge efficiency boost.
What high-bandwidth data is there to move between BMC and host? BMC stands for Baseboard Management Controller; it's only there for remote power on/off, power and temperature readings, and remote console/iKVM access. That's it.

The highest-bandwidth device is its own GPU, but that already has a dedicated connection to the host.



                • #9
                  Originally posted by starshipeleven View Post
the BMC already shares its own GPU in a more physical way (in the sense that the GPU can be initialized and run by either the BMC or the host, even if the other is absent or turned off). And this isn't going to change, because it guarantees that even if the BMC breaks, or someone disables it (jumpers on some boards allow this), you can connect a screen and have "console access" as normal.
There is very little the BMC's crypto accelerators (if any) can do better than the AES-NI crypto acceleration in the server CPU.
The buzzword is isolation. TPM chips aren't fast either (ridiculously slow, rather); the key is that they are an independent layer.
                  Originally posted by starshipeleven View Post
What high-bandwidth data is there to move between BMC and host? BMC stands for Baseboard Management Controller; it's only there for remote power on/off, power and temperature readings, and remote console/iKVM access. That's it.

The highest-bandwidth device is its own GPU, but that already has a dedicated connection to the host.
Whatever data you have to transport, you will have to interrupt a CPU (overhead) and do some operations that are completely trivial yet potentially slow (uncached accesses). A simple hardware DMA engine will do the same job without stealing time from the work that benefits from a fat, expensive CPU.

In a way, the worst thing you can do on desktop CPUs is make them deal with slow hardware directly.
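The copy-offload point above can be loosely illustrated in userspace (an analogy only, not BMC or kernel code): moving a buffer one byte at a time under program control versus handing the whole range to a single bulk operation, which stands in for a DMA engine doing the transfer while the CPU does other work. Buffer size and names are made up for the example.

```python
# Analogy: per-byte copying under CPU control vs one bulk transfer.
# The bulk slice assignment plays the role of a DMA engine here.
import time

SIZE = 1 << 20  # 1 MiB of pretend shared-memory payload
src = bytearray(range(256)) * (SIZE // 256)
dst_loop = bytearray(SIZE)
dst_bulk = bytearray(SIZE)

t0 = time.perf_counter()
for i in range(SIZE):          # CPU shuffles every byte itself
    dst_loop[i] = src[i]
t_loop = time.perf_counter() - t0

t0 = time.perf_counter()
dst_bulk[:] = src              # one bulk transfer, CPU barely involved
t_bulk = time.perf_counter() - t0

assert dst_loop == dst_bulk == src
print(f"per-byte: {t_loop:.4f}s, bulk: {t_bulk:.6f}s")
```

On a typical machine the bulk copy is orders of magnitude faster, which is the same shape of argument being made for offloading BMC-host transfers to a hardware engine.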



                  • #10
                    Originally posted by discordian View Post
The buzzword is isolation. TPM chips aren't fast either (ridiculously slow, rather); the key is that they are an independent layer.
                    Trusting a BMC to do anything secure is insane, but I concede that this could also be something they will try to market.

                    Whatever data you have to transport,
                    I know what DMA is, please stop educating me.

My point here is that there is no data to transport between current BMCs and hosts, so there is no need for DMA.

Until you start adding feature-creep bullshit like multiple BMC Ethernet controllers, or fake TPMs actually implemented by ARM TrustZone on the BMC SoC, and you want to pass them through to the system.

