NVIDIA Confirms It's Acquiring Mellanox


  • #31
    Originally posted by madscientist159 View Post
    Fully expect NVIDIA to ... put data slurp in their new EULA
    Data slurp of what in a datacenter? How will it phone home if the servers aren't even directly connected to the internet and the corporate firewalls refuse any connection (out or in) that isn't specifically whitelisted?

    I don't think this is happening.

    Originally posted by madscientist159 View Post
    Fully expect NVIDIA to ... signed firmware
    Newer Mellanox cards are already using signed firmware, and it's not even a bad thing. https://www.alibabacloud.com/blog/in...1.12440704.0.0



    • #32
      Originally posted by starshipeleven View Post
      Data slurp of what in a datacenter? How will it phone home if the servers aren't even directly connected to the internet and the corporate firewalls refuse any connection (out or in) that isn't specifically whitelisted?

      I don't think this is happening.
      Clearly you've never worked in some higher security environments. On this end, having the firewall as the one and only defense against infiltration is a gigantic no-no. Defense in depth matters in today's relatively hostile world.

      Originally posted by starshipeleven View Post
      Newer Mellanox cards are already using signed firmware, and it's not even a bad thing. https://www.alibabacloud.com/blog/in...1.12440704.0.0
      The firmware isn't the problem here, as it's isolated where it can't read from the system unless allowed. The concern is the driver stack becoming closed; there's no way to stop data exfiltration if that happens.



      • #33
        Originally posted by madscientist159 View Post
        Clearly you've never worked in some higher security environments.
        Clearly you've never worked in places where you can't choose the devices or even the software you'll be using, and you're still expected to use them safely in a data center with many other servers doing other things. Which is... most jobs.

        You segregate it with networking (managed switches) and lock down any access with firewalls.
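
        If you want to sanity-check that lockdown from the host itself, here's a minimal sketch in Python (the endpoints are placeholders; on a properly isolated server every attempt below should fail):

        ```python
        # Sketch: verify that egress really is blocked from this host.
        # The endpoints are placeholders; on a locked-down server every
        # connection attempt below should time out or be refused.
        import socket

        ENDPOINTS = [("1.1.1.1", 443), ("8.8.8.8", 53)]  # placeholder external hosts

        for host, port in ENDPOINTS:
            try:
                with socket.create_connection((host, port), timeout=3):
                    print(f"UNEXPECTED: egress to {host}:{port} is allowed")
            except OSError:
                print(f"ok: egress to {host}:{port} is blocked")
        ```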

        Originally posted by madscientist159 View Post
        The firmware isn't the problem here, as it's isolated where it can't read from the system unless allowed.
        It has DMA; it can do whatever it wants without asking the OS. Higher-than-Gbit networking also has Remote DMA, aka DMA to and from devices sitting on the friggin' network.

        The IOMMU should protect against that, but again, it's a black box protecting you from another black box, and apparently it isn't as solid as I thought, given the reports I've seen.
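
        For anyone who wants to check whether that protection is even active on their own box, a minimal sketch (assumes Linux with sysfs; 0x15b3 is Mellanox's PCI vendor ID):

        ```python
        # Sketch: check whether the kernel has an active IOMMU and which
        # PCI devices share an IOMMU group with a Mellanox NIC (0x15b3).
        # No groups at all means DMA from the card is not isolated.
        import os

        GROUPS = "/sys/kernel/iommu_groups"

        def read_vendor(pci_addr):
            with open(f"/sys/bus/pci/devices/{pci_addr}/vendor") as f:
                return f.read().strip()

        if not os.path.isdir(GROUPS) or not os.listdir(GROUPS):
            print("No IOMMU groups: IOMMU is off or unsupported.")
        else:
            for group in sorted(os.listdir(GROUPS), key=int):
                devices = os.listdir(os.path.join(GROUPS, group, "devices"))
                if any(read_vendor(d) == "0x15b3" for d in devices):
                    print(f"Mellanox NIC in IOMMU group {group}: {devices}")
        ```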

        Originally posted by madscientist159 View Post
        The concern is the driver stack becoming closed; there's no way to stop data exfiltration if that happens.
        You just have to treat the whole server as untrusted. As I said, this is more or less a thing already.

        The issue is more on the desktop side.



        • #34
          ...now I get the point. NVIDIA wants to improve their raytracing technique, that's why they acquired a company which produces fiber-optic connections... those clever bastards



          • #35
            Damn, that's too bad. I hoped Mellanox could stay independent, or at least operational. Now I'm not sure what NVIDIA will decide to do with them.

            InfiniBand is an interesting and powerful technology. I hope it stays around.



            • #36
              Originally posted by xiando View Post
              How are these ConnectX-2 cards working out for you? Share with us. I've been hovering over the buy button a few times lately wondering "I can haz two 10 gigabit cards and a cable for $50? Should I just do it already?" The only minor drawback I see is that the cards require an x8 slot. Apart from that minor detail... seems like a steal?
              I'm using 2 of them connected via a 7 m copper direct-attach cable (lower power and latency than optics, but limited in length). They do have x8 PCIe 2.0 connectors but work fine in a x4 3.0 slot (physically x16, but due to CPU limitations it's running at x4). It's not ideal, because the aggregate bandwidth is on the edge of saturating full duplex for a single port; a dual-port card won't be able to achieve full performance in such a slot.
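
              For the curious, the back-of-envelope math behind that (a sketch: the lane rate and 8b/10b encoding are PCIe 2.0 spec values, while the ~20% protocol overhead is my rough assumption):

              ```python
              # Back-of-envelope: can a PCIe 2.0 x4 link feed 10GbE at full duplex?
              # PCIe is itself full duplex, so compare per-direction numbers.
              LANE_GT_S = 5.0    # PCIe 2.0 raw rate per lane (GT/s)
              ENCODING = 8 / 10  # 8b/10b line encoding
              LANES = 4          # link width negotiated in this slot
              OVERHEAD = 0.20    # assumed TLP/protocol overhead (varies with payload)

              raw = LANE_GT_S * ENCODING * LANES  # 16.0 Gbit/s per direction
              usable = raw * (1 - OVERHEAD)       # ~12.8 Gbit/s per direction

              for ports in (1, 2):
                  need = 10.0 * ports             # Gbit/s per direction per port
                  verdict = "fits (barely)" if usable >= need else "does not fit"
                  print(f"{ports} port(s): need {need:.0f}, have ~{usable:.1f} Gbit/s -> {verdict}")
              ```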

              On Windows I'm using the Mellanox 5.50 driver (it does not officially support ConnectX-2, but in my tests it works better than the older driver dedicated to the X-2). Firmware flashing and modification (disabling PXE, for example) work well under both Linux and Windows.

              With the MTU set to 9600, iperf3 shows 9.81 Gbit/s in either direction, while iperf --dualtest achieves 9.19 Gbit/s and 9.34 Gbit/s simultaneously. hrping shows around 0.5 ms.
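
              If anyone wants to reproduce those numbers, here's a small sketch driving iperf3 from Python (assumes iperf3 is installed on both ends and a server is already running on the far side with `iperf3 -s`; the address is a placeholder):

              ```python
              # Sketch: run a 10-second iperf3 TCP test in each direction
              # and parse the JSON output. SERVER is a placeholder.
              import json
              import subprocess

              SERVER = "10.0.0.2"  # placeholder: the other ConnectX-2 host

              def gbits(reverse=False):
                  cmd = ["iperf3", "-c", SERVER, "-t", "10", "-J"]
                  if reverse:
                      cmd.append("-R")  # measure the server-to-client direction
                  out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
                  return json.loads(out)["end"]["sum_received"]["bits_per_second"] / 1e9

              print(f"tx: {gbits():.2f} Gbit/s")
              print(f"rx: {gbits(reverse=True):.2f} Gbit/s")
              ```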

              They do get warm, but even the latest 10Gbit models still come with heat sinks. Not sure what more you're interested in knowing.



              • #37
                Originally posted by madscientist159 View Post
                The firmware isn't the problem here, as it's isolated where it can't read from the system unless allowed.
                All PCIe devices can provide a PCI Option ROM, which in turn can be used to mount an early-boot attack.
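
                A quick way to see which devices in a system even advertise one, as a sketch (assumes Linux sysfs; a rom file only appears when the device exposes an expansion ROM BAR, i.e. code the firmware may execute during early boot):

                ```python
                # Sketch: list PCI devices exposing an expansion (option) ROM
                # via sysfs. Assumes Linux.
                import os

                PCI = "/sys/bus/pci/devices"

                for dev in sorted(os.listdir(PCI)):
                    if os.path.exists(os.path.join(PCI, dev, "rom")):
                        with open(os.path.join(PCI, dev, "class")) as f:
                            pci_class = f.read().strip()
                        print(f"{dev} (class {pci_class}) exposes an option ROM")
                ```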



                • #38
                  Not directly related, but does anyone have experience with Chelsio HBAs?

                  In terms of support, reliability, and so on?

                  Appreciated.



                  • #39
                    The latest press release from Mellanox Technologies says they will confidently continue in their own direction.


