PCI Express 4.0 Is Ready, PCI Express 5.0 In 2019

  • #71
    Originally posted by polarathene

    64 lanes can satisfy 4 GPUs at x16 (provided the motherboard supports it with x16 slots, or I think splitters/risers?). With 4.0 I probably won't need as many lanes. I have several x4 cards I'd like to use plus multiple GPUs. My workload can use them for para-virtualized VMs, where the extra USB/disk/network controllers are helpful; the higher bandwidth of 4.0 means more controllers can fit on a single expansion card, hopefully with good IOMMU groups so each can go to a different VM. For the GPUs I do compute workloads like photogrammetry and deep learning, and those can take advantage of the lanes and bandwidth far better than games can.

    I need more lanes/slots for my next system, or PCIe 4.0 might reduce that need once products that take advantage of it are available. Currently I only have a single dGPU; there's a second x16 slot, but using it would drop both to x8, which I think would hamper my GPU perf? Plus an x4 for the NVMe, and that's all my CPU lanes; the mobo lanes only provide 3 x1 slots :\ I've not used risers yet, but apparently those let me plug x4 devices into x1 slots. I didn't know as much about this stuff when I built this machine, so unfortunately I think I have to wait until I upgrade.
    It sounds like you most likely need faster lanes rather than more of them, as the sketch below shows.
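    A quick back-of-the-envelope makes the point (a hypothetical calculator of my own, not from the thread; per-lane rates are theoretical maxima after 128b/130b encoding overhead, so real throughput is lower):

    ```python
    # Hypothetical PCIe throughput calculator (illustration only).
    # Per-lane figures are theoretical maxima after line-encoding overhead.

    GENERATIONS = {           # transfer rate in GT/s, encoding efficiency
        "3.0": (8.0, 128 / 130),
        "4.0": (16.0, 128 / 130),
        "5.0": (32.0, 128 / 130),
    }

    def lane_gbps(gen: str) -> float:
        """Usable GB/s for a single lane of the given PCIe generation."""
        rate_gt, efficiency = GENERATIONS[gen]
        return rate_gt * efficiency / 8  # divide by 8 bits per byte

    def slot_gbps(gen: str, lanes: int) -> float:
        """Usable GB/s for a slot that is `lanes` wide."""
        return lane_gbps(gen) * lanes

    for gen in GENERATIONS:
        print(f"PCIe {gen}: x4 = {slot_gbps(gen, 4):5.1f} GB/s, "
              f"x8 = {slot_gbps(gen, 8):5.1f} GB/s, "
              f"x16 = {slot_gbps(gen, 16):5.1f} GB/s")
    ```

    An x8 slot at 4.0 carries the same ~15.8 GB/s as x16 at 3.0, so each doubling of the per-lane rate halves the lanes a given device needs.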



    • #72
      Originally posted by starshipeleven
      Yawn, it's getting into "ridiculous overkill" bandwidth territory. I do like that this means even smaller connectors/cables can run a GPU properly (currently you can game fine on x4 lanes of PCIe 3.0), and that a new external PCIe cable standard is already supported.

      Moore's Law was actually about the number of transistors doubling every two years, so it might still fit. What's the transistor count on the controllers for this stuff?
      Ever tried to feed a 2S or 4S Xeon machine from 2 x 100GbE (or 16 x 10GbE) NICs and/or 24 x 2.5" SSDs, and watched PCI-E get saturated to death?
      We're hitting the PCI-E bandwidth limit on a daily basis. PCI-E 4.0 can't come fast enough.

      Keep in mind that the big money is in server machines, and they can't get enough bandwidth... (especially once you start talking about RDMA, Terabit Ethernet, etc.).
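      To put rough numbers on that, here is a back-of-the-envelope sketch (my own illustration; the per-device figures and the 80-lane host budget are assumptions, not measurements from any specific machine):

      ```python
      # Rough saturation check (illustrative assumptions, not measured data):
      # compare aggregate device demand against the host's PCI-E lane budget.

      LANE_GBPS = {                       # usable GB/s per lane after 128b/130b
          "3.0": 8.0 * (128 / 130) / 8,
          "4.0": 16.0 * (128 / 130) / 8,
      }

      nic_demand = 2 * 100 / 8            # 2 x 100GbE NICs at line rate -> 25 GB/s
      ssd_demand = 24 * 3.0               # 24 NVMe SSDs at ~3 GB/s each (assumed)
      total_demand = nic_demand + ssd_demand

      host_lanes = 80                     # assumed: 2S Xeon, 40 CPU lanes per socket

      for gen, per_lane in LANE_GBPS.items():
          budget = host_lanes * per_lane
          verdict = "saturated" if total_demand > budget else "headroom"
          print(f"PCI-E {gen}: demand {total_demand:.0f} GB/s vs "
                f"budget {budget:.0f} GB/s over {host_lanes} lanes -> {verdict}")
      ```

      At 3.0, those devices alone (~97 GB/s) oversubscribe the entire ~79 GB/s lane budget before you even add GPUs; 4.0 doubles the per-lane rate and restores some headroom.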

      - Gilboa
      Last edited by gilboa; 22 June 2017, 06:33 AM.
      oVirt-HV1: Intel S2600C0, 2xE5-2658V2, 128GB, 8x2TB, 4x480GB SSD, GTX1080 (to-VM), Dell U3219Q, U2415, U2412M.
      oVirt-HV2: Intel S2400GP2, 2xE5-2448L, 120GB, 8x2TB, 4x480GB SSD, GTX730 (to-VM).
      oVirt-HV3: Gigabyte B85M-HD3, E3-1245V3, 32GB, 4x1TB, 2x480GB SSD, GTX980 (to-VM).
      Devel-2: Asus H110M-K, i5-6500, 16GB, 3x1TB + 128GB-SSD, F33.
