PCI Express 6.0 Announced For Release In 2021 With 64 GT/s Transfer Rates


  • Drizzt321
    replied
    Originally posted by wizard69 View Post
    This has been an interesting read, guys, but one thing came to mind that might be worth discussion. What are your thoughts on PCI-E 6 being used as a communications channel on multi-chip modules? It seems like with the advent of AMD’s chiplet approach and the need to evolve hardware at different rates, this might become a thing. That is, use PCI-E 6 to connect GPUs to compute clusters in a single APU-like module. Actually I’m thinking it would be a viable interconnect for any high-performance subsystem.

    The big advantage is the ability to decouple the evolution of one part of the module from the others. With the density wall soon upon us, I’m expecting to see greater use of multi-chip modules even at the lower-cost end. Having a standard interface, even if it never goes off module, seems like a good idea.
    We have similar interconnects within CPU/MCM packages: HyperTransport -> Infinity Fabric for AMD (and others; HyperTransport was released in 2001, BEFORE PCIe was), and QPI for Intel, which launched in 2008. HT/IF seems to me the more technically flexible of the two, able to interconnect CPUs and various other chips and silicon both on-package and off-package, things like FPGAs and other non-CPU ASICs. IF is also set to scale from 30 GB/s to 512 GB/s, while PCIe 6.0 x16 maxes out at 126.03 GB/s.

    I was thinking HT/IF was mostly just on-board, but it appears they do have slot and chassis interconnect cables specced out, at least for HT. Interesting. https://www.hypertransport.org/ht-connectors-and-cables
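To put rough numbers on that comparison, here is a back-of-the-envelope sketch of per-generation x16 bandwidth. The PCIe 6.0 flit-mode efficiency figure used below is an approximation for illustration, not a spec value:

```python
# Back-of-the-envelope PCIe x16 bandwidth per generation (one direction).
# PCIe 3.0-5.0 use 128b/130b line encoding; PCIe 6.0 moves to PAM4 signaling
# with FLIT framing (efficiency figure below is a rough assumption).

def pcie_x16_gbs(gt_per_lane: float, efficiency: float, lanes: int = 16) -> float:
    """Usable bandwidth in GB/s for an x16 link (1 GT/s ~ 1 Gbit/s per lane)."""
    return gt_per_lane * lanes * efficiency / 8  # bits -> bytes

generations = {
    "3.0": (8, 128 / 130),
    "4.0": (16, 128 / 130),
    "5.0": (32, 128 / 130),
    "6.0": (64, 0.96),  # assumed FLIT-mode efficiency, not a spec value
}
for gen, (rate, eff) in generations.items():
    print(f"PCIe {gen} x16: ~{pcie_x16_gbs(rate, eff):.1f} GB/s")
```

Depending on the exact overhead assumed for flit mode, PCIe 6.0 x16 lands in the low 120s of GB/s, consistent with the 126.03 GB/s figure quoted above.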



  • rmoog
    replied
    PCIE 6.0 sounds pretty cool. But are we going to get back to boards whose slots don't come with dealbreakers born of cheapskate design? Here's what I mean.

    When we had boards with up to 6 PCI slots, occupying any number of those slots would not slow down or disable any of them. Great!
    Come around 2004, we got PCIE, we got a resurgence of SLI, and we also got throttled PCIE bandwidth if we decided to go with SLI. That's just what nobody wanted. Fortunately, this got resolved with premium boards that wouldn't share lanes across multiple slots. By 2008 I had stopped seeing boards like this.
    Now we have PCIE-mounted storage (NVME) and it's great. But what isn't great is that boards sometimes pair a PCIE slot with an M.2 slot, and occupying either one will disable the other. Come on, seriously? That's like saying "if you plug in the USB, the ethernet won't work, because we put them on the same bus". Come on.

    Can we get nice boards this time around or not?



  • loganj
    replied
    Will there even be a PCIe 5 motherboard?



  • wizard69
    replied
    This has been an interesting read, guys, but one thing came to mind that might be worth discussion. What are your thoughts on PCI-E 6 being used as a communications channel on multi-chip modules? It seems like with the advent of AMD’s chiplet approach and the need to evolve hardware at different rates, this might become a thing. That is, use PCI-E 6 to connect GPUs to compute clusters in a single APU-like module. Actually I’m thinking it would be a viable interconnect for any high-performance subsystem.

    The big advantage is the ability to decouple the evolution of one part of the module from the others. With the density wall soon upon us, I’m expecting to see greater use of multi-chip modules even at the lower-cost end. Having a standard interface, even if it never goes off module, seems like a good idea.



  • wizard69
    replied
    Originally posted by Zan Lynx View Post
    And Thunderbolt uses short, heavily shielded cables with active signal repeaters. These things are not easy to build.
    Copper is pretty much done for.

    The thing that really bothers me about Thunderbolt/USB-C is the cable connector, more for its mechanical reliability than anything else. You would think that, considering the intended uses, they would have specified something more secure.
    25 feet is ridiculous and even if it managed to connect, I bet the error rate was insanely high.

    And all of you have seen the limits on 100 gigabit Ethernet cables right? Half a meter. That is all. If you want more go optical.
    I actually had high hopes for optical becoming more mainstream after Apple debuted Thunderbolt. Sadly, that does not seem to have happened.
    PCIe 5 and 6 will be lucky if they can work past the first slot.
    We could very well see motherboards implementing a dual bus, much like past systems: one or two high-speed slots for a GPU or other high-demand card, and a bunch of secondary PCI-E 4.x slots. Let’s face it, the need for multiple slots is often not tied to ultra-high performance. The tech in PCI-E 4 could easily be the low-cost / lower-performance solution for years to come.



  • wizard69
    replied
    Originally posted by chroma View Post
    How can they design PCIe 6.0 to be backwards compatible with PCIe 5.0 and 4.0 when few vendors have even implemented PCIe 4.0 yet? Who even wants to bother implementing PCIe 4.0 now, knowing it's already obsolete twice over?
    There is a huge difference between having a standard and having working, reliable hardware that implements that standard, especially when new signaling is involved. I actually see PCI-E 4.0 being a long-term play, likely remaining significant for a decade or so. PCI-E 5.0 is likely to be skipped over in the long term in favor of 6.0.



  • oiaohm
    replied
    Originally posted by milkylainen View Post
    And no, you can't transit PCIe on arbitrary length because "It's optical".
    A 300km imaginary optical cable would have a minimum 2ms round trip time. Absolutely eons.
    2 ms is inside the latency tolerance of the PCIe 1-5 protocols. Whether your software/drivers/MMU will like it is another matter. By spec, the latency limit would allow a link reaching roughly to the other side of the earth; the real questions are whether you can get the signal that far and whether the software will tolerate it.
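The 2 ms figure checks out as a hard lower bound; here is the propagation arithmetic, assuming a typical refractive index of ~1.5 for glass fiber:

```python
# The arithmetic behind the "300 km = minimum 2 ms round trip" figure.
# The 2 ms number is the absolute lower bound (speed of light in vacuum);
# real fiber is slower, since light in glass travels at roughly 2/3 c.

C_VACUUM = 299_792_458        # m/s
C_FIBER = C_VACUUM / 1.5      # ~2.0e8 m/s, assuming refractive index ~1.5

def rtt_ms(distance_km: float, speed_m_s: float) -> float:
    """Round-trip propagation delay in milliseconds."""
    return 2 * distance_km * 1000 / speed_m_s * 1000

print(f"vacuum: {rtt_ms(300, C_VACUUM):.2f} ms")  # ~2.00 ms, the lower bound
print(f"fiber:  {rtt_ms(300, C_FIBER):.2f} ms")   # ~3.00 ms, realistic glass
```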

    Originally posted by milkylainen View Post
    OCuLink is another history and is currently stuck at PCIe 3.0 speeds.
    https://www.plda.com/blog/category/t...d-twinax-cable

    There are new optical cables appearing for PCIe 4.0. OCuLink-2, which in fact was released before the PCIe 4.0 spec, is rated for PCIe 4.0 speeds. The first version of OCuLink is for PCIe 3.0; the second version is for PCIe 4.0.



  • ll1025
    replied
    Originally posted by Zan Lynx View Post
    And Thunderbolt uses short, heavily shielded cables with active signal repeaters. These things are not easy to build.

    25 feet is ridiculous and even if it managed to connect, I bet the error rate was insanely high.\

    And all of you have seen the limits on 100 gigabit Ethernet cables right? Half a meter. That is all. If you want more go optical.
    Then I have to ask what Mellanox is doing manufacturing part number MCP1600-C003E26N. That looks to me like a 3 m 100GbE copper cable.

    For what it's worth, the bandwidth of 100GbE is close to the bandwidth of a PCIe 3.0 x16 slot, so no, the error rate need not be terribly high.
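A quick back-of-the-envelope comparison of the two link rates (line encoding only; higher-layer protocol overhead is ignored):

```python
# Comparing the 100GbE line rate with a PCIe 3.0 x16 slot (one direction).
ethernet_gbs = 100 / 8                    # 100 Gbit/s -> 12.5 GB/s
pcie3_x16_gbs = 8 * 16 * (128 / 130) / 8  # 8 GT/s/lane, 128b/130b encoding

print(f"100GbE:       {ethernet_gbs:.2f} GB/s")
print(f"PCIe 3.0 x16: {pcie3_x16_gbs:.2f} GB/s")
```

Roughly 12.5 GB/s versus ~15.8 GB/s, so the two are indeed in the same ballpark.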



  • ll1025
    replied
    Originally posted by starshipeleven View Post
    Slow IO speed has been the primary bottleneck of most computers for ages. PCIe bandwidth isn't a problem for GPUs
    Have you heard of the term NVMe?
    That's SSDs with native PCIe interfaces.

    They are a thing in the business server market for setups where you need FAST storage, and are taking over from the older SAS standard.
    NVMe doesn't change the fact that storage IO is still an order of magnitude slower than RAM, cache, and CPU registers. Slow IO is *still* a bottleneck. Consider that modern Epyc (Rome) CPUs use multiple x16 PCIe links for interconnects, as do GPUs, while an NVMe drive still gets a meagre 4 PCIe lanes.

    You can run a storage node with dozens of NVMe disks off a 4-8 core Epyc and not even saturate a single 100GbE card, because the limiting factor isn't your RAM, your CPU, or your NIC; it's your storage.
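A rough ceiling calculation makes the point: assuming each drive sits on a Gen3 x4 link and (unrealistically) sustains the full link rate, only a handful of drives would be needed to saturate the NIC, so dozens of real drives failing to do so shows sustained storage throughput running far below the link maximum:

```python
# How many PCIe 3.0 x4 NVMe drives, each at its theoretical link maximum,
# would it take to saturate a 100GbE NIC? (Real drives sustain far less.)
import math

nic_gbs = 100 / 8                      # 100 Gbit/s -> 12.5 GB/s
nvme_x4_gbs = 8 * 4 * (128 / 130) / 8  # ~3.94 GB/s per Gen3 x4 link

drives_needed = math.ceil(nic_gbs / nvme_x4_gbs)
print(f"{drives_needed} drives at link max would saturate the NIC")
```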



  • ll1025
    replied
    Originally posted by stormcrow View Post

    ...The top line high quality board is likely to be all, or nearly all, gold circuits and will, generally, electrically perform better. As you go down in price the gold is replaced with copper where electrically and thermally feasible - and sometimes even when it's not....
    You lost some credibility here: copper is a better electrical conductor than gold, and performs better thermally as well. We gold-plate contacts despite the (negligible) loss in conductivity because gold resists corrosion and oxidation, which are far bigger issues for copper.

    Conductivity/resistance is not the limiting factor for high-speed electronics on a motherboard; interference in all its forms is, and changing metals won't solve that problem.

    If you disagree, I'd love to see which top-line board you think is using gold wiring, and I'd love to know why they wouldn't use silver instead.
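For reference, the standard room-temperature resistivity values back this up (handbook figures; lower resistivity means better conductivity):

```python
# Room-temperature electrical resistivity in ohm-metres (standard handbook
# values). Silver edges out copper; gold is noticeably worse than both.
resistivity = {
    "silver": 1.59e-8,
    "copper": 1.68e-8,
    "gold":   2.44e-8,
}

for metal, rho in resistivity.items():
    relative = resistivity["copper"] / rho  # conductivity relative to copper
    print(f"{metal:6s}: {relative:.2f}x copper's conductivity")
```

Gold comes in at roughly 69% of copper's conductivity, which is why it is used as thin plating on contacts for corrosion resistance rather than as bulk wiring.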

