PCI Express 7.0 Specification Announced - Hitting 128 GT/s In 2025


  • #21
    Originally posted by tildearrow View Post

    Eventually Canonical will rewrite the Linux kernel in JavaScript, and then.......

    ...off-topic much? ;p
    Oh please, we all know it has to be rewritten in Go!



    • #22
      Originally posted by tildearrow View Post
      I'm going to announce the next PCIe specifications:

      PCIe 8.0: 256 GT/s - 2028
      PCIe 9.0: 512 GT/s - 2031
      PCIe 10.0: 1024 GT/s - 2034

      Come on, it's too early! I don't think there are any consumer PCIe 5.0 devices on the market...
      Consumers don't need more than PCIe 4.0, but enterprise servers do.
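
      (Aside: the joke's numbers do track the real cadence. Each full PCIe generation doubles the raw per-lane transfer rate, and lately a generation has landed roughly every three years. A minimal sketch in Python, purely illustrative; PCI-SIG has announced nothing past 7.0, and the dates are just the joke's assumed cadence:)

      ```python
      # Illustrative only: extrapolate PCIe transfer rates by doubling each
      # generation on an assumed three-year cadence. Nothing past 7.0 is announced.
      base_gen, base_rate_gts, base_year = 7, 128, 2025  # PCIe 7.0: 128 GT/s, spec targeted for 2025

      for i in range(1, 4):
          gen = base_gen + i
          rate = base_rate_gts * 2 ** i   # raw rate doubles every generation
          year = base_year + 3 * i        # assumed three-year cadence
          print(f"PCIe {gen}.0: {rate} GT/s - {year}")
      # Output matches the joke: 256 GT/s - 2028, 512 GT/s - 2031, 1024 GT/s - 2034
      ```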



      • #23
        Originally posted by tildearrow View Post
        I'm going to announce the next PCIe specifications:

        PCIe 8.0: 256 GT/s - 2028
        PCIe 9.0: 512 GT/s - 2031
        PCIe 10.0: 1024 GT/s - 2034

        Come on, it's too early! I don't think there are any consumer PCIe 5.0 devices on the market...
        There are a few consumer PCIe 5.0 SSDs on the market. Yes, M.2. More will come.



        • #24
          What I find interesting is how they are going to implement PCIe at such high speeds; I think we are going to start hitting some interesting limits around material science/physics. There are already cases with some motherboards where only the slots closest to the CPU have DDR5 enabled, just due to the distance and bandwidth involved.



          • #25
            Originally posted by mdedetrich View Post
            What I find interesting is how they are going to implement PCIe at such high speeds; I think we are going to start hitting some interesting limits around material science/physics. There are already cases with some motherboards where only the slots closest to the CPU have DDR5 enabled, just due to the distance and bandwidth involved.
            This is a good point. It also makes it much more expensive to engineer the rest of the hardware to accommodate the bandwidth beyond the interconnect, even when it's possible. More energy consumption, increased frailty, or reduced longevity are also potential trade-offs.

            If you want/need that much bandwidth, you're going to have to be willing to pay for it; at that point, why wouldn't you use an interconnect specialized for the purpose, like InfiniBand?



            • #26
              Originally posted by tildearrow View Post
              Clarified my post by adding "consumer".
              This stuff isn't targeted at consumers. Ain't no consumers needing to move hundreds of GB per second. This is a datacenter use case. Yes, eventually it will trickle down to the consumer space after many years, but that is not the launch target for these high-speed interconnects.

              The trend nowadays is all the heavy compute being in the Cloud. Conceptually it's like VT100 dumb terminals all over again, but with a GUI. Your PC is becoming a dumb receiver for displaying Cloud-generated, Cloud-managed content. Look! Listen! Obey! Like a TV set. You don't generate the content; you select which content you want to receive, then sit there and receive it. Aside from some niche workloads (CAD/CAM, scientific stuff), everything is moving to the Cloud. Even business desktops are becoming dumb terminals - see Office365 on the Web, Google Docs, etc. Gaming is starting to move there too. This puts huge demands on compute, memory, storage, and IO in the Cloud infrastructure. All of this means your PC doesn't need any high-speed stuff; it only needs enough performance to stream and draw the images onto your screen. The data center is where the big money is, and that's what these new technologies are all targeting.
              Last edited by torsionbar28; 22 June 2022, 10:17 AM.



              • #27
                Tell the consumer motherboard makers to stop putting PCIe Gen 3/4 x1 slots on boards if no one in the world is going to make adapters for them. The adapters that exist are Gen 2 only. I think there was just one Gen 3 x1 SATA adapter. Lots of adapters are made that use x4, but how many consumer boards actually have an x4 slot, unless it's an x16 wired for x4 use?

                It's like going to the store to buy a pack of hot dogs that only has 8, but the bag of buns has 10.



                • #28
                  Originally posted by edwaleni View Post
                  Tell the consumer motherboard makers to stop putting PCIe Gen 3/4 x1 slots on boards if no one in the world is going to make adapters for them. The adapters that exist are Gen 2 only. I think there was just one Gen 3 x1 SATA adapter. Lots of adapters are made that use x4, but how many consumer boards actually have an x4 slot, unless it's an x16 wired for x4 use?

                  It's like going to the store to buy a pack of hot dogs that only has 8, but the bag of buns has 10.
                  Not disagreeing, but I understand why they do it. The x1 PCIe slots, that is, not the hot dogs. Gen2 x1 has 500 MB/s of bandwidth. Not many consumer use cases need more than that. That's close to the SATA limit, which is good enough for the vast majority of consumers.

                  For those who need a little more, you can buy a Gen3 x8 SAS adapter for $50 used that gives you 8 SATA/SAS ports at full speed. Or something like Thunderbolt. I honestly cannot picture what kind of consumer use case there is for an x1 card that needs >500 MB/s of bandwidth.
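
                  (For reference, those figures fall straight out of the per-lane math. A rough sketch in Python; the encoding overheads are the published ones, 8b/10b for Gen1/2 and 128b/130b from Gen3 on, but treat the numbers as approximations:)

                  ```python
                  # Approximate usable PCIe bandwidth per generation (illustrative).
                  # Gen1/2 use 8b/10b encoding (80% efficient); Gen3+ use 128b/130b (~98.5%).
                  GENS = {1: (2.5, 8/10), 2: (5.0, 8/10),
                          3: (8.0, 128/130), 4: (16.0, 128/130), 5: (32.0, 128/130)}

                  def bandwidth_gb_s(gen: int, lanes: int = 1) -> float:
                      """Usable bandwidth in GB/s: raw GT/s x encoding efficiency x lanes / 8 bits."""
                      rate_gts, efficiency = GENS[gen]
                      return rate_gts * efficiency * lanes / 8

                  print(f"Gen2 x1: {bandwidth_gb_s(2):.2f} GB/s")     # 0.50 GB/s, the 500 MB/s figure
                  print(f"Gen3 x8: {bandwidth_gb_s(3, 8):.2f} GB/s")  # ~7.88 GB/s, plenty for 8 SAS/SATA ports
                  ```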



                  • #29
                    Originally posted by mdedetrich View Post
                    What I find interesting is how they are going to implement PCIe at such high speeds; I think we are going to start hitting some interesting limits around material science/physics. There are already cases with some motherboards where only the slots closest to the CPU have DDR5 enabled, just due to the distance and bandwidth involved.
                    AMD's AM5 will be DDR5 only, and the same will be true of the DDR5 Epyc CPUs. Does Intel 12th gen in fact support DDR5 mixed with DDR4? I do know there is a board being made for Intel with one DDR5 slot and one DDR4 slot, but it appears to be a one-stick-of-RAM board.

                    ONDA H610M+ supports DDR5 and DDR4 memory: Chinese ONDA announces its Micro-ATX motherboard based on the H610 chipset, featuring support for both DDR4 and DDR5 memory technology. There are two DIMM slots in total, one supporting up to 32 GB of DDR4-3200 and the other DDR5-4800. It is assumed […]


                    Take a close look at the motherboard here. The DDR4 slot is closest to the CPU, with the DDR5 slot further away from the CPU. So I suspect you have mixed up RAM terms with PCIe terms.

                    AMD's B550 boards with PCIe 4.0 only covered the GPU and storage slots with PCIe 4.0, with all the other slots being PCIe 3.0.


                    Among the new AMD chipsets, the B650 has only the M.2 slot at PCIe 5.0, with everything else at PCIe 4.0. Yes, you have to go up to X670 to get the same relative layout as the B550 of old.

                    An interesting point is that the new high-end AMD motherboard has two chipset chips. Also, all these motherboard chipsets are meant to be fanless.

                    Low-end motherboards not having all slots at the same generation has been the norm for a while. Yes, PCIe 5.0 is the first time we have seen only the M.2 slot get the latest generation with everything else one generation behind. Of course this will cause an interesting market issue: if low-end motherboards don't have PCIe 5.0, why make low-end GPUs support PCIe 5.0?

                    PCIe 4.0 slots from the chipset ran into a heat problem requiring active cooling with AMD. Improvements in silicon production solved that problem for PCIe 4.0 in the current generation. Yes, needing more cooling area to do PCIe 5.0 on all ports versus the prior PCIe 4.0 is also a problem.

                    mdedetrich the majority of the PCIe 5.0 problem is turning out to be the chipset. It has not turned out to be healthy or user-friendly at all to have on-motherboard fans that are not replaceable, so users don't want to buy motherboards with fans.

                    The interesting question at PCIe 6.0 is whether silicon will have improved enough that a PCIe 6.0 chipset can run fanless. GPUs are not using the full bandwidth of an x16 PCIe 4.0 slot now. In the short term, performance improvements in the consumer space don't appear to be coming from PCIe 5.0 other than on the M.2 slot.

                    We are running into diminishing returns for consumer users of these higher PCIe generations. It's not that motherboard makers cannot make full PCIe 5.0 motherboards, or will not be able to make PCIe 6.0 motherboards; it's more a question of whether the cost will be worth it.

                    PCIe 6.0 gets a lot messier, as noted here:

                    FLIT encoding is also being backported in a sense to lower link rates; once FLIT is enabled on a link, a link will remain in FLIT mode at all times, even if the link rate is negotiated down.
                    So this could result in the lower-speed slots on a PCIe 6.0 motherboard not in fact behaving identically to their older counterparts. From the PCIe 4.0 generation of boards to the PCIe 5.0 generation there are a few changes, but nothing that disruptive. PCIe 6.0 could be quite disruptive.
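
                    (To make the quoted behavior concrete, here is a toy model in Python. Link, train, and negotiate_down are made-up names, and the real link-training state machine is far more involved; the point is only that FLIT mode is sticky once set:)

                    ```python
                    # Toy model of the quoted FLIT stickiness, not the spec's state machine.
                    from dataclasses import dataclass

                    @dataclass
                    class Link:
                        rate_gts: float
                        flit_mode: bool = False  # sticky once enabled, per the quoted text

                        def train(self, rate_gts: float) -> None:
                            # Assumption for the sketch: FLIT mode turns on at Gen6 rates (64 GT/s).
                            self.rate_gts = rate_gts
                            if rate_gts >= 64.0:
                                self.flit_mode = True

                        def negotiate_down(self, rate_gts: float) -> None:
                            # The link rate drops, but FLIT framing is never switched back off.
                            self.rate_gts = rate_gts

                    link = Link(rate_gts=64.0)
                    link.train(64.0)            # Gen6 link, FLIT enabled
                    link.negotiate_down(16.0)   # back to a Gen4 signalling rate...
                    print(link.flit_mode)       # True: still framed differently than a native Gen4 link
                    ```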

                    It's not just the increasing speed here. There are management-side changes coming in PCIe 6.0 and 7.0. CXL hardware starts appearing with PCIe 5.0, and more features to make CXL work better appear in PCIe 6.0 and PCIe 7.0. It's going to be interesting how this plays out.

                    It's possible we will have motherboards with PCIe 7.0 slots that in fact max out at PCIe 5.0 speeds; if this happens, it will be because of features added to the PCIe specification to make CXL hardware work better.



                    • #30
                      Originally posted by torsionbar28 View Post
                      Not disagreeing, but I understand why they do it. The x1 PCIe slots, that is, not the hot dogs. Gen2 x1 has 500 MB/s of bandwidth. Not many consumer use cases need more than that. That's close to the SATA limit, which is good enough for the vast majority of consumers.

                      For those who need a little more, you can buy a Gen3 x8 SAS adapter for $50 used that gives you 8 SATA/SAS ports at full speed. Or something like Thunderbolt. I honestly cannot picture what kind of consumer use case there is for an x1 card that needs >500 MB/s of bandwidth.
                      When people start using risers to put GPUs and the like into x1 PCIe slots for compute, the faster the PCIe slot the better. The vast majority of consumers don't end up using their PCIe x1 slots; the M.2 PCIe x4 slots do get used more.

