PCI Express 7.0 Specification Announced - Hitting 128 GT/s In 2025


  • #31
    Originally posted by oiaohm View Post
    When people start using risers to put GPUs and the like into x1 PCIe slots for compute, the faster the PCIe slot the better. The vast majority of consumers don't end up using their PCIe x1 slots. The M.2 PCIe x4 slots do get used more.
    That's not a consumer use case. PCIe risers and GPU compute on consumer mobos are very much a niche hobbyist segment, not something any OEM is going to design for.



    • #32
      Originally posted by torsionbar28 View Post
      Not disagreeing, but I understand why they do it. The x1 PCIe slots, that is, not the hot dogs. Gen2 x1 has 500 MB/s of bandwidth. Not many consumer use cases need more than that. That's close to the SATA limit, which is good enough for the vast majority of consumers.

      For those who need a little more, you can buy a Gen3 x8 SAS adapter for $50 used that gives you 8 SATA/SAS ports at full speed. Or something like Thunderbolt. I honestly cannot picture what kind of consumer use case there is for an x1 card that needs >500 MB/s of bandwidth.
      Gen 3 x1 slot = 0.985 GB/s
      Gen 4 x1 slot = 1.969 GB/s
      Gen 5 x1 slot = 3.938 GB/s

      I could think of a lot of use cases for a Gen 4 1X slot, even in consumer boards.
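      For what it's worth, those per-lane figures fall straight out of the published signaling rates and line encodings (8b/10b for Gen 1/2, 128b/130b from Gen 3 on). A rough sketch that ignores packet and flow-control overhead, so real-world throughput lands a bit lower:

```python
# Rough per-lane PCIe throughput from signaling rate and line encoding.
# Protocol overhead (packet headers, flow control) is ignored, so actual
# throughput comes in slightly below these figures.
GENS = {
    # generation: (GT/s per lane, encoding efficiency)
    1: (2.5, 8 / 10),     # 8b/10b
    2: (5.0, 8 / 10),     # 8b/10b
    3: (8.0, 128 / 130),  # 128b/130b
    4: (16.0, 128 / 130),
    5: (32.0, 128 / 130),
}

for gen, (gt_per_s, efficiency) in GENS.items():
    gb_per_s = gt_per_s * efficiency / 8  # bits per transfer -> bytes
    print(f"Gen {gen} x1: {gb_per_s:.3f} GB/s")
```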



      • #33
        Originally posted by edwaleni View Post
        I could think of a lot of use cases for a Gen 4 1X slot, even in consumer boards.
        Cool! Like what?



        • #34
          Originally posted by cthart View Post
          Now the CPUs are playing catchup to the buses.
          Just to nit-pick, CPUs are plenty fast (e.g. AMD's 8-core 5800X3D has L3 cache bandwidth of 2 TB/s); it's really the memory subsystem that's the current bottleneck. The nominal DDR5 bandwidth of Alder Lake is 76 GB/s. A single PCIe 5.0 x16 card can theoretically deliver 64 GB/s in one direction. If you somehow kept transfers in both directions going, like with a 400+ Gbps NIC, it could easily soak up the entire memory bandwidth and then some.

          CXL memory is both a solution to this problem and something that fuels it (CXL leverages the PHY layer of PCIe). In other words, if you put your memory out on the bus as well, then you can presumably scale it in a way that's balanced against your system's overall needs. However, putting it on the bus means you now need roughly double the aggregate bus bandwidth you needed before.
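
          The back-of-the-envelope math behind that comparison, assuming dual-channel DDR5-4800 (which lines up with the 76 GB/s nominal figure above) against a single PCIe 5.0 x16 link:

```python
# Rough comparison: platform memory bandwidth vs. one PCIe 5.0 x16 link.
# Assumes dual-channel DDR5-4800 (two 64-bit channels); overhead ignored.
ddr5_gbs = 2 * 4.8 * 8                          # channels * GT/s * 8 bytes   ~= 76.8 GB/s
pcie5_x16_one_way = 32 * 16 * (128 / 130) / 8   # GT/s * lanes * encoding / 8 ~= 63 GB/s
pcie5_x16_duplex = 2 * pcie5_x16_one_way        # both directions saturated   ~= 126 GB/s

print(f"DDR5-4800, 2 channels:     {ddr5_gbs:.1f} GB/s")
print(f"PCIe 5.0 x16, one way:     {pcie5_x16_one_way:.1f} GB/s")
print(f"PCIe 5.0 x16, full duplex: {pcie5_x16_duplex:.1f} GB/s")
```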



          • #35
            Originally posted by risho View Post
            If only we could get a new version of Thunderbolt with each new version of PCIe.
            You want cables > 0.5 m that are as big as a garden hose and require active power?



            • #36
              Originally posted by commodore256 View Post
              I think this spec might mostly be used for glue logic chiplets for accelerators
              Not exactly. PCIe carries a lot of baggage needed for longer links and still won't scale up to speeds high enough for many chiplet interconnect needs (think multi-die GPUs or Apple's M2 Ultra).

              CCIX and CXL have tried to tackle the chiplet interconnect problem, but UCIe represents a more comprehensive framework for addressing it. It can use the existing PCIe and CXL protocols, without the baggage from their PHY specifications. It also supports implementation-specific custom protocols, so that someone like AMD could continue to use their Infinity Fabric interconnect between two of their chiplets, over a UCIe PHY layer.



              • #37
                Originally posted by oiaohm View Post
                There are a few consumer PCIe 5.0 SSDs on the market. Yes, M.2. More will come.
                Name a single one that's actually shipping. I've seen announcements of controllers and Samsung has at least one enterprise SSD on the market (2.5" U.2 form factor, BTW), but I have yet to find any reviews of PCIe 5.0 NVMe M.2 SSDs.

                And no: consumers don't need (i.e. can't benefit from) PCIe 5.0 SSDs. There's plenty more headroom left in PCIe 4.0. Intel simply got embarrassed by AMD leap-frogging them on PCIe 4.0 and decided to turn the tables with Alder Lake. There's no practical justification for what they did, but it certainly had the effect of pushing up prices of Alder Lake motherboards.



                • #38
                  Originally posted by tildearrow View Post
                  I'm going to announce the next PCIe specifications:

                  PCIe 8.0: 256 GT/s - 2028
                  PCIe 9.0: 512 GT/s - 2031
                  PCIe 10.0: 1024 GT/s - 2034

                  Come on, it's too early! I don't think there are any consumer PCIe 5.0 devices on the market...
                  I don't see any harm in it, unless they get too disconnected from actual implementation issues. It does make me expect that various parties will start skipping over PCIe versions eventually, if PCIe isn't replaced by then.



                  • #39
                    Originally posted by torsionbar28 View Post
                    Cool! Like what?
                    - Lighter-duty graphics cards, freeing the 16-lane slot for more storage. A PCIe 4.0 x1 slot equals a PCIe 1.0 x8 slot (quick check after this list). (Not everyone is a gamer.)
                    - 10 Gbps Ethernet, moving it off the x4/x8 slot
                    - Extra USB 3 ports
                    - Non-RAID SATA ports for JBOD
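
                    A quick sanity check on that first point, using the same rate-times-encoding arithmetic as above (protocol overhead ignored):

```python
# PCIe 4.0 x1 vs. PCIe 1.0 x8, raw link bandwidth only.
gen4_x1 = 16.0 * 1 * (128 / 130) / 8   # ~1.97 GB/s
gen1_x8 = 2.5 * 8 * (8 / 10) / 8       #  2.00 GB/s
print(f"PCIe 4.0 x1: {gen4_x1:.2f} GB/s, PCIe 1.0 x8: {gen1_x8:.2f} GB/s")
```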



                    • #40
                      Originally posted by s_j_newbury View Post
                      There really isn't any obvious use-case that I can see for such bandwidth, at least outside of HPC or scientific data acquisition, perhaps. It would make more sense to keep to a standard and reduce system costs, or improve robustness, rather than over-specify and fragment the market. At least until there is a demonstrable need for something better.
                      High-speed Ethernet adapters and high-speed disk I/O are pretty obvious use cases, and there's plenty of demand for them outside of HPC or scientific data acquisition.

