PCIe 6.0 Specification Released With 64 GT/s Transfer Speeds


  • #31
    If you look at the Nvidia InfiniBand ConnectX-7 card, it's already using PCIe 5.0 x32, so PCIe 6.0 is going to have a use case.

    Just because you don't need it or can't afford to use it doesn't mean you should be against it. As half a dozen prior comments have said, it's useful for your cloud provider and will be offered (affordably) to consumers within several years. It will also improve CXL.



    • #32
      Originally posted by JustRob View Post
      If you look at the Nvidia InfiniBand ConnectX-7 card, it's already using PCIe 5.0 x32, so PCIe 6.0 is going to have a use case.

      Just because you don't need it or can't afford to use it doesn't mean you should be against it. As half a dozen prior comments have said, it's useful for your cloud provider and will be offered (affordably) to consumers within several years. It will also improve CXL.
      I can understand the worry. Higher PCIe speeds have resulted in the chipsets on consumer motherboards running hotter, in many cases cooled by non-replaceable fans, leading to quite a lot of motherboard failures at the 3-5 year mark.

      Now, if PCIe 6.0 means going direct from CPU to cards, so the chipset is no longer trying to fry itself behind a non-replaceable fan, those problems will go away.

      In the consumer space there is this question, because faster PCIe in exchange for a short motherboard life is seriously not worth it.

      Cloud providers, with their motherboards and cases, are not having this cooling problem: passive heatsinks everywhere, with high-speed replaceable fans blowing air ducted over the heatsinks.

      So there is a different problem than being affordable to consumers: it's being reliable for consumers.

      I hope AMD and Intel will be smart enough not to put the PCIe 6.0 bridge in the chipset if they cannot get the temperatures under control. Historically this wisdom has not always prevailed; sometimes it's "we want to put the latest and greatest in our hardware, who cares if it only lasts 5 years".



      • #33
        Originally posted by oiaohm View Post
        You need to look closer.

        The 710's chip is only PCIe 2.0 x8. Yes, they added a bridge chip, so it gets 2x PCIe 2.0 out of the PCIe 3.0 link. So by PCIe 5.0, with a bridge chip, a PCIe x1 slot will do all that the card can do.
        Right, because it's a specialty product. That even further underscores my point that it's not made to hit a sweet spot attractive to the low-end market. It's for those folks in situations where an x1 slot is all they have available.



        • #34
          Originally posted by Developer12 View Post
          It would of course still be over copper if that were true. I'm talking about the possibility for alternative interconnect technology to implement the standard, e.g. to compose many more PCIe devices in a rack into a single system, with considerably less power consumption.
          Sure, you could run the protocol over a different signalling standard, but I get the impression that an increasing number of design elements in PCIe are focused on the specific issues involved in transmission over copper. For instance, segmentation into fixed-size flits is done specifically to support the FEC and CRC fields, which are probably sized precisely to deal with the expected error rates of PAM-4 signalling over signal paths with the specified noise characteristics (see the rough flit breakdown below). Replace the transmission medium with optical and that whole cascade of decisions should probably be revisited.

          I'm not disagreeing with the principle, but I think there's a lot of work that would be needed to make it happen. It's not a simple addendum.
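
          For a sense of how much of the link is spent compensating for copper PAM-4 error rates, here's a back-of-envelope sketch of the PCIe 6.0 flit layout. The field sizes (236-byte TLP payload, 6-byte DLP, 8-byte CRC, 6-byte FEC in a 256-byte flit) are my reading of public PCI-SIG material, not quoted from the spec:

          Code:
          # Rough sketch of the fixed 256-byte PCIe 6.0 flit. Field sizes are
          # my assumption from public PCI-SIG presentations, not the spec text.
          FLIT_BYTES = 256   # every flit is the same size in FLIT mode
          TLP_BYTES  = 236   # transaction-layer packet payload
          DLP_BYTES  = 6     # data-link layer payload
          CRC_BYTES  = 8     # error detection over the whole flit
          FEC_BYTES  = 6     # forward error correction, sized for PAM-4

          assert TLP_BYTES + DLP_BYTES + CRC_BYTES + FEC_BYTES == FLIT_BYTES

          overhead = (DLP_BYTES + CRC_BYTES + FEC_BYTES) / FLIT_BYTES
          print(f"framing overhead: {overhead:.1%}")  # ~7.8% of the wire

          Swap copper PAM-4 for an optical medium with a different raw error rate, and at least that CRC/FEC sizing would presumably have to be redone.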

          Originally posted by Developer12 View Post
          Intel is making switch chips with the optical fibers being routed directly to the main switch chip itself. https://newsroom.intel.com/news/inte...hernet-switch/
          Interesting. Yeah, Intel actually was doing this around 2016, using their proprietary Omni-Path interconnect (100 Gbps, with 200 soon to follow) connected directly to some Xeon processors (Xeon Phi, with other Xeons at least planned), but then Omni-Path was abruptly cancelled, and I guess buying Barefoot Networks might've been part of their Plan B.



          • #35
            Originally posted by Developer12 View Post
            Not nearly as toasty as those for copper cables. It's why there's a strict, low limit on the number of multi-gigabit copper cable adapters you can insert into an SFP+ switch, while you can generally insert as many multi-gigabit optical transceivers as you like and fill all the slots.
            Oh, don't confuse RJ-45 transceivers with direct-connect copper, though. 10 Gigabit Ethernet over RJ-45 is definitely a power hog, using multiple times what's needed for a direct-connection cable (i.e. of the type used within racks).

            With RJ-45, you're looking at a different modulation scheme and the capability of driving over much greater distances. That's simply not comparable to what's going on with the PCIe bus inside a machine.



            • #36
              Originally posted by GI_Jack View Post
              Imagine if NVMe drives are at most PCIe x4, but you can run them as x1. Now imagine a PCIe 6.0 RAID card for NVMe drives. How many NVMe drives could you fit in a 1U storage chassis, if so designed?
              The bottleneck would become memory. On a CPU with 12-channel DDR5, maybe you get enough bandwidth to support 96 lanes of PCIe 6.0 connectivity, and that's without leaving any bandwidth for anything else. I guess someone is going to point out that memory could be scaled up by using CXL, but that data still has to get in/out of the CPU to be of much use (yeah, niche GPU-direct cases excepted).
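
              Back-of-envelope numbers for that claim, assuming DDR5-4800 (the speed grade is my assumption) and ~8 GB/s per PCIe 6.0 lane per direction before flit overhead:

              Code:
              # Quick sanity check: 12-channel DDR5 vs. 96 lanes of PCIe 6.0.
              # DDR5-4800 is an assumed speed grade; adjust to taste.
              channels, channel_bytes, mts = 12, 8, 4800
              mem_gbs = channels * channel_bytes * mts / 1000   # ~461 GB/s

              lane_gbs = 64 / 8   # 64 GT/s PAM-4 ~= 8 GB/s per lane per direction
              pcie_gbs = 96 * lane_gbs                          # 768 GB/s

              print(f"DDR5-4800 x12: {mem_gbs:.0f} GB/s")
              print(f"PCIe 6.0 x96:  {pcie_gbs:.0f} GB/s per direction")

              On those numbers the memory side is already the smaller figure, which is exactly the bottleneck described above.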

              Another concern would be the demands placed on the switch fabric. Given how much power the NVSwitch chips burned, and Nvidia built those for lower speeds than this, I'd be concerned about that.

              Originally posted by GI_Jack View Post
              The NVMe drives don't have to run at PCIe 6.0, just the mobo and the RAID card. So imagine these in RAID 10 or RAID 60.
              All I can say about that is you probably wouldn't have just one RAID card. Also, I don't really know how common conventional RAID is in hyperscale environments. I think they're more likely to use replication, as it scales better.

              Originally posted by GI_Jack View Post
              Storage density should approach 3.5" drives if you factor in the size difference, but much, much more performant.
              There are 3.5" SSDs with > storage density than hard drives. The problem is they're still massively more expensive per TB.



              • #37
                Originally posted by JustRob View Post
                If you look at the Nvidia InfiniBand ConnectX-7 card, it's already using PCIe 5.0 x32
                Thanks for that. I was aware of the theoretical possibility of x32, but never knew of an actual example!

                Originally posted by JustRob View Post
                Just because you don't need it or can't afford to use it doesn't mean you should be against it. As half a dozen prior comments have said,
                I don't think a single post in this thread is against it. If anything, we're just skeptical that it's destined for consumers in any kind of foreseeable timeframe, if ever.

                Originally posted by JustRob View Post
                it's useful for your cloud provider and will be offered (affordably) to consumers within several years.
                Yeah, exactly like what happened with 10 Gigabit Ethernet, right? Datacenters had that in like 2003 (the standard was ratified in June 2002). And yet, 1 Gbps Ethernet is still entrenched in the mainstream, with a few enthusiast boards having 2.5 Gbps and only some workstation-oriented boards going above that.

                There are many datacenter technologies that never reach consumers.

                Originally posted by JustRob View Post
                It will also improve CXL.
                Yes, it's the basis of the next CXL generation.
                Last edited by coder; 13 January 2022, 07:19 AM.



                • #38
                  Originally posted by coder View Post
                  Right, because it's a specialty product. That even further underscores my point that it's not made to hit a sweet spot attractive to the low-end market. It's for those folks in situations where an x1 slot is all they have available.
                  Consider that the recently released AMD GPU is a 4-lane PCIe 4.0 card. So running an under-5-year-old card in a PCIe 6.0 x1 lane is possible.

                  Please note a single PCIe 3.0 lane is only really equal to 2 PCIe 2.0 lanes, and the card was designed for 8 lanes of PCIe 2.0. So by PCIe 6.0 we are not going to have the card this badly crippled.

                  Yes, the current x1 cards being made are not in the sweet spot to be attractive to the low end. But for x1 cards, by PCIe 6.0 this could be a very different story; at least they should be respectable (see the ballpark per-lane numbers below).
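
                  For the lane arithmetic here and in the earlier posts, these are ballpark per-lane figures by generation, computed from the signalling rate and encoding (my rounding, not spec quotes):

                  Code:
                  # Approximate GB/s per lane, per direction, by generation.
                  # 2.x uses 8b/10b encoding, 3.0-5.0 use 128b/130b, and 6.0 is
                  # PAM-4 (raw figure; flit framing shaves off another ~8%).
                  per_lane = {
                      "2.0": 5  * (8 / 10)    / 8,   # 0.50 GB/s
                      "3.0": 8  * (128 / 130) / 8,   # 0.98 GB/s
                      "4.0": 16 * (128 / 130) / 8,   # 1.97 GB/s
                      "5.0": 32 * (128 / 130) / 8,   # 3.94 GB/s
                      "6.0": 64               / 8,   # 8.00 GB/s raw
                  }
                  for gen, gbs in per_lane.items():
                      print(f"PCIe {gen}: {gbs:.2f} GB/s per lane")
                  # One 3.0 lane ~= two 2.0 lanes; one 6.0 lane ~= a 2.0 x16 link.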



                  • #39
                    Originally posted by oiaohm View Post
                    Consider that the recently released AMD GPU is a 4-lane PCIe 4.0 card. So running an under-5-year-old card in a PCIe 6.0 x1 lane is possible.
                    You're forgetting that GPUs will be faster by the time consumers have PCIe 6.0 motherboards (if they ever do, that is). What's enough I/O bandwidth for a low-end dGPU today won't be enough for one in 2025+.



                    • #40
                      Originally posted by coder View Post
                      You're forgetting that GPUs will be faster by the time consumers have PCIe 6.0 motherboards (if they ever do, that is). What's enough I/O bandwidth for a low-end dGPU today won't be enough for one in 2025+.
                      The horrible fact here is that dGPU I/O requirements in the consumer space have been basically the same for the past 2 generations.

                      16 lanes of 3.0 at the top end, then the equivalent of 8 lanes of 3.0 at the bottom end. AMD's high-end dGPU cards are only PCIe 4.0 x8, not x16, even though they take an x16 slot.

                      The NVIDIA RTX 3080 GPU uses PCIe Gen4, and so we have another opportunity to benchmark PCIe 3.0 vs. 4.0, using an AMD X570 platform for benchmarks as to whe...


                      Even with Nvidia's x16 PCIe 4.0 cards, we are at the point where there is no performance to be gained from going faster. Yes, there is a 1 to 2 percent performance uplift, but this does not come from the higher bandwidth of PCIe 4.0; it comes from the minor changes in the specification that allow better efficiency, and you get this uplift even on motherboards that only provide 8 lanes of PCIe 4.0 to each of the two x16 video card slots (yes, those motherboards split the x16 for one video card in two).

                      The main reason for dGPUs at the moment to be x16 is to be compatible with PCIe 3.0 systems. In the current crop of dGPUs there is really no reason to give a dGPU 16 lanes of PCIe 4.0; it's not going to give you any performance improvement from the bandwidth increase.

                      The question is how long the stall in expanded I/O for dGPUs is going to last. What is 4 and 8 lanes of PCIe 4.0 becomes 2 and 4 lanes of PCIe 5.0, and 1 and 2 lanes of PCIe 6.0.

                      With the current stall, expecting a dGPU to go in an x4 slot by the PCIe 6.0 time frame is most likely conservative, given how much dGPU I/O bandwidth growth has slowed/stalled. Really, it's unlikely that double the current bandwidth will be required by future dGPUs.

                      Yes, you do see with AMD's high-end cards, with only 8 lanes of 4.0, that if you put those cards in PCIe 3.0 you run out of bandwidth at times. x16 PCIe 3.0 or x8 PCIe 4.0: that's our current dGPUs, and dGPUs have been stuck at this bandwidth level for a while now.

                      Yes, we have seen dGPUs get faster in the move from PCIe 3.0 to PCIe 4.0 cards, but we have not seen them need more bandwidth, because they are doing more processing on the same amount of data. Bandwidth requirements and dGPU speed are not tightly linked; the question is just how loosely linked dGPU speed is to bandwidth usage.

                      Basically, PCIe bandwidth is expanding faster than dGPUs can use it; the question is by how much. x1 is maybe enough for an entry-level dGPU by PCIe 6.0, with x2 more likely; x2 maybe for a high-end card by PCIe 6.0, but x4 is more likely. This is still a massive reduction in lanes required from the current x8 PCIe 4.0 and x16 PCIe 3.0 (see the quick numbers below).

                      coder, this would be a different matter if we were seeing current-day dGPUs in fact use all of PCIe 4.0 x16, but in reality we are not.
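
                      As a quick check on those lane counts, assuming today's dGPU ceiling of roughly 16 GB/s (x16 PCIe 3.0 ~= x8 PCIe 4.0) and the per-generation doubling, the arithmetic comes out like this (ballpark figures, same caveats as the sketch a few posts up):

                      Code:
                      # Lanes needed per generation to match today's dGPU ceiling
                      # of roughly 16 GB/s (x16 PCIe 3.0 ~= x8 PCIe 4.0), using
                      # rounded per-lane figures that double each generation.
                      target_gbs = 16
                      per_lane_gbs = {"3.0": 1, "4.0": 2, "5.0": 4, "6.0": 8}
                      for gen, gbs in per_lane_gbs.items():
                          print(f"PCIe {gen}: x{target_gbs // gbs}")
                      # -> 3.0: x16, 4.0: x8, 5.0: x4, 6.0: x2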

