PCIe 6.0 Specification Released With 64 GT/s Transfer Speeds


  • #21
    Originally posted by risho View Post
    i know this will never happen, but it would really be something if thunderbolt 5 could be 4 pcie 6 lanes. would make it pretty future proof for running thunderbolt gpu enclosures.
    The cable + enclosure would probably cost more than the actual GPU. Existing top-spec Thunderbolt cables are already crazy expensive, and that's just with PCIe 3.0 x4. Any cables over 0.5 m require active amplification.
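
    For a rough sense of scale (assuming roughly 1 GB/s usable per PCIe 3.0 lane and a doubling each generation; the Thunderbolt figure is only the tunnelled PCIe payload, not the 40 Gb/s link rate):

```python
# Rough scale of the idea above: today's Thunderbolt 3/4 tunnels roughly a
# PCIe 3.0 x4 payload; a hypothetical "PCIe 6.0 x4" cable would carry ~8x that.
GB_PER_LANE = {"3.0": 1.0, "4.0": 2.0, "5.0": 4.0, "6.0": 8.0}  # approx. usable GB/s per lane

tb_today = GB_PER_LANE["3.0"] * 4   # ~4 GB/s of PCIe tunnelled over current Thunderbolt
pcie6_x4 = GB_PER_LANE["6.0"] * 4   # ~32 GB/s for a PCIe 6.0 x4 link

print(f"~{pcie6_x4 / tb_today:.0f}x the PCIe payload of today's Thunderbolt cables")
```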



    • #22
      Originally posted by mlau View Post
      why? usually you don't need active components on a mb for pcie at all.
      Even PCIe 4.0 motherboards generally need retimers.

      Originally posted by mlau View Post
      the root complex is in the cpu and the devices each have their own interface (which is usually a standard ip core from the usual providers); all the MB provides is the copper traces and connectors.
      The PAM4 encoding is going to add cost & complexity in the PHYs, from what I've read.

      The main cost is probably going to come from the additional layers & more exotic materials needed to provide motherboards with sufficiently good signal quality.



      • #23
        Originally posted by tildearrow View Post

        It's a luxury hobby already considering the onerous prices of graphics cards.
        I obviously forgot the poor guys who had to build a PC during the last two years.

        After a decade of underinvestment, I am looking forward to all the new fabs coming online over the next few years. Sooner or later that will bring supply back in order.



        • #24
          Originally posted by coder View Post
          To be pedantic, PCIe 6.0 x1 would only be equivalent to PCIe 3.0 x8. So, the only cards with a x1 link would be extremely low-end cards. I guess we've already seen x4 graphics cards, with the new Radeon RX 6500 XT @ PCIe 4.0, but remember that when PCIe 6.0 hits, even something in that product tier is going to need more bandwidth than PCIe 4.0 x4.

          So, it seems likely that x4 is going to remain the minimum width for consumer GPUs. I don't entertain the possibility of PCIe 5.0 x2, since I suspect PCIe 4.0 x4 is probably cheaper to implement.
          https://www.amazon.com/ZOTAC-GeForce...dp/B01E9Z2D60/ Yes, we have extremely low-end cards today with a PCIe 3.0 x1 link, and a PCIe 6.0 x1 card would be a far more functional entry-level card.

          PCIe 4.0 x4 is still only PCIe 3.0 x8 in speed. PCIe 5.0 makes that x4 equal to PCIe 3.0 x16 in speed, and PCIe 6.0 makes it equal to PCIe 3.0 x32.
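
          A quick sanity check of those equivalences, assuming only that per-lane bandwidth doubles each generation:

```python
# Normalise a "generation G, N lanes" link to its PCIe 3.0 lane equivalent.
def pcie3_equivalent_lanes(gen: str, lanes: int) -> int:
    """Number of PCIe 3.0 lanes with the same bandwidth as `lanes` of generation `gen`."""
    doublings = {"3.0": 0, "4.0": 1, "5.0": 2, "6.0": 3}[gen]
    return lanes * 2 ** doublings

assert pcie3_equivalent_lanes("4.0", 4) == 8    # PCIe 4.0 x4 ~ PCIe 3.0 x8
assert pcie3_equivalent_lanes("5.0", 4) == 16   # PCIe 5.0 x4 ~ PCIe 3.0 x16
assert pcie3_equivalent_lanes("6.0", 4) == 32   # PCIe 6.0 x4 ~ PCIe 3.0 x32
assert pcie3_equivalent_lanes("6.0", 1) == 8    # PCIe 6.0 x1 ~ PCIe 3.0 x8
```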

          Something to consider: bare-bones low-end cards are going to remain PCIe x1, as they have been since PCIe started, but with the extra bandwidth (equivalent to PCIe 3.0 x8) they will be far more functional, fast enough to run older titles and GPU-light titles decently well, so more people might actually use low-end cards. The reality today is that in most cases a low-end x1 PCIe card cannot outperform an AMD APU, but with the higher bandwidth that could change. The general entry-level consumer card at PCIe 6.0 could be x1 or x2; time will tell.

          For high-end GPUs it will depend on how quickly PCIe 6.0 arrives. I don't expect we will be talking about anything wider than x4, and a card with a x4 link is not going to be a single ATX slot in width.

          On an ATX motherboard you have 7 slots total.

          Remember that you have either 28 or 24 lanes direct from the CPU to play with these days. AM5 is supposed to have 28: subtract x4 for the chipset so it can drive high-speed USB ports and the like, leaving 24, then subtract x4 for a direct M.2 slot, leaving 20 to place. By the time PCIe 6.0 arrives we could be on AM6, so possibly more.
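
          A quick tally of that lane budget (the 28-lane, x4-chipset, x4-M.2 split is the post's assumption, not a published spec):

```python
# Hypothetical AM5-style lane budget as described above.
cpu_lanes      = 28   # lanes direct from the CPU
chipset_uplink = 4    # x4 down to the chipset for USB, SATA, etc.
direct_m2      = 4    # x4 for one CPU-attached M.2 slot

slots_budget = cpu_lanes - chipset_uplink - direct_m2
print(f"Lanes left for expansion slots: {slots_budget}")   # -> 20
```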

          Starting at the CPU and working down, the slots as I would expect them at PCIe 6.0 speeds:
          1) x4
          2) x2, but this is a shared x2 taken from slot one's lanes, only usable if you put a single-slot GPU in slot one, so still only 4 lanes consumed.
          After that you can do four x4 slots, giving a 6-slot motherboard that still lets someone put a double-width card in the bottom slot. You might add another GPU location as an x4 plus shared x2 combination, which would take you up to 7 slots.

          So at this point you are looking at quite a few motherboard combinations, and I have not used any x1 yet. Let's say that with PCIe 6.0 the only single-slot GPUs are x1 and the two-slot ones are only x2, with x4 GPUs being the modern-day triple-slot monsters. If that is the case you can end up with a slot pattern like the following.

          1) x4
          2) x1
          3) x2 (yes, you have still only used 4 PCIe lanes at this point, with these slots sharing one x4 group as above)

          If that is the case, with 20 PCIe lanes to place, the remaining four slots can all be x4. You could make one of those x2 instead, to feed an x2 M.2 slot; a rough tally follows below.
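
          A rough tally of that second layout against the same 20-lane budget (the grouping below is my reading of the post, purely illustrative):

```python
# First three slots are counted as one shared x4 lane group; the remaining 16
# lanes cover either four more x4 slots, or three x4 plus an x2 slot and an x2 M.2.
budget      = 20
first_group = 4                  # x4 / x1 / x2 slots sharing one x4 lane group
remaining   = budget - first_group

option_a = [4, 4, 4, 4]          # four more x4 slots
option_b = [4, 4, 4, 2, 2]       # three x4 slots, one x2 slot, one x2 M.2

assert sum(option_a) == remaining == 16
assert sum(option_b) == remaining
```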

          I do suspect that with PCIe 6.0 we will see more entry-level motherboards with all PCIe slots wired directly to the CPU socket, and those will be the motherboards without a chipset fan.

          Your high-end AMD Threadripper and EPYC CPUs used in pro desktops have 128 lanes direct from the CPU, and those boards already normally wire the slots directly without going through the chipset. 128 lanes of PCIe 6.0 is scary: 128 lanes is enough to directly populate eight x16 slots, and a standard ATX motherboard only has 7 slots, so x4 for the chipset and three x4 M.2 slots sounds like a good way to use up the spare x16.

          With PCIe 6.0 I suspect we might see the end of the motherboard chipset acting as a PCIe bridge; that PCIe bridge is the biggest heat generator in the chipset. At PCIe 6.0 speeds you will be able to put together a decent offering with a limited number of PCIe lanes. High-end CPUs with their 128 lanes of PCIe are already overkill for most desktop motherboards at PCIe 3.0 speeds; at PCIe 6.0 they will be even more overkill.

          Even if the entry-level CPUs of the PCIe 6.0 era only have AM5-level lane counts, at PCIe 6.0 speeds it looks like you will be able to build a decent motherboard with nothing but CPU-direct PCIe. Of course, if entry-level CPUs gain more direct lanes, this gets even simpler.



          • #25
            Originally posted by mlau View Post

            why? usually you don't need active components on a mb for pcie at all. the root complex is in the cpu and the devices each have their own interface (which is usually a standard ip core from the usual providers); all the MB provides is the copper traces and connectors.
            Which is why I said PC, not motherboards specifically. When you start using PAM, things get more complicated and more expensive.



            • #26
              Originally posted by oiaohm View Post
              https://www.amazon.com/ZOTAC-GeForce...dp/B01E9Z2D60/ Yes we have extremely low end cards today that are PCIe 3.0 x1 link. Yes a PCIe 6.0 x1 card would be able to be way more functional entry level card.
              Except note how it's a PCIe 3.0 card. A product like that isn't designed to hit any kind of sweet spot between price and performance. It's a x1 card for people who only have an x1 slot available. Sure, they might replace it with a PCIe 4.0 version, someday.

              But, we're really not talking about such specialty products. We're talking about options for providing the best value to consumers. And I don't honestly know if it'll ever be cost-effective to make a true bottom-tier consumer card with PCIe 6.0.



              • #27
                Originally posted by coder View Post
                Except note how it's a PCIe 3.0 card. A product like that isn't designed to hit any kind of sweet spot between price and performance. It's a x1 card for people who only have an x1 slot available. Sure, they might replace it with a PCIe 4.0 version, someday.

                But, we're really not talking about such specialty products. We're talking about options for providing the best value to consumers. And I don't honestly know if it'll ever be cost-effective to make a true bottom-tier consumer card with PCIe 6.0.
                You need to look closer.

                The GT 710 chip on that card is only PCIe 2.0 x8, and they added a bridge chip, so from the PCIe 3.0 x1 slot link it effectively gets about PCIe 2.0 x2 worth of bandwidth. So by PCIe 5.0, a x1 slot with a bridge chip will do everything the card can do.

                The bottom-tier PCIe x1 cards at the moment are a horrible abuse of legacy GPUs, because they are not given enough bandwidth.

                The Radeon RX 6400 that is just being released is a PCIe 4.0 x4 card. Now, if groups like Zotac can get their mitts on that when PCIe 6.0 releases, we will see PCIe x1 slot versions of it.

                At x1 I don't think we will see new GPU designs by the time of PCIe 6.0 yet; at x2 we may be seeing new designs by the time of PCIe 6.0.

                How the PCIe x1 cards of today are made is the way it is going to stay into the future: take an earlier-generation GPU and make it work over x1. With PCIe 6.0 there is enough bandwidth at x1 that some of today's generation of GPUs will work perfectly well. A x4 PCIe 4.0 card on a x1 PCIe 5.0 link is not as bad as the current Zotac approach of shoving x8 PCIe 2.0 into x1 PCIe 3.0.

                Yes, you are right, coder, the current PCIe x1 GPU products have not been designed for a sweet spot, only to fit into x1 slots. But the uplift in PCIe performance, together with the arrival of graphics cards that need fewer lanes, is going to cause a crossover at some point.

                Basically, a x1 GPU in the PCIe 6.0 era would, I suspect, be a current PCIe 4.0 x4 card with a bridge, or a PCIe 5.0 x2 card with a bridge. Not the absolute best, but not absolutely horrible either, because they would be x1 GPUs with the correct amount of bandwidth.
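
                Rough numbers behind this post's argument, using approximate per-lane rates (about 0.5 GB/s for PCIe 2.0, doubling each generation after that):

```python
GB_PER_LANE = {"2.0": 0.5, "3.0": 1.0, "4.0": 2.0, "5.0": 4.0, "6.0": 8.0}

def bw(gen: str, lanes: int) -> float:
    """Approximate usable link bandwidth in GB/s."""
    return GB_PER_LANE[gen] * lanes

# Today's Zotac-style card: a PCIe 2.0 x8 GPU squeezed into a PCIe 3.0 x1 slot link.
print(bw("2.0", 8) / bw("3.0", 1))   # ~4.0, i.e. starved by roughly 4x

# An RX 6400-style PCIe 4.0 x4 card behind a bridge on a PCIe 5.0 x1 link.
print(bw("4.0", 4) / bw("5.0", 1))   # ~2.0, only a 2x squeeze

# The same PCIe 4.0 x4 card on a PCIe 6.0 x1 link: no bottleneck at all.
print(bw("4.0", 4) / bw("6.0", 1))   # ~1.0, bandwidth matched
```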



                • #28
                  Originally posted by coder View Post
                  Not only that, it uses the same edge connector and remains backward-compatible with legacy cards and motherboards.
                  It would of course still be over copper if that were true. I'm talking about the possibility of alternative interconnect technology implementing the standard, e.g. to compose many more PCIe devices in a rack into a single system with considerably less power consumption.

                  Off-chip transmission is a significant portion of modern chips' power budgets. It's one of the reasons Intel is making switch chips with the optical fibers routed directly to the main switch chip itself. https://newsroom.intel.com/news/inte...hernet-switch/



                  • #29
                    Originally posted by kiffmet View Post
                    Developer12 The transceivers in high speed optical connections do still get pretty toasty.
                    Not nearly as toasty as those for copper cables. It's why there's a strict low limit to the number of multi-gigabit copper cable adapters you can insert into an SFP+ switch, while you can generally insert as many multi-gig optical transceivers as you like and fill all the slots.

                    This has generally been the trade-off from what I've seen while looking at upgrading to 10Gb networking. You either buy cheap copper cables and expensive/limited switches and NICs (I'm talking something like 5 ports on an expensive 1U switch; it looks absurd), or you buy expensive fiber cables and cheaper/more capable SFP+ equipment.



                    • #30
                      The obvious use case is NVMe cards.

                      Now imagine that NVMe drives are at most PCIe x4, but you can run them as x1. Now imagine a PCIe 6.0 RAID card for NVMe drives. How many NVMe drives could you fit in a 1U storage chassis designed around that? What about 4U? The NVMe drives don't have to run at PCIe 6.0, just the motherboard and the RAID card. So imagine these in RAID 10 or RAID 60. Storage density should approach that of 3.5" drives once you factor in the size difference, but be much, much more performant.
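
                      Some back-of-the-envelope math on that idea, assuming a hypothetical switch/RAID card with a PCIe 6.0 x16 uplink and each NVMe drive run on a single PCIe 4.0 downstream lane (per-lane figures approximate):

```python
GB_PER_LANE = {"4.0": 2.0, "6.0": 8.0}   # approx. usable GB/s per lane

uplink    = GB_PER_LANE["6.0"] * 16      # ~128 GB/s from the card back to the host
per_drive = GB_PER_LANE["4.0"] * 1       # ~2 GB/s per x1-connected drive

drives = int(uplink // per_drive)
print(f"~{drives} PCIe 4.0 x1 drives before the uplink is oversubscribed")   # -> 64
```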

