PCI Express 6.0 Announced For Release In 2021 With 64 GT/s Transfer Rates


  • #11
    Originally posted by stormcrow View Post

    I must have missed the part that mentioned clock frequency. Right now, unless you have access to the specification which is members only, all I've seen is a bunch of best-case numbers thrown out by PCI-SIG's marketing department.

    Also, there's not a linear relationship between performance and clock speed in most applications. Just because PCI-SIG advertises doubling the throughput doesn't mean they've gotten there by doubling the clock frequency on the entire bus.

    But you're right, doubling a line frequency can have a profound impact on the underlying materials, E&M behavior, and thermal characteristics of an electrical system. It also doesn't follow that the OP's estimates are correct, because when you change the electrical properties of a circuit, there's no guarantee the materials and layouts of today's boards will carry over to future ones. There's a fundamental difference between the materials in a top-of-the-line server board and an enthusiast or OEM board. The high-quality board is likely to use all, or nearly all, gold-plated circuitry and will generally perform better electrically. As you go down in price the gold is replaced with copper wherever electrically and thermally feasible, and sometimes even when it's not. You can guess which board is more likely to come close to the theoretical maximums in the PCI-SIG specs, all else being equal. Past certain thresholds, one set of materials may no longer deliver the required electrical and thermal properties, mandating other, more expensive materials due to thermal or E&M issues, and resulting in different price characteristics.
    I'm talking specifically about going from PCIe 3.0/3.1 to PCIe 4.0, which did involve a change in frequency (and thus further demands on physical design). As others have pointed out, PCIe 6.0 is a new direction.

    Comment


    • #12
      Originally posted by Drizzt321 View Post
      This isn't doubling of the frequency, it's doubling of the bandwidth. Anandtech has a good article on this https://www.anandtech.com/show/14559...o-land-in-2021.
      The statement I'm referring to is about going from PCIe 3.1 to PCIe 4.0, which did involve an increase in frequency. I was refuting the statement that a dramatic change in physical design was not required to enable PCIe 4.0, when in fact one was, due in part to frequency increases.

      Comment


      • #13
        Originally posted by betam4x View Post
        I think they should probably hit the pause button on doubling the bandwidth every generation. PCIE 4.0 currently already requires multiple layers to implement properly, making motherboards far more expensive than in the past. I can only imagine what will happen with 5.0 and 6.0.
        Can't find my post for some reason, but there are actually a few efforts coming out to quicken the pace of these increases, such as Gen-Z or CAPI. The hardware is there, from the looks of it.

        Comment


        • #14
          The PCIe bandwidth boosts are great. But isn't there still a bottleneck on the CPU side for devices that aren't connected directly to CPU lanes, going through DMI or something instead (at least for Intel; AMD has its own equivalent, I think)? Last I knew it was stuck at x4 lanes.
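For a rough sense of that chipset bottleneck, here's a back-of-envelope sketch; it assumes Intel's DMI 3.0 is electrically equivalent to a PCIe 3.0 x4 link, which is the commonly cited figure:

```python
# Back-of-envelope check of the chipset (DMI) bottleneck, assuming Intel's
# DMI 3.0 is electrically equivalent to a PCIe 3.0 x4 link.

def pcie_bandwidth_gbps(gt_per_s: float, lanes: int, encoding_eff: float) -> float:
    """Usable one-direction bandwidth of a PCIe link, in GB/s."""
    return gt_per_s * lanes * encoding_eff / 8  # 8 bits per byte

# PCIe 3.0 uses 128b/130b encoding
dmi = pcie_bandwidth_gbps(8, 4, 128 / 130)
print(f"DMI 3.0 (~PCIe 3.0 x4): {dmi:.2f} GB/s")  # ~3.94 GB/s

# A single PCIe 3.0 x4 NVMe SSD can saturate that shared link on its own,
# before Ethernet, USB, or SATA traffic behind the chipset is even counted.
```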

          Originally posted by chroma View Post
          I suppose it's a matter of perspective. I'll be happy to see any of these hit market, but with three announced, naturally I want the fastest of the three, so I'd generally wait for it to hit the market; but PCIe 6.0 is not going to be available any time soon.
          Judging by the current time to market that I mentioned in the 5.0 article here:
          https://www.phoronix.com/forums/foru...17#post1102917

          I'd say holding out for 6.0, which isn't finalized until 2021, won't get you any devices until 2023 at the earliest. In the two years since 4.0 was finalized, there's only a small handful of 4.0 devices available, and motherboard support (which I think also requires CPU support?) is only just now appearing. So push 6.0 off to being a realistic option in 2024-2025, and by then you'd probably have new versions of PCIe in the works/announced too, so the wait train never ends :P

          4.0 has been a long time coming, though, and is very welcome. We just need newer versions of Thunderbolt and USB that raise their bandwidth limits, and motherboard vendors to ship products with those chipsets, especially in laptops, where external I/O like eGPU can be a real boon. Though I assume we'll start with premium/enthusiast boards getting the support before it goes mainstream and later reaches laptops?

          Comment


          • #15
            And Thunderbolt uses short, heavily shielded cables with active signal repeaters. These things are not easy to build.

            25 feet is ridiculous, and even if it managed to connect, I bet the error rate was insanely high.

            And all of you have seen the limits on 100 gigabit Ethernet cables right? Half a meter. That is all. If you want more go optical.

            PCIe 5 and 6 will be lucky if they can work past the first slot.

            Comment


            • #16
              Originally posted by cb88 View Post
              That's mostly bunk mobo marketing phooey... PCIe 3.1 can run across a 25ft cable with virtually no performance loss. It is likely that the same is true for PCIe 4.0... if anything they are just tightening up the margins on vendors so they can get the performance out of the silicon that is already there.
              Majored much in signal theory, did you?
              PCIe 3.0 runs 8 GT/s on each diff pair. The Nyquist frequency is 4 GHz and the Nyquist rate is obviously 16 gigasamples/s. You usually use 8 or 16 GHz SerDes transceivers.
              PCIe 4.0 will run 16 GT/s on each pair, doubling the frequency, probably with 16 or 28 GHz SerDes transceivers. That has a PROFOUND effect on signal integrity; you usually need signal repeaters beyond roughly 10 inches of trace.
              PCIe 5.0 will run 32 GT/s on each pair, doubling the frequency again, with 28 GHz+ SerDes transceivers. It still uses NRZ encoding. This is stupidly difficult, with a reach you count in INCHES, not feet.
              PCIe 6.0 will probably not increase the frequency but will encode using PAM4 to double the data rate. Most likely it will use some FEC, since PAM4 has even worse transmission length than NRZ (PAM2) for obvious reasons.
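The per-lane arithmetic above can be sketched quickly. The line rates are from the generations discussed; the NRZ-vs-PAM4 bits-per-symbol split is the standard definition, which is why PCIe 6.0 doubles the data rate at roughly PCIe 5.0's symbol rate:

```python
# NRZ carries 1 bit per symbol, PAM4 carries 2, so PCIe 6.0 doubles the
# data rate at roughly PCIe 5.0's symbol rate (similar Nyquist frequency).

gens = [
    # (generation, line rate in GT/s, bits per symbol)
    ("3.0", 8, 1),   # NRZ
    ("4.0", 16, 1),  # NRZ
    ("5.0", 32, 1),  # NRZ
    ("6.0", 64, 2),  # PAM4
]

for name, gts, bits in gens:
    symbol_rate = gts / bits   # gigabaud on the wire
    nyquist = symbol_rate / 2  # GHz fundamental of an alternating pattern
    print(f"PCIe {name}: {gts} GT/s -> {symbol_rate:.0f} GBd, Nyquist ~{nyquist:.0f} GHz")
```

Note that 5.0 and 6.0 land on the same ~16 GHz Nyquist frequency, which is the whole point of moving to PAM4.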

              Frequency increases just don't behave the way you'd like them to. And I'd like to see that "25ft cable with virtually no performance loss". If it's even remotely true, I HIGHLY doubt they are using the same electrical transmission encoding or cabling as stock PCIe 3.1. Some garage stunt pulling a long cable and claiming a distance record is about as meaningful as claiming every CPU will hit the world-record max overclock by fiat.

              But sure. Go ahead with your 25ft electrical cable for PCIe 4.0+. I'll bring the popcorn.

              Comment


              • #17
                Originally posted by polarathene View Post
                The PCIe bandwidth boosts are great. But isn't there still a bottleneck on the CPU side for devices that aren't connected directly to CPU lanes, going through DMI or something instead (at least for Intel; AMD has its own equivalent, I think)? Last I knew it was stuck at x4 lanes.
                Nope. The biggest bottlenecks are the external interfaces. Internal interfaces can easily be fixed from one generation to the next. External ones are standards and are far harder to "fix".
                Pushing ultra-high-rate A/D data straight into a CPU, as GHz-class software-defined radio does, places the biggest strain on the external links. They are just not fast enough; even PCIe 5.0 is "slow". That's why the industry is pushing hard to increase standardized external interface bandwidth.
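To put rough numbers on that claim, here's an illustrative sketch. The 2 GS/s sample rate and 16-bit I/Q format are assumptions chosen for the example, not figures from the post or any particular device:

```python
# Illustrative only: how fast a GHz-class software-defined radio front end
# fills a host link. Sample rate and bit depth are assumed for this example.

sample_rate_gsps = 2.0  # ADC sample rate, gigasamples/s (assumed)
bits_per_sample = 16    # bits per sample (assumed)
channels = 2            # I and Q

raw_stream_GBps = sample_rate_gsps * bits_per_sample * channels / 8
print(f"Raw I/Q stream: {raw_stream_GBps:.1f} GB/s")  # 8.0 GB/s

# One direction of a PCIe 3.0 x16 slot (128b/130b encoding):
pcie3_x16_GBps = 8 * 16 * (128 / 130) / 8
print(f"PCIe 3.0 x16:   {pcie3_x16_GBps:.2f} GB/s")  # ~15.75 GB/s

# A single 2 GS/s radio already consumes half the slot; multi-channel
# arrays blow straight past it, hence the push for faster external links.
```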

                Comment


                • #18
                  Originally posted by computerquip View Post

                  Can't find my post for some reason, but there are actually a few efforts coming out to quicken the pace of these increases, such as Gen-Z or CAPI. The hardware is there, from the looks of it.
                  CAPI and PCIe usually share the same SerDes on a SoC. The biggest limitation is not the protocol but rather the physical transmission capability. CAPI 2.0 / NVLink 2.0 have some intrinsic advantages over PCIe 4.0 but share about the same maximum transmission numbers per lane.

                  Comment


                  • #19
                    Originally posted by betam4x View Post
                    I think they should probably hit the pause button on doubling the bandwidth every generation. PCIE 4.0 currently already requires multiple layers to implement properly, making motherboards far more expensive than in the past. I can only imagine what will happen with 5.0 and 6.0.
                    It seems like PCI-SIG's goal is to make something like an open-standard Infinity Fabric. I, for one, can't wait for the ability to stick extra CPUs into my PCI express slots at will. The increased bandwidths of newer PCIe generations will start to make things like this practical.

                    Not to mention the possibility of discrete GPUs using your main system memory with actually reasonable performance, like what Intel tried to do in the 90s.

                    Comment


                    • #20
                      Originally posted by chroma View Post
                      How can they design PCIe 6.0 to be backwards compatible with PCIe 5.0 and 4.0 when few vendors have even implemented PCIe 4.0 yet? Who even wants to bother implementing PCIe 4.0 now, knowing it's already obsolete twice over?
                      Lol, is this a joke/sarcasm or are you seriously asking this?

                      Comment
