PCI Express 6.0 Announced For Release In 2021 With 64 GT/s Transfer Rates


  • #21
    Originally posted by Zan Lynx View Post
    And Thunderbolt uses short, heavily shielded cables with active signal repeaters. These things are not easy to build.

    25 feet is ridiculous and even if it managed to connect, I bet the error rate was insanely high.

    And all of you have seen the limits on 100 gigabit Ethernet cables right? Half a meter. That is all. If you want more go optical.

    PCIe 5 and 6 will be lucky if they can work past the first slot.
    https://www.mellanox.com/blog/2016/1...-linkx-cables/
    Your DAC figures are off. It's 5 meters for 100 gigabit Ethernet DAC and 3 meters for 200 gigabit Ethernet. So even allowing for the run on the motherboard, you should have a foot or two off the motherboard to play with.

    Not all thunderbolt cables are electrical.
    https://www.youtube.com/watch?v=0BiIllitCh4
    Some are AOC, and AOC Thunderbolt cables run to 60 meters (~200 feet).

    Originally posted by milkylainen View Post

    Major much in signal theory did you?
    PCIe 3.0 runs 8GT on each diff pair. Nyquist frequency is 4GHz and Nyquist rate is obviously 16 gigasamples. You usually use 8 or 16GHz SerDes transceivers.
    PCIe 4.0 will run 16GT on each pair. It will double the frequency, probably using 16 or 28GHz SerDes transceivers. That has a PROFOUND EFFECT on signal integrity. You usually need signal repeaters beyond 10 inches.
    PCIe 5.0 will run 32GT on each pair. It will also double the frequency. 28GHz+ SerDes transceivers. It uses NRZ encoding. This is stupid difficult and has a reach that you count in INCHES, not feet.
    PCIe 6.0 will probably not increase the frequency but will encode using PAM4 to double the data transfer. Most likely it will use some FEC encoding, as PAM4 has even worse transmission length than NRZ (PAM2) for obvious reasons.

    Frequency increases just don't behave as you'd like them to. And I'd like to see the "25ft cable with virtually no performance loss". If it is even remotely true, I HIGHLY doubt they are using the same electrical physical transmission encoding or cabling as PCIe 3.1. That some garage stunt can pull a long cable and claim some distance is about as useful as claiming that all CPUs will hit the same max overclock record by fiat.

    But sure. Go ahead with your 25ft electrical cable for PCIe 4.0+. I'll bring the popcorn.
    I would not say a 25ft cable is impossible. I do wonder if the person can tell the difference between a DAC cable and an AOC cable. DAC at 25ft is highly unlikely at PCIe 4.0 and up; heck, even at PCIe 3.0 it's trouble. PCIe 3.0 over an AOC cable at 25 feet, heck even 60 meters/200 feet, works perfectly.

    AOC Thunderbolt cables have zero performance loss out to 60 meters; at 60 meters you start running into a power delivery problem. Do note AOC is optical inside, even though it has what look like normal copper plugs on both ends. You get AOC cables for long-run USB, Thunderbolt, server-to-server in-rack direct motherboard connects... There are no repeaters in these cables; the conversion to optical happens at both ends.

    Yes, AOC USB cables have been used in some of the Chinese PCIe 3.0 breakout cables that give you 25 feet/7 meters. This is why I wonder if the person cannot tell the difference between an AOC cable and a DAC cable. DAC cables have lots of interference problems; AOC cables are a lot longer and very light on interference, and some forms of PCIe 3.0 breakout cables are in fact AOC. I would expect to see AOC cables appear for PCIe 4.0 to 6.0 as well. 60-meter runs in AOC cables are not that odd.

    I do wonder how they are going to do PCIe 7.0. Hopefully not an AOC run around the motherboard, because that would be an expensive pain in the ass to make.
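    As an aside, the line-rate arithmetic from the quoted post is easy to sanity-check. A quick back-of-envelope sketch (the generation/rate table comes from the quote; the helper function is just illustrative):

```python
# Back-of-envelope: symbol rate and Nyquist frequency per PCIe generation.
# NRZ (PAM2) carries 1 bit per symbol; PAM4 carries 2 bits per symbol.

def nyquist_ghz(line_rate_gt: float, bits_per_symbol: int) -> float:
    """Nyquist frequency = half the symbol (baud) rate."""
    symbol_rate_gbaud = line_rate_gt / bits_per_symbol
    return symbol_rate_gbaud / 2

generations = {
    "PCIe 3.0 (NRZ)": (8, 1),
    "PCIe 4.0 (NRZ)": (16, 1),
    "PCIe 5.0 (NRZ)": (32, 1),
    "PCIe 6.0 (PAM4)": (64, 2),
}

for name, (rate, bps) in generations.items():
    print(f"{name}: {rate} GT/s -> {nyquist_ghz(rate, bps):.0f} GHz Nyquist")

# PAM4 is the trick: PCIe 6.0 doubles the bit rate while keeping the same
# 16 GHz Nyquist frequency as PCIe 5.0's NRZ signaling.
```

    Which is exactly why 6.0 leans on PAM4 plus FEC instead of yet another frequency doubling.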



    • #22
      Originally posted by milkylainen View Post
      Even PCIe 5.0 is "slow". That's why the industry is in a hard push towards increasing standardized external interface bandwidth.
      Umm... I think you misunderstood me? Read about DMI 3.0 here:
      https://en.wikipedia.org/wiki/Direct_Media_Interface

      Anything that goes through that (multiple devices or just a single one) will be capped at 4GB/s / 32GT/s (8GT/s per lane). PCIe 5 is 32GT/s per lane and 6 is 64GT/s per lane, so they're plenty fast in that regard; the obvious bandwidth bottleneck is when a device does not have direct lanes to the CPU, which is what I mentioned.

      Typically some lanes/slots for PCIe devices will have direct access to the CPU; these are often used for GPU(s). The remaining devices and any chipsets on the motherboard can have high I/O, but any communication that needs to go through DMI is bottlenecked by it: all those devices share that tiny x4 (or x2) link's bandwidth.
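      To put rough numbers on that bottleneck, here's a sketch. The 128b/130b coding factor applies to PCIe 3.0+ rates; real throughput is lower still once protocol overhead is counted:

```python
# Effective payload bandwidth of a multi-lane PCIe-style link, in GB/s.
def link_gbps(lanes: int, rate_gt: float, coding: float = 128 / 130) -> float:
    """lanes x GT/s, minus 128b/130b line-coding overhead, bits -> bytes."""
    return lanes * rate_gt * coding / 8

dmi3 = link_gbps(4, 8)          # DMI 3.0: x4 link at PCIe 3.0 rates
x570_uplink = link_gbps(4, 16)  # X570 uplink: x4 link at PCIe 4.0 rates

print(f"DMI 3.0: ~{dmi3:.2f} GB/s")             # ~3.94 GB/s, the "4GB/s" cap
print(f"X570 uplink: ~{x570_uplink:.2f} GB/s")  # ~7.88 GB/s
```

      Everything hanging off the chipset shares whichever of those figures applies.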

      If it helps here's two random image results that convey the distinction visually:

      https://images.anandtech.com/doci/94...20Platform.jpg
      https://images.anandtech.com/doci/11...70_chipset.png

      It seems that AMD's newer chipset x570 takes a big leap here(I think this is the equivalent to DMI bottleneck for I/O):
      https://en.wikipedia.org/wiki/List_o...s#AM4_Chipsets

      That's important: AFAIK, controllers like Thunderbolt that you'd get on a laptop or motherboard without any extension PCIe cards are going to be limited by that. So currently, AFAIK, Intel is bottlenecked at 32GT/s while AMD's X570 has a much nicer 256GT/s ceiling.

      ---

      EDIT: Well I feel stupid now haha. Turns out while looking for the DMI equivalent on AMD's end, I misinterpreted the prior chipset data from that table to be the link bandwidth to the CPU. I've since been informed that it's still bottlenecked at x4 lanes, just upgraded PCIe 4.0, thus the ceiling is 64GT/s (8GB/sec)... not great, not terrible.

      For anyone else interested, here's a useful diagram that makes it clearer how I/O bandwidth flows and connects in X570:

      https://www.gamersnexus.net/images/m...ck-diagram.jpg
      Last edited by polarathene; 19 June 2019, 07:11 AM. Reason: I'm an idiot :)



      • #23
        Originally posted by chroma View Post
        How can they design PCIe 6.0 to be backwards compatible with PCIe 5.0 and 4.0 when few vendors have even implemented PCIe 4.0 yet? Who even wants to bother implementing PCIe 4.0 now, knowing it's already obsolete twice over?
        Specs for both 4 and 5 have been finalized; they don't change now. A vendor just has to implement them following the specs. I don't know the specifics of implementing the hardware for such, but AFAIK we still support USB 1 and PCIe 1 backwards compatibility? It'd just be approached the same way; new boards aren't dropping that support, are they?

        Devices for 4 and 5 will be released before a board with 6 is available (which you probably don't want to design hardware for atm, since you'd need to wait for the actual spec to finalize; at best you can get started with 5 today). I don't know the time it takes to bring a new product design to market, but I imagine there's a fair amount in the works for 4.0 already on the way. Plus, sometimes for marketing purposes you can release a 5.0 edition at a later point (when there's actually support for it to justify sales).



        • #24
          Originally posted by microcode View Post

          The statement I'm referring to is about going from PCIe 3.1 to PCIe 4.0, which did involve an increase in frequency. I was refuting the statement that a dramatic change in physical design was not required to enable PCIe 4.0, when in fact one was, due in part to frequency increases.

          Ah. Given we were discussing PCIe 6.0, I thought you were referring to 5.0->6.0. I'm much less familiar with the other jumps, but yeah, increasing frequency, especially 2x on something already so high frequency, is NOT a trivial design change. Which is about all I know about it, lol, which is why lots of much smarter people are working on it.



          • #25
            Originally posted by chroma View Post
            How can they design PCIe 6.0 to be backwards compatible with PCIe 5.0 and 4.0 when few vendors have even implemented PCIe 4.0 yet? Who even wants to bother implementing PCIe 4.0 now, knowing it's already obsolete twice over?
            From what bit I know, most backwards-compatible architectures usually start at the lowest common denominator and negotiate upwards to the highest version that both the device and the host (possibly including all devices on the bus; varies by implementation) support. That's generally the safest approach and keeps things from damaging each other until all parties know what voltage and signaling they support.
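            A toy model of that negotiate-upward behavior (names and steps are purely illustrative, not the actual PCIe link-training state machine):

```python
def negotiate_speed(host_max_gen: int, device_max_gen: int) -> int:
    """Both ends train at Gen 1 first, then step up to the highest
    generation they both support."""
    gen = 1  # link training always begins at the lowest common rate
    target = min(host_max_gen, device_max_gen)
    while gen < target:
        gen += 1  # each step models a successful speed change
    return gen

print(negotiate_speed(6, 4))  # a Gen 6 host with a Gen 4 card links at Gen 4
print(negotiate_speed(3, 5))  # and an old host caps a new device at Gen 3
```

            So a PCIe 6.0 slot never has to guess: it only runs 6.0 signaling once both ends have agreed they can.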



            • #26
              Originally posted by milkylainen View Post

              Major much in signal theory did you?
              No, my signals teacher was shit. That said... everyone said *exactly* the same thing about PCIe 3.1 and before, and denied it would work over more than a few feet... but there you go: a properly shielded cable and you get tens of feet. I would fully agree with you if PCIe were still a parallel bus like PCI was... but since it is designed much more like a serial bus, the sending and receiving ends can be designed to be *much* more tolerant of skew, so as long as noise is kept low and not too much voltage drop occurs, the signals don't care how long the cable is. I won't be surprised if PCIe 4.x and greater doesn't make it as far... but the overengineering done for PCIe allows for quite some margin in what actually works when pushing the limits.
              Last edited by cb88; 19 June 2019, 10:20 AM.



              • #27
                Originally posted by computerquip View Post

                Can't find my post for some reason, but there are actually a few efforts to quicken the pace of these increases, such as Gen-Z or CAPI. The hardware is there from what it looks like.
                Guys, have a look over here in this White Paper of the Gen-Z consortium: https://genzconsortium.org/wp-conten...dsPECFF_WP.pdf

                That advertises a more cost-effective solution than anything the PCI-SIG seems to offer. There is one major disadvantage though: you need a new motherboard and add-in cards supporting this new connector.



                • #28
                  Originally posted by oiaohm View Post
                  Not all thunderbolt cables are electrical.
                  Not much of a reader are you? I specifically said electrical cable. Several times.
                  Optical cabling is very expensive and is of little use inside computers. 99.9 (a lot of nines) percent of PCIe installations are electrical.

                  Implying equality between signal integrity on electrical or optical cabling is just being a *****.
                  This is electrical signalling. The article is about the electrical PCIe spec. Not optical. It has nothing to do with optical specs.
                  OCuLink is another story and is currently stuck at PCIe 3.0 speeds.
                  Running PCIe over your own optical protocol is also a possibility, but it still has nothing to do with the electrical interface and its characteristics.

                  And no, you can't transmit PCIe over arbitrary lengths just because "it's optical".
                  A 300km imaginary optical cable would have a minimum 2ms round-trip time. Absolutely eons.
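                  For the curious, that 2ms figure is just the speed-of-light lower bound; real fiber is slower (a sketch, assuming a typical ~0.67 velocity factor for silica fiber):

```python
C_KM_PER_S = 299_792.458  # speed of light in vacuum, km/s

def rtt_ms(distance_km: float, velocity_factor: float = 1.0) -> float:
    """Round-trip time in milliseconds over a link of the given length."""
    one_way_s = distance_km / (C_KM_PER_S * velocity_factor)
    return 2 * one_way_s * 1e3

print(f"{rtt_ms(300):.1f} ms")        # ~2.0 ms: the vacuum lower bound
print(f"{rtt_ms(300, 0.67):.1f} ms")  # ~3.0 ms in actual fiber
```

                  Millisecond latencies are fine for networking, but hopeless for a load/store interconnect like PCIe.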



                  • #29
                    Originally posted by chroma View Post

                    I suppose it's a matter of perspective. I'll be happy to see any of these hit market, but with three announced, naturally I want the fastest of the three so I'd generally wait for it to hit the market, but PCIe 6.0 is not going to be available any time soon. The other thing is that the experience of implementing PCIe 4.0 is not going to inform the standard for PCIe 5.0, nor will PCIe 5.0's real experience inform the standard for PCIe 6.0. This seems like a strange way to run a railroad, but it's still better than what's happened with USB 3.0, 3.1, and 3.2 nomenclature. At least the PCI Consortium is bothering to increment the major rev number so it's moderately less confusing to casual consumers. c__c
                    Yeah, optical anything is probably dead until some third-party design house comes up with an IP block companies can just drop into their designs. So Intel probably won't license any of their optical stuff, etc...

                    I think they are going straight to PCIe 6.0 as they know it's going to take a while to nail down... it will be years before they even get to implementing test silicon.



                    • #30
                      This is great news. I hope this will make the PC market feel less stagnant, and that hardware manufacturers will be able to take advantage of all the bandwidth available to them.

