
PCIe 7.0 Specification v0.5 Published - Full Spec Next Year


  • #11
    Originally posted by pong View Post
    But as the NVIDIA 4090 shows, the GPU is already just about the size of many motherboards, so even if you HAD the slots for more, you're not going to have the space mechanically, or PSU / cable sanity, if you tried.
    That's what riser cables are for, and the sort of people who operate more than one 4090 at a time are happy with that. From a thermal standpoint you don't want all of those GPUs stuck in a hotbox anyway.

    Personally, I'd be happy if motherboards started coming with more M.2 slots. I have 2 in my machine and recently wanted to install a 3rd for increased capacity. I had to buy a PCIe adapter to do it, and those aren't cheap if you need support for multiple drives. In my case it was just a single drive, so the cheap adapter I got from Sabrent worked fine, but I hope future generations of motherboards address this. I really don't need so many SATA ports anymore and would gladly trade some of them for M.2 slots.

    Comment


    • #12
      Originally posted by ahrs View Post

      That's what riser cables are for, and the sort of people who operate more than one 4090 at a time are happy with that. From a thermal standpoint you don't want all of those GPUs stuck in a hotbox anyway.

      Personally, I'd be happy if motherboards started coming with more M.2 slots. I have 2 in my machine and recently wanted to install a 3rd for increased capacity. I had to buy a PCIe adapter to do it, and those aren't cheap if you need support for multiple drives. In my case it was just a single drive, so the cheap adapter I got from Sabrent worked fine, but I hope future generations of motherboards address this. I really don't need so many SATA ports anymore and would gladly trade some of them for M.2 slots.
      [ SATA 3.0 was announced around 2008/2009 and revised up to 3.5 around 2020, but stayed at ~6 Gbps; SATA Express never got popular (it appeared on the market at the same time as M.2), but could have provided ~2 GB/s (~16 Gbps), though it was said to require significantly more power than Serial ATA 3.x ]

      Comment


      • #13
        Originally posted by schmidtbag View Post
        Crazy how 7.0 x1 is faster than 3.0 x16, when you consider 3.0 is still quite good in most cases.
        Not faster, but approximately the same speed. Each generation roughly doubles the per-lane rate, so the version number works like an exponent: 7 - 3 = 4 doublings and log2(16) = 4. Or, to put it another way, 2^7 / 2^3 = 16.
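
        Here's a rough back-of-the-envelope sketch of that doubling rule in Python (treating PCIe 3.0 as roughly 1 GB/s per lane and ignoring encoding / flit overhead, so the numbers are approximate):
        Code:
        # Back-of-the-envelope PCIe throughput: assume ~1 GB/s per lane at gen 3
        # and a doubling per generation (encoding / flit overhead ignored).
        def approx_gb_per_s(gen: int, lanes: int) -> float:
            per_lane = 1.0 * 2 ** (gen - 3)
            return per_lane * lanes

        print(approx_gb_per_s(3, 16))  # ~16 GB/s for PCIe 3.0 x16
        print(approx_gb_per_s(7, 1))   # ~16 GB/s for PCIe 7.0 x1 -- same ballpark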


        Comment


        • #14
          Originally posted by vsteel View Post
          That is why they went to PAM4 encoding. More bits with less frequency, trying to mitigate the high speed signaling issues.
          I think PAM4 only really helps with power (i.e. by comparison with yet another frequency-doubling). However, it's still less noise-tolerant than PCIe 5.0, and that requires more expensive boards (or cables) with better signal integrity.
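
          A small sketch of the arithmetic (nominal transfer rates only; real channels also care about insertion loss, equalization and FEC, so treat this as approximate):
          Code:
          # Symbol rate and on-wire Nyquist frequency implied by each nominal link rate.
          # NRZ carries 1 bit per symbol, PAM4 carries 2 bits per symbol.
          gens = {
              "PCIe 5.0": (32, "NRZ"),    # GT/s, modulation
              "PCIe 6.0": (64, "PAM4"),
              "PCIe 7.0": (128, "PAM4"),
          }
          for name, (gt_s, mod) in gens.items():
              bits_per_symbol = 2 if mod == "PAM4" else 1
              gbaud = gt_s / bits_per_symbol   # symbols per second (GBaud)
              nyquist_ghz = gbaud / 2          # fundamental frequency on the wire
              print(f"{name}: {gt_s} GT/s {mod} -> {gbaud:.0f} GBaud, ~{nyquist_ghz:.0f} GHz Nyquist")
          So 6.0 doubled the bit rate over 5.0 without raising the Nyquist frequency, but 7.0 doubles it again on top of PAM4, which is where the signal-integrity cost comes back in.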

          At some point, optical is going to be the only pragmatic option for further increasing bandwidth. That's where I think PCIe will either end, or perhaps continue to be called PCIe in name only. The reason is that an optical interconnect faces different problems than the ones you have to work around with a high-speed copper link, so I think you'd design the protocol differently. While you could just take the same protocol and run it through an optical transceiver, the smart move would be to rework the protocol to better address the challenges and opportunities of that technology.
          Last edited by coder; 03 April 2024, 03:42 AM.

          Comment


          • #15
            Originally posted by ahrs View Post

            That's what riser cables are for, and the sort of people who operate more than one 4090 at a time are happy with that. From a thermal standpoint you don't want all of those GPUs stuck in a hotbox anyway.

            Personally, I'd be happy if motherboards started coming with more M.2 slots. I have 2 in my machine and recently wanted to install a 3rd for increased capacity. I had to buy a PCIe adapter to do it, and those aren't cheap if you need support for multiple drives. In my case it was just a single drive, so the cheap adapter I got from Sabrent worked fine, but I hope future generations of motherboards address this. I really don't need so many SATA ports anymore and would gladly trade some of them for M.2 slots.
            Yeah, in and of itself PCIe is nice. But it's made very hard to use relative to reasonable requirements, as we seem to agree at least in the case of M.2. I've got some motherboards with two M.2 sockets and no way to add more other than a USB bridge or taking an x4/x8/x16 PCIe slot that is bifurcated and can accept an M.2 carrier.

            I agree the (effective) limit of 1-2 M.2 sockets is maddening; we've got THE SMALLEST form-factor mass-storage "drive" PCs have EVER had (cf. 2.5in SSD, 3.5in HDD, 5.25in HDD), and yet we've typically got the LEAST capability ever to add "a few" / "several" (3-8 or whatever) drives in the case / attached to the motherboard. Lots of motherboards easily had at least 4-6 SATA ports and 4 IDE drive attachments; with NVMe it's maybe 2 and you're done.

            It's not the fault of the PCIe / NVMe specs but the penny-pinching of providing nowhere near enough PCIe lanes for 3-8 NVMe drives, and of not making bridge / switch chips common so one could at least put 2-4 drives on one shared x4 link.

            So, absent 4 M.2 sockets, you can't even meaningfully use RAID with them beyond mirroring a single drive, which would be so anemic in storage space that it's hardly appealing.

            As for risers? Yeah, sure, it CAN work. But you rarely see dGPUs that don't take up at least two "slots" (comfortably), if not more, so what's the point of having PCIe slots that are almost always used for a GPU while spacing them, and providing so few lanes, such that even installing one GPU is a pain and 2-3 is nigh impossible without major compromises?
            Risers are a bad workaround for an obsolete / broken PC platform design that should have wider-spaced slots and more x8 / x16 slots, or should have just embraced eGPU / rack-oriented techniques to take the PCIe x16 bus out to 2+ GPUs that don't have to be crammed into an undersized case with bad power routing, etc.

            And now they're scaling the PCIe bit rate ever upwards, so I don't really see things on the motherboard scaling well without physically and electrically changing the architecture so things can be more distributed and scalable.

            Yeah, I'd also be happy to have fewer SATA ports and more x4 NVMe M.2 slots. Something like 8 M.2 drives would be quite nice for, say, database or virtualization work, where one may want several TB of fast random-access data but also want to comfortably use RAID10 for some degree of reliability with mirroring.

            It's odd that so many (comparatively slow) 6 Gbps SATA ports are commonly supported but so few PCIe Gen2 (5 Gbps) / Gen3 (8 Gbps) M.2 slots are -- the cost of an N Gb/s single-lane SERDES link would be roughly comparable for SATA vs. older PCIe generations, so why not scale out the M.2 slots?
            The only real advantage of SATA (w.r.t. a motherboard designed with PCB real estate in mind) is that it's a cabled interconnect to a drive you mount off-board vs. an on-board M.2 slot, but one surely could have cabled M.2 "bays" with a cabled PCIe link going out to 2-4 M.2 drives at a conceivably reasonable cost.
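
            To put rough numbers on the lane-budget complaint, here's a sketch assuming a typical mainstream desktop CPU with about 24 usable PCIe lanes (the exact counts and splits vary by platform, so these are illustrative assumptions only):
            Code:
            # Hypothetical lane budget on a mainstream desktop platform (assumed numbers).
            usable_cpu_lanes = 24   # assumed; varies by CPU / chipset
            gpu_lanes = 16          # one x16 graphics slot
            m2_lanes_each = 4       # one NVMe socket at x4

            leftover = usable_cpu_lanes - gpu_lanes
            print("CPU-attached x4 M.2 sockets left after the GPU:", leftover // m2_lanes_each)  # 2

            # Eight x4 drives would want 32 dedicated lanes, hence the need for
            # chipset sharing, bifurcation or a PCIe switch on consumer boards.
            print("Lanes needed for 8 NVMe drives at x4 each:", 8 * m2_lanes_each)  # 32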

            Comment


            • #16
              Originally posted by coder View Post
              I think PAM4 only really helps with power (i.e. by comparison with yet another frequency-doubling). However, it's still less noise-tolerant than PCIe 5.0, and that requires more expensive boards (or cables) with better signal integrity.

              At some point, optical is going to be the only pragmatic option for further increasing bandwidth. That's where I think PCIe will either end, or perhaps continue to be called PCIe in name only. The reason is that an optical interconnect faces different problems than the ones you have to work around with a high-speed copper link, so I think you'd design the protocol differently. While you could just take the same protocol and run it through an optical transceiver, the smart move would be to rework the protocol to better address the challenges and opportunities of that technology.
              [ Some offloading from the CPU toward storage will come from direct interaction between storage devices (or peripherals in general). With optical links in that part of the platform, and with user acceptance of e.g. external optical connectors and cabling (e.g. ~98 ft / 30 m runs; 'do not bend around sharp edges, never crease, beware of elevated temperatures'), standardization would be an advantage for better compatibility and interconnection between x86, ARMv7-9, RISC-V, and the protocols for GPU, deep learning, storage (e.g. PCIe, SATA, USB), DP/HDMI, networking, and maybe (compute-in-)memory devices.
              Peak bandwidth requires peak power, while the highest sustained bandwidth on consumer hardware (even more so at PCIe 7.0 levels) is probably a lower priority than keeping costs acceptable for the mass market?

              Reduced cost and versatile connectivity is covered by USB3/4(?) ]
              Last edited by back2未來; 03 April 2024, 06:36 AM. Reason: content

              Comment


              • #17
                Originally posted by coder View Post
                Not faster, but approximately the same speed. Each generation roughly doubles the per-lane rate, so the version number works like an exponent: 7 - 3 = 4 doublings and log2(16) = 4. Or, to put it another way, 2^7 / 2^3 = 16.
                Whoops, you're right - I meant the same.

                Comment


                • #18
                  Originally posted by pong View Post

                  Yeah the whole PC platform "architecture" is royally screwed up.

                  The apparent model is that a basic PC with CPU+iGPU should be enough for "most anyone", so expansion capability is almost entirely neglected in practice other than plugging in random slower USB2/3 stuff.

                  Then for the "gamers" or "productivity" people, ok, buy a premium motherboard and we'll give you one decent PCIE x16 slot where you can install one GPU and probably have things sort of work mechanically / thermally / electrically.

                  Oh, you want more M.2 SSDs, 2-4 GPUs, maybe a couple of 10-100 Gb NICs? Several high-capability TB / USB4 / Type-C ports? Too bad for you; you're not getting anywhere near enough PCIe lanes / slots / USB-C ports / USB4 ports etc. to get away with more than a couple of significant peripherals. Maybe if you buy the halo $1200 motherboard you can have another usable PCIe slot or two.

                  So USB4 / newer Thunderbolt and newer PCIe 4/5/+ are all very nifty things. So are ECC DRAM, M.2 NVMe SSDs, etc. I'm looking forward to the day when I can actually USE a non-trivial number (1-2) of such things in a reasonable "prosumer" computer.

                  But as the NVIDIA 4090 shows, the GPU is already just about the size of many motherboards, so even if you HAD the slots for more, you're not going to have the space mechanically, or PSU / cable sanity, if you tried.

                  Can't we just make PCs scalable again? I remember "easily" being able to get 6-8 ISA or PCI slots on motherboards for modest cost.
                  Dual-socket ones also.

                  Now the back panel is such a cluster you can't even really see or have room to plug in adjacent USB etc. ports.

                  How's this going to work for the next 3-4 desktop PC generations?
                  I feel your pain. I wouldn't say I'm in the full-on buyer's remorse phase, but as someone who got a bit too addicted to buying old workstations on eBay, I'm definitely a bit underwhelmed with the new build. It's fairly beefy, with a Ryzen 9 7900X3D, 96GB DDR5, a 1000W PSU, a Fractal Design Meshify 2 case that can hold 11 3.5" HDDs, etc. My primary requirements for the motherboard were as many "useful" PCIe slots as possible, as many M.2 slots as possible, and triple display outputs. Here's the PCIe layout of the new board:
                  • PCIe 5.0 x16
                  • PCIe 3.0 x1
                  • PCIe 4.0 x4
                  • PCIe 4.0 x2
                  There are 4 M.2 slots.
                  • PCIe 5.0 x4
                  • PCIe 4.0 x4
                  • PCIe 4.0 x4
                  • PCIe 4.0 x4 (this one shares bandwidth with the last PCIe slot and only runs at PCIe 4.0 x2 if something is in that PCIe slot)
                  There are 4 SATA ports on the motherboard.

                  I know it's apples to oranges, but I definitely miss some things about the ~8 year old Z840 I was using. It had this PCIe layout.
                  • PCIe 3.0 x4
                  • PCIe 3.0 x16
                  • PCIe 3.0 x4
                  • PCIe 3.0 x8
                  • PCIe 3.0 x16
                  • PCIe 3.0 x8
                  • PCIe 3.0 x16
                  • PCIe 2.0 x1
                  All of the x8 and x16 slots support bifurcation. It has 6 SATA ports from the chipset and another 8 SATA ports from a nice embedded LSI SAS/SATA controller. I had a dual-slot GPU and 3 NVMe drives, and I still had 2 x16 slots open plus an x8. If I wanted to go batshit crazy on fast storage I could have added 10 more NVMe drives! I also had an 8x 2.5" hot-swap bay for SATA SSDs hooked up to the LSI controller, in addition to the 4x 3.5" blind-mate HDD slots. The workstation / HEDT platforms are just so damn extensible because they have oodles of PCIe lanes and slots.

                  Some of the high-end X670E boards I saw were ridiculous: $450 for a board with 2 PCIe 5.0 x16 slots total, and you can only do 1 x16 or 2 x8. There aren't any PCIe 5.0 GPUs. So you put anything in the bottom slot and your PCIe 4.0 x16 GPU runs at x8, and you have a free PCIe 5.0 x8 slot to put 2 NVMe drives in? That sucks. I'd much rather have 4 PCIe 3.0 x16 slots than 1 PCIe 5.0 x16 slot. PCIe switches have also seemingly disappeared from consumer boards. They got more expensive, but it seems like a $450 board could absorb the cost. Most users aren't hammering every slot concurrently, so that would be another path to better expansion options.
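
                  Rough aggregate numbers behind that preference (approximating ~1 GB/s per lane at PCIe 3.0 and a doubling per generation, overhead ignored):
                  Code:
                  # Approximate usable GB/s per lane by generation (overhead ignored).
                  per_lane = {3: 1.0, 4: 2.0, 5: 4.0}

                  def slot_gb_s(gen: int, lanes: int) -> float:
                      return per_lane[gen] * lanes

                  print(4 * slot_gb_s(3, 16))  # ~64 GB/s spread across four x16 devices
                  print(1 * slot_gb_s(5, 16))  # ~64 GB/s, but only one physical x16 slot
                  Same ballpark aggregate bandwidth either way; the difference is how many full-width devices you can physically plug in.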
                  Last edited by pWe00Iri3e7Z9lHOX2Qx; 03 April 2024, 11:13 AM.

                  Comment


                  • #19
                    Just in time for CXL 4.0+ to bake in PCIe 7's capabilities. These two protocols will be just what the doctor ordered for AI and hyperscale computing from 2025 onward. Now, if UXL and UCIe can get their act together and really come online by 2025 as well, then you will have the truly universal heterogeneous platform that was promised over 12 years ago with AMD's HSA. Granted, this is a heterogeneous platform spread out over three protocols: CXL for heterogeneous memory, UXL for heterogeneous accelerators and compute (with oneAPI as the basis for compute and as a competitor to CUDA), and finally UCIe as the heterogeneous standard for interconnects. But even though it's three protocols, there is buy-in from all the major players and ISAs, from consumer OEMs to hyperscalers. Of course, for us schmucks down here in normal computing land, PCIe 7 won't be ubiquitous until, say... 2030? And by then, at the rate the socket keeps swelling from the ever-growing chiplet craze, we could see PCIe 7 computers where there is no motherboard area left, after the huge chiplet socket is in place, for anything more than traces running out from the socket to external connectors like USB and a couple of PCIe GPU connectors, if that.

                    Comment


                    • #20
                      Originally posted by Jumbotron View Post
                      Just in time for CXL 4.0+ to bake in PCIe 7's capabilities. These two protocols will be just what the doctor ordered for AI and hyperscale computing from 2025 onward. Now, if UXL and UCIe can get their act together and really come online by 2025 as well, then you will have the truly universal heterogeneous platform that was promised over 12 years ago with AMD's HSA. Granted, this is a heterogeneous platform spread out over three protocols: CXL for heterogeneous memory, UXL for heterogeneous accelerators and compute (with oneAPI as the basis for compute and as a competitor to CUDA), and finally UCIe as the heterogeneous standard for interconnects. But even though it's three protocols, there is buy-in from all the major players and ISAs, from consumer OEMs to hyperscalers. Of course, for us schmucks down here in normal computing land, PCIe 7 won't be ubiquitous until, say... 2030? And by then, at the rate the socket keeps swelling from the ever-growing chiplet craze, we could see PCIe 7 computers where there is no motherboard area left, after the huge chiplet socket is in place, for anything more than traces running out from the socket to external connectors like USB and a couple of PCIe GPU connectors, if that.
                      Your consumer motherboard in 2030 will give you one PCIe 7.0 x16 slot for your 5-slot-wide PCIe 6.0 GPU. And maybe one PCIe 7.0 x4 M.2 slot with a case-sized active cooler for your SSD.
                      Last edited by pWe00Iri3e7Z9lHOX2Qx; 03 April 2024, 05:44 PM.

                      Comment
