PCI Express 4.0 Is Ready, PCI Express 5.0 In 2019


  • #41
    Originally posted by Michael View Post

    I'll need to see if I have any motherboards that allow limiting PCIe 3.0 slots to PCIe 2.0... Back in the day I know some DRM drivers had options for whether to run at PCIe 1.0 or 2.0, but I don't recall anything for 2.0 vs. 3.0... Or am I missing something?
    I'm not sure, but I think the Tom's Hardware test did exactly that: they limited the hardware from the BIOS. At least they decreased the number of lanes the card could run on.

    http://www.hardwaresecrets.com/pci-e...formance-gain/

    Edit: My bad, it wasn't Tom's Hardware.



    • #42
      Originally posted by Azpegath View Post
      Michael, I've asked you for this once before, but never saw any test of it, and since this article brought it up again, I might as well ask again (as a lifetime premium member): Could you do a test with a couple of games (like you usually do), comparing the performance between running the same card (preferably a newer AMD) over a PCI2 and PCI3? I've read Tom's Hardware tests that are a couple of years old, and there was no difference whatsoever, even on high-demanding games. I guess where we would see a difference the most is games where texture or polygon streaming is mostly utilized. But I'll leave that for you to test...
      It might be easier to test PCIe 3.0 x4 vs x8 vs x16, as you can do that with cheap dumb PCIe risers, and in most cases it's the same as testing with PCIe 2.0, since the only thing that matters is the actual bandwidth.

      PCIe 3.0 x8 has the same bandwidth as PCIe 2.0 x16, for example, so Michael can get away with placing the card in the second x16 slot (electrically x8) on most gaming mobos, at no added cost.
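      The equivalence claimed here (PCIe 3.0 x8 ≈ PCIe 2.0 x16) follows straight from the per-lane signalling rates and line-code overhead (8b/10b up to gen 2, 128b/130b from gen 3). A quick sketch of the arithmetic in Python, using the standard per-direction figures:

```python
# Per-direction usable PCIe bandwidth, derived from the signalling rate
# (GT/s) and the line-code efficiency of each generation.
GENS = {
    "1.0": (2.5, 8 / 10),      # 8b/10b encoding
    "2.0": (5.0, 8 / 10),
    "3.0": (8.0, 128 / 130),   # 128b/130b encoding
    "4.0": (16.0, 128 / 130),
}

def bandwidth_gbps(gen, lanes):
    """Usable bandwidth in GB/s for one direction of a gen-X xN link."""
    gt_per_s, efficiency = GENS[gen]
    return gt_per_s * efficiency / 8 * lanes  # GT/s -> GB/s, times lane count

# PCIe 3.0 x8 roughly matches PCIe 2.0 x16:
print(round(bandwidth_gbps("3.0", 8), 2))   # ~7.88 GB/s
print(round(bandwidth_gbps("2.0", 16), 2))  # 8.0 GB/s
```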



      • #43
        Originally posted by Azpegath View Post

        I'm not sure, but I think the Tom's Hardware test did exactly that: they limited the hardware from the BIOS. At least they decreased the number of lanes the card could run on.

        http://www.hardwaresecrets.com/pci-e...formance-gain/

        Edit: My bad, it wasn't Tom's Hardware.
        Found an option in BIOS on a test system... Running tests this morning.
        Michael Larabel
        https://www.michaellarabel.com/
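        For anyone wanting to verify what speed a card actually negotiated (whatever the BIOS option claims), the Linux kernel exposes the current link speed and width of each PCI device in sysfs. A minimal sketch, assuming a Linux system; not every device reports a link, so missing attributes are skipped:

```python
# Read the negotiated PCIe link speed/width of a device from sysfs.
# Linux exposes these as plain-text files under /sys/bus/pci/devices/<dev>/.
from pathlib import Path

def link_status(devpath):
    """Return (current_speed, current_width) strings for one PCI device."""
    dev = Path(devpath)
    speed = (dev / "current_link_speed").read_text().strip()
    width = (dev / "current_link_width").read_text().strip()
    return speed, width

if __name__ == "__main__":
    for dev in Path("/sys/bus/pci/devices").glob("*"):
        try:
            print(dev.name, *link_status(dev))
        except OSError:
            pass  # device has no PCIe link attributes; skip it
```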



        • #44
          Originally posted by Michael View Post

          Found an option in BIOS on a test system... Running tests this morning.
          Cool, thank you!



          • #45
            On YouTube there are already various videos with tests like that (games on Windows), e.g. matching a GTX 1080 Ti on PCIe 3.0 x16 against a PCIe 2.0 x16-capable Sandy Bridge system. In single-card setups PCIe 2.0 bandwidth is not exhausted, so there is virtually zero difference. Perhaps with three or more cards PCIe 2.0 would start becoming a problem.



            • #46
              Originally posted by Michael View Post

              Found an option in BIOS on a test system... Running tests this morning.
              Cool, this would be really interesting to see on Linux. I've seen tons of PCIe benchmarks on high-end Windows gaming hardware, but my guess is that PCIe speed is often less relevant with older games and on Linux.



              • #47
                Originally posted by starshipeleven View Post
                I wouldn't call pre-SB Intel iGPUs worth anything; their goal was office PCs.
                That's irrelevant. The idea is old. Outside x86, some earlier systems also had the GPU on the mobo, maybe even one with a socket in Motorola/RISC/UNIX land. Can't remember.

                And they were in the chipset, btw.
                That's what I meant by 'were located on a chip on the motherboard': the north bridge chip.



                • #48
                  Originally posted by starshipeleven View Post
                  Yawn, it's getting into "ridiculous overkill" bandwidth territory. I like the fact that this means even smaller connectors/cables can run a GPU properly.
                  Storage has replaced GPUs as the main driver for PCIe bandwidth.

                  Late 2016 consumer M.2 SSDs already exceed PCIe 2.0 x4 bandwidth. I guess in 2018 they will exceed PCIe 3.0 x4 bandwidth. So the new PCIe 4.0 standard arrives just in time.



                  • #49
                    Chip interconnects via PCIe benefit from this. For example, AMD's server platform (Naples) has 256 pins in its socket dedicated to PCIe lanes, which contributes to a massive 4094-pin count. Each pin costs some money to manufacture, and you can't really drop pin prices; it's some cents per pin (bond wire, pin, contact in the socket). Having 4x the bandwidth on one lane can reduce the pin count and thereby drop cost; I guess this is one of the main reasons for the faster PCIe specs.

                    Furthermore, one can use PCIe as a bus to achieve NUMA between different compute nodes, which is especially useful for highly dependent datasets, like CFD simulations.
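                    The pin-count argument can be put in numbers: quadrupling per-lane bandwidth lets the same throughput budget fit in a quarter of the lanes. A rough sketch, assuming standard per-lane figures and power-of-two link widths; the 15.75 GB/s target is just an illustrative x16-class budget, not a number from the thread:

```python
import math

# Approximate usable per-lane bandwidth in GB/s, one direction.
LANE_GBPS = {"2.0": 0.5, "3.0": 0.985, "4.0": 1.969}

def lanes_needed(target_gbps, gen):
    """Smallest power-of-two link width that reaches target_gbps at gen."""
    lanes = math.ceil(target_gbps / LANE_GBPS[gen])
    width = 1
    while width < lanes:
        width *= 2
    return width

# The same ~16 GB/s budget at different generations:
print(lanes_needed(15.75, "3.0"))  # 16
print(lanes_needed(15.75, "4.0"))  # 8
```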



                    • #50
                      Originally posted by Electric-Gecko View Post
                      You appear to misunderstand. ExpressCards are hot-swappable expansion cards for laptops.
                      They aren't common anymore, though, probably due to being stuck at PCIe 2.0 and the trend of laptops being thin and stripped of hardware features.
                      Oh, I think I got you right; I was just disagreeing with the assertion that PCIe 2.0 wasn't enough for graphics cards.
                      But others have valid points too. Thunderbolt is superseding ExpressCard, and it's generally more convenient (though maybe a bit less so if you want a permanent attachment to your laptop, I'll give you that).

