PCI Express 1.0 vs. 2.0 vs. 3.0 Performance With NVIDIA/Radeon Graphics On Linux


  • #11
    Originally posted by schmidtbag View Post
    I'm not so sure about that. Notice how almost all of the 1.0 results are nearly exactly 50% of the other two. This leads me to believe the drivers depend on the availability of all 16 lanes, or at least a certain amount of bandwidth. Meanwhile, perhaps the motherboard doesn't actually use 1.0 speeds, but rather does 2.0 speeds and just chops the slot to 8x lanes. Just a theory.

    Michael - it might be interesting to test the Fury on a 2.0 slot with 8x lanes. I'm also curious if Windows for this same build will yield the same results, because if you look at this article, you'll find the PCIe 1.1 16x slots aren't a whole lot slower than the 2.0 and 3.0 16x slots. In many cases, even 8x slots don't have any major difference either.

    If we can prove this is a driver situation, this could be very important for the performance of some GPUs for many people. I'm curious how many other AMD GPUs on the open-source drivers and Windows' closed drivers may share this issue.



    And yet people were moaning about the AMD X370 chipset "only" having 16 lanes total for 2x GPUs.
    Well, we certainly don't see this kind of behaviour under Nouveau. Sure, there is some form of speed improvement, but it matters about as much as it does with Nvidia's drivers.
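The ~50% observation above is consistent with simple arithmetic: a PCIe 1.0 x16 link has exactly the same usable bandwidth as a 2.0 x8 link, since 2.0 doubles the transfer rate while both generations use 8b/10b encoding. A quick sketch of the published numbers (Python; the function names are just for illustration):

```python
# Usable per-lane bandwidth after line-encoding overhead.
# Gen 1/2 use 8b/10b encoding (80% efficient); Gen 3 uses 128b/130b (~98.5%).
ENCODING = {1: (8, 10), 2: (8, 10), 3: (128, 130)}
RAW_MTS = {1: 2500, 2: 5000, 3: 8000}   # raw transfer rate, MT/s per lane

def lane_bandwidth_mbs(gen: int) -> float:
    """Usable per-lane bandwidth in MB/s for a given PCIe generation."""
    num, den = ENCODING[gen]
    return RAW_MTS[gen] * num / den / 8   # megatransfers -> usable megabytes/s

def slot_bandwidth_gbs(gen: int, lanes: int) -> float:
    """Total usable slot bandwidth in GB/s."""
    return lane_bandwidth_mbs(gen) * lanes / 1000
```

Both `slot_bandwidth_gbs(1, 16)` and `slot_bandwidth_gbs(2, 8)` come out to 4.0 GB/s, so from a driver's point of view a "1.0 x16" slot and a "2.0 x8" slot look identical in bandwidth, which is why the two theories are hard to tell apart from benchmarks alone.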

    Comment


    • #12
      Can you check the PCIe config cap registers and see whether changing the SBIOS setting actually disables advertising the higher speeds or not? It may just change the default negotiated speed.
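For what it's worth, on reasonably recent Linux kernels the advertised (max) and negotiated (current) link parameters are exposed in sysfs, so this check doesn't require decoding raw config space. A minimal sketch, assuming a Linux system; the helper names are just for illustration, and the caller supplies the GPU's PCI address:

```python
from pathlib import Path

def link_status(pci_address: str) -> dict:
    """Read the advertised (max) and negotiated (current) PCIe link
    speed and width for one device from sysfs."""
    dev = Path("/sys/bus/pci/devices") / pci_address
    return {
        attr: (dev / attr).read_text().strip()
        for attr in ("max_link_speed", "current_link_speed",
                     "max_link_width", "current_link_width")
    }

def pcie_gen(speed_string: str) -> int:
    """Map a sysfs speed string such as '8.0 GT/s PCIe' to a PCIe generation."""
    rate = float(speed_string.split()[0])
    return {2.5: 1, 5.0: 2, 8.0: 3, 16.0: 4}[rate]
```

If `max_link_speed` still reports 8.0 GT/s after the SBIOS change, the slot is still advertising Gen 3 and only the negotiated speed was lowered. (`lspci -vv` shows the same information in the LnkCap/LnkSta lines.)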

      Comment


      • #13
        Originally posted by agd5f View Post
        Can you check the PCIe config cap registers and see whether changing the SBIOS setting actually disables advertising the higher speeds or not? It may just change the default negotiated speed.
        Worth trying, but in my own PCIe experiments on Nvidia I saw exactly the same behaviour. PCIe bandwidth is hardly important and accounts for around a ~5% perf boost at most in real-life scenarios. With PRIME offloading it's a totally different story, but otherwise it just doesn't matter much.

        EDIT: this was tested at Full HD resolution, though; it might make a bigger difference at 4K.

        Comment


        • #14
          Thank you Michael, great test! I'm very surprised by the results when it comes to the AMD vs. NVIDIA difference as well! It seems to have sparked some questions here in the forum; it will be interesting to see whether we get any comments from AMD devs on what they think the difference comes down to.

          Comment


          • #15
            Originally posted by microcode View Post

            I would advise against getting a Skylake X; they're pretty much the same as the socket 2011 equivalents, but more expensive, and with mainboard-level DRM on chipset features.
            Just curious, what do you mean by the mainboard-level DRM comment, and where can I read up on this? And isn't DRM pretty standard in most new hardware in one way or another? I'm thinking of protected video paths in modern graphics cards that support DRM, TPM modules, Secure Boot...

            Comment


            • #16
              Just my humble opinion... setting the link rate in the BIOS wasn't confirmed as actually set post-boot. What if the NVIDIA card forces the BIOS to override its setting by advertising that it will only accept the highest available link rate?

              I have seen many PCIe cards adapt their lane usage depending on the Gen level and the type of slot they are in.

              It's a compelling test indeed and worthy of further analysis.

              Comment


              • #17
                Originally posted by microcode View Post

                I would advise against getting a Skylake X; they're pretty much the same as the socket 2011 equivalents, but more expensive, and with mainboard-level DRM on chipset features.
                No idea what that so-called "mainboard DRM" is about, but I'm more than willing to bet that the company that is consistently #1, or at least one of the largest contributors, not only to the Linux kernel but to a vast number of open source projects* used regularly well beyond Intel processors, will have its hardware install & run Linux just fine.

                Not to mention that Skylake Xeons have been running Linux on cloud servers since the beginning of the year and I'm pretty sure they work with Linux too.

                * Including literally inventing the direct rendering infrastructure that AMD relies on for its Linux drivers. I can't say that AMD has contributed a vital piece of infrastructure that benefits literally thousands of applications in the same way.

                Comment


                • #18
                  The difference is only really noticeable with SLI or CrossFire.

                  Comment


                  • #19
                    Originally posted by edwaleni View Post
                    Just my humble opinion... setting the link rate in the BIOS wasn't confirmed as actually set post-boot. What if the NVIDIA card forces the BIOS to override its setting by advertising that it will only accept the highest available link rate?

                    I have seen many PCIe cards adapt their lane usage depending on the Gen level and the type of slot they are in.

                    It's a compelling test indeed and worthy of further analysis.
                    Then there wouldn't be any difference between the tests, would there?

                    Comment


                    • #20
                      Originally posted by chuckula View Post
                      No idea what that so-called "mainboard DRM" is about, but I'm more than willing to bet that the company that is consistently #1, or at least one of the largest contributors, not only to the Linux kernel but to a vast number of open source projects* used regularly well beyond Intel processors, will have its hardware install & run Linux just fine.

                      Not to mention that Skylake Xeons have been running Linux on cloud servers since the beginning of the year and I'm pretty sure they work with Linux too.

                      * Including literally inventing the direct rendering infrastructure that AMD relies on for its Linux drivers. I can't say that AMD has contributed a vital piece of infrastructure that benefits literally thousands of applications in the same way.
                      I mean that there are physical cryptographic devices required to unlock the full capabilities of the chipset. I'm typing this to you on a Skylake-EP Xeon, I am aware that they work.

                      Comment
