Intel Publishes PCIe Bandwidth Controller Linux Driver To Prevent Thermal Issues


  • #11
    Originally posted by erniv2 View Post
    And I guess only server-grade hardware implements temp sensors on anything other than the main x16 slot; most desktop boards don't even have proper ASPM support.
    It isn't clear to me from the post or the mailing list cover letter whether it's the PCIe controller in the CPU or the actual traces/slots getting too hot. I would assume the former, though.
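    For what it's worth, you can at least see which thermal sensors and cooling devices your own platform exposes. A rough Python sketch using the standard Linux thermal sysfs paths (nothing specific to the new driver, and it can't tell you which silicon actually gets hot):

    # List the thermal zones and cooling devices the platform exposes.
    # Purely an observation tool; zone names depend on the platform drivers loaded.
    from pathlib import Path

    for tz in sorted(Path("/sys/class/thermal").glob("thermal_zone*")):
        zone_type = (tz / "type").read_text().strip()
        temp_c = int((tz / "temp").read_text()) / 1000.0   # reported in millidegrees
        print(f"{tz.name}: {zone_type} at {temp_c:.1f} C")

    for cd in sorted(Path("/sys/class/thermal").glob("cooling_device*")):
        cd_type = (cd / "type").read_text().strip()
        print(f"{cd.name}: {cd_type}")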



    • #12
      This new feature is one more reason you'll go mad trying to figure out why your computer is so slow… and in games it will happen right in the most demanding scenes.



      • #13
        Originally posted by schmidtbag View Post
        I feel that if there's a need to throttle PCIe bandwidth due to thermal issues, maybe we need to stop pushing later generations of PCIe so aggressively...
        I don't recall PCIe ever causing heat problems.
        No, but PCIe SSDs, even Gen3 devices, can get hot easily, specifically in laptops with chronically poor airflow and/or thermal conductivity, and that heat hurts their ability to retain data. At a persistent 70C, a TLC drive's lifespan can drop to weeks, with the drive forced to perform constant internal block rewrites to prevent data loss. If throttling the link can bring the drive's power down from 8W to 3W, that can drop it from 70C to 50C, which still allows for a year of operation in spec. And yes, the user will notice the performance drop. And yes, the user should do something about it, so making it noticeable is the right thing to do.
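        If you want to watch that on your own machine, here's a minimal sketch (assuming a Linux box with NVMe hwmon support enabled and a hypothetical nvme0 drive) that reads the drive temperature and the link's current/max speed straight from sysfs; the kernel does any actual downtraining, this just observes it:

        # Observe an NVMe drive's temperature and its PCIe link speed.
        # Standard sysfs paths; "nvme0" is an assumption, adjust as needed.
        from pathlib import Path

        DEVICE = "nvme0"
        PCI_DIR = Path(f"/sys/class/nvme/{DEVICE}/device")  # symlink to the PCI function

        def drive_temp_c():
            # Find the nvme hwmon entry and return its temperature in degrees C.
            for hwmon in Path("/sys/class/hwmon").glob("hwmon*"):
                try:
                    if (hwmon / "name").read_text().strip() == "nvme":
                        return int((hwmon / "temp1_input").read_text()) / 1000.0
                except OSError:
                    continue
            return None

        temp = drive_temp_c()
        cur = (PCI_DIR / "current_link_speed").read_text().strip()
        top = (PCI_DIR / "max_link_speed").read_text().strip()
        print(f"{DEVICE}: {temp} C, link {cur} (max {top})")
        if temp is not None and temp >= 70:
            print("Running hot; a thermal/bandwidth controller would likely downtrain the link here.")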
        Last edited by linuxgeex; 17 August 2023, 03:08 PM.



        • #14
          Originally posted by schmidtbag View Post
          I feel that if there's a need to throttle PCIe bandwidth due to thermal issues, maybe we need to stop pushing later generations of PCIe so aggressively...
          I don't recall PCIe ever causing heat problems.
          It's a compounding problem. Servers and datacenters need faster PCIe for networking more than anything else (storage I/O is a close second for large databases), and that is why we've seen an explosion of new PCIe protocol versions, each doubling the bandwidth of the last; the consumer market generally benefits from faster SSDs and not much else. But remember, PCIe also has to retain backwards compatibility with all previous versions, which limits the design considerably.
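          A quick back-of-the-envelope on that doubling, using the nominal line rates and approximating encoding overhead (128b/130b for Gen3-5, Gen6's FLIT mode treated as roughly 1:1):

          # Rough per-lane and x16 throughput per PCIe generation.
          GENS = {
              "3.0": (8.0, 128 / 130),    # 8 GT/s, 128b/130b encoding
              "4.0": (16.0, 128 / 130),
              "5.0": (32.0, 128 / 130),
              "6.0": (64.0, 1.0),         # PAM4 + FLIT, overhead approximated away
          }

          for gen, (gt_s, efficiency) in GENS.items():
              per_lane_gb_s = gt_s * efficiency / 8      # 8 bits per transfer
              print(f"PCIe {gen}: ~{per_lane_gb_s:.2f} GB/s per lane, "
                    f"~{per_lane_gb_s * 16:.0f} GB/s for x16")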

          On top of all that, silicon lithography keeps getting smaller and denser. The analog PCIe PHYs don't shrink well, and PHYs designed to drive external traces are quite large to begin with for signal-integrity reasons, but the controller logic does shrink. So running faster, higher-bandwidth logic in a denser package naturally creates local hotspots that may now need thermal control.

          Do consumers need future PCIe 6.0 or even current 5.0? Not in the least. Servers and datacenters do, as the proliferation of cloud services has put considerable strain on network bandwidth requirements. And yet we, as consumers, also connect to various servers every day. Streaming video comes to you via CDNs, which connect to dedicated servers housed in large server farms or datacenters. Streaming audio comes from dedicated servers too. YouTube, TikTok, Snap, and Facebook/Instagram all require datacenters, never mind huge players like Microsoft Azure and Amazon AWS, which often host game servers for various companies on top of data access and storage for innumerable corporations' remote access.

          Future PCIe may need to move to optical transmission over fiber to tackle both heat and electrical power consumption.



          • #15
            Yaay, more enablement for dysfunctional devices and unsustainable standards!
