Intel Speed Select Linux Tool Updated To Handle 32 Socket Servers


    Phoronix: Intel Speed Select Linux Tool Updated To Handle 32 Socket Servers

    The intel-speed-select tool that lives within the Linux kernel source tree has seen a set of patches prepared for the upcoming Linux 6.6 merge window. Arguably most interesting with this updated Intel Speed Select tool is now the ability to work with more than eight CPU sockets per platform -- the new limit is 32...
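
    Not from the article, but for context: the kernel already exposes its notion of CPU packages through sysfs, so it's easy to check how many sockets a given machine reports. A minimal sketch in Python, assuming only the standard topology files (the threshold check at the end is purely illustrative):

      # Count the CPU packages (sockets) the running kernel reports.
      # Reads the standard sysfs topology files; nothing here is
      # specific to intel-speed-select itself.
      import glob

      package_ids = set()
      for path in glob.glob(
              "/sys/devices/system/cpu/cpu[0-9]*/topology/physical_package_id"):
          with open(path) as f:
              package_ids.add(int(f.read()))

      print(f"{len(package_ids)} package(s) reported by the kernel")
      if len(package_ids) > 8:
          print("beyond the tool's old 8-package limit")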


  • #2
    How would they fit 32 CPUs on a single motherboard?
    Do they stack PCBs, use some kind of interconnect or something?



    • #3
      Originally posted by tildearrow
      How would they fit 32 CPUs on a single motherboard?
      Do they stack PCBs, use some kind of interconnect or something?
      Same question. I'm assuming 32 is there to give some buffer room, and 16 is probably the current "max" they expect. If a chiplet is being counted as a "package", then I could see a quad-processor motherboard claiming 16 CPU "packages" (an SPR CPU has four(?) processor chiplets plus HBM). But if they mean literal packages, where in a server rack are you going to fit 16 physical processor sockets (plus heat sinks), the RAM to feed them, and still have space left over for cooling and storage?
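
      For what it's worth, you can see where the kernel itself draws the package-vs-die line on any given box. A small sketch, assuming only the standard sysfs topology files (die_id needs a reasonably recent kernel, so it's treated as optional):

        # Compare the kernel's package count with its die count to see
        # whether "package" here means a physical socket or a chiplet.
        import glob, os

        packages, dies = set(), set()
        for cpu in glob.glob("/sys/devices/system/cpu/cpu[0-9]*"):
            if not os.path.isdir(cpu + "/topology"):
                continue  # offline CPUs may not expose topology
            with open(cpu + "/topology/physical_package_id") as f:
                pkg = int(f.read())
            packages.add(pkg)
            die_path = cpu + "/topology/die_id"
            if os.path.exists(die_path):  # absent on older kernels
                with open(die_path) as f:
                    dies.add((pkg, int(f.read())))

        print(f"packages: {len(packages)}, dies: {len(dies)}")
        # If the die count equals the package count, each socket is
        # exactly one package and chiplets aren't counted separately.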



      • #4
        Originally posted by tildearrow
        How would they fit 32 CPUs on a single motherboard?
        Do they stack PCBs, use some kind of interconnect or something?
        You don't. There's an updated version of the Huawei system, and that page also links to the HPE option.





        • #5
          Because the same questions come up every time:
          • "Why?" Because a lot of applications prefer to have one very, very large coherent memory image. Fewer need it than used to, because socket compute throughput has often grown faster than workloads have, and new applications are often written in a cluster-friendly manner rather than a manner assuming one big tightly-bound SMP.
          • "Who?" IBM does 16 socket Z and Power. Fujitsu does, if I recall, 32-socket SPARC64, though they are exiting that business. Atos and HPE both do big scale-up x86 - Atos supports 32 sockets, HPE supports 16.
          • "How does it fit on a board?" It doesn't. The gist of it is that you run your CPU interconnect links off of blades or individual rack systems into a low-latency switch* that can speak that interconnect. These are not physically small machines and they are not one board, but many. Note that the individual "building blocks" are not just plain ol' rack machines, but ones purpose-designed to be a switched component of a larger infrastructure.

          * Use of a central switch is how HP has historically done it and I suspect still does, but I'm less familiar with their post-sx3000 systems. IBM is an exception here as their scale-up machines are glueless, and have been since at least Power7. Intel does not do glueless scalability beyond eight sockets, and AMD does not (presently) do so beyond two.
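
          To make the "one big coherent memory image" point concrete, here's a minimal sketch (standard sysfs paths only, nothing vendor-specific) that prints the NUMA nodes and memory such a machine presents to Linux. On a 16- or 32-socket box, every socket's RAM shows up in this single map:

            # Print each NUMA node and its memory as one coherent map:
            # on a scale-up machine this covers every socket's RAM.
            import glob, re

            for path in sorted(glob.glob(
                    "/sys/devices/system/node/node*/meminfo")):
                with open(path) as f:
                    m = re.search(r"Node (\d+) MemTotal:\s+(\d+) kB",
                                  f.read())
                if m:
                    node, kb = m.groups()
                    print(f"node {node}: {int(kb) / (1024 * 1024):.1f} GiB")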
          Last edited by Dawn; 09 August 2023, 03:38 PM.



          • #6
            But the answer is 42; there must be a mistake.



            • #7
              They have CXL asymmetric coherency, IPUs that will be updated with CXL, a Hot Chips presentation on an optical mesh-to-mesh fabric, in-package HBM, and CXL 3.0 advanced networking on the roadmap. Looks like the need for on-board DDR DIMMs is going away.

