Intel Introduces Xeon Max & Data Center GPU Max Series

    Phoronix: Intel Introduces Xeon Max & Data Center GPU Max Series

    With SC2022 kicking off next week and AMD set to unveil their next-generation server processors tomorrow, Intel is using today to announce the Xeon Max Series and the Data Center GPU Max Series.


  • #2
    It's got a "Rambo" L2 cache 💪 lol

    • #3
      They forgot to mention how much the subscription to unlock all those fancy features will be.

      • #4
        I'm curious how much faster these will be if you don't have code tuned to use their new accelerators, which is to say 99.999% of the code in use today.

        • #5
          Originally posted by bachchain:
          They forgot to mention how much the subscription to unlock all those fancy features will be.
          If you have to ask “How much”…..

          • #6
            Oh, it would be even more interesting if Intel doubled the HBM stack size of Sapphire Rapids.

            1GB per core is cool, but 2GB per core is a tipping point where many users *could* consider skipping DDR5 altogether (rough math below). That would hugely cut motherboard costs and increase density per rack, to say nothing of the per-core performance improvements. Other concerns, like memory-copy bandwidth bottlenecks from fast NICs or accelerators, also go out the window.

            I wonder if we will see HBM-focused motherboard designs that either forgo DIMMs altogether or just include a few nominal slots to hold low-priority stuff.
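
            For rough numbers: the shipping Xeon Max tops out at 64GB of HBM2e across up to 56 cores, so doubling the stacks is exactly what gets past the 2GB/core mark. A quick back-of-the-envelope check in C (the "doubled" case is the hypothetical configuration from this post, not an announced part):

              #include <stdio.h>

              /* Per-core HBM capacity: shipping Xeon Max (64 GB HBM2e,
                 up to 56 cores) vs. the hypothetical doubled config. */
              int main(void) {
                  const double hbm_gb = 64.0;  /* announced HBM2e capacity */
                  const int cores = 56;        /* top-SKU core count */
                  printf("shipping: %.2f GB/core\n", hbm_gb / cores);        /* ~1.14 */
                  printf("doubled:  %.2f GB/core\n", 2.0 * hbm_gb / cores);  /* ~2.29 */
                  return 0;
              }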
            Last edited by brucethemoose; 09 November 2022, 01:37 PM.

            • #7
              Originally posted by willmore:
              I'm curious how much faster these will be if you don't have code tuned to use their new accelerators, which is to say 99.999% of the code in use today.
              These CPUs are targeted at HPC shops and hyperscalers, who have been optimizing their apps (and, for that matter, their kernels and libraries) for quite some time to squeeze out the last percent of improvement, and will continue to do so.
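
              One concrete piece of that tuning is runtime dispatch on the new feature bits, so one binary uses the accelerators where they exist and falls back everywhere else. A minimal sketch, assuming GCC/Clang on x86-64 and probing the AMX bits in CPUID leaf 7 (the Linux arch_prctl permission request that real AMX code also needs is omitted here):

                #include <stdio.h>
                #include <cpuid.h>

                int main(void) {
                    unsigned int eax, ebx, ecx, edx;

                    /* CPUID leaf 7, subleaf 0: EDX bit 22 = AMX-BF16,
                       bit 24 = AMX-TILE, bit 25 = AMX-INT8. */
                    if (!__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx)) {
                        puts("no CPUID leaf 7: generic code path");
                        return 0;
                    }
                    if (((edx >> 24) & 1) && ((edx >> 22) & 1))
                        puts("AMX tiles + BF16: dispatch to the tuned kernels");
                    else
                        puts("no AMX: fall back to AVX-512 or scalar kernels");
                    return 0;
                }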

              • #8
                Hey Intel... this is good stuff. Can you make a mobile/desktop CPU that's like... one P-core, four E-cores, 16GB of this HBM2e memory, and a 128 EU Xe GPU? Maybe have a way to add 'extended memory' via a second tier over PCIe for OEMs who want to offer that?
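
                For what it's worth, the shipping Xeon Max parts already do a version of that two-tier idea: in flat mode the HBM shows up as its own memory-only NUMA node next to DDR, and software places hot data on it explicitly. A sketch with libnuma, where the node id is a placeholder (real ids vary per system, check "numactl -H"; link with -lnuma):

                  #include <stdio.h>
                  #include <stdlib.h>
                  #include <numa.h>

                  #define HBM_NODE 1  /* placeholder: HBM-only NUMA node id on this box */

                  int main(void) {
                      if (numa_available() < 0) {
                          fputs("NUMA not available\n", stderr);
                          return 1;
                      }
                      size_t sz = 1ull << 30;                         /* 1 GiB working set */
                      double *hot  = numa_alloc_onnode(sz, HBM_NODE); /* fast tier: HBM */
                      double *cold = malloc(sz);                      /* default tier: DDR */
                      if (!hot || !cold)
                          return 1;
                      /* ... keep the bandwidth-bound data in `hot` ... */
                      numa_free(hot, sz);
                      free(cold);
                      return 0;
                  }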

                • #9
                  Originally posted by CommunityMember:
                  These CPUs are targeted at HPC shops and hyperscalers, who have been optimizing their apps (and, for that matter, their kernels and libraries) for quite some time to squeeze out the last percent of improvement, and will continue to do so.
                  Yes, but the important Caveat (with an intentional capital "C") is that these HPC clusters may be using Intel Xeon CPUs, but they're sure as hell not using Intel GPUs for the GPGPU heavy lifting; the GPU is just there to give the support team a local admin console. They're nearly all using Nvidia products with CUDA. Intel has an incredibly steep cliff to climb to unseat Nvidia's entrenched HPC install base. The only way I can see them managing it is to offer performance features that Nvidia doesn't already excel at but that still have marketability.

                  I think we're coming near full circle in computing history. We started out with specialized processors for specialized tasks. Indeed, the very first electronic computers could often only do one or two things (Colossus, ENIAC, DSPs, etc.). They evolved into general-purpose processors that could do almost everything in a way that was "good enough" to get by once price was factored in (x86, ARM, MIPS, etc.). Now we're cycling back to specialized processors (GPUs, DPUs, security, neural, and network processors, etc.) with a centralized general processor (itself broken down into relatively less specialized components) to direct traffic.

                  Passing thought: if I were worried more about data and calculation reliability than raw performance numbers, I'd be considering systems with end-to-end error detection and correction, like IBM's Z series, instead. IBM and Nvidia announced a partnership a few years back to bring Nvidia data-center-class compute modules to IBM's POWER-based systems.
                  Last edited by stormcrow; 09 November 2022, 11:00 PM.

                  • #10
                    It reminds you of the quote from Mad Max...
                    [attachment: intel_max2.jpg]
