Intel Introduces Xeon Max & Data Center GPU Max Series
Originally posted by onlyLinuxLuvUBack: It reminds you of the quote from Mad Max... [attached image: intel_max2.jpg]
Originally posted by CommunityMember:
These CPUs are targeted towards the HPC/Hyperscalers, who have been optimizing their apps (and for that matter their kernel and libraries) for quite some time to squeeze out the last percent of improvements, and will continue to do so.
I think we're coming near full circle in computing history. We started out with specialized processors for specialized tasks. Indeed, the very first electronic computers could often only do one or two things (Colossus, ENIAC, DSPs, etc.). They evolved into general-purpose processors that could do almost everything in a way that was "good enough" to get by when price was factored in (x86, ARM, MIPS, etc.). Now we're cycling back to specialized processors (GPUs, DPUs, security, neural, and network processors, etc.) with a centralized general processor (itself broken down into relatively less specialized components) to direct traffic.
Passing thought: if I were more worried about data and calculation reliability than raw performance numbers, I'd be considering systems with end-to-end error detection and correction, like IBM's Z series, instead. IBM and Nvidia announced a partnership to bring Nvidia data-center-class compute modules to IBM's POWER-based systems a few years back.
Last edited by stormcrow; 09 November 2022, 11:00 PM.
Hey Intel... this is good stuff. Can you make a mobile/desktop CPU that's like one P-core, four E-cores, 16 GB of this HBM2e memory, and a 128-EU Xe GPU? Maybe with a way for OEMs who want to offer it to add 'extended memory' as a second tier over PCIe?
Originally posted by willmore: I'm curious how much faster these will be if you don't have code tuned to use their new accelerators, which is to say 99.999% of the code in use today.
Oh, it would be even more interesting if Intel doubled the HBM stack size of Sapphire Rapids.
1 GB per core is cool, but 2 GB per core is a tipping point where many users *could* consider skipping DDR5 altogether. That would hugely cut motherboard costs and increase density per rack, not to mention the per-core performance improvements. Other concerns, like memory-copy bandwidth bottlenecks from fast NICs or accelerators, also go out the window.
I wonder if we will see HBM-focused motherboard designs that either forgo DIMMs altogether or include just a few nominal slots for low-priority data.
Last edited by brucethemoose; 09 November 2022, 01:37 PM.
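For scale, a quick sketch of the per-core arithmetic (assuming the 64 GB of HBM2e and 56 cores that the top announced Xeon Max SKU carries; treat the exact figures as assumptions for illustration):

```python
# Per-core HBM capacity on Xeon Max, assuming 64 GB HBM2e across
# 56 cores (top announced SKU; figures are assumptions here).
hbm_gb = 64
cores = 56

print(f"{hbm_gb / cores:.2f} GB/core")      # ~1.14 GB/core today
print(f"{2 * hbm_gb / cores:.2f} GB/core")  # ~2.29 GB/core if doubled
```

So doubling the stacks is roughly what it would take to clear the 2 GB/core threshold the comment describes.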
Originally posted by bachchain: They forgot to mention how much the subscription to unlock all those fancy features will be.
I'm curious how much faster these will be if you don't have code tuned to use their new accelerators, which is to say 99.999% of the code in use today.
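Whether the new accelerators get used at all usually comes down to runtime feature probing. A minimal, Linux-only sketch of checking whether the kernel even exposes AMX (it uses the `amx_tile` flag name that supporting kernels report in `/proc/cpuinfo`; the path parameter is just for testability):

```python
# Minimal check for Intel AMX support as exposed by the Linux kernel.
# 'amx_tile' is the flag recent kernels list in /proc/cpuinfo;
# absence can mean an old CPU, an old kernel, or a VM masking it.
def has_amx(cpuinfo_path="/proc/cpuinfo"):
    try:
        with open(cpuinfo_path) as f:
            return any("amx_tile" in line
                       for line in f if line.startswith("flags"))
    except OSError:
        return False

print("AMX tiles available:", has_amx())
```

Untuned binaries fall back to plain AVX/SSE paths when a probe like this fails, which is why the headline accelerator numbers rarely show up in existing workloads.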
They forgot to mention how much the subscription to unlock all those fancy features will be.