Intel Data Center & AI Update 2023: Sierra Forest & Granite Rapids On Track

  • #11
    Originally posted by coder:
    Better efficiency and aggregate performance than Xeons made from their big cores. No AVX-512, however. Other than that, the E-cores' FPU performance is almost commensurate with their integer performance, which is more than half as fast as a P-core. So, if you had a choice between a 144-core Sierra Forest CPU and a 72-core Granite Rapids, the SF would probably be faster at highly-scalable workloads.

    I assume their main objective is to fend off the threat posed by ARM's Neoverse N2 cores and the rumored CPUs featuring something like 192 of them. Then again, Intel is probably ready to do just about anything to reclaim the performance and efficiency titles from EPYC.

    BTW, a big version of Sierra Forest is rumored to have more than 320 cores and 12-channel DDR5-8000. I'm guessing the 144-core model will be a single die and have only 8-channel DDR5 support.
    As always, thank you Coder for your well-written replies and for helping me make sense of stuff!
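    A rough back-of-envelope on the 144 E-core vs. 72 P-core comparison quoted above. The 0.55 per-core ratio is purely an assumed number standing in for "more than half as fast as a P-core", not a measured figure:

        # Aggregate-throughput sketch; the E-core:P-core ratio is an assumption.
        P_CORE = 1.0                   # normalize one Granite Rapids P-core to 1.0
        E_CORE_RATIO = 0.55            # assumed: "more than half as fast as a P-core"

        sierra_forest = 144 * E_CORE_RATIO * P_CORE    # ~79 P-core equivalents
        granite_rapids = 72 * P_CORE                   # 72 P-core equivalents

        print(f"Sierra Forest  (144 E-cores): ~{sierra_forest:.0f} P-core equivalents")
        print(f"Granite Rapids (72 P-cores):   {granite_rapids:.0f} P-core equivalents")

    Any ratio above 0.5 tips the aggregate in Sierra Forest's favor for embarrassingly parallel work; AVX-512-heavy code is the obvious exception, since the E-cores don't have it.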

    • #12
      Originally posted by kylew77:
      As always, thank you Coder for your well-written replies and for helping me make sense of stuff!
      Thanks, but take it all with a grain of salt. Especially that rumor of a 320+ core Sierra Forest. That's based on extremely sketchy information.

      My current thinking is that 144 cores might indeed be the max configuration, in which case it'll definitely get their full LGA 7529 socket with 12-channel memory.
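      For a sense of why the 12-channel socket matters at that core count, here's a peak-bandwidth sketch. The 64-bit channel width is standard DDR5; DDR5-8000 is just the rumored speed from this thread, DDR5-6400 is picked as a comparison point for the 8-channel guess, and real sustained bandwidth will be lower than these theoretical numbers:

          # Theoretical peak DDR5 bandwidth: transfer rate (MT/s) * 8 bytes per 64-bit channel.
          def peak_bw_gbs(mt_per_s, channels):
              return mt_per_s * 8 * channels / 1000    # GB/s

          cores = 144
          for channels, speed in [(8, 6400), (12, 8000)]:
              total = peak_bw_gbs(speed, channels)
              print(f"{channels}-ch DDR5-{speed}: {total:.0f} GB/s peak, "
                    f"{total / cores:.1f} GB/s per core")

      That works out to roughly 2.8 GB/s per core for 8-channel DDR5-6400 vs. 5.3 GB/s per core for 12-channel DDR5-8000, spread across 144 cores.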

      Hey, did you change jobs recently? Or was that web hosting gig something way in the past?

      • #13
        Originally posted by kylew77:
        You would think we used some kind of SAN technology, but in this case you would be wrong. It was just a local array of RAID 10 storage, regular SATA SSDs at that. We did test with NVMe, but it wasn't that much faster, presumably because once the website was loaded into RAM it didn't need to read from disk.
        IMO, the main argument for using a SAN (or a distributed filesystem) would be load-balancing and hot failover, so that if the machine fails you don't lose however many websites a 288-core box could host.
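        On the "loaded into RAM" point in the quote: that's the OS page cache doing its job, which is why SATA vs. NVMe stopped mattering after the first read. A minimal sketch of the effect (the file path is hypothetical, and the first read is only truly cold if the file isn't already cached):

            # Time two sequential reads of the same file: the first pulls data from
            # disk into the page cache, the second is served from RAM.
            import time

            PATH = "/var/www/example/site-archive.bin"   # hypothetical file on the host

            def timed_read(path):
                start = time.perf_counter()
                with open(path, "rb") as f:
                    while f.read(1 << 20):               # 1 MiB chunks
                        pass
                return time.perf_counter() - start

            print(f"first read:  {timed_read(PATH):.3f} s")   # disk-bound, if not already cached
            print(f"second read: {timed_read(PATH):.3f} s")   # served from the page cache

        The cache only helps the box that's already serving, though, which is the failover argument: if that machine dies, shared or replicated storage is what lets another node pick its sites up.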

        • #14
          Originally posted by coder:
          Hey, did you change jobs recently? Or was that web hosting gig something way in the past?
          Yep! I got a systems administrator position in January, thanks! I moved from Michigan to Texas, which is closer to my folks.
