
AMD Announces Milan-X 3D V-Cache CPUs, Azure Prepares For Great Upgrade


  • #11
    What a beast. Both MI200 and Epyc with V-Cache. I want a few, please.

    Comment


    • #12
      I'm not really worried about AMD. It sincerely looks like Intel went all in and did as much as they could to cook up a viable Zen competitor over the last couple of years, but the result is still a bit of a letdown when you consider how much engineering muscle Intel has, or rather should have.

      Comment


      • #13
        Originally posted by tildearrow View Post
        I am kind of worried about Zen after the latest Intel launch...
        :-) :-) :-)

        Yes, I'm worried too, especially because the INTEL guy said "AMD’s time is over"...

        Comment


        • #14
          Let's hope for all the cryptocurrency gold-diggers to jump on it.

          Comment


          • #15
            Originally posted by domih View Post

            :-) :-) :-)

            Yes, I'm worried too, especially because the INTEL guy said "AMD’s time is over"...
            Nope, Intel is screwed already. They showed NOTHING here. Proper support for this abomination will take months (if not years), while the competition will bring much faster general-purpose CPUs. Intel had better be rebranded as a Swiss cheese company due to the security holes it has introduced.

            Comment


            • #16
              Originally posted by numacross View Post

              Heh... it seems that AMD's reaction to Alder Lake will be slapping more L3 cache to Zen 3 while still remaining on the old (AM4, PCIe 4, DDR4) platform. Intel invested in new: platform with PCIe 5 and DDR5, chipset, and two microarchitectures. And don't get me wrong, ADL is impressive especially if power limited in BIOS, but new everything takes its toll.
              Can't wait to see what the AM5 platform will look like
              It's not much of a reaction. The V-Cache thing was announced months ago; this is just the announcement of the actual product. Zen 4 is going to have PCIe 5 and DDR5.

              Comment


              • #17
                Originally posted by mirmirmir View Post
                Nah, Intel with their efficiency cores isn't actually energy-efficient. For gaming? Yeah, it's good. But for productivity? Let's just say I'm looking forward to the next releases from both Intel and AMD, and to fierce competition between the two.
                Efficiency cores are good when the overall core count is small and the power draw of one performance core is still high. But as core counts increase, the benefits of added efficiency cores shrink. One wants the performance cores themselves to become more power-efficient under any load, and the cache's power consumption matters, too. When one can turn off 99 out of 100 cores, additional efficiency cores become a waste of die space and just complicate the hardware and software architecture needlessly.

                It is a bit odd for Intel to add efficiency cores, but it may have been done for marketing purposes and out of a lack of better options currently.
                Last edited by sdack; 08 November 2021, 06:34 PM.

                Comment


                • #18
                  Originally posted by Volta View Post

                  Nope, Intel is screwed already. They showed NOTHING here. Proper support for this abomination will take months (if not years), while the competition will bring much faster general-purpose CPUs. Intel had better be rebranded as a Swiss cheese company due to the security holes it has introduced.
                  Both the OP (my guess) and I omitted the /humor tag :-) :-) :-) Mmm?

                  Comment


                  • #19
                    Originally posted by sdack View Post
                    Efficiency cores are good when the overall core count is small and the power draw of one performance core is still high. But as core counts increase, the benefits of added efficiency cores shrink. ...

                    It is a bit odd for Intel to add efficiency cores, but it may have been done for marketing purposes and out of a lack of better options currently.
                    Point taken about the added complexity, but I think there actually is a point to efficiency cores in a world of increasing core counts.

                    When a company's business is selling silicon, compute performance per die area and power use per die area are really the main equations for making the most money. Intel's "big" cores blow out the die-area budget to maximize single-thread performance. If Intel used only big cores, it couldn't make a chip big enough to compete in massively parallel workloads. Hyperthreading helps, but by the time they add threads and lower clock speeds to keep power usage under control, Intel's newest chips would look poor compared to Apple, AMD, Ampere, etc.

                    Instead, by packing four efficiency cores into the space of a single performance core, Intel gets back far more performance as a function of die size. Even if that performance is only realized in massively parallel workloads, or by offloading non-critical tasks from performance cores, the trade-off is worth it. Intel is trading the complexity of scheduling latency-sensitive and single-threaded tasks versus parallel and background tasks against having to put that complexity into more hyperthreading smarts in each core's front end, plus all the power-management circuitry to balance a bunch of high-performance cores.
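                    The die-area argument above can be sketched with a toy model. All numbers here are illustrative assumptions for the sake of the arithmetic (roughly "four E-cores fit in one P-core's area, each delivering about half a P-core's throughput"), not measured Intel figures:

```python
# Toy model of multithreaded throughput per die area for a
# big-core-only design vs. a hybrid P-core + E-core design.
# All constants are illustrative assumptions, not measured data.

P_AREA = 4.0   # die-area units per performance (P) core
E_AREA = 1.0   # assume ~4 E-cores fit in one P-core's area
P_PERF = 1.0   # throughput of one P-core at a fixed power budget
E_PERF = 0.5   # assume one E-core gives ~half a P-core's throughput

def area(p_cores: int, e_cores: int) -> float:
    """Total die area consumed by a given core mix."""
    return p_cores * P_AREA + e_cores * E_AREA

def throughput(p_cores: int, e_cores: int) -> float:
    """Aggregate throughput of the mix on a massively parallel workload."""
    return p_cores * P_PERF + e_cores * E_PERF

# Spend the same 32 area units two different ways:
big_only = throughput(8, 0)    # 8 P-cores
hybrid   = throughput(4, 16)   # 4 P-cores + 16 E-cores

assert area(8, 0) == area(4, 16) == 32.0
print(big_only, hybrid)  # hybrid delivers more parallel throughput per area
```

Under these assumptions the hybrid layout yields 12.0 vs. 8.0 throughput units for the same silicon, which is the whole bet: give up nothing on the few latency-critical threads, and win on everything embarrassingly parallel.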

                    It'll be interesting to see how Intel executes on this strategy over the next few generations.

                    Comment


                    • #20
                      MI200 feels odd. It is a dual-GPU card, yet it is compared against the single-chip A100, which on top of that has some unique properties like the ability to split into multiple instances. Beating the already-aging, single-chip A100 by only 1.4x to at most 2.4x in applications, on a newer process node and with software support still subpar next to CUDA, is... questionable.

                      Also, I don't see any information about power draw, which is very important for this kind of deployment; for example, 9 out of the top 10 systems in the Green500 are driven by Nvidia GPUs.
                      Last edited by piotrj3; 08 November 2021, 09:04 PM.

                      Comment
