
AMD Aims For 30x Energy Efficiency Improvement For AI Training + HPC By 2025

  • AMD Aims For 30x Energy Efficiency Improvement For AI Training + HPC By 2025

    Phoronix: AMD Aims For 30x Energy Efficiency Improvement For AI Training + HPC By 2025

    AMD this morning announced a goal of increasing the energy efficiency of EPYC processors running AI training and high performance workloads by 30x... Within the next four years...

  • #2
    AMD already has their 2025 products in the pipe in some form. As you said, this isn't a "goal" so much as a hint at what's already cookin.

    I hope some of this AI goodness trickles down to consumer stuff though.

  • #3
    Well AMD had something similar not so long ago (their 25x20 initiative) and they succeeded in that, so I can see them managing the 30x efficiency improvement.

    "We set a bold goal in 2014 to deliver at least 25 times more energy efficiency by the year 2020 in our mobile processors ..."


  • #4
    One minor note: the blurb said "EPYC processors and Instinct accelerators", i.e. not just CPUs.

    The article is clear on this, so I'm only commenting for anyone who reads the forum post but not the article.

  • #5
    They're not very clear about this, but the goal seems to be 30x in 5 years: from 2020 to 2025 (the implied yearly gain is sketched below). So, that means they're probably using Vega 20 and its Rapid Packed Math as the baseline, rather than Arcturus and its Matrix Cores.

    Sad to say, this is probably the minimum they need to do to be competitive with the AI accelerators of 2025.
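
    For scale, a quick back-of-the-envelope on what 30x over five years implies per year (simple compounding off the 2020 baseline above):

    Code:
    # 30x over 5 years works out to roughly doubling efficiency every year.
    target, years = 30.0, 5
    annual = target ** (1 / years)
    print(f"implied annual efficiency gain: {annual:.2f}x")  # prints ~1.97x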

  • #6
    Typo:

    Originally posted by phoronix View Post
    Given that we are almost t0 2022,

  • #7
    I wonder why they bothered mentioning EPYC CPUs at all; CPUs aren't used for AI training. You could just put a few tensor cores in a CPU and then claim a 30x improvement for AI training, but that wouldn't be much of an achievement. A 30x energy efficiency improvement for the AI accelerator cards, on the other hand, would be.

  • #8
    Originally posted by david-nk View Post
    ... CPUs aren't used for AI training. ...
    Of course they are used too; it's just that GPUs are much better for certain AI tasks like image processing and identification, where you have large amounts of simple data. CPUs are used where the training data is complex and requires more traditional CPU processing than a simple tensor calculation.
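
    A minimal sketch of that split, with a made-up model and data (assumes PyTorch): irregular, branchy preprocessing stays on the CPU, while the dense tensor math runs on a GPU if one is present.

    Code:
    import torch
    import torch.nn as nn

    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1)).to(device)
    opt = torch.optim.SGD(model.parameters(), lr=1e-2)
    loss_fn = nn.MSELoss()

    def preprocess(batch):
        # Stand-in for the "traditional CPU processing": per-record feature
        # extraction that does not map well onto simple tensor kernels.
        return torch.stack([torch.log1p(x.abs()) for x in batch])

    for _ in range(100):
        raw = torch.randn(64, 16)                      # hypothetical raw records
        feats = preprocess(raw).to(device)             # CPU does the irregular work
        target = raw.sum(dim=1, keepdim=True).to(device)
        opt.zero_grad()
        loss = loss_fn(model(feats), target)
        loss.backward()                                # dense math on the GPU if present
        opt.step()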

  • #9
    Originally posted by StandaSK View Post
    Well AMD had something similar not so long ago (their 25x20 initiative) and they succeeded in that, so I can see them managing the 30x efficiency improvement.

    "We set a bold goal in 2014 to deliver at least 25 times more energy efficiency by the year 2020 in our mobile processors ..."

    AMD massaged the numbers to meet their 25x mobile power efficiency goal. Not that they didn't make big improvements from Kaveri to Renoir, but they leaned on idle power consumption in the calculations. The genuine performance-per-watt increase came from the CPU + GPU performance gain (a 50/50 average of Cinebench R15 multithreaded and 3DMark 11 scores) within a given 35 W TDP, which comes out to just over 5x between Kaveri and Renoir; the rest of the 25x came from how idle power was weighted in the energy calculation. I hope this new goal also gets some proper analysis.
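
    To make the methodology point concrete, here is a toy version of an efficiency ratio where "typical energy" is weighted heavily toward idle power. Every number below is a made-up placeholder, not AMD's published data; the only point is that such a metric can grow much faster than the raw ~5x performance gain.

    Code:
    def typical_energy(active_w, idle_w, active_share=0.1):
        # Idle-heavy "typical use" energy model: the machine idles most of
        # the time, so idle power dominates the weighted average.
        return active_share * active_w + (1 - active_share) * idle_w

    # Hypothetical scores, normalized so the older part = 1.0.
    perf_old, perf_new = 1.0, 5.2                            # "just over 5x" raw gain

    energy_old = typical_energy(active_w=35.0, idle_w=8.0)   # made-up idle draw
    energy_new = typical_energy(active_w=35.0, idle_w=1.5)   # much lower idle draw

    gain = (perf_new / perf_old) / (energy_new / energy_old)
    print(f"raw performance gain: {perf_new / perf_old:.1f}x")
    print(f"performance per typical energy: {gain:.1f}x")    # well beyond 5x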

    More interesting for the home user is that it looks like AMD will include an iGPU in most CPUs to act as a machine learning accelerator. Van Gogh (Steam Deck) and Rembrandt have RDNA2 graphics and roadmaps show support for "CVML". Raphael Zen 4 desktop CPUs are also expected to include an RDNA2 iGPU. Any laptop or desktop with a discrete GPU could use the iGPU solely for machine learning.
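
    If that pans out, steering ML onto the iGPU is mostly a device-selection exercise. A hypothetical sketch, assuming a ROCm-enabled PyTorch build that exposes both the iGPU and the dGPU (device order varies per system):

    Code:
    import torch

    if torch.cuda.is_available():                      # ROCm builds also use the cuda API
        for i in range(torch.cuda.device_count()):
            print(i, torch.cuda.get_device_name(i))    # inspect which index is the iGPU
        # Suppose inspection shows device 1 is the iGPU on this machine:
        ml_device = torch.device("cuda:1" if torch.cuda.device_count() > 1 else "cuda:0")
    else:
        ml_device = torch.device("cpu")

    x = torch.randn(8, 8, device=ml_device)            # ML tensors stay on that device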

  • #10
    Originally posted by sdack View Post
    CPUs are used where the training data is complex and requires more traditional CPU processing than a simple tensor calculation.
    Can you name some neural network architectures, then, that are designed to be trained primarily on a CPU instead of a GPU or TPU?
