AMD Aims For 30x Energy Efficiency Improvement For AI Training + HPC By 2025


  • muncrief
    replied
    Lord almighty, I wish people would stop saying artificial intelligence when they mean machine learning, which has absolutely nothing to do with AI at all.

    We could actually have AI now, but over five decades ago companies chose machine learning instead, because developing true AI would require understanding the way the human brain works. And that would have taken a coordinated 30-year plan among researchers and industry.

    So they went for the super crap called machine learning instead, which reached its practical limit long ago.

    That's why our "smart" devices can't even pretend to understand compound sentences, or any complex sentence, and never will.

    True AI will come, and there are a handful of incredibly brilliant researchers working on truly understanding the brain, some of them in the midst of mapping its connectome right now.

    But we're already 50 years behind because of the greed and lack of vision of both industry and far too many scientists.

  • sdack
    replied
    Originally posted by david-nk View Post
    Can you name some neural network architectures then that are designed to be primarily trained on a CPU instead of a GPU or TPU?
    You are asking a stupid question. Start by understanding AI basics before you try to understand architectures. For a first example, look at cmix, a compressor that achieves higher compression ratios than LZMA by applying AI on top of standard compression methods. It shows how AI can be applied in just about every discipline, to just about every problem, and why it is important to have AI capabilities within CPUs, too. Learn how AIs work and you will understand that it is not about size; they can yield advantages at any scale. You do not need a GPU to use AI. To give you another example, AMD uses AI logic within the branch prediction of its Zen CPUs. It is very simple and basic, yet achieves better results than what came before. A minimal sketch of the idea follows.
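    Here is a rough Python sketch of a perceptron branch predictor, the kind of lightweight AI logic meant here. AMD has publicly described Zen's predictor as perceptron-based, but the table size, history length, and threshold below are illustrative guesses, not AMD's actual parameters:

        # Minimal perceptron branch predictor sketch (Jimenez-style).
        # All sizes here are illustrative, not AMD's real parameters.
        HISTORY_LEN = 16    # bits of global branch history
        TABLE_SIZE = 1024   # number of perceptrons
        THRESHOLD = 30      # training confidence threshold

        weights = [[0] * (HISTORY_LEN + 1) for _ in range(TABLE_SIZE)]
        history = [1] * HISTORY_LEN  # +1 = taken, -1 = not taken

        def predict(pc):
            w = weights[pc % TABLE_SIZE]
            y = w[0] + sum(wi * hi for wi, hi in zip(w[1:], history))
            return y, y >= 0  # raw sum and the taken/not-taken prediction

        def train(pc, taken):
            y, predicted_taken = predict(pc)
            t = 1 if taken else -1
            # Update weights only on a misprediction or a low-confidence output.
            if predicted_taken != taken or abs(y) <= THRESHOLD:
                w = weights[pc % TABLE_SIZE]
                w[0] += t
                for i in range(HISTORY_LEN):
                    w[i + 1] += t * history[i]
            # Shift the new outcome into the global history.
            history.pop(0)
            history.append(t)

        # Example: a loop branch at a hypothetical pc, taken 9 times out of 10.
        for i in range(1000):
            train(0x400, taken=(i % 10 != 0))

    A few hundred integer weights is tiny by machine-learning standards, which is the point: the technique pays off at any scale.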
    Last edited by sdack; 29 September 2021, 04:36 PM.

  • david-nk
    replied
    Originally posted by sdack View Post
    CPUs are used where the training data is complex and requires more traditional CPU processing than a simple tensor calculation.
    Can you name some neural network architectures then that are designed to be primarily trained on a CPU instead of a GPU or TPU?

  • jaxa
    replied
    Originally posted by StandaSK View Post
    Well AMD had something similar not so long ago (their 25x20 initiative) and they succeeded in that, so I can see them managing the 30x efficiency improvement.

    "We set a bold goal in 2014 to deliver at least 25 times more energy efficiency by the year 2020 in our mobile processors ..."



    AMD massaged the numbers to meet their 25x mobile power efficiency goal. Not that they didn't make big improvements from Kaveri to Renoir, but they leaned on idle power consumption in the calculations. Most of the performance-per-watt increase came from the CPU + GPU performance gain (a 50/50 average of Cinebench R15 multithreaded and 3DMark 11) within a given 35 W TDP, which comes out to just over 5x between Kaveri and Renoir. I hope this new goal also gets some proper analysis.
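    To show the mechanics of that, here is a back-of-the-envelope sketch. The numbers are round figures I am assuming purely for illustration, not AMD's published data; the point is how a ~5x performance gain becomes a 25x headline figure when the denominator is an idle-dominated "typical energy use":

        # How ~5x performance can become "25x efficiency" when the metric is
        # performance / typical energy use, and typical energy use is
        # dominated by idle power. All numbers below are made up.
        perf_gain = 5.0        # ~5x CPU+GPU gain at 35 W TDP (per the post above)
        idle_w_2014 = 10.0     # assumed Kaveri-era idle power (W)
        idle_w_2020 = 2.0      # assumed Renoir-era idle power (W)

        efficiency_gain = perf_gain * (idle_w_2014 / idle_w_2020)
        print(efficiency_gain)  # 25.0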

    More interesting for the home user is that it looks like AMD will include an iGPU in most CPUs to act as a machine learning accelerator. Van Gogh (Steam Deck) and Rembrandt have RDNA2 graphics and roadmaps show support for "CVML". Raphael Zen 4 desktop CPUs are also expected to include an RDNA2 iGPU. Any laptop or desktop with a discrete GPU could use the iGPU solely for machine learning.

  • sdack
    replied
    Originally posted by david-nk View Post
    ... CPUs aren't used for AI training. ...
    Of course they are used, too. It is just that GPUs are much better for certain AI tasks, like image processing and identification, where you have large amounts of simple data. CPUs are used where the training data is complex and requires more traditional CPU processing than a simple tensor calculation.

  • david-nk
    replied
    I wonder why they bothered mentioning EPYC CPUs at all; CPUs aren't used for AI training. You could just put a few tensor cores in a CPU and then claim a 30x improvement for AI training, but that wouldn't be much of an achievement. A 30x energy efficiency improvement for the AI accelerator cards, however, would be.

  • tildearrow
    replied
    Typo:

    Originally posted by phoronix View Post
    Given that we are almost t0 2022,

  • coder
    replied
    They're not very clear about this, but the goal seems to be 30x in 5 years: from 2020 to 2025. So, that means they're probably using Vega 20 and its Rapid Packed Math as the baseline, rather than Arcturus and its Matrix Cores.

    Sad to say, this is probably the minimum they need to do to be competitive with the AI accelerators of 2025.
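    For scale, 30x over the five years from 2020 to 2025 works out to roughly a doubling of efficiency every year:

        # Compound annual improvement implied by 30x over 5 years.
        rate = 30 ** (1 / 5)
        print(round(rate, 2))  # ~1.97, i.e. about 2x per year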

  • bridgman
    replied
    One minor note: the blurb said "EPYC processors and Instinct accelerators", i.e., not just CPUs.

    The article is clear on this, so I'm only commenting for anyone who reads the forum post but not the article.

  • StandaSK
    replied
    Well AMD had something similar not so long ago (their 25x20 initiative) and they succeeded in that, so I can see them managing the 30x efficiency improvement.

    "We set a bold goal in 2014 to deliver at least 25 times more energy efficiency by the year 2020 in our mobile processors ..."

