Lord almighty, I wish people would stop saying artificial intelligence when they mean machine learning, which has absolutely nothing to do with AI at all.
We could have had true AI by now, but over five decades ago companies chose machine learning instead, because developing true AI would have required understanding how the human brain works, and that would have taken a coordinated 30-year plan among researchers and industry.
So they went for the super crap called machine learning instead, which reached its practical limit long ago.
That's why our "smart" devices can't even pretend to understand compound sentences, or any complex sentence, and never will.
True AI will come, and there are a handful of incredibly brilliant researchers working on truly understanding the brain, some of them in the midst of mapping its connectome right now.
But we're already 50 years behind because of the greed and lack of vision of both industry and far too many scientists.
AMD Aims For 30x Energy Efficiency Improvement For AI Training + HPC By 2025
Originally posted by david-nk: Can you name some neural network architectures then that are designed to be primarily trained on a CPU instead of a GPU or TPU?
Originally posted by sdack: CPUs are used where the training data is complex and requires more traditional CPU processing than a simple tensor calculation.
Originally posted by StandaSK: Well, AMD had something similar not so long ago (their 25x20 initiative) and they succeeded in that, so I can see them managing the 30x efficiency improvement.
"We set a bold goal in 2014 to deliver at least 25 times more energy efficiency by the year 2020 in our mobile processors ..."
AMD massaged the numbers to meet their 25x mobile power efficiency goal. Not that they didn't make big improvements from Kaveri to Renoir, but they leaned on idle power consumption in the calculations. Most of the performance-per-watt increase was in the CPU + GPU performance gain (50/50 average of Cinebench R15 multithreaded and 3D Mark 11) within a given 35 W TDP, which comes out to just over 5x between Kaveri and Renoir. I hope this new goal also gets some proper analysis.
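To make that point concrete, here is a small Python sketch of how a duty-cycle-weighted "typical use" energy metric can inflate a headline efficiency multiplier well past the raw performance-per-watt gain once idle power drops. All numbers below are illustrative assumptions, not AMD's actual figures or methodology.

```python
# Sketch: perf-per-watt gain vs a "typical use" (mostly idle) energy metric.
# All wattages, scores, and duty cycles here are made-up illustrative values.

def perf_per_watt_gain(perf_old, perf_new, tdp_old, tdp_new):
    """Ratio of performance per watt at full load (e.g. within a fixed TDP)."""
    return (perf_new / tdp_new) / (perf_old / tdp_old)

def typical_use_efficiency_gain(perf_old, perf_new,
                                active_w_old, active_w_new,
                                idle_w_old, idle_w_new,
                                idle_fraction=0.9):
    """Performance divided by a duty-cycle-weighted average power,
    assuming the machine sits idle most of the time."""
    avg_old = idle_fraction * idle_w_old + (1 - idle_fraction) * active_w_old
    avg_new = idle_fraction * idle_w_new + (1 - idle_fraction) * active_w_new
    return (perf_new / avg_new) / (perf_old / avg_old)

# Same 35 W TDP, 5x benchmark score gain -> exactly 5x perf/W:
print(perf_per_watt_gain(100, 500, 35, 35))  # 5.0

# With idle power cut from a hypothetical 8 W to 1.5 W and a 90% idle
# duty cycle, the "typical use" multiplier lands around 11x, not 5x:
print(round(typical_use_efficiency_gain(100, 500, 35, 35, 8.0, 1.5), 1))
```

The sketch shows why the weighting matters: the more idle-heavy the assumed duty cycle, the more an idle-power reduction dominates the reported multiplier.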
More interesting for the home user is that it looks like AMD will include an iGPU in most CPUs to act as a machine learning accelerator. Van Gogh (Steam Deck) and Rembrandt have RDNA2 graphics and roadmaps show support for "CVML". Raphael Zen 4 desktop CPUs are also expected to include an RDNA2 iGPU. Any laptop or desktop with a discrete GPU could use the iGPU solely for machine learning.
Originally posted by david-nk: ... CPUs aren't used for AI training. ...
I wonder why they bothered mentioning EPYC CPUs at all; CPUs aren't used for AI training. You could just put a few tensor cores in a CPU and then claim a 30x improvement for AI training, but that wouldn't be much of an achievement. A 30x energy efficiency improvement for the AI accelerator cards, on the other hand, would be.
They're not very clear about this, but the goal seems to be 30x in 5 years: from 2020 to 2025. So, that means they're probably using Vega 20 and its Rapid Packed Math as the baseline, rather than Arcturus and its Matrix Cores.
Sad to say, this is probably the minimum they need to do to be competitive with the AI accelerators of 2025.
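If the window really is 2020 to 2025, the implied pace is easy to work out. This bit of arithmetic is my own illustration, not from the article:

```python
# A 30x gain over a 5-year window implies a compound annual improvement
# of 30^(1/5), i.e. efficiency must nearly double every year.
target_gain = 30.0
years = 5
annual_rate = target_gain ** (1 / years)
print(f"required annual improvement: {annual_rate:.2f}x")  # ~1.97x
```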
One minor note: the blurb said "EPYC processors and Instinct accelerators", i.e. not just CPUs.
The article is clear on this, so I'm only commenting for anyone who reads the forum post but not the article.
Well, AMD had something similar not so long ago (their 25x20 initiative) and they succeeded in that, so I can see them managing the 30x efficiency improvement.
"We set a bold goal in 2014 to deliver at least 25 times more energy efficiency by the year 2020 in our mobile processors ..."