AMD Aims For 30x Energy Efficiency Improvement For AI Training + HPC By 2025


  • alcalde
    replied
Originally posted by sdack View Post
It is stupid because you are not familiar with the basics.
    You just keep accusing people you don't know of not knowing things.

Originally posted by sdack View Post
If you did understand how even the simplest AI logic can bring improvements then you would not be talking here about deep learning.
You sound like one of those people who have worked out their own alternative to relativity theory or something. This has nothing to do with "the simplest AI logic [bringing] improvements"; it has to do with the massive amount of computing power being expended to train ever-larger neural networks (such as GPT-3). It has to do with a world in which Google's DeepMind says all it needs is massive reinforcement learning to reach general AI.


Originally posted by sdack View Post
Deep learning is driven by marketing from Nvidia, who want you to buy their hardware and not to use CPUs for it. Nvidia is making an aggressive push to get ahead in the business by integrating it into their GPUs. This is alright and it has opened up some interesting new fields, but you are now falling for the hype and think it would require lots of silicon before one can do anything with it. Neural networks, however, or what people now call AIs, have been studied and used for more than 30 years now.
Deep learning is not driven by "marketing" from Nvidia or anyone else. It's driven by wild successes like self-driving vehicles and image recognition that at times has surpassed humans (one deep learning network achieved slightly better results on street-sign recognition than human test subjects). I know the history of neural networks; 30 years ago I was coding backpropagation neural networks, along with BAM (Bidirectional Associative Memory) networks and Kohonen self-organizing feature maps, in Turbo Pascal for DOS, plotting the results with Lotus 1-2-3 for DOS. But keep telling me I don't know the simplest things about what we're talking about.
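For reference, the kind of network being described fits in a few lines today. A minimal sketch of a backpropagation network learning XOR, the classic demonstration problem from that era (illustrative only; the hyperparameters are arbitrary choices, not any particular historical implementation):

```python
import numpy as np

# A 2-8-1 network trained with plain batch backpropagation on XOR.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0.0, 1.0, (3, 8))   # input (+bias) -> hidden weights
W2 = rng.normal(0.0, 1.0, (9, 1))   # hidden (+bias) -> output weights

def add_bias(a):
    # Append a constant-1 column so each layer gets a bias term.
    return np.hstack([a, np.ones((a.shape[0], 1))])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):
    Xb = add_bias(X)
    h = sigmoid(Xb @ W1)                     # hidden activations
    hb = add_bias(h)
    out = sigmoid(hb @ W2)                   # network output
    d_out = (out - y) * out * (1 - out)      # output-layer delta
    d_h = (d_out @ W2[:-1].T) * h * (1 - h)  # hidden-layer delta (backprop)
    W2 -= hb.T @ d_out                       # gradient descent, step size 1
    W1 -= Xb.T @ d_h

print(np.round(out.ravel(), 2))              # typically close to [0 1 1 0]
```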

Originally posted by sdack View Post
I suggest you get into it and understand how it works and how simple it can be before you want to talk about deep learning.
    Maybe you should email Geoff Hinton and tell him that he was crazy to publish his advances that led to deep learning and he should have stuck to his work on backpropagation.

    https://en.wikipedia.org/wiki/Geoffrey_Hinton



  • sdack
    replied
    Originally posted by alcalde View Post
    It's not a stupid question, you're making an extraordinary claim. Today AI is all about deep learning ...
It is stupid because you are not familiar with the basics. This is still a website for IT professionals, and talking about things one has not learned the basics of is stupid. If you did understand how even the simplest AI logic can bring improvements, then you would not be talking here about deep learning.

    Deep learning is driven by marketing from Nvidia, who want you to buy their hardware and not to use CPUs for it. Nvidia is making an aggressive push to get ahead in the business by integrating it into their GPUs. This is alright and it has opened up some interesting new fields, but you are now falling for the hype and think it would require lots of silicon before one can do anything with it. Neural networks, however, or what people now call AIs, have been studied and used for more than 30 years now.

We have seen what happens when people do not understand the basics and act stupid: they use AI face recognition that was trained on a few hundred thousand faces and apply it to millions of people.
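The scale problem described here is base-rate arithmetic: even an accurate matcher drowns in false positives when the target group is a tiny fraction of the population. With purely hypothetical numbers (a matcher with 99% sensitivity and a 1% false-positive rate; all figures invented for illustration):

```python
# Hypothetical figures, purely illustrative: a face matcher applied to
# a population of 10 million that contains 100 genuinely wanted people.
population = 10_000_000
targets = 100
false_positive_rate = 0.01   # 99% specificity
true_positive_rate = 0.99    # 99% sensitivity

false_alarms = (population - targets) * false_positive_rate
true_hits = targets * true_positive_rate
precision = true_hits / (true_hits + false_alarms)

print(f"false alarms: {false_alarms:,.0f}")   # ~100,000 innocent matches
print(f"precision:    {precision:.4%}")       # under 0.1% of matches are real
```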

    I suggest you get into it and understand how it works and how simple it can be before you want to talk about deep learning.



  • Slartifartblast
    replied
Going down to 3nm, as their roadmap calls for by then, would certainly help.



  • Sadhu
    replied
    Originally posted by david-nk View Post
    Can you name some neural network architectures then that are designed to be primarily trained on a CPU instead of a GPU or TPU?
    SLIDE - https://www.cs.rice.edu/~as143/Papers/SLIDE_MLSys.pdf

    Abstract:
    "Deep Learning (DL) algorithms are the central focus of modern machine learning systems. As data volumes keep growing, it has become customary to train large neural networks with hundreds of millions of parameters to maintain enough capacity to memorize these volumes and obtain state-of-the-art accuracy. To get around the costly computations associated with large models and data, the community is increasingly investing in specialized hardware for model training. However, specialized hardware is expensive and hard to generalize to a multitude of tasks. The progress on the algorithmic front has failed to demonstrate a direct advantage over powerful hardware such as NVIDIA-V100 GPUs. This paper provides an exception. We propose SLIDE (Sub-LInear Deep learning Engine) that uniquely blends smart randomized algorithms, with multi-core parallelism and workload optimization. Using just a CPU, SLIDE drastically reduces the computations during both training and inference outperforming an optimized implementation of Tensorflow (TF) on the best available GPU. Our evaluations on industry-scale recommendation datasets, with large fully connected architectures, show that training with SLIDE on a 44 core CPU is more than 3.5 times (1 hour vs. 3.5 hours) faster than the same network trained using TF on Tesla V100 at any given accuracy level. On the same CPU hardware, SLIDE is over 10x faster than TF. We provide codes and scripts for reproducibility"
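The core idea behind SLIDE's "smart randomized algorithms" is locality-sensitive hashing: hash the input and evaluate only the neurons whose weight vectors fall in the same bucket, instead of the whole layer. A toy sketch of that idea using random-hyperplane (SimHash) hashing (illustrative only, not the paper's actual engine):

```python
import numpy as np

rng = np.random.default_rng(1)
d, n_neurons, n_bits = 64, 10_000, 12
W = rng.normal(size=(n_neurons, d))      # one weight vector per neuron
planes = rng.normal(size=(n_bits, d))    # random hyperplanes for SimHash

def simhash(v):
    # Sign pattern against the hyperplanes, packed into an integer bucket id.
    bits = (planes @ v) > 0
    return int(bits @ (1 << np.arange(n_bits)))

# Pre-bucket every neuron by the hash of its weight vector (done once).
buckets = {}
for i in range(n_neurons):
    buckets.setdefault(simhash(W[i]), []).append(i)

x = rng.normal(size=d)
# Colliding neurons are those whose weights point in a similar direction,
# i.e. the ones likely to fire strongly (may be empty in this toy version).
active = buckets.get(simhash(x), [])
out = W[active] @ x                      # compute only those activations

print(f"evaluated {len(active)} of {n_neurons} neurons")
```

This is why the approach favors CPUs: the savings come from hash lookups and sparse memory access rather than dense matrix throughput.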



  • pegasus
    replied
    I'll just leave this here:
    https://www.youtube.com/watch?v=jN9L7TpMxeA



  • arQon
    replied
All I want is ARM's sane variable-length arrays of variable-sized data. It's the only sensible approach to this, and it works brilliantly. But instead we get Intel's AVX clusterf**k, and apparently AMD is willing to keep following that shitty, braindead design and as a result always be years behind in delivering it to market. I would love to understand how such a competent team can so relentlessly drop the ball in this area.
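Assuming "variable-length arrays" refers to ARM's SVE (my reading of the post), the appeal is that code is vector-length-agnostic: the width is discovered at run time and a predicate mask covers the tail, so one binary serves every hardware width. A rough Python analogue of that loop structure (real SVE is programmed through C intrinsics such as svwhilelt and svcntw; this only mimics the shape of the loop):

```python
import numpy as np

def runtime_vector_length():
    return 8    # stand-in for the width the hardware reports (svcntw in SVE)

def add_arrays(a, b):
    # Vector-length-agnostic loop: no separate scalar tail loop is needed,
    # because the predicate masks off the lanes past the end of the data.
    n, vl = len(a), runtime_vector_length()
    out = np.empty_like(a)
    for i in range(0, n, vl):
        lanes = np.arange(i, i + vl)
        pred = lanes < n                  # the whilelt(i, n) predicate
        idx = lanes[pred]
        out[idx] = a[idx] + b[idx]        # predicated vector add
    return out

a = np.arange(11.0)                       # 11 is not a multiple of the width
print(add_arrays(a, 10.0 * a))
```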



  • alcalde
    replied
    Originally posted by sdack View Post
    You are asking a stupid question. Start by understanding AI basics before you try to understand architectures. When you need a first example then look at cmix, which is a compressor that achieves higher compression rates than LZMA by applying AI on top of standard compression methods. It shows you how AIs can be used in about every discipline, in about every problem, and why it is important to have AI capabilities within CPUs, too. Learn how AIs work and you will understand how it is not about size, but that these can yield advantages on any scale. It does not need first a GPU to use AIs. To give you another example, so is AMD using an AI logic within the branch prediction of their Zen CPUs. It is very simple and basic, and yet achieves better results than before.
It's not a stupid question; you're making an extraordinary claim. Today AI is all about deep learning, and that requires GPU acceleration for decent performance. Hence, the OP is right: no one's really concerned about efficiency improvements for CPUs doing AI training, because CPUs aren't doing the AI training today (unless you're like me and AMD decides to drop support for your graphics card in their ROCm stack right after you buy it, and it would be cheaper to buy a new car right now than to get a new graphics card).
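The branch-prediction "AI logic" mentioned in the quoted post is widely reported to be a perceptron predictor in Zen; AMD's actual design is proprietary, so the following is only a toy sketch of the published Jimenez & Lin scheme, not AMD's implementation:

```python
# Toy perceptron branch predictor in the style of Jimenez & Lin (2001):
# a tiny linear model over recent branch outcomes, trained online.
HISTORY = 8
THRESHOLD = 29                      # floor(1.93 * HISTORY + 14), per the paper

weights = [0] * (HISTORY + 1)       # weights[0] is the bias weight
history = [1] * HISTORY             # +1 = taken, -1 = not taken

def predict():
    s = weights[0] + sum(w * h for w, h in zip(weights[1:], history))
    return s, s >= 0                # predict "taken" when the sum is >= 0

def update(outcome):                # outcome: +1 taken, -1 not taken
    s, taken = predict()
    mispredicted = taken != (outcome > 0)
    if mispredicted or abs(s) <= THRESHOLD:
        weights[0] += outcome       # online perceptron training step
        for i in range(HISTORY):
            weights[i + 1] += outcome * history[i]
    history.append(outcome)         # shift the outcome into the history
    history.pop(0)

# A strongly biased (always-taken) branch is learned almost immediately:
hits = 0
for _ in range(100):
    _, taken = predict()
    hits += int(taken)              # the branch is in fact always taken
    update(+1)
print(f"{hits}/100 correct on an always-taken branch")
```

The point of the example is sdack's: this is "AI" at a scale of a few dozen adders, nowhere near deep learning's silicon budget.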



  • jaxa
    replied
    Originally posted by pal666 View Post
    i hope it wouldn't osborne current hardware
    One man's i7-3770 is another man's treasure.



  • pal666
    replied
    i hope it wouldn't osborne current hardware



  • Teggs
    replied
    Originally posted by brucethemoose View Post
    AMD already has their 2025 products in the pipe in some form. As you said, this isn't a "goal" so much as a hint at what's already cookin.
    I would say that it is a 'goal' if some necessary pieces haven't been locked down yet. That would be the microcode/firmware/userspace programs of 2025 if nothing else. Unless they already meet those numbers with prototype silicon and software, I don't have a problem with allowing them a bit of theatre.

