
AMD Aims For 30x Energy Efficiency Improvement For AI Training + HPC By 2025


  • #21
    Originally posted by alcalde View Post
    It's not a stupid question, you're making an extraordinary claim. Today AI is all about deep learning ...
    It is stupid because you are not familiar with the basics. This is still a website for IT professionals, and talking about things one has not learned the basics of is stupid. If you understood how even the simplest AI logic can bring improvements, you would not be talking about deep learning here.

    Deep learning is driven by marketing from Nvidia, who want you to buy their hardware and not use CPUs for it. Nvidia is making an aggressive push to get ahead in the business by integrating it into their GPUs. This is alright, and it has opened up some interesting new fields, but you are falling for the hype and thinking it requires lots of silicon before one can do anything with it. Neural networks, however, or what people now call AIs, have been studied and used for more than 30 years.

    We have also seen what happens when people do not understand the basics and act stupid. They take AI face recognition that was trained on a few hundred thousand faces and apply it to millions of people.

    I suggest you get into it and understand how it works and how simple it can be before you talk about deep learning.



    • #22
      It is stupid because you are not familiar with the basics.
      You just keep accusing people you don't know of not knowing things.

      If you understood how even the simplest AI logic can bring improvements, you would not be talking about deep learning here.
      You sound like one of those people who have worked out their own alternative to relativity theory or something. This has nothing to do with "the simplest AI logic [bringing] improvements"; this has to do with the massive amount of computing power being expended to train larger and larger neural networks (such as GPT-3). This has to do with a world in which Google's DeepMind says all it needs is massive reinforcement learning to reach general AI.


      Deep learning is driven by marketing from Nvidia, who want you to buy their hardware and not use CPUs for it. Nvidia is making an aggressive push to get ahead in the business by integrating it into their GPUs. This is alright, and it has opened up some interesting new fields, but you are falling for the hype and thinking it requires lots of silicon before one can do anything with it. Neural networks, however, or what people now call AIs, have been studied and used for more than 30 years.
      Deep learning is not driven by "marketing" from Nvidia or anyone else. It's driven by wild successes like self-driving vehicles and image recognition that at times has surpassed humans (one deep learning network achieved slightly better results on street sign recognition than human test subjects). I know the history of neural networks; 30 years ago I was coding backpropagation neural networks, along with BAM (Bidirectional Associative Memory) networks and Kohonen self-organizing feature maps, in Turbo Pascal for DOS and plotting the results with Lotus 1-2-3 for DOS. But keep telling me I don't know the simplest things about what we're talking about.

      I suggest you get into it and understand how it works and how simple it can be before you talk about deep learning.
      Maybe you should email Geoff Hinton and tell him that he was crazy to publish his advances that led to deep learning and he should have stuck to his work on backpropagation.



      • #23
        Originally posted by alcalde View Post
        You just keep accusing people you don't know of not knowing things. ...

        You sound like ...

        Maybe you should ...
        If you think you are smarter, then prove it by writing less stupid comments. Until then, nobody will care who you are, and both AMD and Intel will improve their hardware whether Mr. Stupid agrees or not.



        • #24
          Originally posted by muncrief View Post
          Lord almighty, I wish people would stop saying artificial intelligence when they mean machine learning, which has absolutely nothing to do with AI at all.

          We could actually have AI now, but over five decades ago companies chose machine learning instead because developing true AI would require understanding the way the human brain works. And that would have taken a coordinated 30-year plan among researchers and industry.

          So they went for the super crap called machine learning instead, which reached its practical limit long ago.

          That's why our "smart" devices can't even pretend to understand compound sentences, or any complex sentence, and never will.

          True AI will come, and there are a handful of incredibly brilliant researchers working on truly understanding the brain, some even in the midst of creating its connectome right now.

          But we're already 50 years behind because of the greed and lack of vision of both industry and far too many scientists.
          There is no such thing as AI. They went to "machine learning" because AI is not possible. Only living creatures act without programming, and they do that because they have an inherent goal to work towards: survival and propagation. Computers and code do not have needs, so the only thing they will do is what we have programmed them to do. There can be no creativity or novelty when everything happens under whatever coding constraints and considerations the programmer made. Anything the programmer has not anticipated will not be addressed.

          Computers can never do what humans do. Ever try creating a parser generator or a constraint generator? The problem with things like property-based testing is that I can tell a computer to generate all sorts of valid/invalid input, but it has no way of knowing what interesting edge cases are supposed to look like. We (programmers) are all well aware of 0, the end of a range, the highest value of an int, etc., and why those tend to be the most interesting things to test for, but there is no such thing as "interesting" to a computer. It has no interests. It cannot be "surprised".
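
          To make that concrete, here is a rough sketch in plain Python (the clamp() function and the hand-rolled generator are hypothetical, not any particular testing library): the machine happily spits out random inputs, but every "interesting" boundary value in the list is there because a human put it there.

          import random
          import sys

          def clamp(x, lo, hi):
              """Hypothetical function under test: clip x into [lo, hi]."""
              return max(lo, min(x, hi))

          # The machine can generate arbitrary integers all day long...
          random_cases = [random.randint(-2**31, 2**31 - 1) for _ in range(1000)]

          # ...but the *interesting* values are supplied by a human who knows
          # where code tends to break: zero, the ends of the range, integer limits.
          edge_cases = [0, -1, 1, 10, 20, 2**31 - 1, -2**31, sys.maxsize]

          for x in random_cases + edge_cases:
              result = clamp(x, 10, 20)
              # The property under test: the result always lies inside [10, 20].
              assert 10 <= result <= 20, f"property violated for input {x}"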



          • #25
            Originally posted by sdack View Post
            If you think you are smarter, then prove it by writing less stupid comments. Until then, nobody will care who you are, and both AMD and Intel will improve their hardware whether Mr. Stupid agrees or not.
            So your only contribution is to start calling people "Mr. Stupid"? Marvin Minsky and Seymour Papert derailed neural network research (and funding) in favor of their classical rule-based AI by publishing a paper pointing out several examples the perceptron couldn't solve (such as XOR). Hinton and company returned neural networks to glory by championing the backpropagation algorithm, showing how it solved Minsky's examples, and so on. Neural networks fell out of favor yet again, then Hinton and collaborators achieved another comeback by showing how the technique now called "deep learning" can lead a neural network to form abstract generalizations and achieve orders of magnitude better performance.
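
            For anyone curious what the XOR fuss was about, here is a minimal NumPy sketch (purely illustrative, not anyone's production code): a single-layer perceptron cannot represent XOR, but one hidden layer trained with plain backpropagation learns it in a few thousand updates.

            import numpy as np

            # XOR truth table: not linearly separable, so a single-layer
            # perceptron cannot represent it, but one hidden layer can.
            X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
            y = np.array([[0], [1], [1], [0]], dtype=float)

            rng = np.random.default_rng(0)
            W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))  # input -> hidden
            W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))  # hidden -> output

            def sigmoid(z):
                return 1.0 / (1.0 + np.exp(-z))

            lr = 0.5
            for step in range(10000):
                # Forward pass.
                h = sigmoid(X @ W1 + b1)
                out = sigmoid(h @ W2 + b2)

                # Backward pass: gradient of the squared error through both layers.
                d_out = (out - y) * out * (1 - out)
                d_h = (d_out @ W2.T) * h * (1 - h)

                # Gradient-descent updates.
                W2 -= lr * h.T @ d_out
                b2 -= lr * d_out.sum(axis=0, keepdims=True)
                W1 -= lr * X.T @ d_h
                b1 -= lr * d_h.sum(axis=0, keepdims=True)

            print(out.round(3))  # converges toward [0, 1, 1, 0]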

            That's not a "stupid comment"; that's history. And it has nothing to do with marketing by Nvidia (or anyone else). Deep learning has set off an explosion of applications such as advanced image recognition (where the deep learning you deride has achieved breakthrough benchmark results), self-driving vehicles, real-time voice translation, etc. One of my favorite examples:
            [embedded example]

            All of this requires significant processing power, so any energy efficiency gains AMD may achieve would be a significant advantage.

            Meanwhile, no one knows what you're on about, except that you seem to be suggesting that 1980s-era AI is all we really need and that deep learning is some kind of fad, with no evidence to back up these assertions (which are not even relevant to the topic of the article). And when people challenge you on this, you deride them, telling them they don't know anything and now giving them insulting nicknames. You're not contributing anything of use to this conversation.



            • #26
              Only living creatures act without programming
              Are you so sure living creatures act without programming? How do spiders know how to spin webs?


              Computers can never do what humans do.
              You're wrong about this; they do it all the time. They can create music, create art, drive cars, and outfight veteran combat pilots in simulations. They can now diagnose pneumonia better than radiologists and interpret traffic signs better than German drivers.

              We (programmers) are all well aware of 0, the end of a range, the highest value of an int, etc., and why those tend to be the most interesting things to test for
              Which you learned, just as an AI can.

              but there is no such thing as "interesting" to a computer. It has no interests. It cannot be "surprised".
              And yet AI has been trained to perform all sorts of anomaly detection.
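
              A minimal sketch of what I mean, assuming scikit-learn and made-up sensor readings (purely illustrative): an isolation forest is only ever shown ordinary data, nobody enumerates the anomalies for it, and it still flags the points that don't fit.

              import numpy as np
              from sklearn.ensemble import IsolationForest

              rng = np.random.default_rng(42)

              # Made-up "normal" behaviour: two sensor readings per sample.
              normal = rng.normal(loc=0.0, scale=1.0, size=(1000, 2))

              # Fit on ordinary data only; no anomaly is ever labelled for it.
              detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

              # New observations: a few ordinary points plus two that look nothing
              # like the training data.
              new = np.vstack([rng.normal(size=(5, 2)), [[8.0, -7.5], [12.0, 11.0]]])
              print(detector.predict(new))  # 1 = looks normal, -1 = flagged as anomalous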




              • #27
                Probably something to do with Xilinx Versal ACAP and Vitis.

                Sources:

                https://www.xilinx.com/products/sili...ap/versal.html - "AI Core Series - Delivers breakthrough AI inference and wireless acceleration with AI Engines that deliver over 100X greater compute performance than today’s server-class CPUs."

                https://www.xilinx.com/products/desi.../vitis-ai.html - Up to 50x reduction of model complexity.

                An example of a CPU with builtin AI acceleration is the IBM Telum z16: https://www.nextplatform.com/2021/08...with-big-iron/
                Last edited by JustRob; 30 September 2021, 11:58 PM. Reason: Added example CPU.



                • #28
                  Originally posted by david-nk View Post
                  Can you name some neural network architectures then that are designed to be primarily trained on a CPU instead of a GPU or TPU?
                  GPUs traditionally don't handle sparsity well, although I'm aware Nvidia claims more recent iterations have some hardware support for it. The main example I'd cite, however, is spiking neural networks. I'm aware of at least one startup that built an ASIC for spiking neural networks precisely because they perform poorly on GPUs; I think CPUs aren't at quite such a disadvantage there.
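
                  For context, here is roughly what a spiking neuron does, as a toy leaky integrate-and-fire model in plain NumPy (illustrative constants, not tied to any particular SNN framework). Most time steps carry no spike at all, which is exactly the sparse, event-driven behaviour that dense GPU matrix math doesn't exploit well.

                  import numpy as np

                  # Toy leaky integrate-and-fire (LIF) neuron, simulated in 1 ms steps.
                  tau_m = 20.0     # membrane time constant (ms)
                  v_rest = 0.0     # resting potential
                  v_thresh = 1.0   # firing threshold
                  v_reset = 0.0    # potential after a spike
                  dt = 1.0         # time step (ms)
                  weight = 0.6     # contribution of one incoming spike

                  rng = np.random.default_rng(1)
                  # Sparse input: a spike arrives on roughly 5% of the time steps.
                  input_spikes = rng.random(200) < 0.05

                  v = v_rest
                  output_spikes = 0
                  for spike_in in input_spikes:
                      # Leak toward rest, then integrate any incoming spike.
                      v += dt * (v_rest - v) / tau_m
                      if spike_in:
                          v += weight
                      # Fire and reset when the threshold is crossed.
                      if v >= v_thresh:
                          output_spikes += 1
                          v = v_reset

                  print(f"{output_spikes} output spikes from {int(input_spikes.sum())} input spikes")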



                  • #29
                    Originally posted by muncrief View Post
                    That's why our "smart" devices can't even pretend to understand compound sentences, or any complex sentence, and never will.



                    • #30
                      Originally posted by alcalde View Post

                      Are you so sure living creatures act without programming? How do spiders know how to spin webs?
                      Yes. Do spiders need to be taught by some external actor how to spin webs? Or what webs even are? No, they just do it.

                      Originally posted by alcalde
                      You're wrong about this; they do it all the time. They can create music, create art, drive cars, and outfight veteran combat pilots in simulations. They can now diagnose pneumonia better than radiologists and interpret traffic signs better than German drivers.
                      No, they don't. They are following a script written by a human, or pattern matching based on criteria established by a human. Have you ever seen a computer spontaneously do something it was not programmed to do?

                      Originally posted by alcalde
                      Which you learned, just as an AI can.
                      Nobody had to teach me, though. An AI cannot know what is "interesting" unless you specifically tell it what to look for.


                      Originally posted by alcalde
                      And yet AI has been trained to perform all sorts of anomaly detection.
                      And how did it determine what an anomaly was? Some human gave it parameters to look for and then indicated whether a chosen result was a successful match or not. You are dealing with bits at the bottom layer, and to a computer, all bits are the same. It is only we humans who assign meaning to particular patterns of bits, which then become things like audio files, text files, and video files. To a computer, there is no meaningful distinction between this sequence of bits and that one. They may be different sequences, but they are still bits.

