AMD Aims For 30x Energy Efficiency Improvement For AI Training + HPC By 2025

  • coder
    Senior Member
    replied
    Originally posted by alcalde View Post
    It turned out that it was using the effect of interference
    This effect also happened to be temperature- or humidity-dependent, IIRC. I think I read that the circuit no longer worked properly the next day.

    You'd need to train the circuit by running it on more than one FPGA chip, in a variety of environmental conditions, to impose selective pressure against such cheats. Still, if a similar cheat can be made robust, then maybe it's not really a cheat.

    Originally posted by alcalde View Post
    In another case, an AI hid information to "cheat" at identifying pictures....
    Not exactly. What it hid was information used to reconstruct the original image. However, it wasn't cheating at all. It just found a more efficient way to do what was asked of it. It's the developers' fault for the way they structured the problem & training.

    The same problem can exist with humans or even companies. If you get their incentives wrong, you're probably going to get a nasty surprise when they find short-cuts involving compromises that are unacceptable to their bosses or to society.


  • alcalde
    Senior Member
    replied
    Originally posted by krzyzowiec View Post
    Yes. Do spiders need to be taught by some external actor how to spin webs? Or what webs even are? No, they just do it.
    That's my point... it's programmed into their DNA, otherwise called "instinct".
    Originally posted by krzyzowiec View Post
    No they don't. They are following a script written by a human
    No, deep learning and similar network techniques are not "following a script written by a human". When you train a network you present examples and the correct values and it's the network itself that learns how to determine those values. Deep learning networks are capable of learning higher-level concepts to do this.

    Originally posted by krzyzowiec View Post
    Have you ever seen a computer spontaneously do something that it was not programmed to do?
    I built a neural network decades ago that looked at patterns that represented the letters "T" and "C". I didn't write an algorithm about how to tell one from the other; the network devised its own method to do so. More importantly, when I dropped a pixel or two from test patterns or introduced noise it was still able to make the correct determinations; a mechanical algorithm would not have. More impressively, I only gave it training examples of the letters facing left, right or up. After training, when I showed it images of "C"s and "T"s that were upside down, it was still able to determine which letters they were.
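
    To make that concrete, here is a toy sketch of the same idea. Everything in it is my illustration, not the original decades-old experiment: the 3x3 patterns, the network size, and the training settings are all assumptions.

    Code:
    # Toy sketch: a tiny network learns to tell "T" from "C" patterns
    # without any hand-written rule for distinguishing them.
    import numpy as np

    rng = np.random.default_rng(0)

    T = np.array([[1, 1, 1],
                  [0, 1, 0],
                  [0, 1, 0]], float).ravel()       # "T" shape
    C = np.array([[1, 1, 1],
                  [1, 0, 0],
                  [1, 1, 1]], float).ravel()       # "C" shape
    X = np.stack([T, C])
    y = np.array([[1.0], [0.0]])                   # 1 = "T", 0 = "C"

    W1 = rng.normal(0.0, 0.5, (9, 6))              # input -> hidden
    W2 = rng.normal(0.0, 0.5, (6, 1))              # hidden -> output
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    for _ in range(5000):                          # plain backprop
        h = sigmoid(X @ W1)
        out = sigmoid(h @ W2)
        d_out = (out - y) * out * (1.0 - out)
        d_h = (d_out @ W2.T) * h * (1.0 - h)
        W2 -= 0.5 * (h.T @ d_out)
        W1 -= 0.5 * (X.T @ d_h)

    noisy_T = T.copy()
    noisy_T[8] = 1.0                               # corrupt one pixel
    print(sigmoid(sigmoid(noisy_T @ W1) @ W2))     # still near 1 ("T")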

    I've seen programs that evolved art - pictures or music. There are Generative Adversarial Networks that can produce photo-realistic images of people that have never existed, after having learned what people look like.

    There was a program that evolved FPGA circuit patterns that eventually produced a design that worked, but seemed to use an impossibly small design. There were parts that weren't even connected, but removing them kept the circuit from working! It turned out that it was using the effect of interference - electricity in one wire creating a radio wave that was picked up by another when they're close together - to send signals from one block to another, in a strange, alien design. Human engineers as a matter of principle design their circuits to avoid this type of interference; the evolving program used it to its advantage instead.
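
    For anyone curious, the evolutionary loop behind experiments like that is conceptually simple. Here is a toy sketch; the bitstring "circuit" and the fitness function are made-up stand-ins, whereas the real experiment programmed an actual FPGA and scored its measured behavior.

    Code:
    # Toy evolutionary loop. The "circuit" is just a bitstring and the
    # fitness function is a stand-in for "program the FPGA and measure it".
    import random

    random.seed(0)
    GENOME_BITS = 64

    def fitness(genome):
        return sum(genome)  # toy objective: count of 1 bits

    pop = [[random.randint(0, 1) for _ in range(GENOME_BITS)]
           for _ in range(50)]
    for generation in range(100):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:10]                        # selection
        children = []
        while len(children) < 40:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(GENOME_BITS)     # crossover
            child = a[:cut] + b[cut:]
            i = random.randrange(GENOME_BITS)       # mutation
            child[i] ^= 1
            children.append(child)
        pop = survivors + children

    pop.sort(key=fitness, reverse=True)
    print("best fitness:", fitness(pop[0]), "of", GENOME_BITS)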

    In another case, an AI hid information to "cheat" at identifying pictures....

    https://techcrunch.com/2018/12/31/th...nted-task/amp/

    A machine learning agent intended to transform aerial images into street maps and back was found to be cheating by hiding information it would need later in “a nearly imperceptible, high-frequency signal.” Clever girl!
    And for that matter, I don't believe I've ever seen a human do something they weren't programmed to do (it's an argument for another place, but I'm one of those who believe that it's more likely free will is an illusion than reality).

    Originally posted by krzyzowiec View Post
    Nobody had to teach me though. An AI cannot know what is "interesting" unless you specifically tell it what to look for.

    Sure it can... just define the word "interesting". I think a good definition (if not a complete definition) would be "something significantly different than anything seen before". In that case, it's rather easy to create an AI that identifies interesting things. In fact, we have many today that automate anomaly detection in data.
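
    As a toy sketch of that definition, where "interesting" is scored as distance from everything seen before (the history data, the distance measure, and the numbers are all made-up illustrations):

    Code:
    # Toy anomaly detector: score a new observation by how far it is
    # from the nearest previously seen observation.
    import numpy as np

    seen = np.random.default_rng(1).normal(50, 5, size=(1000, 2))  # history

    def interestingness(x, history):
        return np.min(np.linalg.norm(history - x, axis=1))

    routine = np.array([51.0, 49.0])
    novel = np.array([120.0, -3.0])
    print(interestingness(routine, seen))  # small: boring
    print(interestingness(novel, seen))    # large: flag as interesting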


  • krzyzowiec
    Senior Member
    replied
    Originally posted by alcalde View Post

    Are you so sure living creatures act without programming? How do spiders know how to spin webs?
    Yes. Do spiders need to be taught by some external actor how to spin webs? Or what webs even are? No, they just do it.

    Originally posted by alcalde View Post
    You're wrong about this; they do it all the time. They can create music, create art, drive cars, and outfight veteran combat pilots in simulations. They can now diagnose pneumonia better than radiologists and interpret traffic signs better than German drivers.
    No they don't. They are following a script written by a human, or pattern matching based on criteria established by a human. Have you ever seen a computer spontaneously do something that it was not programmed to do?

    Originally posted by alcalde View Post
    Which you learned, just as an AI can.
    Nobody had to teach me though. An AI cannot know what is "interesting" unless you specifically tell it what to look for.


    Originally posted by alcalde View Post
    And yet AI has been trained in various acts of anomaly detection.
    And how did it determine what an anomaly was? Some human gave it parameters to look for and then indicated whether a chosen result was a successful match. You are dealing with bits at the bottom layer, and to a computer, all bits are the same. It's only we humans who assign meaning to particular patterns of bits, which then become things like audio files, text files, video files... to a computer, there is no meaningful distinction between this sequence of bits and that one. They may be different sequences, but they are still bits.


  • coder
    Senior Member
    replied
    Originally posted by muncrief View Post
    That's why our "smart" devices can't even pretend to understand compound sentences, or any complex sentence, and never will.
    https://en.wikipedia.org/wiki/GPT-3#Reviews


  • coder
    Senior Member
    replied
    Originally posted by david-nk View Post
    Can you name some neural network architectures then that are designed to be primarily trained on a CPU instead of a GPU or TPU?
    GPUs traditionally don't handle sparsity well, although I'm aware Nvidia claims more recent iterations have some hardware support for it. However, the main example I'd cite is spiking neural networks. I'm aware of at least one startup that made an ASIC to do spiking neural networks, due to their poor performance on GPUs. I think CPUs aren't at quite such a disadvantage.

    https://en.wikipedia.org/wiki/Spiking_neural_network
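
    As a rough illustration of why the fit is poor: a spiking neuron is event-driven, integrating occasional input spikes and firing only when a threshold is crossed, rather than doing dense matrix multiplies. A toy leaky integrate-and-fire sketch (all constants here are illustrative, not from any particular model):

    Code:
    # Toy leaky integrate-and-fire (LIF) neuron: sparse, event-driven
    # activity instead of dense matrix math.
    import numpy as np

    rng = np.random.default_rng(2)
    dt, tau = 1.0, 20.0           # time step and membrane time constant (ms)
    v_thresh, v_reset = 1.0, 0.0  # firing threshold and reset potential
    v, spike_times = 0.0, []

    for t in range(200):
        i_in = 0.15 if rng.random() < 0.4 else 0.0  # sparse input events
        v += (dt / tau) * (-v) + i_in               # leak, then integrate
        if v >= v_thresh:                           # fire only at threshold
            spike_times.append(t)
            v = v_reset

    print(len(spike_times), "spikes in 200 steps at", spike_times)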


  • JustRob
    Phoronix Member
    replied
    Probably something to do with Xilinx Versal ACAP and Vitis.

    Sources:

    https://www.xilinx.com/products/sili...ap/versal.html - "AI Core Series - Delivers breakthrough AI inference and wireless acceleration with AI Engines that deliver over 100X greater compute performance than today’s server-class CPUs."

    https://www.xilinx.com/products/desi.../vitis-ai.html - Up to 50x reduction of model complexity.

    An example of a CPU with built-in AI acceleration is IBM's Telum processor: https://www.nextplatform.com/2021/08...with-big-iron/
    Last edited by JustRob; 30 September 2021, 11:58 PM. Reason: Added example CPU.


  • alcalde
    Senior Member
    replied
    Originally posted by krzyzowiec View Post
    Only living creatures have acted without programming
    Are you so sure living creatures act without programming? How do spiders know how to spin webs?


    Originally posted by krzyzowiec View Post
    Computers can never do what humans do.
    You're wrong about this; they do it all the time. They can create music, create art, drive cars, and outfight veteran combat pilots in simulations. They can now diagnose pneumonia better than radiologists and interpret traffic signs better than German drivers.

    Originally posted by krzyzowiec View Post
    We (programmers) are all well aware of 0, the end of a range, highest value of an int, etc. and why those tend to be the most interesting things to test for
    Which you learned, just as an AI can.

    Originally posted by krzyzowiec View Post
    but there is no such thing as "interesting" to a computer. It has no interests. It cannot be "surprised".
    And yet AI has been trained in various acts of anomaly detection.



  • alcalde
    Senior Member
    replied
    Originally posted by sdack View Post
    If you think you are smarter, then prove it by writing less stupid comments. Until then nobody will care who you are, and both AMD and Intel will improve their hardware, whether Mr. Stupid agrees or not.
    So your only contribution is to start calling people "Mr. Stupid"? Marvin Minsky and Seymour Papert derailed neural network research (and funding) in favor of their classical rule-based AI by publishing a book pointing out several examples the perceptron couldn't solve (such as XOR). Hinton and company returned neural networks to glory by championing the backpropagation algorithm, showing how it solved Minsky's examples, etc. Neural networks fell out of favor yet again, then Hinton and collaborators achieved another comeback by showing how the technique now called "deep learning" can lead a neural network to form abstract generalizations and achieve orders of magnitude better performance.

    That's not a "stupid comment"; that's history. And it has nothing to do with marketing by Nvidia (or anyone else). Deep learning has set off an explosion of applications, such as advanced image recognition (where the deep learning you deride has achieved breakthrough benchmark results), self-driving vehicles, real-time voice translation, etc. One of my favorite examples:

    https://news.stanford.edu/2017/11/15...ing-pneumonia/

    All of this requires significant processing power, so any energy-efficiency gains AMD achieves would be a significant advantage.

    Meanwhile, no one knows what you're on about, except that you seem to be suggesting that 1980s-era AI is all we really need and that deep learning is some kind of fad, with no evidence to back up these assertions (which are also not relevant to the topic of the article). And when people challenge you on this, you deride them, telling them they don't know anything and now giving them insulting nicknames. You're not contributing anything of use to this conversation.


  • krzyzowiec
    Senior Member
    replied
    Originally posted by muncrief View Post
    Lord almighty, I wish people would stop saying artificial intelligence when they mean machine learning, which has absolutely nothing to do with AI at all.

    We could actually have AI now, but over five decades ago companies chose machine learning instead because developing true AI would require understanding the way the human brain works. And that would have taken a coordinated 30 year plan among researchers and industry.

    So they went for the super crap called machine learning instead, which reached its practical limit long ago.

    That's why our "smart" devices can't even pretend to understand compound sentences, or any complex sentence, and never will.

    True AI will come, and there are a handful of incredibly brilliant researchers working on truly understanding the brain, some even in the midst of creating its connectome right now.

    But we're already 50 years behind because of the greed and lack of vision of both industry and far too many scientists.
    There is no such thing as AI. They went to "machine learning" because AI is not possible. Only living creatures have acted without programming, and they do that because they have an inherent goal to work towards: survival and propagation. Computers and code do not have needs, so the only thing they will do is what we have programmed them to do. There can be no creativity or novelty there when you are acting under whatever coding constraints or considerations were made by the programmer. Anything the programmer has not anticipated will not be addressed.

    Computers can never do what humans do. Ever try creating a parser generator or a constraint generator? The problem with things like property-based testing is that I can tell a computer to generate all sorts of valid/invalid input, but it has no way of knowing what interesting edge cases are supposed to look like. We (programmers) are all well aware of 0, the end of a range, the highest value of an int, etc., and why those tend to be the most interesting things to test for, but there is no such thing as "interesting" to a computer. It has no interests. It cannot be "surprised".
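
    (For what it's worth, when property-based testing frameworks do produce those cases, it's because their human authors hard-coded heuristics favoring 0, boundaries, and extreme values. A minimal sketch using Python's Hypothesis library; the clamp() function under test is a made-up example.)

    Code:
    # Minimal property-based test using the Hypothesis library.
    from hypothesis import given, strategies as st

    def clamp(x: int, lo: int = 0, hi: int = 100) -> int:
        return max(lo, min(hi, x))

    @given(st.integers())
    def test_clamp_stays_in_range(x):
        assert 0 <= clamp(x) <= 100

    # Hypothesis deliberately over-samples values like 0, -1, and huge
    # integers -- the "interesting" boundary cases a human would reach
    # for -- because its authors encoded those heuristics by hand.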


  • sdack
    Senior Member
    replied
    Originally posted by alcalde View Post
    You just keep accusing people you don't know of not knowing things. ...

    You sound like ...

    Maybe you should ...
    If you think you are smarter, then prove it by writing less stupid comments. Until then nobody will care who you are, and both AMD and Intel will improve their hardware, whether Mr. Stupid agrees or not.

