AMD Aims For 30x Energy Efficiency Improvement For AI Training + HPC By 2025


  • #31
    Yes. Do spiders need to be taught by some external actor how to spin webs? Or what webs even are? No, they just do it.
    That's my point... it's programmed into their DNA, otherwise called "instinct".
    No, they don't. They are following a script written by a human.
    No, deep learning and similar network techniques are not "following a script written by a human". When you train a network you present examples and the correct values and it's the network itself that learns how to determine those values. Deep learning networks are capable of learning higher-level concepts to do this.

    Have you ever seen a computer spontaneously do something that it was not programmed to do?
    I built a neural network decades ago that looked at patterns representing the letters "T" and "C". I didn't write an algorithm for telling one from the other; the network devised its own method. More importantly, when I dropped a pixel or two from the test patterns or introduced noise, it was still able to make the correct determinations; a mechanical algorithm would not have been. More impressively, I only gave it training examples of the letters facing left, right, or up. After training, when I showed it upside-down "C"s and "T"s, it was still able to determine which letters they were.
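    For anyone curious, here's a minimal sketch of that kind of experiment: a tiny one-hidden-layer network trained by gradient descent to tell "T" from "C" in 3x3 pixel patterns, then tested on a noisy pattern and on an orientation it never saw. The pattern shapes, network size, and learning rate are my own assumptions, not the original setup, and a toy this small may or may not reproduce the generalization described above.

    ```python
    # Toy "T" vs "C" classifier: one hidden layer, trained with plain
    # gradient descent on 3x3 bitmaps. All sizes/rates are illustrative.
    import numpy as np

    rng = np.random.default_rng(0)

    T = np.array([[1, 1, 1],
                  [0, 1, 0],
                  [0, 1, 0]], dtype=float)
    C = np.array([[1, 1, 1],
                  [1, 0, 0],
                  [1, 1, 1]], dtype=float)

    def orientations(p, ks):
        # Rotations of a pattern, flattened into 9-element vectors.
        return [np.rot90(p, k).ravel() for k in ks]

    # Train on three orientations (up, left, right); hold out upside down.
    X = np.array(orientations(T, [0, 1, 3]) + orientations(C, [0, 1, 3]))
    y = np.array([1, 1, 1, 0, 0, 0], dtype=float)  # 1 = "T", 0 = "C"

    W1 = rng.normal(0, 0.5, (9, 6)); b1 = np.zeros(6)
    W2 = rng.normal(0, 0.5, 6);      b2 = 0.0
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))

    for _ in range(5000):
        h = sig(X @ W1 + b1)                      # hidden activations
        out = sig(h @ W2 + b2)                    # network's guesses
        g_out = (out - y) * out * (1 - out)       # backprop through output
        g_h = np.outer(g_out, W2) * h * (1 - h)   # backprop through hidden
        W2 -= 0.5 * h.T @ g_out; b2 -= 0.5 * g_out.sum()
        W1 -= 0.5 * X.T @ g_h;   b1 -= 0.5 * g_h.sum(axis=0)

    def predict(p):
        return "T" if sig(sig(p.ravel() @ W1 + b1) @ W2 + b2) > 0.5 else "C"

    # Held-out orientation: upside-down letters the network never trained on.
    # A network this tiny may or may not classify these correctly.
    print("upside-down:", predict(np.rot90(T, 2)), predict(np.rot90(C, 2)))

    # Noise test: add a stray pixel to a training pattern.
    noisy_T = T.copy(); noisy_T[2, 0] = 1
    print("noisy T ->", predict(noisy_T))
    ```

    The point is the same either way: nobody writes a rule for "this is a T"; the weights the network settles on are its own.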

    I've seen programs that evolved art - pictures or music. There are Generative Adversarial Networks that can produce photo-realistic images of people that have never existed, after having learned what people look like.

    There was a program that evolved FPGA circuit patterns and eventually produced a design that worked, yet seemed impossibly small. Some parts weren't even connected to the rest, but removing them stopped the circuit from working! It turned out the design was exploiting interference - current in one wire creating a radio wave that a nearby wire picks up - to send signals from one block to another, in a strange, alien design. Human engineers design their circuits to avoid this type of interference as a matter of principle; the evolving program used it to its advantage instead.

    In another case, an AI hid information to "cheat" at identifying pictures....

    Depending on how paranoid you are, this research from Stanford and Google will be either terrifying or fascinating. A machine learning agent intended to transform aerial images into street maps and back was found to be cheating by hiding information it would need later in "a nearly imperceptible, high-frequency signal." Clever girl!


    And for that matter, I don't believe I've ever seen a human do something they weren't programmed to do (that's an argument for another place, but I'm one of those who believe free will is more likely an illusion than reality).

    Nobody had to teach me though. An AI cannot know what is "interesting" unless you specifically tell it what to look for.

    Sure it can... just define the word "interesting". I think a good definition (if not a complete definition) would be "something significantly different than anything seen before". In that case, it's rather easy to create an AI that identifies interesting things. In fact, we have many today that automate anomaly detection in data.
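    For what it's worth, here's a toy sketch of that definition of "interesting": a detector that flags any value significantly different from everything it has seen before. The z-score test, threshold, and data are illustrative assumptions, not any particular product's method.

    ```python
    # Novelty = "significantly different from anything seen before",
    # implemented as a simple running z-score test.
    import numpy as np

    class NoveltyDetector:
        def __init__(self, threshold=3.0, warmup=10):
            self.history = []
            self.threshold = threshold  # sigmas from the norm = "interesting"
            self.warmup = warmup        # observations needed before judging

        def observe(self, x):
            if len(self.history) >= self.warmup:
                mu = np.mean(self.history)
                sd = np.std(self.history) + 1e-9  # avoid division by zero
                if abs(x - mu) / sd > self.threshold:
                    print(f"interesting: {x:.2f} is "
                          f"{abs(x - mu) / sd:.1f} sigma from the norm")
            self.history.append(x)

    rng = np.random.default_rng(1)
    detector = NoveltyDetector()
    for value in rng.normal(10.0, 1.0, 200):  # ordinary, "seen before" data
        detector.observe(value)
    detector.observe(42.0)                    # unlike anything in the history
    ```

    Real anomaly-detection systems are fancier (multivariate, drift-aware), but the principle is exactly this: no one has to enumerate what "interesting" looks like in advance.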



    • #32
      Originally posted by alcalde View Post
      It turned out that it was using the effect of interference
      This effect was also temperature- or humidity-dependent, IIRC. I think I read that the circuit no longer worked properly the next day.

      You'd need to train the circuit by running it on more than one FPGA chip, in a variety of environmental conditions, to impose selective pressure against such cheats. Still, if a similar cheat can be made robust, then maybe it's not really a cheat.
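      As a toy illustration of that kind of selective pressure, here's a sketch of a genetic algorithm that scores each candidate in several simulated "environments" and selects on its worst-case score, so tricks that only work under one set of conditions get bred out. The genome, fitness function, and environments are all invented for illustration, not the FPGA experiment's actual setup.

      ```python
      # Toy GA with multi-environment fitness: candidates are judged by
      # their worst score across environments, penalizing fragile cheats.
      import random

      random.seed(0)
      GENOME_LEN, POP, GENERATIONS = 32, 40, 60
      ENVIRONMENTS = [0.0, 0.1, 0.2]  # e.g., rising noise / temperature drift

      def score(genome, noise):
          # Base task: maximize 1-bits; the environment perturbs each bit.
          return sum(b for b in genome if random.random() >= noise)

      def robust_fitness(genome):
          # Selective pressure: only as good as your worst environment.
          return min(score(genome, n) for n in ENVIRONMENTS)

      def mutate(genome, rate=0.03):
          return [1 - b if random.random() < rate else b for b in genome]

      population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                    for _ in range(POP)]
      for _ in range(GENERATIONS):
          population.sort(key=robust_fitness, reverse=True)
          survivors = population[: POP // 2]
          population = survivors + [mutate(random.choice(survivors))
                                    for _ in range(POP - len(survivors))]

      best = max(population, key=robust_fitness)
      print("best worst-case score:", robust_fitness(best))
      ```

      Swapping min() for mean() here is exactly the difference between evolving something robust and evolving something that happens to work on one chip, on one afternoon.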

      Originally posted by alcalde View Post
      In another case, an AI hid information to "cheat" at identifying pictures....
      Not exactly. What it hid was information used to reconstruct the original image. However, it wasn't cheating at all. It just found a more efficient way to do what was asked of it. It's the developers' fault for the way they structured the problem & training.
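      To make the mechanism concrete, here's a toy sketch of how a low-amplitude, high-frequency signal can carry extra information that's nearly invisible to a human but trivial for a machine to read back. The checkerboard encoding is my own construction, not the actual scheme from the Stanford/Google paper.

      ```python
      # Hide one bit in an image as a tiny checkerboard perturbation:
      # nearly imperceptible, but easy to recover by correlation.
      import numpy as np

      x = np.linspace(0, 1, 8)
      image = np.add.outer(x, x) / 2     # smooth stand-in for a map tile

      # A checkerboard is the highest-frequency pattern a pixel grid holds.
      checker = (np.indices((8, 8)).sum(axis=0) % 2) * 2 - 1  # {-1, +1}
      EPS = 0.004                        # far below visible contrast

      def embed(img, bit):
          # Nudge pixels up/down in the checkerboard pattern.
          return img + (EPS if bit else -EPS) * checker

      def extract(img):
          # Correlate with the checkerboard; the sign recovers the bit.
          return int((img * checker).sum() > 0)

      stego = embed(image, 1)
      print(extract(stego))                    # -> 1
      print(np.abs(stego - image).max())       # -> 0.004, invisible to us
      ```

      A smooth image has essentially no energy at that frequency, so even a perturbation this small stands out clearly to the decoder - which is why the model found it such a convenient place to stash data.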

      The same problem can exist with humans, or even companies. If you get their incentives wrong, you're probably going to get a nasty surprise when they find shortcuts that involve compromises unacceptable to their bosses or to society.

