NVIDIA Announces "Xavier" AI Supercomputer SoC With Custom ARM64 + Volta GPU


  • #11
    Originally posted by schmidtbag View Post
    To elaborate on what cb88 said, deep learning sacrifices accuracy for speed. When it comes to learning AI, it cares more about approximation than precision. The hardware is much simpler, which means you can add more cores in the same area while having them operate more reliably at higher speeds.
    To correct what schmidtbag said, deep learning follows an entirely different paradigm than conventional programming.
    AI algorithms are usually neural networks, which are more like a "swarm mind" of many low-processing-power nodes acting like neurons (brain cells) in a living being than like conventional CPUs.
    Point is, just as with living beings, to get decent total processing power you need large numbers of these nodes/processors, so a proper deep learning system is a massively parallelized system with thousands of tiny, weak cores. Like a GPU.
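    A minimal sketch of that idea in Python (illustrative names, not any real framework): each node does the same cheap weighted-sum operation, and because no node depends on another, a whole layer of them can run in parallel, which is exactly what a GPU is good at.

    ```python
    # Each "node" performs the same cheap operation: a weighted sum of
    # its inputs. A layer is just many such nodes, and since every node
    # is independent, all of them can run in parallel (as on a GPU).

    def node_output(inputs, weights):
        """One tiny, weak 'core': a weighted sum of the inputs."""
        return sum(x * w for x, w in zip(inputs, weights))

    def layer_output(inputs, weight_rows):
        # Each row of weights defines one node. This loop is
        # embarrassingly parallel: no node depends on another.
        return [node_output(inputs, row) for row in weight_rows]

    inputs = [1.0, 2.0]
    weights = [[0.5, 0.5],   # node 1
               [1.0, -1.0]]  # node 2
    print(layer_output(inputs, weights))  # [1.5, -1.0]
    ```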

    Comment


    • #12
      subscribed, this seems like a very interesting thread

      Comment


      • #13
        Originally posted by starshipeleven View Post
        AI algorithms are usually neural networks, which are more like a "swarm mind" of many low-processing-power nodes acting like neurons (brain cells) in a living being than like conventional CPUs.
        Neural networks behave almost completely unlike actual neurons. The two have almost nothing in common. There was some vague idea early on that neural networks might serve as some sort of analog for the nervous system, but that never panned out. The name stuck, though.

        Comment


        • #14
          Originally posted by TheBlackCat View Post
          Neural networks behave almost completely unlike actual neurons. The two have almost nothing in common.
          Ummm, no. http://www.willamette.edu/~gorr/clas...449/intro.html

          Comment


          • #15
            Originally posted by starshipeleven View Post
            From your link:

            Our goal is to introduce students to a powerful class of model, the Neural Network. In fact, this is a broad term which includes many diverse models and approaches. We will first motivate networks by analogy to the brain. The analogy is loose, but serves to introduce the idea of parallel and distributed computation.
            (emphasis added)

            Comment


            • #16
              Originally posted by TheBlackCat View Post
              From your link:
              Yeah, right. How about you read the site instead of picking out the first thing that might be read wrong?


              Our basic computational element (model neuron) is often called a node or unit. It receives input from some other units, or perhaps from an external source. Each input has an associated weight w, which can be modified so as to model synaptic learning. The unit computes some function f of the weighted sum of its inputs:

              What does a neuron do? The same thing: it receives inputs (excitatory or inhibitory), and if their sum is above X it fires its own signal.

              The smarts come from how the network is wired and from the value of X.
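              As a toy illustration of that description (hypothetical names, not code from the linked notes): a model neuron sums its weighted inputs and fires only when the sum crosses a threshold X.

              ```python
              # A minimal model neuron: weighted sum of inputs, fire (output 1)
              # only if the sum exceeds the threshold X. Positive weights model
              # excitatory inputs, negative weights inhibitory ones.

              def model_neuron(inputs, weights, threshold):
                  """Return 1 if the weighted sum of inputs exceeds the threshold."""
                  weighted_sum = sum(x * w for x, w in zip(inputs, weights))
                  return 1 if weighted_sum > threshold else 0

              print(model_neuron([1, 1, 1], [0.6, 0.4, -0.3], threshold=0.5))  # fires: 1
              print(model_neuron([1, 1, 1], [0.6, 0.4, -0.8], threshold=0.5))  # silent: 0
              ```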

              Comment


              • #17
                Hey wait, have the Phoronix forums ever had a discussion go this long without a flame war erupting? This could be a record.

                Originally posted by TheBlackCat View Post

                From your link:

                (emphasis added)
                I think "the analogy is loose" is only in there because we're using digital processors instead of analog neurons and circuit boards instead of fleshy brain matter and so forth.

                Comment


                • #18
                  Originally posted by starshipeleven View Post
                  What does a neuron do? The same thing: it receives inputs (excitatory or inhibitory), and if their sum is above X it fires its own signal.
                  No, that is not what neurons do. First, many neurons don't fire spikes at all. For those that do, what determines whether a spike occurs is not a sum of the inputs. There are a ton of factors that determine whether a spike occurs even in the simplest spiking neuron, and they vary enormously from neuron to neuron, but it isn't a sum of the inputs by any stretch of the imagination.

                  Comment


                  • #19
                    Originally posted by Michael_S View Post
                    Hey wait, have the Phoronix forums ever had a discussion go this long without a flame war erupting? This could be a record.
                    I was expecting a Linus middle finger by post three.

                    Comment


                    • #20
                      Originally posted by TheBlackCat View Post
                      No, that is not what neurons do. First, many neurons don't fire spikes at all.
                      Bullshit, all neurons by definition do that.
                      For those that do, what determines whether a spike occurs is not a sum of the inputs.
                      I suggest you read up about the Action Potential. https://en.wikipedia.org/wiki/Action_potential
                      but it isn't a sum of the inputs by any stretch of the imagination.
                      Again, bullshit, read action potential above.

                      Comment
