NVIDIA Announces "Xavier" AI Supercomputer SoC With Custom ARM64 + Volta GPU

  • NVIDIA Announces "Xavier" AI Supercomputer SoC With Custom ARM64 + Volta GPU

    Phoronix: NVIDIA Announces "Xavier" AI Supercomputer SoC With Custom ARM64 + Volta GPU

    NVIDIA announced at GTC Europe today their forthcoming Xavier SoC that will succeed Parker. At least for now, Xavier is super exciting and is aimed to be an "AI supercomputer" SoC...

    http://www.phoronix.com/scan.php?pag...C-Announcement

  • #2
    How are deep-learning operations per second different than normal operations per second?

    Also, I think it is "OPS" ("Operations Per Second"), not "OPs" (plural of "OP")



    • #3
      Originally posted by TheBlackCat View Post
      How are deep-learning operations per second different than normal operations per second?

      Also, I think it is "OPS" ("Operations Per Second"), not "OPs" (plural of "OP")
      They're deeper and they're learning. Duh.



      • #4
        The deep-learning ops are probably 8-bit precision. There are a lot of papers on low-precision neural network strategies... so instead of only supporting half-precision (16-bit) ops, they now also support 8-bit ops.

        For example:
        http://www.jmlr.org/proceedings/papers/v37/gupta15.pdf
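That linked paper (Gupta et al., 2015) is about training with limited numerical precision using stochastic rounding: round up with probability equal to the fractional remainder, so the result is unbiased on average. A minimal NumPy sketch of that idea (my own illustration, not how Xavier's hardware actually works):

```python
import numpy as np

def stochastic_round(x, bits=8, frac_bits=4):
    """Fixed-point rounding where the round-up probability equals the
    fractional remainder, so the expected value matches the input."""
    scale = 2.0 ** frac_bits
    scaled = np.asarray(x, dtype=np.float64) * scale
    floor = np.floor(scaled)
    prob_up = scaled - floor                 # fractional part in [0, 1)
    rounded = floor + (np.random.random(scaled.shape) < prob_up)
    limit = 2 ** (bits - 1)                  # clamp to the signed range
    return np.clip(rounded, -limit, limit - 1) / scale

# 0.3 is not representable with 4 fractional bits (step = 1/16): each
# sample lands on 0.25 or 0.3125, but the average recovers ~0.3.
samples = stochastic_round(np.full(100_000, 0.3))
approx = samples.mean()
```

This unbiasedness is why low-precision training can still converge: the rounding noise averages out over many weight updates instead of accumulating as a systematic bias.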



        • #5
          Originally posted by TheBlackCat View Post
          How are deep-learning operations per second different than normal operations per second?
          To elaborate on what cb88 said, deep learning sacrifices precision for speed: for learning, an AI cares more about good approximations than exact values. The low-precision hardware is much simpler, which means you can fit more cores in the same area while having them operate more reliably at higher speeds.
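To make that trade-off concrete, here is a toy sketch (my own illustration, not anything NVIDIA has published about Xavier) of symmetric linear quantization of float32 weights to int8: a quarter of the memory and much cheaper arithmetic, at the cost of a small, bounded approximation error per weight:

```python
import numpy as np

# Hypothetical float32 weights, just for demonstration
w = np.random.default_rng(0).normal(0, 0.5, 1000).astype(np.float32)

# Symmetric linear quantization: map [-max|w|, +max|w|] onto [-127, 127]
scale = np.abs(w).max() / 127.0
w_q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)

# Dequantize to measure the approximation error
w_dq = w_q.astype(np.float32) * scale
max_err = np.abs(w - w_dq).max()   # bounded by half a quantization step
```

Each weight is off by at most `scale / 2`, which trained networks tolerate surprisingly well, while the int8 representation is 4x smaller than float32 and maps onto much simpler multiply-accumulate units.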



          • #6
            What would be nice is to see somebody market a Linux based machine based on these super chips. It is probably a good time for the Linux community to start thinking about AI in their apps and OS.



            • #7
              Originally posted by wizard69 View Post
              What would be nice is to see somebody market a Linux based machine based on these super chips. It is probably a good time for the Linux community to start thinking about AI in their apps and OS.
              I'm not sure how that'd be possible. Way too many applications would fail to run properly. Of the ones that would function, many would probably run slower, not faster.



              • #8
                Originally posted by wizard69 View Post
                What would be nice is to see somebody market a Linux based machine based on these super chips. It is probably a good time for the Linux community to start thinking about AI in their apps and OS.
                The whole point of deep learning is that it needs enormous amounts of data to learn from. Putting this on a desktop PC, Linux or not, wouldn't help because it wouldn't have that data available.



                • #9
                  Originally posted by TheBlackCat View Post

                  The whole point of deep learning is that it needs enormous amounts of data to learn from. Putting this on a desktop PC, Linux or not, wouldn't help because it wouldn't have that data available.
                  Packet scanning, input scanning, screen-output scanning, scanning files (documents, images, music, etc.). There is plenty of data to learn from.



                  • #10
                    Originally posted by TheBlackCat View Post
                    The whole point of deep learning is that it needs enormous amounts of data to learn from. Putting this on a desktop PC, Linux or not, wouldn't help because it wouldn't have that data available.
                    These are not for "learning" from the data. These SoCs are for applying the learned model for autonomous driving. They have DGX-1 or Big Sur for the learning task and their customers are asking for even more performance. "Learning" from data won't happen on an SoC for a long while.

