
NVIDIA Announces The Jetson TX2, Powered By NVIDIA's "Denver 2" CPU & Pascal Graphics


  • #11
    Originally posted by L_A_G View Post

    It's quite a lot of computing power in an embedded form factor.

    These things are not going to be used to play Crysis, browse the web or do the other things people use tablets for. What Nvidia themselves are specifically touting these things for is as control units for self-driving cars, which do actually require quite a lot of computing power to do their job properly. Their decision-making process involves real-time processing of very large amounts of data streaming in from sources like LIDARs, up to half a dozen cameras (all of which have to have their output processed by machine vision software) and a myriad of sensors reporting how the actual car is performing. All of these sensors have to be sampled and processed hundreds if not thousands of times per second, and control output has to be produced at similar rates.
    Meh. I'm sure that car manufacturers will use specialized hardware rather than a simple CPU+GPU combo. IIRC Tesla once said that they have allocated something like 100W+ for that job, so it's not like this thing is the answer.

    Enthusiasts have even less use for this, as they don't have access to confidential info and are forced to go the classic route, through OpenCL and CUDA, where this thing is barely anything more than a micro-PC...
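    The fixed-rate sample/process/output loop described in the quoted post can be sketched in a few lines of Python. This is only an illustration of the scheduling pattern; the 100 Hz rate, the `control_loop` name and the `step` callback are placeholder assumptions, not anything from an actual autonomy stack:

```python
import time

def control_loop(rate_hz, duration_s, step):
    """Run `step` at a fixed rate, sleeping off whatever slack is left each cycle."""
    period = 1.0 / rate_hz
    deadline = time.monotonic()
    end = deadline + duration_s
    ticks = 0
    while deadline < end:
        step(ticks)            # sample sensors / emit control output here
        ticks += 1
        deadline += period
        slack = deadline - time.monotonic()
        if slack > 0:
            time.sleep(slack)  # only sleep if we met the deadline
    return ticks

outputs = []
n = control_loop(100, 0.5, outputs.append)  # 100 Hz for half a second
print(n)
```

    The point of tracking an absolute deadline rather than sleeping a fixed interval is that a slow `step` eats into the slack instead of silently lowering the effective rate.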


    Comment


    • #12
      Originally posted by Brane215 View Post
      Meh. I'm sure that car manufacturers will use specialized hardware rather than a simple CPU+GPU combo. IIRC Tesla once said that they have allocated something like 100W+ for that job, so it's not like this thing is the answer.
      Seeing as Volvo is actually using this module's predecessor for their fully functional self-driving cars, I'd beg to differ on that.

      Enthusiasts have even less use for this, as they don't have access to confidential info and are forced to go the classic route, through OpenCL and CUDA, where this thing is barely anything more than a micro-PC...
      As I said, this is intended for embedded applications where a full x86 PC is impractical or not even an option due to size and power constraints, and it's just fine for enthusiasts. In this day and age full documentation is usually available to practically anyone who can pay for the hardware and register on a website; sometimes you don't even have to do that. The only thing enthusiasts don't get full access to is support, and even that isn't true all of the time.

      Originally posted by RussianNeuroMancer View Post
      I'm genuinely curious: what can the Nvidia Tegra do that an AMD R-Series Embedded APU can't? I'd remind you that AMD's embedded APUs have HSA and OpenCL 2.0 support right now, with the driver and graphics parts supported upstream (Linux and Mesa). I'd also remind you that there are two TensorFlow implementations with OpenCL support.
      The thing about R-Series APUs is that (as far as I'm aware) AMD doesn't offer them as ready-to-use solutions with System-on-Modules and carrier cards. You have to design and manufacture your own board before you can start developing any applications for them, which obviously adds several months to any product development effort using them. Furthermore, R-Series APUs have a very different thermal and power envelope than TX-series SoMs.

      Also, don't act as if CUDA doesn't exist; Nvidia implemented some pretty good CUDA extensions for TensorFlow quite a while ago.
      Last edited by L_A_G; 08 March 2017, 08:35 AM.

      Comment


      • #13
        It won't be worth a hill o' beans unless all the hardware is supported by the mainline kernel and nouveau supports the GPU fully. I'm certain that won't be the case, just as it wasn't for their previous boards even years after release.

        Comment


        • #14
          Originally posted by boxie View Post
          SHINY!

          ... but can it play games? PLS2BE BENCHING GAMES FOR LOLS :P
          The fact that it's ARM-based means the only games it'll be playing are Android titles or open-source ones you can compile for it on Linux.

          Also, the GPU in these things is pretty small, in spite of the marketing hype. It's fast for mobile/embedded, but slower than any of their current desktop GPUs.

          Comment


          • #15
            Originally posted by smitty3268 View Post
            A 'supercomputer module', huh? Good grief, NVidia marketing department.
            I think "module" is the key here. Imagine 100 of these working together. Even disregarding the CPUs, the GPUs alone will give you a lot of computing power.
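            Taking "100 of these" literally, a back-of-envelope tally is easy to write down. The ~1.33 TFLOPS FP16 and 15 W figures below are NVIDIA's advertised peak/Max-P numbers for the TX2 module, i.e. spec-sheet assumptions rather than measured sustained throughput:

```python
# Back-of-envelope aggregate throughput of a hypothetical 100-module TX2 cluster.
TFLOPS_PER_MODULE = 1.33   # advertised FP16 peak of the TX2's 256-core Pascal GPU
WATTS_PER_MODULE = 15      # advertised max module power (Max-P mode)

modules = 100
total_tflops = modules * TFLOPS_PER_MODULE
total_kw = modules * WATTS_PER_MODULE / 1000

print(f"~{total_tflops:.0f} TFLOPS FP16 at ~{total_kw:.1f} kW")
```

            Even as a theoretical peak, that's a lot of compute per kilowatt for an embedded part, which is presumably the angle the marketing is playing.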

            Comment


            • #16
              @michael: make sure to focus on the compute (aka CUDA and OpenCL) benchmarks in comparison with standard PC cards. The CPU is just there to deliver the data.

              Comment


              • #17
                Originally posted by marvin42 View Post
                @michael: make sure to focus on the compute (aka CUDA and OpenCL) benchmarks in comparison with standard PC cards. The CPU is just there to deliver the data.
                For Tegra hardware I always test both.
                Michael Larabel
                https://www.michaellarabel.com/

                Comment


                • #18
                  Originally posted by boxie View Post
                  SHINY!

                  ... but can it play games? PLS2BE BENCHING GAMES FOR LOLS :P
                  On Linux it'll play open-source games; there's no Steam for ARM processors. For gaming it would be better suited to Android, which offers a huge choice.

                  Comment


                  • #19
                    I'm still hoping there will be a Ryzen-based SoC once they release their APU range.

                    Comment


                    • #20
                      Originally posted by L_A_G View Post
                      Also, don't act as if CUDA doesn't exist; Nvidia implemented some pretty good CUDA extensions for TensorFlow quite a while ago.
                      Cool, but drivers that don't get dropped after a couple of years will often make a much bigger difference, financially. Suddenly developers need fewer hacky workarounds, can work with upstream on resolving issues, and have access to most of the source code (and, once the OpenCL implementation is open, to all of it). If they want to go the hacky route, they can go as far as hooking into Gallium and skipping OpenGL. All of this is quite a bit more important than Nvidia's extensions for TensorFlow.
                      Originally posted by L_A_G View Post
                      The thing about R-Series APUs is that (as far as I'm aware) AMD doesn't offer them as ready-to-use solutions with System-on-Modules and carrier cards.
                      You are correct; however, there are boards and industrial solutions from their partners that cover, if not all, then many more use cases than the Jetson TX1/TX2. With the AMD solution there is a very wide choice, after all.
                      Originally posted by L_A_G View Post
                      You have to design and manufacture your own board before you can start developing any applications for them, which obviously adds several months to any product development effort using them.
                      I agree that if the available solutions don't cover some particular use case, then designing your own board around the SoC will take time. However, software prototyping can easily be started on available boards with the same SoC, or in many cases even on consumer equivalents.
                      Originally posted by L_A_G View Post
                      Furthermore, R-Series APUs have a very different thermal and power envelope than TX-series SoMs.
                      Agreed, but that difference actually matters only in a limited range of use cases.

                      Comment
