The NVIDIA Jetson TX2 Performance Has Evolved Nicely Since Launch


  • #31
    Originally posted by milkylainen View Post
    Denver could easily do x86 translation from the frontend as well if Nvidia wanted an x86 CPU.
    At a sufficiently high level, you can obviously emulate anything with anything else. However, to be efficient, the backend hardware should be designed for the ISA it's going to be executing. You want to hold the entire architectural state on-chip, so you don't have to spill things to memory. And you want to have the behavior of your hardware implementation match the key details of the ISA, so you don't have to waste extra instructions emulating it.
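The "extra instructions" cost is easy to see in miniature. Here is a hypothetical sketch (the function and flag layout are illustrative, not Denver's or anyone's actual microcode) of emulating a single 32-bit x86 ADD on a backend that has no hardware flags register: the one real addition drags along several more operations just to synthesize the flags x86 produces for free.

```python
# Hypothetical sketch: x86's ADD updates a flags register as a side effect.
# A backend ISA without hardware flags must recompute each flag in software,
# which is exactly the kind of wasted work described above.

MASK32 = 0xFFFFFFFF

def emu_add32(a, b):
    """Emulate a 32-bit x86 ADD, synthesizing the flags in software."""
    result = (a + b) & MASK32                          # the one "real" add
    carry = int(result < a)                            # CF: unsigned wrap-around
    zero = int(result == 0)                            # ZF
    sign = int(result >> 31)                           # SF: top bit of the result
    overflow = int(((a ^ result) & (b ^ result)) >> 31)  # OF: signed overflow
    return result, carry, zero, sign, overflow

print(emu_add32(0xFFFFFFFF, 1))  # → (0, 1, 1, 0, 0): one native add became ~5 ops
```

A backend designed for x86 computes all of this in the adder itself, in the same cycle, which is the point about matching the hardware to the ISA's key details.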



    • #32
      Originally posted by coder View Post
      At a sufficiently high level, you can obviously emulate anything with anything else. However, to be efficient, the backend hardware should be designed for the ISA it's going to be executing. You want to hold the entire architectural state on-chip, so you don't have to spill things to memory. And you want to have the behavior of your hardware implementation match the key details of the ISA, so you don't have to waste extra instructions emulating it.
      Absolutely. I didn't say Denver could do x86 at an acceptable speed in its current state. But I'd imagine it wouldn't be that far off.
      Either way. This way of doing things has been experimented with earlier and it did not add up to expectations.
      But maybe if they sell enough of them this time, it will stick as a solution that gets funding and development.



      • #33
        Originally posted by milkylainen View Post
        This way of doing things has been experimented with earlier and it did not add up to expectations.
        But maybe if they sell enough of them this time, it will stick as a solution that gets funding and development.
        Since this uses their second generation Denver core, their first must've worked well enough to convince them to make another.



        • #34
          Funny thing is: they just launched the development kit for Xavier. I was expecting to see Phoronix featuring a news article on it, in fact. However, the current specs manage to sidestep the question of exactly what they used for CPU cores.

          https://developer.nvidia.com/embedde...-xavier-devkit

          According to Wikipedia, it again uses a custom core, this time called Carmel.



          And anyone regarding the TX2 as expensive will find Xavier's price particularly eye-watering:
          Members of the NVIDIA Developer Program are eligible to purchase their first NVIDIA® Jetson Xavier Developer Kit at a special price of $1,299 (USD), discounted from the MSRP of $2,499 (USD).
          Last edited by coder; 31 August 2018, 12:38 AM.



          • #35
            Originally posted by coder View Post
            Since this uses their second generation Denver core, their first must've worked well enough to convince them to make another.
            As did Transmeta.
            The little seen Efficeon (TM8600). Also in two incarnations.
            Still not efficient enough.

            Either way. This time the team is backed by a silicon powerhouse with $$ to spare.
            It's very much a different situation than last time.



            • #36
              Originally posted by coder View Post
              True, but that deals with the original Denver, from 2014. This has Denver 2, described here (note that Parker is the code name for TX2):

              https://www.anandtech.com/show/10596...parker-details
              Right, but I (perhaps wrongly...) assume they used a similar approach with Denver 2, which alas has not been described in as much detail as the original Denver. What has been described (7-wide, dynamic code optimization, 128K I-cache) looks quite similar.

              Anyway, your link didn't work for me. Try this:

              https://www.anandtech.com/show/8701/...xus-9-review/2
              Whoops, fixed it, thanks!



              • #37
                Originally posted by coder View Post
                You seen this?



                Here's an Apollo Lake SoC on an embeddable board:

                http://www.up-board.org/upsquared/
                I have seen the embedded TX2 board, but I wasn't aware of the Up products. Thanks again for the suggestion - that seems to be a much better fit for my needs. I may seriously consider getting one.

                Will it use CUDA automatically? I thought you had to explicitly use stuff in the cuda namespace.

                https://docs.opencv.org/3.4.0/d1/d1a...v_1_1cuda.html
                To my understanding, the Tegra build of OpenCV will use CUDA automatically, but I haven't looked that deep into it since I don't currently have such a platform, and the OpenCL implementation works just fine.



                • #38
                  Originally posted by milkylainen View Post
                  As did Transmeta.
                  The little seen Efficeon (TM8600). Also in two incarnations.
                  Still not efficient enough.
                  Wow, troll much?

                  The difference is that Nvidia's main value-add is their GPUs. If their CPU cores weren't competitive, they could just dump them and go with a standard ARM core, as they had done in various Tegra SoCs.

                  For Transmeta, CPUs were their entire business. They pretty much had to keep at it, as long as they had any hope of surviving.
                  Last edited by coder; 01 September 2018, 12:05 AM.

