NVIDIA Jetson TX2 Linux Benchmarks


  • #31
    Thanks for the Blender input!
    As for the review, thanks for posting the GCC flags. What was "-mtune=native" supposed to do? Is this a custom GCC? The GCC docs don't list support for Denver, or for a heterogeneous design with Denver in it. I ask because on the ODROID XU4, adding "-mcpu=cortex-a15.cortex-a7" doubled my performance over just -O3.
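    If it's a stock GCC, you can at least check what "native" resolves to on the board. Here's a minimal sketch that dumps GCC's resolved target flags (it assumes Python 3.7+ and the board's own GCC on PATH; the flags themselves are standard GCC options):

```python
# Minimal sketch: ask GCC what its "native" detection resolves to on this
# board. Assumes the board's own GCC is on PATH; -Q --help=target prints
# the resolved target settings.
import subprocess

out = subprocess.run(
    ["gcc", "-mtune=native", "-Q", "--help=target"],
    capture_output=True, text=True, check=True,
).stdout

# Keep only the lines that show the selected CPU/arch/tuning.
for line in out.splitlines():
    if any(flag in line for flag in ("-mcpu", "-march", "-mtune")):
        print(line.strip())
```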



    • #32
      Originally posted by arcangeli View Post
      I'll buy a TX2 soon, with the start of my little company (as a freelancer). Currently, Blender is my main software and the Jetson TX1 my main computer...
      Later, I'll buy more TX2s (the module) and build a render farm with them.
      Why?

      If you're after performance/$ or performance/W/$, then I would think something like a barebones PC + GTX 1070 would deliver the best value.

      Again, it seems to me that the compelling value in the Jetson TX products is for cases where low power and small size/weight are mandatory. Compared to desktop CPUs and GPUs, these really aren't competitive. Especially for their price.



      • #33
        Originally posted by coder View Post
        Why?

        Are you saying that desktop CPUs are better than ARM CPUs? ARM was a desktop CPU at first, and a really fast one at that time.
        For me, desktop PCs (x86-based) are not better. They offer more performance, but they need more electrical power and make more noise.
        I can run one TX2 for working and nine for rendering on the electrical budget of one mid-range PC...
        Electricity is not free. When your computer runs 24 hours a day, ARM is really better. The purchase price is just a small part of the money you spend on a computer...



        • #34
          Originally posted by arcangeli View Post
          Are you saying that desktop CPUs are better than ARM CPUs? ARM was a desktop CPU at first, and a really fast one at that time.
          I am not trying to have an abstract argument. What I'm saying is that you can take the rendering performance of a low-power x86 PC w/ a reasonably fast GPU, and divide it by the number of watts used. When you do that, I think you'll find that you get more performance per watt than with the TX2. Certainly, more performance per watt per $ of up-front cost.

          All I'm suggesting is that you run the numbers and do the analysis, before investing in a whole farm of these things. If you do this, and want to post up your findings, I'd enjoy hearing what you learn.

          Another way of looking at it is that if these things were really the most cost-effective way to run a render farm, I think it'd be a popular thing to do. I think you'd be hard-pressed to find anyone doing it, though. Go ahead and ask around on 3D artist forums. There might be something in that.

          BTW, while searching for Cinema 4D benchmarks, I recently ran across these guys: https://us.rebusfarm.net/en/
          So, you could focus your time, money, and energy on the creative aspects, and leave the technical infrastructure to a cloud-based render farm operator, like them.



          • #35
            Things are changing every day. Blender is the only 3D package that can run on ARM, and there's only Linux on these ARM boards. Cinema 4D is Windows and OSX only.
            NVIDIA is the only one that makes an ARM chip with a full OpenGL stack.
            How many people know anything about the Tegra? Only a few.
            Look at the X-Gene 2: they are faster than any Xeon, yet it's now impossible to buy one. Who even knows about it?
            Facebook and Google are adding ARM servers to their farms. Why, if they are more expensive and less powerful?
            There are rumours about a port of Mantra (the rendering software of SideFX's Houdini) to ARM.
            I'm French and live in France. There are only a few render farm services here, and they are expensive.
            A render farm is good for final rendering, not for test rendering. They are good for animation, less so for still images.
            Sorry, but you are wrong about the perf per watt: ten TX2s = 2560 CUDA cores + 40 A57 cores + 20 Denver2 cores for 160 W max. Look at Intel and AMD TDPs...



            • #36
              Originally posted by arcangeli View Post
              Hi all,
              I can confirm that there's no OpenCL on the Tegra X1/2. Only CUDA.
              For Blender, on my TX1, the Ubuntu Blender package does not work, but I've compiled it myself and submitted a patch on the Blender bug tracker.
              Blender works perfectly, and Cycles works on CPU AND on GPU.
              Here you can find my modified install_deps.sh script: http://ovh.to/TpL1QMA
              Here is the talk on the NVIDIA forum: https://devtalk.nvidia.com/default/t...omment=5078760
              And here is the patch: https://developer.blender.org/rB05df...bf65a2d3c2eae1

              I'll buy a TX2 soon, with the start of my little company (as a freelancer). Currently, Blender is my main software and the Jetson TX1 my main computer...
              Later, I'll buy more TX2s (the module) and build a render farm with them.
              Hi, could you please run the BMW benchmark? I know this platform is not meant for that kind of job, but I am very curious about its performance.
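              In case it helps, here's a minimal sketch of how one might time the BMW (bmw27) scene headless. It assumes blender is on PATH and that the demo file was downloaded from blender.org; "bmw27.blend" is a placeholder for wherever you saved it:

```python
# Minimal sketch: time a headless render of the BMW (bmw27) benchmark scene.
# Assumes blender is on PATH; "bmw27.blend" is a placeholder path for the
# demo file from blender.org.
import subprocess
import time

start = time.time()
subprocess.run(
    ["blender", "--background", "bmw27.blend", "--render-frame", "1"],
    check=True,
)
print(f"BMW render took {time.time() - start:.1f} s")
```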



              • #37
                Interesting points about ARM. I think some of the ARM server platforms could be good for rendering, but Tegra is an embedded platform. For one thing, does it even have enough RAM to be useful for realistic scenes?
                Originally posted by arcangeli View Post
                Sorry, but you are wrong about the perf per watt: ten TX2s = 2560 CUDA cores + 40 A57 cores + 20 Denver2 cores for 160 W max. Look at Intel and AMD TDPs...
                So, what if you take the Blender benchmark results and divide them by the watts used? Do that for both the Tegra and a modest PC workstation (like an i5 + GTX 1070). Then take each result and divide by cost. I'm not entirely sure which will provide more performance per watt, but I'm confident the PC will provide more perf per watt per $.
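                To be concrete, this is the arithmetic I have in mind, as a sketch. Every input below is a placeholder, not a measurement: substitute your own Blender throughput, wall-plug watts, and purchase prices.

```python
# Sketch of the proposed comparison. All inputs are hypothetical placeholders;
# substitute measured Blender throughput, measured wall power, and real prices.
def metrics(renders_per_hour, watts, price_usd):
    perf_per_watt = renders_per_hour / watts              # perf/W
    perf_per_watt_per_dollar = perf_per_watt / price_usd  # perf/W/$
    return perf_per_watt, perf_per_watt_per_dollar

systems = {
    "Jetson TX2 (placeholder numbers)":    metrics(1.0, 15.0, 600.0),
    "i5 + GTX 1070 (placeholder numbers)": metrics(8.0, 250.0, 1200.0),
}
for name, (ppw, ppwd) in systems.items():
    print(f"{name}: {ppw:.4f} perf/W, {ppwd:.6f} perf/W/$")
```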



                • #38
                  I've already done a test like this: one Jetson TX1 against one Supermicro with two Xeon X5687s and one GTX 980 Ti. That was the main reason I sold the Intel-based computer. Tegra offers better perf/watt.
                  The TDP of the 1070 is 150 W, almost 20x the TDP of the TX1/TX2 compute module. And you still have to add a motherboard and a CPU.
                  When the TK1 was announced, I read an article (I can't remember if it was on AnandTech or NVIDIA's site) that explained why a SoC needs less power for the same perf. The PCI bus is one thing that adds a lot of power consumption to a computer.
                  I'm really happy working on a 15 W computer. No PC can beat this. Why use a 200 W computer for the same result?
                  The other advantage of 10x TX2 as a render farm is that you can power up only a fraction of the farm for a simple job (and consume less power than a PC). Your average PC (Core i5 + 1070) always consumes more than 5 TX2s.
                  The electricity bill needs to be calculated over a year. I can save money with Tegra. I've been using ARM since 1989...
