NVIDIA Jetson AGX Xavier Benchmarks - Incredible Performance On The Edge
Written by Michael Larabel in Computers on 26 December 2018. Page 2 of 6. 18 Comments

First up is a comparison of Jetson TX2 and Jetson AGX Xavier TensorRT inference performance with different networks, precisions, and batch sizes. Worth noting is that the NVIDIA "DLA" deep learning accelerator cores currently support only FP16 precision, with INT8 support forthcoming. Additionally, the current TensorRT release does not support running inference combined across the Volta GPU and the DLAs, but that too is coming in a future release. So this initial benchmarking focuses on the TensorRT performance with the Volta GPU and its tensor cores, which already yields a profound difference compared to the TX2, and with the expanded DLA support will come even greater performance for the Xavier.
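As a rough illustration of the constraints described above, here is a minimal sketch of how one might assemble a command line for NVIDIA's trtexec benchmarking tool to target a DLA core on the Xavier. The model filename and batch size are hypothetical placeholders; the key points are that FP16 must be requested (the DLA does not yet support INT8) and that GPU fallback is needed for layers the DLA cannot run, since fully combined GPU+DLA execution is not yet supported.

```python
# Hypothetical sketch: building a trtexec invocation for DLA inference on
# Jetson AGX Xavier. Model path and batch size are illustrative assumptions.
import shlex


def trtexec_dla_cmd(model="resnet50.onnx", dla_core=0, batch=8):
    """Build a trtexec command line targeting one of Xavier's DLA cores."""
    args = [
        "trtexec",
        f"--onnx={model}",           # hypothetical model file
        f"--batch={batch}",          # batch size to benchmark
        "--fp16",                    # DLA supports only FP16 in this release
        f"--useDLACore={dla_core}",  # select a DLA core
        "--allowGPUFallback",        # unsupported layers fall back to the GPU
    ]
    return " ".join(shlex.quote(a) for a in args)


print(trtexec_dla_cmd())
```

This only constructs the command string; actually running it requires a Jetson board with the corresponding TensorRT release installed.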

The generational performance improvement for TensorRT with the AGX Xavier is simply astounding, and once combined with the DLA performance it will be even more dramatic. In many of these tests with the initial TensorRT release, the Xavier easily delivers 10~20x better performance than the previous-generation TX2. Besides the expanded DLA support, it will be interesting to see what other software optimizations NVIDIA achieves for the Xavier in the months ahead with forthcoming software library updates.

More of these TensorRT inference benchmarks can be found via this OpenBenchmarking.org result file.
