NVIDIA's Jetson AGX Xavier Carmel Performance vs. Low-Power x86 Processors


  • NVIDIA's Jetson AGX Xavier Carmel Performance vs. Low-Power x86 Processors

    Phoronix: NVIDIA's Jetson AGX Xavier Carmel Performance vs. Low-Power x86 Processors

    Back in our NVIDIA Jetson AGX Xavier benchmarks from December, besides looking at the incredible Carmel+Volta GPU compute potential for machine learning and other edge-computing scenarios, we also looked at the ARMv8 Carmel CPU core performance against various other ARM SoCs on different single-board computers. But how do these eight NVIDIA Carmel CPU cores compare to low-power x86_64 processors? Here are some of those benchmarks for those curious about the NVIDIA CPU potential.


  • #2
    Thanks for including redis in the testing.

    • #3
      Originally posted by phoronix
      Phoronix: NVIDIA's Jetson AGX Xavier Carmel Performance vs. Low-Power x86 Processors
      This is hardly a fair comparison. The Celeron J3455 was launched 2.5 years ago and you're testing it in a single-channel configuration. Xavier has a 256-bit (essentially quad-channel) LPDDR4 interface.

      https://www.anandtech.com/show/13584...armel-and-more

      ...talk about bringing a pocket knife to a gun fight.

      It'd be great if you had an ODROID H2. Any chance of that happening?

      https://www.phoronix.com/scan.php?pa...-H2-Benchmarks
      Last edited by coder; 09 February 2019, 01:53 PM.
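
      For context on the memory gap coder describes: a 256-bit LPDDR4x interface at 4266 MT/s gives roughly 32 B × 4.266 GT/s ≈ 137 GB/s of theoretical bandwidth, versus about 8 B × 1.866 GT/s ≈ 15 GB/s for a single channel of DDR3L-1866 feeding the J3455. Since Xavier's CPU and GPU share the same LPDDR4 interface, a STREAM-style triad kernel is one way to sanity-check what is actually sustainable. A minimal CUDA sketch, not anything from the article (the array size and launch configuration are arbitrary choices):

      // triad.cu -- STREAM-style triad to approximate sustainable DRAM bandwidth.
      // Build: nvcc -arch=sm_72 triad.cu   (sm_72 = Xavier's integrated Volta)
      #include <cstdio>
      #include <cuda_runtime.h>

      __global__ void triad(const float *a, const float *b, float *c, int n) {
          int i = blockIdx.x * blockDim.x + threadIdx.x;
          if (i < n) c[i] = a[i] + 2.0f * b[i];   // two loads + one store per element
      }

      int main() {
          const int n = 1 << 26;                  // 64M floats = 256 MB per array
          float *a, *b, *c;
          cudaMalloc(&a, n * sizeof(float));
          cudaMalloc(&b, n * sizeof(float));
          cudaMalloc(&c, n * sizeof(float));
          cudaMemset(a, 0, n * sizeof(float));    // touch the arrays once up front
          cudaMemset(b, 0, n * sizeof(float));

          cudaEvent_t t0, t1;
          cudaEventCreate(&t0);
          cudaEventCreate(&t1);
          cudaEventRecord(t0);
          triad<<<(n + 255) / 256, 256>>>(a, b, c, n);
          cudaEventRecord(t1);
          cudaEventSynchronize(t1);

          float ms = 0.0f;
          cudaEventElapsedTime(&ms, t0, t1);
          // Three floats of DRAM traffic per element: read a, read b, write c.
          // A serious run would repeat the kernel and keep the best time.
          printf("triad: %.1f GB/s\n", 3.0 * n * sizeof(float) / (ms * 1e6));
          return 0;
      }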

      • #4
        Originally posted by coder

        This is hardly a fair comparison. The Celeron J3455 was launched 2.5 years ago and you're testing it in a single-channel configuration. Xavier has a 256-bit (essentially quad-channel) LPDDR4 interface.

        https://www.anandtech.com/show/13584...armel-and-more

        ...talk about bringing a pocket knife to a gun fight.

        It'd be great if you had an ODROID H2. Any chance of that happening?

        https://www.phoronix.com/scan.php?pa...-H2-Benchmarks
        Nope, I don't have an H2. From that article: "I don't yet have an ODROID-H2 for testing locally within a controlled environment, but via a Phoronix reader got remote access for some initial benchmarking for the time being."
        Michael Larabel
        https://www.michaellarabel.com/

        • #5
          This system is much too expensive. It's a complete rip-off. Outside of narrow industrial applications, this is a cloud-cuckoo-land purchase versus an amd64-based system. And I've been running a Jetson TX2 for 2 years as a desktop (please Nvidia, move to Ubuntu 18.04), so I know their ecosystem. Sure, they're beating the TX2 hands down, but it's 2.5x the price! That puts us in serious-performance territory from x86, even after factoring in electricity costs.

          Nvidia has basically decided that it doesn't want to compete against amd64, and so loads this thing up with shite that 95% of people will never use, and ramped the price to "corporate" levels.

          A lovely piece of hardware if it was 300 dollars. At 1200+ USD it's a complete joke.

          • #6
            Originally posted by vegabook
            This system is much too expensive. It's a complete rip-off. Outside of narrow industrial applications, this is a cloud-cuckoo-land purchase versus an amd64-based system. And I've been running a Jetson TX2 for 2 years as a desktop (please Nvidia, move to Ubuntu 18.04), so I know their ecosystem. Sure, they're beating the TX2 hands down, but it's 2.5x the price! That puts us in serious-performance territory from x86, even after factoring in electricity costs.

            Nvidia has basically decided that it doesn't want to compete against amd64, and so loads this thing up with shite that 95% of people will never use, and ramped the price to "corporate" levels.

            A lovely piece of hardware if it was 300 dollars. At 1200+ USD it's a complete joke.
            You're completely missing the point.

            They didn't build this as a low-cost workstation replacement or a high-end tablet SoC like the Tegra X1 (which is used in the Nintendo Switch, BTW). They made this SoC for self-driving and robotics applications. The tensor cores that "95% of people will never use" are there for 100% of its intended user base.

            If you are doing AI-intensive embedded work, you won't find a faster or more power-efficient solution at this performance level. Considering that and its size (9 billion transistors, 350 mm^2 die), the price is not unreasonable.
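
            For what it's worth, those tensor cores are directly visible from CUDA through the wmma intrinsics (Xavier's integrated Volta is compute capability 7.2). A minimal sketch of a single 16×16×16 half-precision tile multiply-accumulate, with hypothetical buffer names, launched as one warp, e.g. tile_mma<<<1, 32>>>(a, b, c):

            // wmma_tile.cu -- one 16x16x16 FP16 tensor-core tile (a sketch, not a full GEMM).
            // Build: nvcc -arch=sm_72 wmma_tile.cu
            #include <cuda_fp16.h>
            #include <mma.h>
            using namespace nvcuda;

            __global__ void tile_mma(const half *a, const half *b, float *c) {
                // One warp cooperatively computes C = A * B for a single 16x16 tile.
                wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> fa;
                wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> fb;
                wmma::fragment<wmma::accumulator, 16, 16, 16, float> fc;

                wmma::fill_fragment(fc, 0.0f);
                wmma::load_matrix_sync(fa, a, 16);   // 16 = leading dimension of the tile
                wmma::load_matrix_sync(fb, b, 16);
                wmma::mma_sync(fc, fa, fb, fc);      // this is the tensor-core operation
                wmma::store_matrix_sync(c, fc, 16, wmma::mem_row_major);
            }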

            • #7
              Originally posted by vegabook
              This system is much too expensive. It's a complete rip-off. Outside of narrow industrial applications, this is a cloud-cuckoo-land purchase versus an amd64-based system. And I've been running a Jetson TX2 for 2 years as a desktop (please Nvidia, move to Ubuntu 18.04), so I know their ecosystem. Sure, they're beating the TX2 hands down, but it's 2.5x the price! That puts us in serious-performance territory from x86, even after factoring in electricity costs.

              Nvidia has basically decided that it doesn't want to compete against amd64, and so loads this thing up with shite that 95% of people will never use, and ramped the price to "corporate" levels.

              A lovely piece of hardware if it was 300 dollars. At 1200+ USD it's a complete joke.
              Yes, I completely agree with this.

              While the performance is great, at this price it is completely non-competitive.

              It is nice that in multi-threaded non-AVX benchmarks the Jetson Xavier was able to easily beat mid-range Kaby Lakes, but the reality is that you can now buy, at half the price of Xavier (even after adding the price of memory), computers with an i7 Coffee Lake U or i7 Whiskey Lake U, which will have almost 50% higher single-threaded performance than anything in these benchmarks and almost 3 times the multi-threaded performance, leaving Xavier to bite the dust.

              I design embedded computers, mostly with ARM processors. I am always annoyed when customers want some obsolete Atom processor instead of a more suitable ARM processor, but when you want maximum performance at a given size and power consumption, ARM processors are unfortunately not a solution: they are either slow and obsolete, e.g. the RK3399 (the best you can find at a decent price), or, when they are fast, e.g. modern smartphone processors (with Cortex-A75 or Cortex-A76) or the NVIDIA Xavier, much more expensive than x86 processors.

              Xavier is only useful if you absolutely need CUDA, or a complex OpenGL application that depends on the excellent NVIDIA OpenGL drivers.

              If you can use OpenCL, or your OpenGL needs are modest enough that the weaker AMD drivers are OK, you can buy for half this price an Intel NUC with a 512-core Polaris GPU (either the current Crimson Canyon, with a 3.2 GHz Cannon Lake CPU that is better than the benchmarked Kaby Lakes, or its successor due later this year with a faster Whiskey Lake CPU).

              • #8
                Originally posted by vegabook
                This system is much too expensive. It's a complete rip-off. Outside of narrow industrial applications, this is a cloud-cuckoo-land purchase versus an amd64-based system. And I've been running a Jetson TX2 for 2 years as a desktop (please Nvidia, move to Ubuntu 18.04), so I know their ecosystem. Sure, they're beating the TX2 hands down, but it's 2.5x the price! That puts us in serious-performance territory from x86, even after factoring in electricity costs.

                Nvidia has basically decided that it doesn't want to compete against amd64, and so loads this thing up with shite that 95% of people will never use, and ramped the price to "corporate" levels.

                A lovely piece of hardware if it was 300 dollars. At 1200+ USD it's a complete joke.
                Umm. This type of system will never be a replacement for your desktop x86 processor.
                It is meant as a trial-and-play platform for MIPI video streaming with neural networks. Think self-driving stuff.
                As such it has much lower volume than any x86 platform. That, coupled with a specialized ARM CPU and a Volta GPU, makes it rather expensive.
                You're seriously under-using the hardware (your "95% crap" comment) if you mean to replace a desktop with it.

                I think it is still pretty affordable considering the hardware it provides.
                Show me anything that comes close at this price tag with an octa-core ARMv8.2 CPU, a Volta-derivative GPU, 16x MIPI CSI-2, 8x SLVS-EC, and PCIe Gen 4.0 (yes, 4.0).

                • #9
                  Very nice study.
                  Impressive how well these NVIDIA chips compare.
                  Some performance-per-watt numbers would also have been nice.
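
                  On the performance-per-watt point: the Jetson dev kits expose their onboard INA3221 power monitors through sysfs (the same rails tegrastats prints), so score-per-watt can be derived from any of the benchmark results. The rail path below is an assumption that varies by L4T release, not a documented constant, and a real measurement should sample repeatedly during the run rather than take one snapshot. A host-only sketch:

                  // perf_per_watt.cu -- divide a benchmark score by one power-rail sample.
                  // The sysfs path is an ASSUMPTION for the AGX Xavier devkit; check your
                  // L4T release. Host-only code; builds with nvcc or any C++ compiler.
                  #include <fstream>
                  #include <iostream>
                  #include <string>

                  int main(int argc, char **argv) {
                      // Hypothetical INA3221 rail node reporting milliwatts.
                      const std::string rail =
                          "/sys/bus/i2c/drivers/ina3221x/1-0040/iio_device/in_power0_input";
                      double score = (argc > 1) ? std::stod(argv[1]) : 0.0;  // benchmark result

                      long milliwatts = 0;
                      std::ifstream f(rail);
                      if (!f || !(f >> milliwatts) || milliwatts <= 0) {
                          std::cerr << "cannot read power rail: " << rail << "\n";
                          return 1;
                      }
                      double watts = milliwatts / 1000.0;
                      std::cout << "power: " << watts << " W, score/W: "
                                << score / watts << "\n";
                      return 0;
                  }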

                  • #10
                    Originally posted by vegabook
                    Nvidia has basically decided that it doesn't want to compete against amd64
                    More like it can't, because it uses a shit architecture, like anything RISC-based.
