NVIDIA's Jetson AGX Xavier Carmel Performance vs. Low-Power x86 Processors


  • #11
    Originally posted by Weasel View Post
    More like it can't, because it uses a shit architecture, like anything RISC based.
    Since the Pentium Pro, Intel chips have used a RISC microarchitecture internally and decode the x86 CISC instructions into groups of RISC instructions before executing them.



    • #12
      Originally posted by milkylainen View Post

Umm. This type of system will never be a replacement for your desktop x86 processor.
It is meant as a trial-and-play platform for MIPI video streamers with neural networks. Think self-driving stuff.
As such, it has much lower volume than any x86 platform. That, coupled with a specialized ARM with a Volta GPU, makes it rather expensive.
You're seriously under-using the hardware (your 95% crap comment) if you mean to replace the desktop with it.

      I think it is still pretty affordable regarding the hardware it provides.
Show me anything that comes close to this pricetag with an octa-core ARMv8.2, a Volta-derivative GPU, 16x MIPI CSI-2, 8x SLVS, and PCIe Gen 4.0 (yes, 4.0).

      I am sorry, but nothing that you say justifies the price demanded by NVIDIA.

The fact that the board is intended for automotive applications does not justify the high price; it just explains why NVIDIA will succeed in selling Xavier: their intended customers are used to paying excessive prices, even for so-called "automotive" devices that do not really have any reliability-enhancing features that would justify double the normal price.


      > "Show me anything that comes close to this pricetag"

      The CPU part is easy to match and exceed at a price many times lower.
There are a large number of computers, with sizes from pico-ITX up to NUC, that have Intel processors and are much faster and much cheaper (2 to 4 times cheaper) than Xavier.


What is of course more difficult to match is the 512-core Volta GPU. I am not aware of any alternative with NVIDIA GPUs, but there are several choices, with sizes from NUC up to Nano-ITX, that have comparable AMD Vega or Polaris GPUs at prices 2 to 3 times lower than Xavier.

NVIDIA's Volta has better performance per watt than the AMD alternatives, likely by about 20% to 50%, but this is not enough to justify double the price.
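The arithmetic behind that claim can be sketched in a few lines. All numbers below are the rough assumptions from this post (a 50% perf/W advantage in Volta's favor, double the price), not measurements:

```python
# Back-of-envelope performance-per-dollar comparison.
# All figures are the rough assumptions from the post above,
# NOT benchmark results.

def perf_per_dollar(relative_perf, price):
    """Relative performance divided by relative price."""
    return relative_perf / price

amd_price = 1.0        # normalized price of an AMD-based small PC
xavier_price = 2.0     # the "double price" claimed for Xavier
amd_perf = 1.0         # normalized GPU performance at a fixed power budget
xavier_perf = 1.5      # best case: Volta 50% better perf/W -> 50% more perf

amd_value = perf_per_dollar(amd_perf, amd_price)           # 1.0
xavier_value = perf_per_dollar(xavier_perf, xavier_price)  # 0.75

# Even granting Volta its best case, the AMD option delivers
# more performance per dollar at the same power budget.
print(amd_value, xavier_value)
```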


The only real reason why NVIDIA will succeed in selling Xavier at this price is that an AMD solution would require a much greater software development effort, because it could not use the large number of mature CUDA-only libraries that exist for NVIDIA GPUs.

Anyone with sufficiently competent software developers could spend much less on the hardware; everyone else will be forced to use the expensive NVIDIA Xavier.


On the other hand, for anyone who does not depend on a CUDA GPU application, NVIDIA Xavier is completely useless because of its ridiculous price, despite its nice and fast ARM processors.



      • #13
Still a wrongly configured TX2; mine is twice as fast. Makes me wonder whether the Xavier is properly set up.



        • #14
          Originally posted by Weasel View Post
          More like it can't, because it uses a shit architecture, like anything RISC based.
Yeah, POWER9 is such a shit architecture. I can only see three explanations for such a comment: you're trolling for fun, you don't know what you're talking about, or you're drunk.



          • #15
            Originally posted by AdrianBc View Post


            I am sorry, but nothing that you say justifies the price demanded by NVIDIA.
            Yes. The system is much too expensive for your apparent type of use. So don't buy it? Nobody is forcing you.
            You still seem to think that the target audience is Generic Joe for his desktop replacement needs.
            "Devboards" like this can run several thousand dollars.

Show me ONE system with 16x MIPI CSI-2, PCIe 4.0, and a cutting-edge embedded CPU and GPU with this form factor and TDP for less than $1000.
Heck, even a Matrox card with a couple of SDI interfaces is several thousand dollars.
Or show me an x86 solution with the same TDP and form factor that will beat this solution at your claimed "third of the price".



            • #16
              Originally posted by ssokolow View Post

              Since the Pentium Pro, Intel chips have used a RISC microarchitecture internally and decode the x86 CISC instructions into groups of RISC instructions before executing them.
Actually, it isn't. It's neither CISC nor RISC; modern internal microarchitectures can't be classified as either.



              • #17
                Originally posted by milkylainen View Post

Actually, it isn't. It's neither CISC nor RISC; modern internal microarchitectures can't be classified as either.
                Fair enough. How about "Since the Pentium Pro, Intel chips have used an internal microarchitecture which is more RISC-like than CISC-like"?
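The "decodes CISC into RISC-like micro-ops" idea being debated above is usually illustrated with a memory-destination instruction cracking into simpler operations. A toy sketch (the micro-op names are invented for illustration; real Intel decoders emit proprietary uops and fuse some of them):

```python
# Toy illustration of CISC -> micro-op "cracking".
# Micro-op mnemonics are made up for this example; real decoders
# use proprietary internal encodings and may fuse micro-ops.

def crack(insn):
    """Split a memory-destination add into load/op/store micro-ops."""
    if insn == "add [mem], reg":
        return ["load  tmp <- [mem]",
                "add   tmp <- tmp + reg",
                "store [mem] <- tmp"]
    # simple register-to-register instructions map to one micro-op
    return [insn]

print(len(crack("add [mem], reg")))  # 3 micro-ops
print(len(crack("add reg, reg")))    # 1 micro-op
```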



                • #18
                  Originally posted by ssokolow View Post

                  Fair enough. How about "Since the Pentium Pro, Intel chips have used an internal microarchitecture which is more RISC-like than CISC-like"?
Don't know, really... The first-stage decode is into something pretty constant in size, so in that regard, sure.
But then it gets replaced and reworked with micro/macro-ops etc.
It's more like "whatever makes it go fast(er) given the current state of the art plus engineering effort".
Modern CPUs have evolved into engineering monsters with millions of engineering hours behind their current state.
I still marvel at the fact that we can buy a bazillion transistors, built in state-of-the-art ASIC factories, for peanuts.



                  • #19
                    Originally posted by milkylainen View Post

                    Yes. The system is much too expensive for your apparent type of use. So don't buy it? Nobody is forcing you.
                    You still seem to think that the target audience is Generic Joe for his desktop replacement needs.
                    "Devboards" like this can run several thousand dollars.

Show me ONE system with 16x MIPI CSI-2, PCIe 4.0, and a cutting-edge embedded CPU and GPU with this form factor and TDP for less than $1000.
Heck, even a Matrox card with a couple of SDI interfaces is several thousand dollars.
Or show me an x86 solution with the same TDP and form factor that will beat this solution at your claimed "third of the price".


Maybe I was not clear enough, but I never said anything about the suitability of NVIDIA Xavier as a desktop computer, because I believe it is more than obvious that at such a price Xavier is unsuitable for desktop use.

I was saying that at this price Xavier is also inappropriate for any embedded use, unless the software development is captive to the CUDA environment, so that the hardware price no longer matters.


I agree that for now there is no computer of this size that offers a PCIe 4.0 interface.
This might become important in the coming years, but for the moment there exists nothing useful that could be connected to it.

The only thing remotely useful that could be connected to Xavier's x8 PCIe 4.0 interface in the near future would be a 100 Gb/s Ethernet interface.

I cannot imagine why someone would want to do that, and I am quite sure that the CPU cores of Xavier would not be able to feed such a fast communication link.
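For scale, simple line-rate arithmetic (an optimistic upper bound, not a claim about Xavier's actual networking performance) shows what "feeding" such a link means: a saturated 100 Gb/s link carries roughly 12.5 GB/s, i.e. over 8 million standard 1500-byte frames per second:

```python
# Line-rate arithmetic for a 100 Gb/s Ethernet link.
# Ignores framing overhead (preamble, inter-frame gap, headers),
# so these are optimistic upper bounds.

link_bits_per_s = 100e9
bytes_per_s = link_bits_per_s / 8   # 12.5 GB/s
mtu = 1500                          # standard Ethernet payload size

frames_per_s = bytes_per_s / mtu    # ~8.3 million frames/s

# Spread across 8 CPU cores, each core would have to handle
# roughly a million frames per second, i.e. have well under a
# microsecond of work per frame.
per_core = frames_per_s / 8

print(f"{bytes_per_s/1e9:.1f} GB/s, {frames_per_s/1e6:.1f} M frames/s, "
      f"{per_core/1e6:.2f} M frames/s per core")
```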

It is more likely that NVIDIA plans to use this PCIe 4.0 interface to let the Volta GPUs of several Xavier modules communicate with each other, so that the GPU computation can be distributed.


This could be useful, but if you needed so much computational power that you had to aggregate multiple Xavier modules, you could get the same speed and power consumption in about the same volume, at a much lower price, using a mini-ITX board and a discrete GPU, which together cost much less than a single Xavier module.


You are also right that having 16 camera inputs is a unique feature, as most small computer boards have only between 1 and 4 camera inputs.
This still does not make Xavier worthwhile.

Starting from a much faster computer at half the price, I (as a designer of embedded computer boards) am certain that I could design an add-in board (probably on PCIe) that adds the 16 camera inputs at a small fraction of the price difference between Xavier and that computer.


So even if x8 PCIe 4.0 and 16 camera inputs are indeed unique features for now, they are not enough to make Xavier competitive with the alternative solutions.



                    > "Show me ... a cutting edge embedded CPU and GPU with this form factor and TDP for less than $1000"

After arguing that PCIe 4.0 and the camera inputs are not worth the extra cost of Xavier over anything else (at least 500 USD with the special initial developer offer, and almost 2000 USD otherwise), it is enough to give examples of cheap computers without these two features.

Form factor: the Xavier module has the size of an Intel NUC, but it also requires a carrier board, so a complete computer will be at least Nano-ITX or 3.5" in size, if not larger.

Power consumption: not public, but equal to or greater than that of an Intel NUC.

An Intel Bean Canyon NUC, e.g. the most expensive i7 variant at $450 barebones (NUC8i7BEH), comes to about $650 to $700 after adding DRAM and an SSD. This NUC variant has a much faster CPU than Xavier, probably about twice as fast at both single-threaded and multi-threaded tasks, and probably about 4 times faster at tasks where AVX can be used.

Its GPU has only 384 cores vs. Xavier's 512, only 128 MB of dedicated RAM, and a less efficient structure than Volta, so it might reach only half the Volta's speed. Nevertheless, its performance per watt is very good, and its performance should be good enough for many applications; many such applications are already done today using much weaker GPUs, e.g. those of ARM processors from vendors other than NVIDIA.

Anyone who needs a faster GPU can use a Crimson Canyon NUC, with a 512-core Polaris GPU; the cost of a complete computer is a little above $500. The CPU is faster than the Kaby Lake i5 benchmarked here, especially at tasks where AVX-512 can be used, where it is much faster. The GPU can use up to 25 W, but the power limit can be configured for the desired application.

There are also many products from various companies (Sapphire, UDOO, etc.) with AMD Embedded V1000 APUs with Vega GPUs, at sizes around Nano-ITX.

For any of these computers, if there were a need, it would not be difficult to design an add-in board, e.g. for an M.2 connector, with 16 camera inputs.


Someone who needs both the fastest CPU and the fastest GPU should use the NUC8i7BEH with a Thunderbolt-to-PCIe adapter and an NVIDIA discrete GPU, e.g. a Turing card, preferably in the mobile variant for lower power consumption.

Even this combination would be much cheaper than NVIDIA Xavier.
Even cheaper would be to connect the discrete GPU through an adapter to the M.2 connector and to connect the SSD through a Thunderbolt adapter.

When using a mobile MXM GPU, the size would be about the same as or smaller than that of a complete Xavier system with its carrier board.


                    • #20
                      Originally posted by AdrianBc View Post
                      I am sorry, but nothing that you say justifies the price demanded by NVIDIA.
It's designed to be an upgrade for those already locked into the NVIDIA ecosystem, and if we are talking about AI and compute, that is a lot of the market.
If your software expects CUDA and Tensor Cores, then nothing can beat this thing.

Note that this isn't just "automotive". Any random i.MX8 SoC can do automotive just fine; this is for AI in robotics and automotive, which is where CUDA and Tensor Cores are required.

