NVIDIA Delivering CUDA To Linux On Arm For HPC/Servers


  • NVIDIA Delivering CUDA To Linux On Arm For HPC/Servers

    Phoronix: NVIDIA Delivering CUDA To Linux On Arm For HPC/Servers

    NVIDIA announced this morning for ISC 2019 that they are bringing CUDA to Arm beyond their work already for supporting GPU computing with lower-power Tegra SoCs...

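    One practical upside worth noting: CUDA source itself is host-architecture agnostic, so assuming the usual nvcc toolchain is what ships for Arm (which the announcement implies), the same kernel code that builds today on an x86-64 or POWER host should compile unchanged on an Arm server. A minimal sketch, with nothing Arm-specific in it:

    ```cuda
    #include <cstdio>
    #include <cuda_runtime.h>

    // Plain CUDA vector add: nothing in the kernel or host code depends on
    // whether the host CPU is x86-64, POWER, or Arm.
    __global__ void vadd(const float *a, const float *b, float *c, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            c[i] = a[i] + b[i];
    }

    int main()
    {
        const int n = 1 << 20;
        float *a, *b, *c;

        // Managed (unified) memory keeps the host-side code identical everywhere.
        cudaMallocManaged(&a, n * sizeof(float));
        cudaMallocManaged(&b, n * sizeof(float));
        cudaMallocManaged(&c, n * sizeof(float));

        for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

        vadd<<<(n + 255) / 256, 256>>>(a, b, c, n);
        cudaDeviceSynchronize();

        printf("c[0] = %f\n", c[0]);   // expect 3.000000

        cudaFree(a); cudaFree(b); cudaFree(c);
        return 0;
    }
    ```

    Built the usual way (`nvcc vadd.cu -o vadd`); treat it as a sketch of the portability argument rather than a statement about any particular Arm board's feature set.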

  • #2
    No fun being Nvidia. You own the high-end GPU market, yet you depend on your two mortal enemies for the host CPU, and both of them are competing with you in your core business.

    No surprise that they're doing ARM with such commitment. Actually it's a good thing - the Nano and the (newly half-price) Xavier are awesome price-performance little machines. I for one am giving them a second look. The Nano in particular is a fantastic little board, at a very un-Nvidia price point. If Nvidia plays this right, they could be the ones who finally make ARM on the desktop/server go mainstream.
    Last edited by vegabook; 17 June 2019, 02:20 PM.



    • #3
      Originally posted by vegabook:
      No fun being Nvidia. You own the high-end GPU market, yet you depend on your two mortal enemies for the host CPU, and both of them are competing with you in your core business.
      There's also IBM's POWER, which even supports NVLink natively. But that is probably only used at the higher end, for multi-GPU nodes.
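
      For those multi-GPU nodes, the practical question is whether the GPUs can reach each other directly. A minimal sketch of how one might probe that with the stock CUDA runtime (a plain peer-access query; it reports reachability, not whether the link is actually NVLink or just PCIe):

      ```cuda
      #include <cstdio>
      #include <cuda_runtime.h>

      int main()
      {
          int count = 0;
          cudaGetDeviceCount(&count);

          // Ask the runtime, for every device pair, whether direct peer access
          // is possible; on NVLink-connected GPUs this is normally available.
          for (int a = 0; a < count; ++a) {
              for (int b = 0; b < count; ++b) {
                  if (a == b) continue;
                  int ok = 0;
                  cudaDeviceCanAccessPeer(&ok, a, b);
                  printf("GPU %d -> GPU %d: peer access %s\n",
                         a, b, ok ? "yes" : "no");
              }
          }
          return 0;
      }
      ```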



      • #4
        Nvidia is also pursuing the RISC-V route, first for internal use, but long-term this could be their bet for their CPU future.



        • #5
          Makes a lot of sense, since their Jetson line is ARM.

          Originally posted by ms178:
          Nvidia is also pursuing the RISC-V route, first for internal use, but long-term this could be their bet for their CPU future.
          RISC-V is pretty much a requirement for exascale, as 128-bit virtual addressing will be needed (a 64-bit object ID plus a 64-bit offset). So I would not doubt that Nvidia is very much pursuing the RISC-V route if they want to be present in the exascale realm.
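
          To make that split concrete, here is a purely hypothetical sketch of a 64-bit-object-plus-64-bit-offset address; the struct and field names are my own illustration, not any actual RISC-V or Nvidia layout:

          ```cuda
          #include <cstdint>
          #include <cstdio>

          // Hypothetical 128-bit virtual address: a 64-bit object identifier
          // plus a 64-bit byte offset within that object, as described above.
          struct Wide128Addr {
              uint64_t object;   // which object/segment
              uint64_t offset;   // byte offset inside that object
          };

          int main()
          {
              Wide128Addr addr = { 42u, 0x1000u };
              printf("object=%llu offset=%llu size=%zu bytes\n",
                     (unsigned long long)addr.object,
                     (unsigned long long)addr.offset,
                     sizeof(addr));   // prints size=16 bytes, i.e. 128 bits
              return 0;
          }
          ```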



          • #6
            This approach is not too surprising, as they have to be extra careful where they spend their R&D. The big value proposition they bring is CUDA, which up to now was not cross-platform. So rather than trying to compete with the bigger dogs (and flopping like Qualcomm's Centriq did), they will bring their best asset forward and make it work with everyone else's hardware.

