NVIDIA Dropping 32-bit Linux Support For CUDA


  • #11
    Originally posted by TheBlackCat View Post
    If you are trying to use 32-bit for high-performance computing in the first place, you are doing it wrong.
    Compare OpenCL bitcoin miners. How much faster are the 64-bit ones?



    • #12
      Originally posted by RealNC View Post
      Compare OpenCL bitcoin miners. How much faster are the 64-bit ones?
      Bitcoin relies entirely on one very small, simple operation (SHA256²... well, actually that's two operations in a row, but still).
      As such, a Bitcoin miner runs *ENTIRELY* on the GPU in OpenCL (or CUDA, but Nvidia cards have much worse performance there).
      The code running on the main CPU is only in charge of setting everything up, then fetching work from the pool and sending it to the miner worker on the GPU, so it doesn't have much impact on the mining process, and thus 32 vs. 64 bits doesn't matter much.
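
      The double hash the post mentions can be sketched in a few lines of Python (illustrative only: a real miner runs this on the GPU or ASIC, and the header bytes below are made up, not real chain data):

      ```python
      import hashlib

      def double_sha256(data: bytes) -> bytes:
          # Bitcoin's proof-of-work hash: SHA-256 applied twice in a row.
          return hashlib.sha256(hashlib.sha256(data).digest()).digest()

      # A miner hashes an 80-byte block header over and over, varying only a
      # 4-byte nonce, until the resulting hash falls below the network target.
      header76 = bytes(76)  # made-up header prefix, purely for illustration
      for nonce in range(4):
          digest = double_sha256(header76 + nonce.to_bytes(4, "little"))
          print(nonce, digest[::-1].hex())  # displayed byte-reversed, as Bitcoin does
      ```

      The CPU side really is just this kind of bookkeeping loop; the billions of hash evaluations per second all happen on the device.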

      But in fact, most miners used a lot of graphics cards in parallel, which required as many PCIe slots as possible, which in turn meant high-end motherboards, which always means putting in a modern 64-bit CPU.
      And Linux is trivial to install and use in its 64-bit flavor.
      (On the other hand, I don't know about mining rigs running on Windows (for driver reasons, maybe?); perhaps 32-bit is still popular there?)



      • #13
        Great!

        If you're doing GPGPU, you shouldn't be using 32-bit anyway.
        Removing CUDA from the Nvidia driver makes the driver smaller; that's good!
        Maybe it will also lead to more focus on OpenCL.



        • #14
          Originally posted by uid313 View Post
          If you're doing GPGPU, you shouldn't be using 32-bit anyway.
          Well, that depends on the workload.
          On a heavily parallelisable problem, where most of the computation happens on the GPU,
          you might go for a lightweight solution on the CPU side
          (that's currently the most popular setup for Bitcoin mining, by the way:
          the heavy lifting is done by a dedicated ASIC, while the driving CPU is usually just a Raspberry Pi;
          that's rather easy to set up, as ASICs usually just use a USB interconnect.
          CUDA is more demanding, as you need a PCIe interconnect.)

          As far as I know, the only ultra-lightweight ARM setup for CUDA (i.e., the only ARM board I've ever heard of boasting a PCIe slot) is based around Nvidia's own Tegra 3, which itself is still 32-bit (even Tegra 4 is still a 32-bit big.LITTLE design).

          Thus 32-bit CUDA still makes sense if you have a purely GPU workload that can be paired with a minimalistic ARM CPU on the host side.
          (And since [power draw of graphics cards] >>> [tiny power draw of ARM], you only spend ~[power usage of graphics card] per computational node, instead of [power usage of graphics card] + [honking power consumption of a full-blown x86_64 that basically stays idle while the graphics card does the computation].)
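
          To put rough numbers on that argument (the wattages below are hypothetical, illustrative figures, not measurements of any real hardware):

          ```python
          # Hypothetical, illustrative wattages -- not measured values.
          GPU_W = 250   # one high-end graphics card under compute load
          ARM_W = 5     # minimal ARM host driving the card
          X86_W = 100   # full-blown x86_64 host that mostly sits idle

          arm_node = GPU_W + ARM_W   # total draw of an ARM-driven node
          x86_node = GPU_W + X86_W   # total draw of an x86-driven node

          print(arm_node, x86_node)                      # 255 350
          print(round(100 * (1 - arm_node / x86_node)))  # 27 (% saved per node)
          ```

          With figures in that ballpark, the ARM host's draw disappears into the GPU's, so each node costs roughly one graphics card's worth of power rather than a card plus an idle workstation.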

          If you carefully read Nvidia's release, they are specifically saying that they are killing the x86 32-bit edition of CUDA. CUDA for 32-bit ARM is still available.



          Removing CUDA from the Nvidia driver makes the driver smaller; that's good!
          Maybe it will also lead to more focus on OpenCL.
          But on the other hand, it removes a critical piece of technology that is used by the physics packages in some games. Thus most games which, for example, use PhysX-over-CUDA will need to bundle CUDA as well, which could introduce incompatibilities between the CUDA runtime and the kernel driver.

          That said, I'm all for OpenCL and other open, cross-compatible standards.

          (With CUDA, your code only works on Nvidia. With OpenCL, even if you heavily optimised everything for AMD, your code still runs on Intel or Nvidia, albeit more slowly.)

