NVIDIA Dropping 32-bit Linux Support For CUDA
-
Originally posted by RealNC
Compare OpenCL bitcoin miners. How much faster are the 64-bit ones?
As such: a bitcoin miner runs *ENTIRELY* on the GPU in OpenCL (or CUDA, but Nvidia cards perform much worse at it).
The code running on the main CPU is only in charge of setting everything up, then fetching work from the pool and sending it to the miner worker on the GPU, so it doesn't have much impact on the mining process, and thus 32 vs. 64 bits doesn't matter much.
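The CPU-side loop described above (set everything up, fetch work from the pool, hand it to the GPU worker, report results) can be sketched roughly as below. All function names and the work structure are hypothetical stand-ins for illustration, not any real miner's or pool protocol's API:

```python
def fetch_work_from_pool():
    # Hypothetical stand-in for a work request to the mining pool.
    return {"header": b"\x00" * 80, "target": 2**224}

def run_gpu_kernel(work):
    # Hypothetical stand-in for the OpenCL/CUDA kernel launch that does
    # all the actual hashing; the CPU mostly just waits for results here.
    return [12345]  # nonces the kernel reported as meeting the target

def submit_share(work, nonce):
    # Hypothetical stand-in for reporting a found share back to the pool.
    print(f"share found: nonce={nonce}")

def mine(rounds):
    # The whole CPU-side job is this thin dispatch loop, which is why
    # the host being 32- or 64-bit barely affects mining throughput.
    shares = 0
    for _ in range(rounds):
        work = fetch_work_from_pool()
        for nonce in run_gpu_kernel(work):
            submit_share(work, nonce)
            shares += 1
    return shares
```

The point of the sketch is that nothing in the hot path runs on the CPU; it only shuttles small work units and results around.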
But in fact most miners used a lot of graphics cards in parallel, which required as many PCIe slots as possible, which in turn meant high-end motherboards, which always means a modern 64-bit CPU.
And Linux is trivial to install and use in its 64-bit flavor.
(On the other hand, I don't know about mining rigs running on Windows (for driver reasons, maybe?); perhaps 32-bit is still popular there?)
-
Originally posted by uid313
If you're doing GPGPU you shouldn't be using 32-bit anyway.
On a heavily parallelisable problem, where most of the computation happens on the GPU,
you might go for a light-weight solution on the CPU side.
(That's currently the most popular setup for bitcoin mining, by the way:
the heavy lifting is done by a dedicated ASIC, while the driving CPU is usually just a Raspberry Pi;
that's rather easy to set up, as ASICs usually simply use a USB interconnect.
CUDA is more demanding, as you need a PCIe interconnect.)
As far as I know, the only ultra-light-weight ARM setup for CUDA (i.e. the only ARM board I've ever heard of boasting a PCIe slot) is based around Nvidia's own Tegra 3, which itself is still 32-bit (even Tegra 4 is still a 32-bit big.LITTLE design).
Thus 32-bit CUDA still makes sense if you have a purely GPU workload that can be driven by a minimalistic ARM CPU on the host side.
(And thus, since [power draw of the graphics cards] >>> [tiny power draw of the ARM chip], you only spend roughly [power usage of the graphics card] per computational node, instead of [power usage of the graphics card] + [honking power consumption of a full-blown x86_64 which basically stays idle while the graphics card does the computation].)
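With some purely illustrative wattage figures (assumptions for the sake of the arithmetic, not measurements of any real hardware), the per-node saving works out like this:

```python
# All wattages below are illustrative assumptions, not measurements.
GPU_W = 250  # hypothetical draw of one graphics card under full load
ARM_W = 5    # hypothetical draw of a small ARM host board
X86_W = 80   # hypothetical draw of an otherwise-idle x86_64 host

arm_node = GPU_W + ARM_W  # ARM-hosted node: 255 W
x86_node = GPU_W + X86_W  # x86-hosted node: 330 W
saving = x86_node - arm_node  # 75 W saved per node

print(f"ARM-hosted node: {arm_node} W, x86-hosted node: {x86_node} W")
print(f"saving per node: {saving} W ({100 * saving / x86_node:.0f}%)")
```

Multiply that saving by the number of nodes in a rig and the ARM host pays for itself quickly, which is the whole argument for keeping 32-bit ARM CUDA around.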
If you carefully read Nvidia's release, they are specifically saying that they are killing the x86 32-bit edition of CUDA. CUDA for 32-bit ARM is still available.
Removing 32-bit CUDA from the Nvidia driver makes the driver smaller; that's good!
Also, maybe it leads to more focus on OpenCL.
On the other hand, I'm all for OpenCL and other such open, cross-compatible standards.
(With CUDA, your code only works on Nvidia. With OpenCL, even if you heavily optimised everything for AMD, your code still runs on Intel or Nvidia, although slower.)