AMD's GPUOpen HIP Project Made Progress Over The Summer
-
Originally posted by pal666 View Post
your knowledge is incorrect
-
Originally posted by schmidtbag View Post
CUDA, to my knowledge, is now open-sourced.
WAT?!?!?!111!
CUDA the language and its spec are of course open because they want people to use it, but the compiler isn't. There's no way in hell you can compile CUDA to work on other GPUs currently.
-
Originally posted by starshipeleven View Post
WAT?!?!?!111!
CUDA the language and its spec are of course open because they want people to use it, but the compiler isn't. There's no way in hell you can compile CUDA to work on other GPUs currently.
I'm well aware you cannot currently get existing CUDA applications to work on other GPUs. The exact purpose of my original post was to suggest that people create drivers so other GPUs can be CUDA-compatible. Calm down and read more carefully next time.
I understand that not the entire CUDA spec is open source, so I understand that creating drivers for other GPUs could be tricky. Regardless, I think a compatibility layer from CUDA to OpenCL would likely be more beneficial than this HIP project (in the same sense that Wine runs Windows programs on Linux).
-
Originally posted by schmidtbag View Post
The compiler is in fact open-sourced.
I understand that not the entire CUDA spec is open source, so I understand that creating drivers for other GPUs could be tricky. Regardless, I think a compatibility layer from CUDA to OpenCL would likely be more beneficial than this HIP project (in the same sense that Wine runs Windows programs on Linux).
I think they went with this because it's the only realistic way to reliably get the same or near-same performance; adding a layer of redirection to GPU code is bad, really bad.
-
Originally posted by starshipeleven View Post
That's a mere detail. They must have moved all the useful stuff into their blob first. The main thing you are wrong about (and the reason I'm reacting like this) is that you could even think NVIDIA has left the door open for anyone to use CUDA on non-NVIDIA GPUs. That's madness.
You mean that people in computing will be interested in a project that lets them run *SOME* few select programs like meh, quite a few like crap, and most not at all?
I think they went with this because it's the only realistic way to reliably get the same or near-same performance; adding a layer of redirection to GPU code is bad, really bad.
-
Originally posted by schmidtbag View Post
Not really... Nvidia contributes to open-source drivers for Tegra.
Every once in a while, Nvidia pitches in a little bit toward nouveau.
What use does Nvidia have for open-sourcing the CUDA compiler if that doesn't open doors for other GPUs to utilize it?
But that useful stuff is probably specific to Nvidia hardware.
CUDA as-is probably doesn't really reveal much about Nvidia's architectures, and that's what they actually care about.
It would defeat the whole point of having their own separate compute implementation to just let everyone use it on their cards.
They are going the same way with PhysX, down to the point of disabling it if the driver detects a non-NVIDIA GPU in the system (I assume they ignore Intel stuff).
You mean that people would prefer to be forced to recompile something so they can run it on their machine?
Most people aren't willing to do that.
Had it not occurred to you that not all CUDA applications are open source?
Do you really think the devs of closed-source applications are going to want to support two builds?
I would much rather have an AMD GPU emulate CUDA at half its potential performance than be limited to the CPU or forced to buy a new GPU.
Companies don't really care about hardware costs. Of course they don't buy 200 new GPUs every Saturday, but when they change their systems, the cost of the hardware isn't a factor.
Again - it's better than no GPU support at all. Would you rather have great performance in a limited selection of applications, or "decent" performance in all applications?
Making a shim so that a bunch of workstation guys can run CUDA applications that mostly also run on OpenCL anyway does not make economic sense.
-
Originally posted by starshipeleven View Post
Do you run a cluster of GPUs for computing? Those don't usually like the idea of running very expensive AMD hardware at half capacity for lulz; they prefer getting NVIDIA cards at the same price to get full performance.
Companies don't really care about hardware costs. Of course they don't buy 200 new GPUs every Saturday, but when they change their systems, the cost of the hardware isn't a factor.
What I wanted to say here is that the costs of the cards themselves aren't terribly relevant, but the running costs and the performance of the system are.
So yeah, they prefer vendor lock-in to slashing performance, introducing instability, or (the horror) errors in the API translation leading to calculation errors.