When Will UT3 For Linux Be Released?
Originally posted by deanjo:
Nvidia has no intention of killing CUDA; they have been very clear on that. CUDA is already taught at hundreds of universities, and it still has several advantages over OpenCL on Nvidia hardware. Plus there are many CUDA apps already out there (not on the consumer side, but on the academic/scientific side). CUDA has had enough of a lead in this area that it is going to be hard for OpenCL to kill it, especially when there are no current plans to bring OpenCL support to Windows, which is still the 10,000-pound gorilla.
The same was said about 3dfx's Glide. At the time it was the first consumer 3D API available (some called it a distilled version of OpenGL, implemented in hardware in the Voodoo architecture). The way CUDA stands right now is exactly that. Despite the momentum and the advantage of being first to market, look at the fate the API (and the company) met in the end. OpenGL outgrew Glide simply because it was easier to extend the API and then wait for the hardware to catch up; since Glide was so intrinsically tied to 3dfx's hardware, 3dfx had much more trouble keeping pace.

I know this won't necessarily play out the same way for nVidia and CUDA, but the fact that the API is "proprietary", meaning no other capable hardware can run it, will in the end (just as it was for Glide) be its Achilles' heel. DirectX 11 (more so than OpenCL, I must admit) is the big problem CUDA will have to face, and knowing how scientific applications work, it is more likely that they will adopt OpenCL over CUDA or even DirectX 11, simply because of multiplatform (software and hardware) support.

Still, GPGPU is in its infancy by comparison (much like consumer 3D was back in 1996, when the first consumer 3D accelerators appeared on the market), so what will happen we can only guess and speculate. Maybe CUDA grows into a cross-platform (hardware) standard and nVidia licenses it (like Intel did with the x86 microcode, MMX extensions, and other SIMD instructions) so that other manufacturers can implement it, making it a standard API... Knowing their track record, that seems highly unlikely today, though.
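To make the comparison concrete, here is a minimal, hypothetical sketch (not taken from any of the posts above) of a vector addition written as a C for CUDA kernel, with the rough OpenCL C equivalent noted in the comments. It only illustrates how closely the two kernel programming models map onto each other, which is part of why a port between them is plausible even though the host-side APIs differ.

// Hypothetical illustration, not from the thread: the same vector add as a
// C for CUDA kernel; the approximate OpenCL C counterpart is in the comments.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// OpenCL C equivalent (roughly):
//   __kernel void vec_add(__global const float* a, __global const float* b,
//                         __global float* c, int n) {
//       int i = get_global_id(0);
//       if (i < n) c[i] = a[i] + b[i];
//   }
__global__ void vec_add(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    // Host buffers.
    float *ha = (float*)malloc(bytes), *hb = (float*)malloc(bytes), *hc = (float*)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Device buffers and host-to-device copies.
    float *da, *db, *dc;
    cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Kernel launch (CUDA-only <<<...>>> syntax), one thread per element.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vec_add<<<blocks, threads>>>(da, db, dc, n);
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);

    printf("c[0] = %f\n", hc[0]);  // expect 3.0

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}

The kernel bodies are nearly line-for-line equivalents; the bigger differences show up on the host side, where OpenCL requires explicit platform, device, context, and queue setup instead of CUDA's implicit runtime.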
You have to keep in mind that nowadays we have a couple of strong parties to satisfy. In particular, there are two major graphics chip producers: nVidia and ATI. I don't know the exact usage numbers, but it should be roughly half and half between the two (of course there are Intel and others, but I neglect those for this post). Given a standard that works only on nVidia hardware and only on selected platforms, versus OpenCL, which works on all of them (because it is not proprietary), I would bet my money on OpenCL to win. Furthermore, like the other Open* products (OpenGL, OpenAL and now OpenCL), it is built on the same architecture. Not having to learn a new structure is a huge plus, since the basic mechanics learned for OpenGL, for example, spill over.
Originally posted by Dragonlord:
Also, just because something has an open alternative, that does not guarantee it will become the new de facto standard. We have seen how OpenGL support has deteriorated over the years in favor of closed (and sometimes proprietary) solutions.

Most CAD-type applications, for example, which were once the stronghold of OpenGL, now offer DirectX support, and most of the time it is even the default renderer. PS3 and Wii systems are OpenGL-capable, but game devs still prefer their proprietary solutions, such as libgcm, instead. The same goes for physics libraries: open-source alternatives have existed for quite some time now, but the industry leaders are still PhysX and Havok among commercially supported APIs, or an in-house solution. PhysX has been steadily gaining support over the last little while, with some big game publishers signing up. That's not usually a sign of a soon-to-be-dead technology.

Like I said, I'm not saying C for CUDA won't die, just that it won't happen as fast as you might expect. When you have a small number of competitors, it's not enough to merely equal each other's efforts; one has to greatly exceed the competition to become the new standard. OpenCL still needs a good 1.5 to 2 years of development before going toe to toe with the other established solutions, perhaps longer on any OS other than OS X. EAX has proven for years that a proprietary solution can enjoy market dominance. OpenAL has been available for years, but it didn't really start enjoying solid backing until recently, when Microsoft killed off sound acceleration in DX10, and even then EAX support is still going strong.

Last edited by deanjo; 24 February 2009, 09:11 PM.