Looking At The OpenCL Performance Of ATI & NVIDIA On Linux
-
Originally posted by brent: Michael, please keep in mind that SmallPtGPU contains a bug/incompatibility that seriously limits performance on NVIDIA hardware, especially pre-Fermi.
Here's a diff that fixes it. This improves performance more than ten-fold on G80/GT200.
Comment
-
Originally posted by Veerappan: I'm probably missing something here, but the patch you linked only seems to correct things for Mac OS (#ifdef __APPLE__). The tests Michael ran were all on Ubuntu.
Comment
-
Originally posted by brent: NVIDIA's implementation defines __APPLE__ on all OSes, for... whatever... reasons. I think the workaround is no longer needed on OS X either; removing it completely should be fine.
Please note: defining __APPLE__ under Linux is a (huge) NVIDIA bug.
BTW, I'm the author of SmallPtGPU, MandelGPU, etc.; I have 2x 5870, 1x 5850 and a 5770, so let me know if you need the benchmarks run on any of that hardware.
Michael, you may be interested to check http://www.luxrender.net/wiki/index.php?title=SLG
It is a larger/more complex OpenCL application than the small demos (e.g. SmallPtGPU, etc.) and it may provide more real-world numbers.
You can find a small demo video of SLG here: http://vimeo.com/14290797
Comment
-
Originally posted by Qaridarium: On that point Apple and NVIDIA are the good ones ;-) and Intel is the evil one.
Comment
-
Originally posted by Qaridarium: "Intel no doubt saw that allowing Havok FX to live would mean giving more substance to the value of a GPU over a CPU, a market they still can't really compete in."
Intel is evil, I know...
I haven't bought an Intel CPU or product in the last 12 years,
and in the future I will never buy any product from this company.
But in my view NVIDIA fails with PhysX, because an open standard like OpenCL plus Bullet Physics is much better, even for NVIDIA: if there are more uses for a GPU, NVIDIA sells more GPUs, and Intel loses more and more, because nobody needs a fast CPU anymore.
With an open standard like OpenCL, only Intel is the loser.
NVIDIA is wasting its time on CUDA and PhysX.
Comment
-
Originally posted by Qaridarium: "What the developers use for a physics engine is up to the developer."
I don't think so... NVIDIA pays developers to use PhysX.
Developers just use whatever brings in the most cash.
"Nobody is blocking them from doing so."
I think NVIDIA is blocking,
and they pay to do maximum damage to other companies.
Here you go, spouting off wild speculative theories again without any basis.
"Nvidia also contributes to openCL and probably has the best implementation of it out there along with some of the best documentation."
Documentation? Can you give me the spec of a GTX 480?
"They are not forcing anybody to use Cuda or Physx,"
They pay; that is a kind of force, the force of profit and money.
"that is the choice of the developer."
The choice? I don't think so; no other company pays for bullshit like NVIDIA does.
This "choice" just hurts end users and forces them to buy NVIDIA hardware.
- Technical engagement, including referenceable source code, and access to game builds for competing vendors
- Developer tools
- Product development
- Lab testing
- Marketing programs
- Support and integration with our partners
Same as NVIDIA.
Comment
-
One thing as well, where NVIDIA is concerned: OpenCL is a layer on top of the CUDA driver interface. Without CUDA there is no OpenCL on NVIDIA; C for CUDA and OpenCL are simply two ways of accessing the same capabilities, as is DirectCompute, another vendor-specific API (from a software vendor this time), and AMD supports it as well. So it looks like AMD is "feeding" another "evil empire" and supporting it too.
Comment