PhysX SDK Support Comes Back To Linux
-
The article from RealWorldTech was originally an analysis because Nvidia (or at least its marketing department) was up to some dirty tricks: they claimed that a GPU would increase physics performance a good deal over that of a CPU. What they didn't state was how misleading the test was, and that the comparison used was heavily biased in the GPU's favour.
-
Originally posted by blackshard View PostSSE instructions first appeared on the Intel Pentium III processor, and then on the Athlon XP processor.
We're talking about 1999.
Now, 11 years later, we still have no SSE-optimized PhysX. It is clear that if you bundle a software product with a hardware product, you're interested in selling the hardware product too.
That's the same reason Nvidia pays game developers through its The Way It's Meant To Be Played (TWIMTBP) program.
It is funny to see that the upcoming 3.0 SDK has SSE and multithreading enabled; it is even funnier to see that SDK 3.0 was announced when Ageia was not yet Nvidia's property...
However, I will judge when the new SDK is out.
Nothing to say here, except what I already said above: you want your software to run like crap except on your proprietary hardware.
In fact we are not talking about crippling anything.
We're just talking about Nvidia's dirty approach.
This is completely pointless. We're talking about different hardware (do you know that the PS3 has a graphics processor roughly comparable to a very old 7600GT? You have to render all the physics effects, not just calculate them...), about proprietary code, and about different operating systems and architectures. You're comparing apples to oranges.
I might also add that PhysX effects in PC games are scaled up because any recent PC has far more horsepower than any console. It's a matter of point of view.
Again, about the Cell optimizations: Nvidia was quick to optimize their PhysX implementation for a brand-new architecture like Cell (whose first commercial implementation is in the PlayStation 3), which is heavily multithreaded and relies on data streaming, yet they can't do the same for an old and established x86 instruction set extension like SSE and for multicore processors. Hmmm...
-
Originally posted by deanjo View PostThe link was to show that the next SDK has SSE and multithreading enabled by default. However, it is nice of you to prove that nVidia did not cripple the code. Also, the Ageia PPU came after many years of PhysX development; PhysX was already in development without any PPU hardware. During that time SSE and multicore were not as prevalent on CPUs.
Now, 11 years later, we still have no SSE-optimized PhysX. It is clear that if you bundle a software product with a hardware product, you're interested in selling the hardware product too.
That's the same reason Nvidia pays game developers through its The Way It's Meant To Be Played (TWIMTBP) program.
It is funny to see that the upcoming 3.0 SDK has SSE and multithreading enabled; it is even funnier to see that SDK 3.0 was announced when Ageia was not yet Nvidia's property...
However, I will judge when the new SDK is out.
Originally posted by deanjo View PostSure, nVidia could have done some deep hacking of SSE into the existing code base, but they had no real reason to, as it didn't make a difference to any of their products. It's no different than any other company out there when it comes to concentrating effort on features that benefit their product. It's not like development ended once the SDK was ported to CUDA; development continues on PhysX and, like any other company would, they concentrate first on their income makers.
Originally posted by deanjo View PostLack of development in one area is very different than purposefully crippling the competitor, like Intel has done in the past with their compilers (Intel can no longer cripple the compiler now, but it still does not have to optimize for competitor products either, e.g. by supporting SSE4a, 3DNow!, etc.). Even with the 4x-5x boost that is expected in some PhysX effects with SSE optimization, it would still fall far short of matching a GPU as more effects are applied.
We're just talking about Nvidia's dirty approach.
Originally posted by deanjo View PostEven with optimization for Cell, the effects fall short of matching a GPU's performance. This is why the various PhysX SDKs for the Power-based products still fall well behind a GPU implementation, requiring sacrifices to be made on their respective platforms with fewer effects. As a perfect example, compare Bionic Commando, Brothers in Arms, Frontlines, Rise of the Argonauts, etc., which do utilize PhysX on the various platforms, and you will see that the effects have been scaled back quite a bit on the consoles to maintain acceptable performance.
I might also add that PhysX effects in PC games are scaled up because any recent PC has far more horsepower than any console. It's a matter of point of view.
Again, about the Cell optimizations: Nvidia was quick to optimize their PhysX implementation for a brand-new architecture like Cell (whose first commercial implementation is in the PlayStation 3), which is heavily multithreaded and relies on data streaming, yet they can't do the same for an old and established x86 instruction set extension like SSE and for multicore processors. Hmmm...
-
Originally posted by blackshard View PostAnd so? The Ageia PPU was a single-precision device, and users were not told that it could "significantly deviate from correct results", especially since "significant" is a relative word.
Also note this:
"PhysX is certainly not using x87 because of the advantages of extended precision. The original PPU hardware only had 32-bit single precision floating point, not even 64-bit double precision, let alone the extended 80-bit precision of x87. In fact, PhysX probably only uses single precision on the GPU, since it is accelerated on the G80, which has no double precision. The evidence all suggests that PhysX only needs single precision."
Source: http://www.realworldtech.com/page.cf...0510142143&p=4
-
Originally posted by deanjo View Post
Also note this:
"PhysX is certainly not using x87 because of the advantages of extended precision. The original PPU hardware only had 32-bit single precision floating point, not even 64-bit double precision, let alone the extended 80-bit precision of x87. In fact, PhysX probably only uses single precision on the GPU, since it is accelerated on the G80, which has no double precision. The evidence all suggests that PhysX only needs single precision."
Source: http://www.realworldtech.com/page.cf...0510142143&p=4
-
Originally posted by blackshard View PostNvidia converted PhysX from the Ageia PPU to CUDA in a few weeks.
Now they can't convert PhysX from x87 to SSE. It is obvious: they won't convert it because they're not interested in making things efficient; they're just interested in selling more cards and blowing smoke in customers' eyes.
-
Originally posted by yogi_berra View Post"Free" software is free R&D for large corporations, nothing more and nothing less, no matter how much RMS wants to convince people otherwise.
Unless, of course, you are willing to believe that IBM, the corporation that helped streamline the killing of people in the WWII concentration camps, has a conscience or Intel, AMD, Red Hat, and Novell are running charities.
If I ignored that and bought the proprietary stuff instead, the other company would learn the opposite lesson.
So there is no point in questioning the motive behind right behaviour. By that logic you could even question doing selfless things for the people you love, because your body rewards you with a good feeling and makes you happy. What matters is the result, what actually happened, not the motives.
-
Originally posted by deanjo View PostThen there is also another factor you have to consider. The PhysX codebase dates back to 2002. Back then, SSE was found only on a handful of processors in use. When Nvidia acquired them back in 2008, they of course wanted to get it running on their products as efficiently as possible, as well as on the huge console markets, whose licensing agreements dwarf the PC's. So are they going to take extra time to convert legacy code on something they make very little on, or put those efforts towards revenue makers? It's all moot anyway, as the next version will have SSE optimization in it as well.
Now they can't convert PhysX from x87 to SSE. It is obvious: they won't convert it because they're not interested in making things efficient; they're just interested in selling more cards and blowing smoke in customers' eyes.
-
Originally posted by md1032 View PostNvidia pointed out that it was up to the application to use more than one core, and that game developers specifically asked them to make sure it worked on CPUs without SIMD instructions.
So now they are saying that game developers are "bad and ugly" because they deliberately chose to bring down the performance of their own software? It looks to me like they are just trying to shift the blame away from themselves.
It's even more interesting that you can find the most fanciful excuses surrounding this PhysX story, e.g. from "physics requires 80-bit FPU precision (lol?)" to "we're shipping PhysX 3.0 with automatic multicore and SSE support (lol again!)"...
-
Originally posted by deanjo View PostSure they did; it is a far less daunting task than trying to get it running on a CPU, even with SSE. Cell-based systems have the advantage of SPEs, which are self-multitasking. This is the reason why other similar applications (such as Folding@home and many other multithreaded apps) see huge gains over x86.