PhysX SDK Support Comes Back To Linux
-
Originally posted by deanjo: Ya because Cell/PPC does SSE so well....
so.. for a fucking console they could put it in, but on the PC platform they had to take it out?
Shouldn't that give you something to think about?
Also - do you run x86 code on PowerPC? No, you don't. So there is no reason not to use SSE/SSE2 instructions when compiling - except when you want to cripple the CPU's performance.
Conclusion:
Nvidia is a lying bag of shit, and some people fall for it.
-
Originally posted by deanjo: Or in the Xenon found in the Xbox 360 you use VMX128, which is not completely backwards compatible with AltiVec (the enhancements, BTW, were done specifically for game physics).
-
Originally posted by energyman: funny because they support altivec...
-
Then there is also another factor you have to consider. The PhysX codebase dates back to 2002, when SSE was found on only a handful of the processors in use. When Nvidia acquired them back in 2008, they of course wanted to get it running as efficiently as possible on their own product, as well as in the huge console market, whose licensing agreements dwarf the PC's. So are they going to take extra time to convert legacy code for something they make very little on, or put those efforts toward the revenue makers? All is for naught anyways, as the next version will have SSE optimization in it as well.
-
@deanjo - this would make sense if some huge port were necessary to support SSE, but that's the whole point: there isn't.
Developers with access to the source code are able to tell the compiler to allow SSE and everything gets magically faster. Lots of them have done exactly that and shipped it in games.
Nvidia just has to flip a compiler flag, but they refuse to do so.
-
Originally posted by deanjo: Sure they did; it is a far less daunting task than trying to get it running on a CPU, even with SSE. Cell-based systems have the advantage of SPEs, which are self-multitasking. This is the reason why other similar applications (such as Folding@home or many other multitasking apps) see huge gains over an x86.
-
Originally posted by md1032: nvidia pointed out that it was up to the application to use more than one core, and that game developers specifically asked them to make sure it worked on CPUs without SIMD instructions.
So now they are saying that game developers are "bad and ugly" because they deliberately chose to bring down the performance of their own software? It looks to me like they are just trying to shift the blame away from themselves.
It's even more interesting that you can find the most fanciful excuses in this PhysX story, e.g. from "physics requires 80-bit FPU precision (lol?)" to "we're shipping PhysX 3.0 with automatic multicore and SSE support (lol again!)"...