PhysX SDK Support Comes Back To Linux


  • mirv
    replied
    The article from realworldtech was originally an analysis of how nvidia (or at least its marketing department) was up to some dirty tricks - they claimed that a GPU would increase physics performance quite a good deal over a CPU. What they didn't state was how misleading the test was, and that the comparison used was incredibly biased in the GPU's favour.

    Leave a comment:


  • deanjo
    replied
    Originally posted by blackshard View Post
    SSE instructions first appeared on the Intel Pentium III processor, and then on the Athlon XP processor.
    We're talking about 1999.

    Now, 11 years later, we still have no SSE-optimized PhysX. It is clear that if you bundle a software product with a hardware product, you're interested in selling the hardware product too.
    That's the same reason nvidia pays game developers through its The Way It's Meant To Be Played (TWIMTBP) program.
    Exactly, they just appeared. Support for new instruction sets being added well after those instruction sets appear is nothing new or unique. For example, there are literally thousands of applications out there, still being actively developed, that could benefit from SSE4+ but still don't take advantage of it. Of course development is going to be concentrated on their revenue stream.
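    As an aside, the usual pattern behind that adoption delay looks something like the following minimal sketch (it uses the GCC/Clang __builtin_cpu_supports builtin; the messages are just placeholders): an application that wants an SSE4 fast path still has to detect it at runtime and keep a fallback for older CPUs, which is extra work many projects simply never get around to.

        #include <cstdio>

        int main() {
            // Runtime feature check: pick the optimized path only when the
            // CPU actually supports SSE4.1, otherwise keep a generic path.
            if (__builtin_cpu_supports("sse4.1"))
                std::puts("SSE4.1 available: dispatch to the optimized routines");
            else
                std::puts("No SSE4.1: fall back to the generic routines");
            return 0;
        }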


    It is funny to see that the upcoming 3.0 SDK has SSE and multithreading enabled; it is even funnier to see that SDK 3.0 was announced when Ageia was not yet property of Nvidia...

    However, I will judge when the new SDK is out.
    Ya, so once again: Ageia started to address SSE in the 3.0 SDK, proving yet again that this was not nVidia's doing.

    Nothing to say here, except what I already said above: you want your software to run like crap except on your proprietary hardware.
    You can say that about any software that supports special features of a particular product. Again, that is not limited to nvidia. Havok apps, for example, could have been made to take advantage of GPUs as well, but that plan got squashed because the owner of the IP doesn't have a GPU that could exploit it. When Larrabee got killed, so did the efforts behind Havok FX.

    In fact we are not talking about crippling anything.
    We're just talking about nvidia's dirty approach.
    There is nothing dirty about it. It's about allocating your R&D to take advantage of the product that brings in your revenue.

    This is completely pointless. We're talking about different hardware (do you know that the PS3 has a graphics processor roughly comparable to a very old 7600GT? You have to render all the physics effects, not just calculate them...), about proprietary code, and about different operating systems and architectures. You're comparing apples with oranges.
    PhysX isn't even done on the GPU of the PS3. It's done entirely on the CPU, as it is on the Xbox 360, Wii, and iPod, and at a much smaller scale than a dedicated GPU solution.

    Also, I would say that PhysX effects in PC games are scaled up because any recent PC has much, much more horsepower than any console. It's a matter of point of view.
    When it comes to handling massively parallel calculations, even those old Power chips still have a very large advantage over the x86 architecture. The fastest x86 CPUs available today, for example, even in multiprocessor setups, can't keep up in those parallel workloads. Take a look at any of the mass distributed-computing projects: the PS3 with its Cell whoops ass on every CPU-only setup, and those projects are usually optimized to take advantage of the latest instruction sets that CPUs offer.

    Again, about Cell optimizations: nvidia was fast in optimizing their PhysX implementation for a new architecture like Cell (its first commercial implementation is in the Playstation 3), which is heavily multithreaded and relies on data streaming, yet can't do the same for an old and established x86 instruction set like SSE and for multicore processors. Hmmm...
    Nothing special here either. Their revenue stream from those consoles dwarfs anything that could be gained by optimizing for products that produce next to no revenue for them. Every company out there concentrates its efforts on its sources of revenue; the market that brings in the most revenue is the one they concentrate on.

    Leave a comment:


  • blackshard
    replied
    Originally posted by deanjo View Post
    The link was to show that the next SDK has SSE and multithreading enabled by default. However, it is nice of you to prove that nVidia did not cripple the code. Also, the Ageia PPU came after PhysX had already been in development for many years without any PPU hardware. During that time SSE and multicore were not as prevalent on CPUs.
    SSE instructions first appeared on the Intel Pentium III processor, and then on the Athlon XP processor. We're talking about 1999.
    Now, 11 years later, we still have no SSE-optimized PhysX. It is clear that if you bundle a software product with a hardware product, you're interested in selling the hardware product too.
    That's the same reason nvidia pays game developers through its The Way It's Meant To Be Played (TWIMTBP) program.

    It is funny to see that the upcoming 3.0 SDK has SSE and multithreading enabled; it is even funnier to see that SDK 3.0 was announced when Ageia was not yet property of Nvidia...

    However, I will judge when the new SDK is out.

    Originally posted by deanjo View Post
    Sure, nVidia could have done some deep hacking of SSE into the existing code base, however they had no real reason to develop that as it didn't make a difference to any of their products. It's no different than any other company out there when it comes to concentrating effort on features that benefit their product. It's not like the SDK was ported to CUDA and then it was done; development continues on PhysX, and first priority, like at any other company, goes to their income makers.
    Nothing to say here, except what I already said above: you want your software to run like crap except on your proprietary hardware.

    Originally posted by deanjo View Post
    Lack of development in one area is very different than purposefully crippling the competitor, like intel has done in the past with their compilers (now intel can no longer cripple the compiler, however it does not have to optimize it for competitor products either, such as supporting SSE4a, 3DNow!, etc). Even with the 4x-5x boost that is expected in some PhysX effects with SSE optimization, it would still fall far short of matching a GPU as more effects are applied.
    In fact we are not talking about crippling anything.
    We're just talking about nvidia's dirty approach.

    Originally posted by deanjo View Post
    Even with optimization for Cell, the effects fall short of matching a GPU's performance. This is why the various PhysX SDKs for the Power-based products still fall well behind a GPU implementation, requiring sacrifices to be made on their respective platforms with fewer effects. A perfect example of this: compare Bionic Commando, Brothers in Arms, Frontlines, Rise of the Argonauts, etc., which do utilize PhysX on the various platforms, and you will see that the effects have been scaled back quite a bit on the consoles to maintain acceptable performance.
    This is completely pointless. We're talking about different hardware (do you know that the PS3 has a graphics processor roughly comparable to a very old 7600GT? You have to render all the physics effects, not just calculate them...), about proprietary code, and about different operating systems and architectures. You're comparing apples with oranges.

    Also, I would say that PhysX effects in PC games are scaled up because any recent PC has much, much more horsepower than any console. It's a matter of point of view.

    Again, about Cell optimizations: nvidia was fast in optimizing their PhysX implementation for a new architecture like Cell (its first commercial implementation is in the Playstation 3), which is heavily multithreaded and relies on data streaming, yet can't do the same for an old and established x86 instruction set like SSE and for multicore processors. Hmmm...

    Leave a comment:


  • deanjo
    replied
    Originally posted by blackshard View Post
    And so? The Ageia PPU was a single-precision device; users were not told that it could "significantly deviate from correct results", not least because "significant" is a relative word.

    Also note this:

    "PhysX is certainly not using x87 because of the advantages of extended precision. The original PPU hardware only had 32-bit single precision floating point, not even 64-bit double precision, let alone the extended 80-bit precision of x87. In fact, PhysX probably only uses single precision on the GPU, since it is accelerated on the G80, which has no double precision. The evidence all suggests that PhysX only needs single precision."

    Source: http://www.realworldtech.com/page.cf...0510142143&p=4
    The link was to show that the next SDK has SSE and multithreading enabled by default. However, it is nice of you to prove that nVidia did not cripple the code. Also, the Ageia PPU came after PhysX had already been in development for many years without any PPU hardware. During that time SSE and multicore were not as prevalent on CPUs. Sure, nVidia could have done some deep hacking of SSE into the existing code base, however they had no real reason to develop that as it didn't make a difference to any of their products. It's no different than any other company out there when it comes to concentrating effort on features that benefit their product. It's not like the SDK was ported to CUDA and then it was done; development continues on PhysX, and first priority, like at any other company, goes to their income makers. Lack of development in one area is very different than purposefully crippling the competitor, like intel has done in the past with their compilers (now intel can no longer cripple the compiler, however it does not have to optimize it for competitor products either, such as supporting SSE4a, 3DNow!, etc). Even with the 4x-5x boost that is expected in some PhysX effects with SSE optimization, it would still fall far short of matching a GPU as more effects are applied. Even with optimization for Cell, the effects fall short of matching a GPU's performance. This is why the various PhysX SDKs for the Power-based products still fall well behind a GPU implementation, requiring sacrifices to be made on their respective platforms with fewer effects. A perfect example of this: compare Bionic Commando, Brothers in Arms, Frontlines, Rise of the Argonauts, etc., which do utilize PhysX on the various platforms, and you will see that the effects have been scaled back quite a bit on the consoles to maintain acceptable performance.
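    For context on where a 4x-5x figure for SSE comes from, here is a minimal sketch (made-up function names and data layout, not PhysX code) of the same single-precision update written as a plain scalar loop and as an SSE loop that processes four floats per 128-bit instruction:

        #include <xmmintrin.h>  // SSE intrinsics

        // Scalar version: one single-precision update per loop iteration.
        void integrate_scalar(float* y, const float* vy, float dt, int n) {
            for (int i = 0; i < n; ++i)
                y[i] += vy[i] * dt;
        }

        // SSE version: four single-precision updates per 128-bit instruction,
        // which is where the theoretical ~4x over scalar code comes from
        // before memory bandwidth and overhead eat into it.
        void integrate_sse(float* y, const float* vy, float dt, int n) {
            const __m128 vdt = _mm_set1_ps(dt);
            int i = 0;
            for (; i + 4 <= n; i += 4) {
                __m128 pos = _mm_loadu_ps(y + i);
                __m128 vel = _mm_loadu_ps(vy + i);
                _mm_storeu_ps(y + i, _mm_add_ps(pos, _mm_mul_ps(vel, vdt)));
            }
            for (; i < n; ++i)  // scalar tail for the leftover elements
                y[i] += vy[i] * dt;
        }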

    Leave a comment:


  • blackshard
    replied
    And so? The Ageia PPU was a single-precision device; users were not told that it could "significantly deviate from correct results", not least because "significant" is a relative word.

    Also note this:

    "PhysX is certainly not using x87 because of the advantages of extended precision. The original PPU hardware only had 32-bit single precision floating point, not even 64-bit double precision, let alone the extended 80-bit precision of x87. In fact, PhysX probably only uses single precision on the GPU, since it is accelerated on the G80, which has no double precision. The evidence all suggests that PhysX only needs single precision."

    Source: http://www.realworldtech.com/page.cf...0510142143&p=4
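    To illustrate that point: whether code like this ends up as x87 or SSE instructions is purely a compiler code-generation choice. The made-up example below (not PhysX source), built with GCC, typically compiles to x87 fmul/fadd instructions with -m32 -mfpmath=387 and to scalar SSE mulss/addss with -mfpmath=sse -msse; either way the source only ever asks for 32-bit single precision, so x87's 80-bit extended precision is not something the algorithm depends on.

        // Single-precision dot product of the kind a physics engine runs
        // constantly; the declared precision is 32-bit float throughout.
        float dot3(const float* a, const float* b) {
            return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
        }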

    Leave a comment:


  • deanjo
    replied
    Originally posted by blackshard View Post
    Nvidia converted PhysX from the Ageia PPU to CUDA in a few weeks.
    Now they can't convert PhysX from x87 to SSE. It is obvious: they won't convert it, because they're not interested in making things efficient, they're just interested in selling more cards and blowing smoke in customers' eyes.

    Leave a comment:


  • blackiwid
    replied
    Originally posted by yogi_berra View Post
    "Free" software is free R&D for large corporations, nothing more and nothing less, no matter how much RMS wants to convince people otherwise.

    Unless, of course, you are willing to believe that IBM, the corporation that helped streamline the killing of people in the WWII concentration camps, has a conscience, or that Intel, AMD, Red Hat, and Novell are running charities.
    The reasons big companies have are surely questionable, but that's not the point. If, of two companies, one does the right thing for the wrong reasons, I prefer that company over the other. So I reward that, and the company learns: "ok, what I did was right (to make money), so I'll keep doing it."

    If I ignored that and bought the proprietary stuff instead, the other company would learn the opposite.

    So there is no point in questioning the reasons behind right behaviour. By that logic you could even question doing selfless things for people you love, because your body rewards you with a feel-good drug and makes you happy about it. What actually happens is what matters, not the reasons.

    Leave a comment:


  • blackshard
    replied
    Originally posted by deanjo View Post
    Then there is also another factor you have to consider. The PhysX codebase dates back to 2002. Back then SSE was found on only a handful of processors in use. When Nvidia acquired them back in 2008, they of course wanted to get it running on their product as efficiently as possible, as well as on the huge console markets, whose licensing agreements dwarf the PC's. So are they going to take extra time to convert legacy code for something they make very little on, or put those efforts towards the revenue makers? All is for naught anyways, as the next version will have SSE optimization in it as well.
    Nvidia converted PhysX from the Ageia PPU to CUDA in a few weeks.
    Now they can't convert PhysX from x87 to SSE. It is obvious: they won't convert it, because they're not interested in making things efficient, they're just interested in selling more cards and blowing smoke in customers' eyes.

    Leave a comment:


  • blackshard
    replied
    Originally posted by md1032 View Post
    nvidia pointed out that it was up to the application to use more than one core, and that game developers specifically asked them to make sure it worked on CPUs without SIMD instructions.
    Yeah, yeah... it's shocking that nvidia didn't also say donkeys can fly...

    So now they are saying that game developers are the "bad and ugly" ones because they deliberately chose to bring down the performance of their own software? It looks to me like they are just trying to shift the blame away from themselves.

    It's even more interesting that you can find the most fanciful excuses around this physx story, e.g. from "physics requires 80-bit fpu precision (lol?)" to "we're shipping physx 3.0 with automatic multicore and sse support (lol again!)"...
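    For reference, a rough sketch of what "it is up to the application to use more than one core" usually means in practice (the Scene type and step_scene function are stand-ins, not the real PhysX API): the game engine owns the worker threads and hands each one an independent chunk of physics work, while each SDK call itself runs on a single thread.

        #include <functional>
        #include <thread>
        #include <vector>

        // Stand-in for an independent chunk of physics work, e.g. a
        // separate scene or simulation island.
        struct Scene { /* ... */ };

        // Stand-in for the SDK's simulation call; it runs on whatever
        // thread the application invokes it from.
        void step_scene(Scene& s, float dt) { (void)s; (void)dt; }

        // The application spawns its own workers and steps independent
        // scenes in parallel, one thread per scene.
        void step_all(std::vector<Scene>& scenes, float dt) {
            std::vector<std::thread> workers;
            for (Scene& s : scenes)
                workers.emplace_back(step_scene, std::ref(s), dt);
            for (std::thread& t : workers)
                t.join();
        }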

    Leave a comment:


  • smitty3268
    replied
    Originally posted by deanjo View Post
    Sure they did, it is a far less daunting task than trying to get it running on a CPU, even with SSE. Cell-based systems have the advantage of SPEs, which are self-multitasking. This is the reason why other similar applications (such as folding@home or many other multitasking apps) see huge gains over x86.
    And I have to call BS on this. It's well known that SPEs are extremely limited in what they can do, and getting decent performance out of them takes a huge amount of optimization work. Using the much more general-purpose SSE hardware is much simpler, although I'm sure it's less important to NVidia's bottom line than getting it working on the consoles.

    Leave a comment:
