PhysX SDK Support Comes Back To Linux


  • #61
    Originally posted by deanjo View Post
    Then there is also another factor you have to consider. The PhysX codebase dates back to 2002. Back then SSE was found on only a handful of processors in use. When Nvidia acquired them back in 2008, they of course wanted to get it running on their product as efficiently as possible, as well as on the huge console markets, whose licensing agreements dwarf the PC's. So are they going to take extra time to convert legacy code on something they make very little on, or put those efforts towards revenue makers? All is for naught anyway, as the next version will have SSE optimization in it as well.
    Nvidia converted PhysX from the Ageia PPU to CUDA in a few weeks.
    Now they can't convert PhysX from x87 to SSE? It is obvious they won't convert it: they're not interested in making things efficient, they're just interested in selling more cards and filling customers' eyes with smoke.



    • #62
      Originally posted by yogi_berra View Post
      "Free" software is free R&D for large corporations, nothing more and nothing less, no matter how much RMS wants to convince people otherwise.

      Unless, of course, you are willing to believe that IBM, the corporation that helped streamline the killing of people in the WWII concentration camps, has a conscience, or that Intel, AMD, Red Hat, and Novell are running charities.
      The reasons big companies have are surely questionable, but that's not the point: if, of two companies, one does the right thing for the wrong reasons, I prefer that company over the other. So I reward it, and the company learns: "OK, what I did was right (it made money), so I'll keep doing it."

      If I ignored that and bought the proprietary stuff instead, the other company would learn the opposite.

      So there is no point in questioning the reasons for right behavior. By that logic you could even question doing selfless things for the people you love, because your body rewards you with a good feeling and makes you happy. The result, what happens, is what's relevant, not the reasons.



      • #63
        Originally posted by blackshard View Post
        Nvidia converted PhysX from the Ageia PPU to CUDA in a few weeks.
        Now they can't convert PhysX from x87 to SSE? It is obvious they won't convert it: they're not interested in making things efficient, they're just interested in selling more cards and filling customers' eyes with smoke.
        http://www.linksalpha.com/discuss?id...-sdk-30-434377



        • #64
          And so? The Ageia PPU was a single-precision device, and users were never told that it could "significantly deviate from correct results", especially since "significant" is a relative word.

          Also note this:

          "PhysX is certainly not using x87 because of the advantages of extended precision. The original PPU hardware only had 32-bit single precision floating point, not even 64-bit double precision, let alone the extended 80-bit precision of x87. In fact, PhysX probably only uses single precision on the GPU, since it is accelerated on the G80, which has no double precision. The evidence all suggests that PhysX only needs single precision."

          Source: http://www.realworldtech.com/page.cf...0510142143&p=4



          • #65
            Originally posted by blackshard View Post
            And so? The Ageia PPU was a single-precision device, and users were never told that it could "significantly deviate from correct results", especially since "significant" is a relative word.

            Also note this:

            "PhysX is certainly not using x87 because of the advantages of extended precision. The original PPU hardware only had 32-bit single precision floating point, not even 64-bit double precision, let alone the extended 80-bit precision of x87. In fact, PhysX probably only uses single precision on the GPU, since it is accelerated on the G80, which has no double precision. The evidence all suggests that PhysX only needs single precision."

            Source: http://www.realworldtech.com/page.cf...0510142143&p=4
            The link was to show that the next SDK has SSE enabled and multithreading by default. However, it is nice of you to prove that nVidia did not cripple the code. Also, the Ageia PPU came after PhysX had already been in development for many years without any PPU hardware. During that time SSE and multicore were not as prevalent on CPUs.

            Sure, nVidia could have done some deep hacking of SSE into the existing code base, but they had no real reason to develop that, as it didn't make a difference to any of their products. It's no different than any other company out there when it comes to concentrating effort on features that benefit their product. It's not like the SDK was ported to CUDA and then it was done; development continues on PhysX, and first priority, like at any other company, goes to their income makers. Lack of development in one area is very different than purposefully crippling the competitor, like Intel has done in the past with its compilers (now Intel can no longer cripple the compiler, but it does not have to optimize for competitor products either, such as supporting SSE4a, 3DNow!, etc.).

            Even with the 4x-5x boost that is expected in some PhysX effects with SSE optimization, it would still fall far short of matching a GPU as more effects are applied. Even with optimization for Cell, the effects fall short of matching a GPU's performance. This is why the various PhysX SDKs for the Power-based products still fall well behind a GPU implementation, requiring sacrifices to be made on their respective platforms with fewer effects. A perfect example: compare Bionic Commando, Brothers in Arms, Frontlines, Rise of the Argonauts, etc., which use PhysX on the various platforms, and you will see that the effects have been scaled back quite a bit on the consoles to maintain acceptable performance.



            • #66
              Originally posted by deanjo View Post
              The link was to show that the next SDK has SSE enabled and multithreading by default. However, it is nice of you to prove that nVidia did not cripple the code. Also, the Ageia PPU came after PhysX had already been in development for many years without any PPU hardware. During that time SSE and multicore were not as prevalent on CPUs.
              SSE instructions first appeared on the Intel Pentium III processor, and then on the Athlon XP. We're talking about 1999.
              Now, 11 years later, we still have no SSE optimization. It is clear that if you bundle a software product with a hardware product, you're interested in selling the hardware product too.
              That's the same reason nvidia pays game developers through its The Way It's Meant To Be Played (TWIMTBP) program.

              It is funny to see that the upcoming 3.0 SDK has SSE and multithreading enabled; it is even funnier to see that SDK 3.0 was announced when Ageia was not yet the property of Nvidia...

              However, I will judge when the new SDK is out.

              Originally posted by deanjo View Post
              Sure nVidia could have done some deep hacking in of SSE into the existing code base however they had no real reason to develop that as it didn't make a difference to any of their products. It's no different then any other company out there when it comes to concentrating effort on features that benefit their product. It's not like the SDK was ported to Cuda then it was done, development continues on Physx and first priority, like any other company would, concentrates on their income makers.
              Nothing to say here, except what I already said above: you want your software to run like crap everywhere except on your proprietary hardware.

              Originally posted by deanjo View Post
              Lack of development in one area is very different then purposefully crippling the competitor, like intel has in the past with their compilers (now however intel can no longer cripple the compiler, however it does not have to optimize them for competitor products either such as supporting SSE 4a, 3dNow, etc). Even with a 4x-5x boost that is expected in some Physx effects that would occur with SSE optimization it would still far short of matching a GPU as more effects are applied.
              In fact, we are not talking about crippling anything.
              We're just talking about nvidia's dirty approach.

              Originally posted by deanjo View Post
              Even with optimization for Cell the effects fall short of matching a GPU's performance. This is why the various Physx SDK for the Power based products still falls well behind a GPU impementation requiring sacrifices to be made on their respective platform with less effects. A perfect example of this is compare Bionic Commando, Brother in Arms, Frontlines, Rise of the Argonauts, etc which do utilize Physx on the various platforms and you will see that the effects have been scaled back quite a bit on the consoles to maintain acceptable performance.
              This is completely pointless. We're talking about different hardware (do you know that the PS3 has a graphics processor roughly comparable to a very old 7600GT? You have to render all the physics effects, not just calculate them...), about proprietary code, and about different operating systems and architectures. You're comparing apples with oranges.

              Also, I may say that PhysX effects in PC games are scaled up because any recent PC has much, much more horsepower than any console. It's a matter of point of view.

              Again, about the Cell optimizations: nvidia was fast to optimize their PhysX implementation for an architecture as new as Cell (its first commercial implementation being in the PlayStation 3), which is heavily multithreaded and relies on data streaming, yet can't do the same for an old, established x86 extension like SSE and for multicore processors. Hmmm...



              • #67
                Originally posted by blackshard View Post
                SSE instructions first appeared on the Intel Pentium III processor, and then on the Athlon XP.
                We're talking about 1999.

                Now, 11 years later, we still have no SSE optimization. It is clear that if you bundle a software product with a hardware product, you're interested in selling the hardware product too.
                That's the same reason nvidia pays game developers through its The Way It's Meant To Be Played (TWIMTBP) program.
                Exactly, they just appeared. Support for new instruction sets lagging well behind their introduction is nothing new or unique. For example, there are literally thousands of applications out there still being actively developed that could benefit from SSE4+, but they still don't take advantage of it. Of course development is going to be concentrated on their revenue stream.


                It is funny to see that the upcoming 3.0 SDK has SSE and multithreading enabled; it is even funnier to see that SDK 3.0 was announced when Ageia was not yet the property of Nvidia...

                However, I will judge when the new SDK is out.
                Ya, so once again: Ageia started to address SSE in the 3.0 SDK, proving yet again that this was not nVidia's doing.

                Nothing to say here, except what I already said above: you want your software to run like crap everywhere except on your proprietary hardware.
                You can say that about any software that supports special functions of a particular product; again, that is not limited to nvidia. Havok apps, for example, could have been made to take advantage of GPUs as well, but that plan got squashed because the owner of the IP doesn't have a GPU that could exploit it. When Larrabee got killed, so did the efforts behind Havok FX.

                In fact, we are not talking about crippling anything.
                We're just talking about nvidia's dirty approach.
                There is nothing dirty about it. It's about allocating your R&D to take advantage of the product that brings in revenue.

                This is completely pointless. We're talking about different hardware (do you know that the PS3 has a graphics processor roughly comparable to a very old 7600GT? You have to render all the physics effects, not just calculate them...), about proprietary code, and about different operating systems and architectures. You're comparing apples with oranges.
                PhysX isn't even done on the GPU of the PS3. It's done entirely on the processors, as it is on the Xbox 360, Wii, and iPod, and at a much lower scale than a dedicated GPU solution.

                Also, I may say that PhysX effects in PC games are scaled up because any recent PC has much, much more horsepower than any console. It's a matter of point of view.
                When it comes to handling massively parallel calculations, even those old Power chips still have a very large advantage over the x86 architecture. The fastest x86 CPUs available today, even in multiprocessor setups, can't keep up in parallel workloads. Take a look at any of the mass computing projects: the PS3 with its Cell whoops ass on every CPU-only setup, and those projects are usually optimized to take advantage of the latest instruction sets CPUs offer.

                Again, about the Cell optimizations: nvidia was fast to optimize their PhysX implementation for an architecture as new as Cell (its first commercial implementation being in the PlayStation 3), which is heavily multithreaded and relies on data streaming, yet can't do the same for an old, established x86 extension like SSE and for multicore processors. Hmmm...
                Nothing special here either. Their revenue stream from those consoles dwarfs any revenue that could be gained by optimizing for products that produce next to none for them. Every company concentrates its efforts on its sources of revenue; the market that brings in the most revenue is the one it concentrates on.



                • #68
                  The article from realworldtech was originally an analysis prompted by nvidia (or at least its marketing department) being up to some dirty tricks: they claimed that a GPU would increase physics performance quite a good deal over a CPU. What they didn't state was how misleading the test was, and that the comparison used was incredibly biased in the GPU's favour.



                  • #69
                    Originally posted by deanjo View Post
                    Exactly, they just appeared. Support for new instruction sets lagging well behind their introduction is nothing new or unique. For example, there are literally thousands of applications out there still being actively developed that could benefit from SSE4+, but they still don't take advantage of it. Of course development is going to be concentrated on their revenue stream.
                    Oh yeah, is this your defense?
                    As I said in the post above, SSE has been out for 11 (eleven!) years. The first SSE version is enough to do single-precision float additions and multiplications.

                    Originally posted by deanjo View Post
                    Ya, so once again: Ageia started to address SSE in the 3.0 SDK, proving yet again that this was not nVidia's doing.
                    We will see when SDK 3.0 is out. NVidia can say what it wants; I need proof, not marketing small talk.

                    Originally posted by deanjo View Post
                    You can say that about any software that supports special functions of a particular product; again, that is not limited to nvidia. Havok apps, for example, could have been made to take advantage of GPUs as well, but that plan got squashed because the owner of the IP doesn't have a GPU that could exploit it. When Larrabee got killed, so did the efforts behind Havok FX.
                    Yep, and in fact now that Havok has become Intel property, there is a real danger that it becomes processor-biased (remember the Intel compiler affair).

                    Originally posted by deanjo View Post
                    There is nothing dirty about it. It's about allocating your R&D to take advantage of the product that brings in revenue.
                    You could also use slaves to bring in revenue; do you think there would be nothing wrong with that?

                    Originally posted by deanjo View Post
                    PhysX isn't even done on the GPU of the PS3. It's done entirely on the processors, as it is on the Xbox 360, Wii, and iPod, and at a much lower scale than a dedicated GPU solution.
                    Are you kidding me? It's obvious PhysX doesn't run on the G70: it has no unified shaders and very little computational power.
                    I thought I was clear, but what I meant is that it is *useless* to compute physics if you don't *render* the effects. To render the additional effects, you need a faster graphics processor, so there is no point in computing lots of physics data if you then can't render it all. This is a reply to your assertion that console games have reduced physics effects.

                    Originally posted by deanjo View Post
                    When it comes to handling massively parallel calculations, even those old Power chips still have a very large advantage over the x86 architecture. The fastest x86 CPUs available today, even in multiprocessor setups, can't keep up in parallel workloads. Take a look at any of the mass computing projects: the PS3 with its Cell whoops ass on every CPU-only setup, and those projects are usually optimized to take advantage of the latest instruction sets CPUs offer.
                    On the other hand, Cell is really difficult to program and has just one general-purpose unit.
                    We could discuss the pros and cons of processor architectures for days... but that's not the point.

                    Originally posted by deanjo View Post
                    Nothing special here either. Their revenue stream from those consoles dwarfs any revenue that could be gained by optimizing for products that produce next to none for them. Every company concentrates its efforts on its sources of revenue; the market that brings in the most revenue is the one it concentrates on.
                    Or maybe, since the PS3/Xbox 360 hardware doesn't come from nvidia, nvidia is interested in making its physics engine as fast as possible there because there is more competition on those platforms?



                    • #70
                      Originally posted by blackshard View Post
                      Or maybe, since the PS3/Xbox 360 hardware doesn't come from nvidia, nvidia is interested in making its physics engine as fast as possible there because there is more competition on those platforms?
                      Or maybe you refuse to follow Occam's razor and think every half-baked conspiracy theory is the truth.

