PhysX SDK Support Comes Back To Linux


  • #31
    Sorry for the double post

    • #32
      Originally posted by blackiwid View Post
      Say what you want, you have no representative site that shows that most Linux users (not only geeks) use Nvidia GPUs, even if you take those Smolt results. Only 0.7% of users in that statistic use Nvidia GPUs, while 0.9% use Intel or AMD solutions, so if you ignore the other entries Nvidia has less than 50% market share. (And within that share there are surely also a few Nvidia IGPs or older cards that don't support PhysX.)
      Nobody ever said they held the majority share. They are, however, the largest piece of the pie, and the sites that have stats on the subject agree. The burden of proving otherwise is on you; until then, your evaluation of the subject is backed by unfounded speculation with no supporting evidence.

      • #33
        You can also look at the Steam hardware survey. It represents gamers, of course, but that is exactly who PhysX is for, isn't it, so the stats are quite relevant. Of all that hardware, a vast majority of those Nvidia cards support Linux. Let's face it: even on Linux, people do not go out and buy an Intel IGP for gaming.



        PhysX isn't used for OpenSSL, it isn't used for web surfing, it isn't used for email, and so on. It's used for gaming, so you have to look at the hardware that gamers use. Target product for a target audience.

        • #34
          Originally posted by blackshard View Post
          All things that a modern quad core processor can't do... can it?
          Care to point out an existing software physics solution that works just as well (Cloth, explosions, etc.)?

          • #35
            The problem with physics on a graphics card is two-fold:
            a) the graphics card must also do graphics (funny, that)
            b) it's really only useful for eye candy. Actual physics that affect the game suffer from a performance bottleneck when reading results back from the graphics card. This was one reason that games would often run slower with "hardware accelerated" physics enabled (back when Ageia owned it).
            Bullet and Havok are more than capable of being used instead of PhysX - naturally I'd recommend Bullet due to its open source nature.
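            A minimal plain-C++ sketch (hypothetical numbers, not taken from any engine) of why gameplay-relevant physics tends to stay on the CPU: the result of each step is available to game logic in the same frame, with no device-to-host readback:

            ```cpp
            #include <cstdio>

            // Hypothetical gameplay-relevant physics: a ball falling under gravity.
            // Because the integration runs on the CPU, the game-logic test (has it
            // hit the floor?) reads the result immediately -- no GPU readback stall.
            int frames_until_floor() {
                float y = 10.0f, vy = 0.0f;               // start 10 m up, at rest
                const float dt = 1.0f / 60.0f, g = -9.81f;
                int frames = 0;
                while (y > 0.0f) {                        // gameplay decision, every frame
                    vy += g * dt;                         // semi-implicit Euler step
                    y  += vy * dt;
                    ++frames;
                }
                return frames;
            }

            int main() {
                // Roughly 1.4 s of simulated fall at 60 steps per second.
                std::printf("ball hits the floor after %d frames\n", frames_until_floor());
                return 0;
            }
            ```

            Had the same step run on the GPU, the floor test would need the position copied back first, which is exactly the bottleneck described above.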

            • #36
              Originally posted by mirv View Post
              The problem with physics on a graphics card is two-fold:
              a) the graphics card must also do graphics (funny, that)
              b) it's really only useful for eye candy. Actual physics that affect the game suffer from a performance bottleneck when reading results back from the graphics card. This was one reason that games would often run slower with "hardware accelerated" physics enabled (back when Ageia owned it).
              Bullet and Havok are more than capable of being used instead of PhysX - naturally I'd recommend Bullet due to its open source nature.
              One thing you have to remember, though: as a scene gets more complex, with multiple effects, you quickly start running out of threads on a CPU. The developers of Trials HD, for example, spent many hours of tweaking Bullet just to get acceptable gameplay out of the Xbox 360's six hardware threads. A good place to see where even Bullet slows down is using it with one of the many 3D rendering apps out there; it bogs a system down quite handily.
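              To illustrate the thread-scaling point, here is a hedged sketch (the particle count and six-thread figure are illustrative, not Trials HD's actual code) of the usual CPU-side approach: splitting one integration step across a fixed pool of worker threads. Once the scene needs more workers than the hardware has threads, you stop gaining anything.

              ```cpp
              #include <algorithm>
              #include <cstddef>
              #include <thread>
              #include <vector>

              // Split one integration step across a fixed pool of worker threads.
              // Each worker owns a contiguous slice, so slices never share data.
              void step_parallel(std::vector<float>& pos, const std::vector<float>& vel,
                                 float dt, unsigned workers) {
                  const std::size_t n = pos.size();
                  const std::size_t chunk = (n + workers - 1) / workers;
                  std::vector<std::thread> pool;
                  for (unsigned w = 0; w < workers; ++w) {
                      const std::size_t lo = w * chunk;
                      const std::size_t hi = std::min(n, lo + chunk);
                      pool.emplace_back([&pos, &vel, dt, lo, hi] {
                          for (std::size_t i = lo; i < hi; ++i)
                              pos[i] += vel[i] * dt;
                      });
                  }
                  for (auto& t : pool) t.join();
              }

              // Check helper: position of particle 0 after one step.
              float demo_first(unsigned workers) {
                  std::vector<float> pos(1000, 0.0f), vel(1000, 2.0f);
                  step_parallel(pos, vel, 0.5f, workers);
                  return pos[0];                 // 0.0 + 2.0 * 0.5 = 1.0
              }

              int main() {
                  return demo_first(6) == 1.0f ? 0 : 1;   // six workers, like the 360
              }
              ```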

              • #37
                Originally posted by yogi_berra View Post
                Care to point out an existing software physics solution that works just as well (Cloth, explosions, etc.)?
                Havok?



                (Please view in High Definition) This demo shows Havok Cloth in action, simulating 25 dancers, each one wearing a high resolution (7000 polygons) red dress. ...


                And, AFAIK, Havok has no GPU-accelerated version.


                Bullet Physics:

                • #38
                  Originally posted by deanjo View Post
                  One thing you have to remember, though: as a scene gets more complex, with multiple effects, you quickly start running out of threads on a CPU. The developers of Trials HD, for example, spent many hours of tweaking Bullet just to get acceptable gameplay out of the Xbox 360's six hardware threads. A good place to see where even Bullet slows down is using it with one of the many 3D rendering apps out there; it bogs a system down quite handily.
                  I quite agree - the nature of physics calculations lends itself nicely to the highly parallel environment of a GPU, but it will also slow down and suffer in complex scenes if the data is required for anything more than eye candy. Which of course just means leaving the eye-candy parts to the GPU in order to lift that load from the CPU.
                  The Open Physics initiative should hopefully improve Bullet's standing. It will be interesting to see where that goes.

                  • #39
                    Originally posted by blackshard View Post
                    Havok?



                    (Please view in High Definition) This demo shows Havok Cloth in action, simulating 25 dancers, each one wearing a high resolution (7000 polygons) red dress. ...


                    And, AFAIK, Havok has no GPU-accelerated version.
                    AMD has shown that Havok physics effects can be accelerated on GPUs via OpenCL, and Nvidia says it would be happy to work with Havok.



                    See my above comment about Bullet. Bottom line: the more complex the scene gets, the faster you approach the CPU's limitations.

                    • #40
                      Mmmh, I don't know what Intel thinks about AMD accelerating its software (Havok is Intel's property):



                      Originally posted by deanjo View Post
                      See my above comment about Bullet. Bottom line: the more complex the scene gets, the faster you approach the CPU's limitations.
                      OK, I agree. But since we have powerful CPUs with four cores, and current games barely use two of them, and since physics is a perfect streaming application, wouldn't it be a good idea to use SIMD and parallel execution on today's x86 multicore CPUs?

                      Back to PhysX: for all the PhysX aficionados, it has been shown that it uses no more than one CPU core (bad) and no SIMD instructions at all (very bad):

                      PhysX is a key application that Nvidia uses to showcase the advantages of GPU computing (GPGPU) for consumers. PhysX executing on an Nvidia GPU can improve performance by 2-4X compared to running on a CPU from Intel or AMD. We investigated and discovered that CPU PhysX exclusively uses x87 rather than the faster SSE instructions. This hobbles the performance of CPUs, calling into question the real benefits of PhysX on a GPU.


                      The most prominent reason is to let Nvidia graphics cards shine against the full-software mode.
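                      As a rough illustration of the SIMD point (not PhysX's actual code), a structure-of-arrays layout with a tight loop over contiguous floats is exactly the shape of code that compilers turn into packed SSE instructions rather than one x87 operation at a time:

                      ```cpp
                      #include <cstddef>
                      #include <vector>

                      // Structure-of-arrays state: contiguous float streams,
                      // the memory layout SIMD hardware wants.
                      struct Particles {
                          std::vector<float> x, vx;
                      };

                      // A tight, branch-free loop over contiguous data. With ordinary
                      // optimization flags on x86-64, compilers auto-vectorize this into
                      // packed SSE (four floats per instruction) instead of scalar x87.
                      void integrate(Particles& p, float dt) {
                          const std::size_t n = p.x.size();
                          for (std::size_t i = 0; i < n; ++i)
                              p.x[i] += p.vx[i] * dt;
                      }

                      // Check helper: position of particle 0 after one step.
                      float demo() {
                          Particles p;
                          p.x.assign(8, 0.0f);
                          p.vx.assign(8, 3.0f);
                          integrate(p, 2.0f);
                          return p.x[0];         // 0.0 + 3.0 * 2.0 = 6.0
                      }

                      int main() { return demo() == 6.0f ? 0 : 1; }
                      ```

                      A CPU PhysX build compiled this way would get the 4x-per-instruction win for free, which is what makes the x87-only finding quoted above so damning.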
