PhysX SDK Support Comes Back To Linux


  • #31
    Sorry for the double post.

    Comment


    • #32
      Originally posted by blackiwid View Post
      Say what you want, you have no representative site showing that most Linux users (not only geeks) use Nvidia GPUs, even if you take those Smolt results. Only 0.7% of users in that statistic use Nvidia GPUs, while 0.9% use Intel or AMD solutions, so if you ignore the rest, Nvidia has less than 50% market share. (And within that share there are surely also a few Nvidia IGPs or older cards that don't support PhysX.)
      Nobody ever said they were the majority shareholder. They are, however, the largest piece of the pie. The sites that have stats on the subject agree. The burden of proving otherwise is on you; otherwise your evaluation of the subject is unfounded speculation with no supporting evidence.

      Comment


      • #33
        You can also look at the Steam hardware survey. It represents gamers, of course, but that is exactly what PhysX is for, isn't it, so the stats are somewhat relevant. Of all that hardware, the vast majority of those Nvidia cards are supported on Linux. Let's face it, even on Linux people do not go out and buy an Intel IGP for gaming.

        http://store.steampowered.com/hwsurvey/

        PhysX isn't used for OpenSSL, it isn't used for web surfing, it isn't used for email, etc. It's used for gaming, so you have to look at the hardware that gamers use. A target product for a target audience.

        Comment


        • #34
          Originally posted by blackshard View Post
          All things that a modern quad core processor can't do... can it?
          Care to point out an existing software physics solution that works just as well (Cloth, explosions, etc.)?

          Comment


          • #35
            The problem with physics on a graphics card is two-fold:
            a) the graphics card must also do graphics (funny that)
            b) it's really only useful for eye-candy. Actual physics that affect the game suffer from a performance bottleneck when reading the results back from the graphics card. This was one reason games would often run slower when "hardware accelerated" physics was enabled (back when Ageia owned it).
            Bullet and Havok are more than capable of being used instead of PhysX - naturally I'd recommend Bullet due to its open source nature.
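
            As a rough illustration of that readback stall, here is a minimal sketch of my own (not from the thread) using the CUDA runtime API; the kernel, names, and sizes are all made up. An eye-candy effect can be fired off asynchronously, but as soon as gameplay logic needs the results, the CPU has to block on the copy back from the device.

```cpp
// Hedged sketch: why gameplay-affecting GPU physics stalls the frame.
// The kernel is a toy integrator; the synchronous copy-back is the point.
#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

__global__ void stepParticles(float* pos, const float* vel, int n, float dt) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) pos[i] += vel[i] * dt;  // toy integration step
}

int main() {
    const int n = 1 << 16;
    const float dt = 1.0f / 60.0f;
    float *dPos = nullptr, *dVel = nullptr;
    cudaMalloc((void**)&dPos, n * sizeof(float));
    cudaMalloc((void**)&dVel, n * sizeof(float));
    cudaMemset(dPos, 0, n * sizeof(float));
    cudaMemset(dVel, 0, n * sizeof(float));

    // Eye candy: launch and move on. The kernel runs asynchronously and the
    // renderer can consume the results on the GPU without the CPU waiting.
    stepParticles<<<(n + 255) / 256, 256>>>(dPos, dVel, n, dt);

    // Gameplay physics: the CPU needs the results (hit detection, scoring, AI),
    // so it must block until the GPU finishes and the copy completes. This
    // synchronisation point is the readback bottleneck described above.
    std::vector<float> hPos(n);
    cudaMemcpy(hPos.data(), dPos, n * sizeof(float), cudaMemcpyDeviceToHost);

    std::printf("pos[0] after readback: %f\n", hPos[0]);
    cudaFree(dPos);
    cudaFree(dVel);
    return 0;
}
```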

            Comment


            • #36
              Originally posted by mirv View Post
              The problem with physics on a graphics card is two-fold:
              a) the graphics card must also do graphics (funny that)
              b) it's really only useful for eye-candy. Actual physics that affect the game suffer from a performance bottleneck when reading the results back from the graphics card. This was one reason games would often run slower when "hardware accelerated" physics was enabled (back when Ageia owned it).
              Bullet and Havok are more than capable of being used instead of PhysX - naturally I'd recommend Bullet due to its open source nature.
              One thing you have to remember, though: as a scene gets more complex, with multiple effects, you quickly start running out of threads on a CPU. The developers of Trials HD, for example, spent many, many hours of tweaking to get acceptable performance out of Bullet on the Xbox 360's six threads. A good place to see where even Bullet slows down is using it with one of the many 3D rendering apps out there; it bogs a system down quite handily.
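
              For anyone who hasn't used it, a minimal single-threaded Bullet setup looks roughly like the sketch below (my own illustration, not from Trials HD; the body counts and loop lengths are arbitrary). Every body and contact you add ends up inside that one stepSimulation call, which is exactly where the CPU budget runs out as the scene grows.

```cpp
// Hedged sketch of a minimal Bullet 2.x world; cleanup is omitted for brevity.
#include <btBulletDynamicsCommon.h>

int main() {
    // Standard Bullet world setup.
    btDefaultCollisionConfiguration config;
    btCollisionDispatcher dispatcher(&config);
    btDbvtBroadphase broadphase;
    btSequentialImpulseConstraintSolver solver;
    btDiscreteDynamicsWorld world(&dispatcher, &broadphase, &solver, &config);
    world.setGravity(btVector3(0, -9.81f, 0));

    // Drop a few hundred boxes; solver cost grows with contacts, not just bodies.
    btBoxShape box(btVector3(0.5f, 0.5f, 0.5f));
    for (int i = 0; i < 300; ++i) {
        btTransform start;
        start.setIdentity();
        start.setOrigin(btVector3((i % 10) * 1.1f, 2.0f + (i / 10) * 1.1f, 0.0f));
        btVector3 inertia(0, 0, 0);
        box.calculateLocalInertia(1.0f, inertia);
        btDefaultMotionState* motion = new btDefaultMotionState(start);
        btRigidBody* body = new btRigidBody(1.0f, motion, &box, inertia);
        world.addRigidBody(body);
    }

    // Step the simulation. With a fixed CPU budget, this call is what eventually
    // becomes the frame-time bottleneck as more effects are layered on.
    for (int frame = 0; frame < 600; ++frame) {
        world.stepSimulation(1.0f / 60.0f, 4);
    }
    return 0;
}
```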

              Comment


              • #37
                Originally posted by yogi_berra View Post
                Care to point out an existing software physics solution that works just as well (Cloth, explosions, etc.)?
                Havok?

                http://www.havok.com/index.php?page=havok-cloth

                http://www.youtube.com/watch?v=daZoXzBGea0

                And, AFAIK, Havok has no GPU-accelerated version.


                Bullet Physics:

                http://bulletphysics.org/wordpress/

                Comment


                • #38
                  Originally posted by deanjo View Post
                  One thing you have to remember, though: as a scene gets more complex, with multiple effects, you quickly start running out of threads on a CPU. The developers of Trials HD, for example, spent many, many hours of tweaking to get acceptable performance out of Bullet on the Xbox 360's six threads. A good place to see where even Bullet slows down is using it with one of the many 3D rendering apps out there; it bogs a system down quite handily.
                  I quite agree - the nature of physics calculations lends itself nicely to the highly parallel environment of a GPU, but it will also slow down and suffer in complex scenes if the data is required for anything more than eye candy. Which of course just means: leave the eye-candy parts to the GPU in order to take load off the CPU.
                  The Open Physics initiative should hopefully improve Bullet's standing. It will be interesting to see where that goes.

                  Comment


                  • #39
                    Originally posted by blackshard View Post
                    http://www.bit-tech.net/news/hardwar...ysics-at-gdc/1


                    See my comment above about Bullet. Bottom line: the more complex the scene gets, the faster you approach the CPU's limitations.

                    Comment


                    • #40
                      mmmh, I don't know what Intel thinks about AMD accelerating its software (Havok is Intel property):

                      http://en.wikipedia.org/wiki/Havok_%28software%29

                      Originally posted by deanjo View Post
                      See my comment above about Bullet. Bottom line: the more complex the scene gets, the faster you approach the CPU's limitations.
                      OK, I agree. But since we have powerful CPUs with four cores, and games actually barely use two of them, and since physics is a perfect streaming application, wouldn't it be a good idea to use SIMD and parallel execution on current x86 multicore CPUs?

                      Back to PhysX, and for all the PhysX aficionados: it has been shown that it uses no more than one CPU core (bad) and no SIMD at all (very bad):

                      http://www.realworldtech.com/page.cf...0510142143&p=4

                      The most obvious reason is to let Nvidia graphics cards shine compared to the pure software mode.
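
                      What blackshard suggests would look roughly like the sketch below (purely illustrative, not taken from any real engine; the particle counts and function names are invented): split the particles across the cores that games leave idle and let SSE integrate four of them per instruction, which are exactly the two things the linked article says the PhysX CPU path does not do.

```cpp
// Hedged sketch: physics integration as a streaming workload, split across
// cores with std::thread and vectorised with SSE (4 floats per instruction).
#include <immintrin.h>
#include <algorithm>
#include <cstddef>
#include <cstdio>
#include <thread>
#include <vector>

static void integrateRange(float* pos, const float* vel,
                           std::size_t begin, std::size_t end, float dt) {
    const __m128 vdt = _mm_set1_ps(dt);
    std::size_t i = begin;
    for (; i + 4 <= end; i += 4) {                 // 4 particles per SSE op
        __m128 p = _mm_loadu_ps(pos + i);
        __m128 v = _mm_loadu_ps(vel + i);
        _mm_storeu_ps(pos + i, _mm_add_ps(p, _mm_mul_ps(v, vdt)));
    }
    for (; i < end; ++i) pos[i] += vel[i] * dt;    // scalar tail
}

int main() {
    const std::size_t n = 1 << 20;
    const float dt = 1.0f / 60.0f;
    std::vector<float> pos(n, 0.0f), vel(n, 1.0f);

    // One worker per hardware thread: the cores that games barely use.
    unsigned workers = std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::thread> pool;
    std::size_t chunk = n / workers;
    for (unsigned w = 0; w < workers; ++w) {
        std::size_t begin = w * chunk;
        std::size_t end = (w == workers - 1) ? n : begin + chunk;
        pool.emplace_back(integrateRange, pos.data(), vel.data(), begin, end, dt);
    }
    for (auto& t : pool) t.join();

    std::printf("pos[0] after one step: %f\n", pos[0]);
    return 0;
}
```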

                      Comment


                      • #41
                        So if you think you will develop programs only for Nvidia (Linux) users and don't care about the negative social effects you produce, go ahead; free software is about freedom, not about profit or the better solution. In the mid to long term it mostly produces the better solution too, and in the long term the other solutions mostly die. If you want to rip off some people with such proprietary solutions, go ahead, but don't expect the biggest part of Linux users to buy your software. Even the open-source guys have mostly understood the idea behind free software; they may make compromises (which are bad in my opinion), but as soon as there is another program that does the same or nearly the same as your solution, most Linux users will use it instead of yours.

                        So free software or open source is about giving and taking, not forcing people onto one vendor. So no, I'm not telling you what to do, but I am saying what I think about it.

                        But even without these free software ideologies, -> I <- don't get why anyone would use a solution that works only with cards from one vendor, whether that vendor has 30, 40, or 60% market share.

                        Comment


                        • #42
                          Originally posted by blackshard View Post
                          mmmh, I don't know what Intel thinks about AMD accelerating its software (Havok is Intel property):

                          http://en.wikipedia.org/wiki/Havok_%28software%29
                          AMD licensed Havok from Intel quite a few years ago.

                          Comment


                          • #43
                            More references to it.

                            http://www.theinquirer.net/inquirer/...-havok-physics

                            http://www.xbitlabs.com/news/multime...echnology.html

                            Comment


                            • #44
                              Originally posted by blackshard View Post
                              Back to PhysX, and for all the PhysX aficionados: it has been shown that it uses no more than one CPU core (bad) and no SIMD at all (very bad):

                              http://www.realworldtech.com/page.cf...0510142143&p=4

                              The most obvious reason is to let Nvidia graphics cards shine compared to the pure software mode.
                              Nvidia pointed out that it was up to the application to use more than one core, and that game developers specifically asked them to make sure it worked on CPUs without SIMD instructions.

                              Comment


                              • #45
                                Originally posted by md1032 View Post
                                Nvidia pointed out that it was up to the application to use more than one core, and that game developers specifically asked them to make sure it worked on CPUs without SIMD instructions.
                                Sources please.

                                Comment
