PhysX SDK Support Comes Back To Linux


  • Originally posted by leech View Post
    Beware the ninja midgets!

    Oddly, I think out of this entire thread of "nVidia is Evil and AMD is our friend" you have the most sane post.

    nVidia supports Linux and everyone cries foul. What the hell is wrong with people?

    Shadowgrounds for Linux, by the way, supports PhysX, and I'd like to see Wine support it better (I have Ghost Recon Advanced Warfighter working almost perfectly under Wine, but it doesn't utilize PhysX on my nVidia card).

    I do own an AMD/ATI 3200HD on my HP touchsmart. Guess what? They still suck at making drivers. The open source ones don't work with anything but Compiz or other open source software, and the fglrx most of the time can't release a version that supports the current version of Xorg.

    Let's face it: if you play games and use Linux, or even want dual-head to work properly, you use nVidia.

    Probably the main reason the Fedora site shows only .7% nVidia versus .9% for Intel and ATI is that I have NEVER seen a server with an nVidia GPU. Servers are almost always either Intel or those ATI ES1000 chips, though I do have one HP server that had a Matrox G200 in it. Thought that was cool.

    I never had an nVidia or ATI card until after my Matrox G400. I had a Radeon 7200 after that, for a short time, 'til the drivers annoyed me so much and the Matrox Parhelia came out. That card was AWESOME, but its performance pretty much sucked for anything newer than what was out when it was released, so I ended up getting an nVidia 6800GT and have mostly been getting nVidia since. If the person I am building a computer for wants to game, I always recommend nVidia.

    Sadly, the market, along with AMD and Intel, has forced people who want stability and performance to choose nVidia for Linux gaming, or even for workstations. Seriously, I have given AMD more than enough chances with my HP touchsmart tablet to know that it'll be another decade before I try them again.

    So kudos (CUDA?) to nVidia for (re)releasing the PhysX SDK for Linux.
    Well, that's a nice, straight-up off-topic post defending nVidia; they really need it at the moment.

    It's also nice that you rant about fglrx because "fglrx most of the time can't release a version that supports the current version of Xorg." - it's a well-argued point.
    And then you rant against the FOSS drivers too, trying to say that they (ATI/AMD) don't do anything good, especially for Linux gaming. It would be nice to understand what kind of gaming you were expecting from HD3200 integrated graphics...



    • Originally posted by yogi_berra View Post
      nVidia hires ninja midgets to castrate anyone that doesn't use PhysX. So yes, you will be made sterile.

      Seriously though, it means the difference between waiting days for your cloth simulation to finish calculating and having it done in real time. That is a huge difference.
      And you can do that simulation even better without PhysX.

      Wow.



      • Originally posted by blackshard View Post
        Well, that's a nice, straight-up off-topic post defending nVidia; they really need it at the moment.

        It's also nice that you rant about fglrx because "fglrx most of the time can't release a version that supports the current version of Xorg." - it's a well-argued point.
        And then you rant against the FOSS drivers too, trying to say that they (ATI/AMD) don't do anything good, especially for Linux gaming. It would be nice to understand what kind of gaming you were expecting from HD3200 integrated graphics...
        Well, the fglrx driver couldn't even make some of the games I ran on it do more than show a gray or black screen. And yes, they are still not opening all of the specs, which means it's still a half-assed attempt. By the way, the 3200HD actually does perform decently on Windows 7. I mean, I won't be playing Crysis at full settings on it, but it does play GRAW semi-OK (oddly, it tries to enable PhysX, but the performance is kind of slow).

        It wasn't an off-topic post defending nVidia. The actual topic at hand is that PhysX SDK support is coming back to Linux, and I defended nVidia in that light. I'm not saying they aren't scumbags; most corporations are. But unfortunately, like with most scumbags in the IT world (Adobe, Oracle, Google, etc.), we still have to deal with their crap. Fortunately, because of Linux, we don't have to deal with some other scumbags (like Microsoft). Now, am I saying they are scumbags because they won't open source their products? No. They are scumbags for creating crappy products that somehow become standard fare, so there is a high requirement for people to be able to use things like Flash, Java, or Google with its spam.

        I put my money where my mouth is. nVidia works, though it occasionally has weird issues (dual-head works on one driver, then is broken on the next; resolution tuning for HDTV works in Windows, but they took forever to bring it to Linux; etc.), but they still have better support, even while only providing a closed-source driver, than AMD does. That is the plain and simple truth.

        AMD's GPU offerings also tend to be tuned more for DirectX performance than for OpenGL; nVidia was one of the first GPU manufacturers to provide OpenGL acceleration.

        I'm not even defending them; I was presenting my experience. As of this date, fglrx will not work with Xorg 1.9. nVidia just released a beta that does, not to mention you can easily get the current driver working with the IgnoreABI option.
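
        For what it's worth, the IgnoreABI workaround is just a server flag: either start the server with "X -ignoreABI" for a one-off test, or set it in the config. Here's a minimal sketch of the xorg.conf route (the file location and the rest of the config will vary by distro):

        [code]
        # Illustrative snippet only -- merge into your existing /etc/X11/xorg.conf.
        # Tells the X server not to reject driver modules built for a different ABI version.
        Section "ServerFlags"
            Option "IgnoreABI" "True"
        EndSection
        [/code]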

        By the way, I still like Matrox; it's just their lack of performance and their slow development of Linux drivers that eventually forced me to pick between nVidia and ATI, and all the reviews I read told me to go with nVidia if I wanted a NEW video card (ATI's older, original Radeons worked with most things but were crash prone). I would still stand by that until the open source drivers for (now) AMD video cards work without major suckage.



        • Originally posted by energyman View Post
          And you can do that simulation even better without PhysX.

          Wow.
          Oh really? What other GPU-enabled physics SDK is out there? Can you do cloth simulation on a CPU? Yes, of course. Can you do it better and at equal or greater complexity? Keep dreaming.



          • Originally posted by deanjo View Post
            Oh really? What other GPU-enabled physics SDK is out there? Can you do cloth simulation on a CPU? Yes, of course. Can you do it better and at equal or greater complexity? Keep dreaming.
            Just so long as the physics is non-interactive (i.e. eye candy) in a game, or in some other application that doesn't require very high degrees of accuracy (such as industrial and/or scientific stuff), the GPU is fine.
            I'm just saying that there are some cases where the CPU is actually better suited for particular physics tasks, even if the majority are much faster on a GPU, and even PhysX keeps some parts on the CPU rather than offloading them (cloth physics does not fall under this category, as far as I know).



            • Originally posted by mirv View Post
              Just so long as the physics is non-interactive (i.e. eye candy) in a game, or in some other application that doesn't require very high degrees of accuracy (such as industrial and/or scientific stuff), the GPU is fine.
              Umm, a GPU can be every bit as accurate as a CPU; in fact, this is a key area for GPGPU, and the accuracy of a GPU has never been in question. Fermi-based GPUs also follow the IEEE 754-2008 standard, which is the exact same standard as CPUs. There is no "mythical property" that makes a CPU any more accurate. The "non-interactive" distinction is a bunch of bull, as one of the whole ideas behind PhysX is its interactivity; the PhysX effects are triggered by exactly those interactive means.

              [quote]
              I'm just saying that there are some cases where the CPU is actually better suited for particular physics tasks,
              [/quote]

              Really, the only place where this would happen is with a very limited data set (read: one particle at a time) and a very limited set of laws of physics applied to it, where clock speed becomes king. As soon as the data set becomes more complex, the massive parallelism of GPUs starts pulling away (see the rough CUDA sketch at the end of this post for the general idea).

              [quote]
              even if the majority are much faster on a GPU, and even PhysX keeps some parts on the CPU rather than offloading them (cloth physics does not fall under this category, as far as I know).
              [/quote]
              Sure, PhysX puts some less demanding tasks on the CPU; the net result, though, is still that utilizing the GPU as well is a far more efficient approach. Let's get one thing straight here: PhysX has never claimed, nor will it ever likely claim, to be a 100% accurate model of applied physics. It has always been eye candy and has been marketed as such. At best it could be used for "movie quality" physics. This is not a limitation of the GPU by any means; it is a limitation of having to produce acceptable effects in real time for the end user's enjoyment. You won't see PhysX being used for intermolecular fluid-interaction analysis, as it was never intended for that. You can, however, have every bit as accurate a model done on a GPU, with huge gains over a pure CPU setup, again because of its massive parallel capabilities versus the relatively small parallel capabilities of a CPU, if you are willing to take the time and code the more accurate equations into your application. Same results as a CPU, just more of them calculated in the same period of time.

              This isn't just a limitation of PhysX but also of Bullet, Havok, etc., as they are not meant to be 100% replications of real-life interaction. They are all there for eye candy, nothing more, nothing less. It just happens to be a fact that as that eye candy gets more complex, just LIKE the more accurate real-world physics calculation applications out there, a GPU's massive amount of parallelism will thump a CPU pretty much every time if coded correctly.


              PhysX is for gaming, nothing more, nothing less, but a GPU isn't limited to PhysX.
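
              To make the parallelism point concrete, here is a minimal CUDA sketch (not from PhysX or any shipping engine; the kernel, particle count, and time step are invented for illustration): one thread per particle, doing its update in ordinary IEEE 754 double precision, which Fermi-class GPUs support, so the arithmetic is the same a CPU loop would use.

              [code]
              // Hypothetical example: one explicit-Euler step for n independent particles.
              // Each GPU thread integrates one particle in IEEE 754 double precision.
              #include <cuda_runtime.h>
              #include <cstdio>

              struct Particle { double x, y, z, vx, vy, vz; };

              __global__ void integrate(Particle* p, int n, double dt, double gy)
              {
                  int i = blockIdx.x * blockDim.x + threadIdx.x;
                  if (i >= n) return;
                  p[i].vy += gy * dt;          // gravity
                  p[i].x  += p[i].vx * dt;
                  p[i].y  += p[i].vy * dt;
                  p[i].z  += p[i].vz * dt;
              }

              int main()
              {
                  const int n = 1 << 20;                    // ~1M particles, arbitrary
                  Particle* d = nullptr;
                  cudaMalloc((void**)&d, n * sizeof(Particle));
                  cudaMemset(d, 0, n * sizeof(Particle));

                  const int block = 256;
                  integrate<<<(n + block - 1) / block, block>>>(d, n, 1.0 / 60.0, -9.81);
                  cudaDeviceSynchronize();

                  cudaFree(d);
                  printf("stepped %d particles\n", n);
                  return 0;
              }
              [/code]

              A plain CPU for-loop over the same body computes the same update; the GPU just runs a huge number of those iterations concurrently, which is where the scaling argument comes from.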



              • I was trying to separate game physics from high-accuracy physics. Only recently has GPGPU been able to match CPU-level accuracy - this is due to the floating point units of the hardware. GPU physics is not some magic that makes each and every physics calculation better; it's just highly suited to most of the algorithms (most, not all).
                Interactivity problems are not a bunch of bull, as you seem to be forgetting one little detail: the data must be moved to and from the GPU. This takes time, and it is why GPU physics is mostly limited to eye candy (a rough sketch of that round trip follows at the end of this post). If this weren't an issue, then I assure you that GPU physics would be everywhere in games right now.
                There's also the issue, of course, of a GPU not being limited to physics - it was never designed for physics in the first place. It has to do (gosh) graphics too. Where to perform which calculations is as much a balancing act in games as anything else.
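
                As a rough sketch of that round trip (hypothetical CUDA, not anything out of PhysX; names and sizes are made up): if CPU-side game logic needs the simulated state back every frame, the upload, the kernel, and the download all sit on the frame's critical path, which is exactly the cost that pushes GPU physics toward eye candy the CPU never has to read back.

                [code]
                // Hypothetical per-frame round trip, for illustration only.
                #include <cuda_runtime.h>
                #include <vector>

                // Stand-in update; a real engine would do collision, constraints, etc.
                __global__ void stepPhysics(float* state, int n)
                {
                    int i = blockIdx.x * blockDim.x + threadIdx.x;
                    if (i < n) state[i] += 0.016f;
                }

                // One frame of interactive (gameplay-affecting) GPU physics.
                void gameFrame(std::vector<float>& state, float* d_state)
                {
                    const int n = static_cast<int>(state.size());
                    const size_t bytes = n * sizeof(float);

                    cudaMemcpy(d_state, state.data(), bytes, cudaMemcpyHostToDevice); // upload
                    stepPhysics<<<(n + 255) / 256, 256>>>(d_state, n);                // simulate
                    cudaMemcpy(state.data(), d_state, bytes, cudaMemcpyDeviceToHost); // read back (blocks)

                    // Eye-candy-only effects could skip the read back and feed the GPU
                    // buffer straight to the renderer, avoiding the most painful copy.
                }
                [/code]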



                • Originally posted by deanjo View Post
                  Can you do it better and at equal or greater complexity in real time?
                  FTFY.

                  Of course, the real kicker in the argument is this: if Ageia had been purchased by AMD, would he then be saying that "PhysX is the greatest thing since sliced bread"?

                  Politics are grand.

