PhysX SDK Support Comes Back To Linux

  • #51
    Originally posted by deanjo:
    Ya because Cell/PPC does SSE so well....
    On that platform you use the SPEs/AltiVec for SIMD.


  • #52
    Originally posted by deanjo:
    The SSE thing sounds like just an excuse.


  • #53
    Originally posted by deanjo:
    Ya because Cell/PPC does SSE so well....
    Funny, because they support AltiVec...

    So... for a fucking console they could put it in, but on the PC platform they had to take it out?

    Shouldn't that give you something to think about?

    Also: do you run x86 code on PowerPC? No, you don't. So there is no reason not to use SSE/SSE2 instructions when compiling for x86 (see the sketch below).

    Except when you want to cripple the CPU's performance.

    Conclusion:

    Nvidia is a lying bag of shit, and some people fall for it.
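
    A minimal sketch of the point about "just compiling with SSE", assuming GCC targeting 32-bit x86; the file name and function are illustrative, not from PhysX. The source contains no intrinsics at all, and the instruction set used is chosen purely by build flags:

    Code:
        /* saxpy.c - plain scalar C. Whether the compiler emits x87 or
         * SSE code is purely a build option:
         *   gcc -O2 -c saxpy.c                      (default: x87 FPU code)
         *   gcc -O2 -msse2 -mfpmath=sse -c saxpy.c  (same source, SSE2 math)
         */
        void saxpy(int n, float a, const float *x, float *y)
        {
            int i;
            for (i = 0; i < n; i++)
                y[i] = a * x[i] + y[i];
        }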


  • #54
    Originally posted by Nille:
    On that platform you use the SPEs/AltiVec for SIMD.
    Or, on the Xenon found in the Xbox 360, you use VMX128, which is not completely backwards compatible with AltiVec (the enhancements, BTW, were done specifically for game physics).
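
    For anyone following along, the two vector instruction sets being compared express the same operation almost identically. A minimal sketch; the intrinsics are the standard ones from <xmmintrin.h> and <altivec.h>, and the wrapper functions are illustrative:

    Code:
        /* SSE (x86), built with -msse: */
        #include <xmmintrin.h>
        __m128 add4_sse(__m128 a, __m128 b)
        {
            return _mm_add_ps(a, b);   /* four packed float adds */
        }

        /* AltiVec/VMX (PowerPC), built with -maltivec: */
        #include <altivec.h>
        vector float add4_vmx(vector float a, vector float b)
        {
            return vec_add(a, b);      /* same operation, PowerPC spelling */
        }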


  • #55
    Originally posted by deanjo:
    Or, on the Xenon found in the Xbox 360, you use VMX128, which is not completely backwards compatible with AltiVec (the enhancements, BTW, were done specifically for game physics).
    And? That has nothing to do with the x86 SSE instructions. And you optimize the code for the Xbox differently than the code for the PS3 (both PPC based); e.g., on the PS3 you additionally use the SPEs.


  • #56
    Originally posted by energyman:
    Funny, because they support AltiVec...

    So... for a fucking console they could put it in, but on the PC platform they had to take it out?

    Shouldn't that give you something to think about?

    Also: do you run x86 code on PowerPC? No, you don't. So there is no reason not to use SSE/SSE2 instructions when compiling for x86.

    Except when you want to cripple the CPU's performance.

    Conclusion:

    Nvidia is a lying bag of shit, and some people fall for it.
    Sure they did; it is a far less daunting task than trying to get it running on a CPU, even with SSE. Cell-based systems have the advantage of SPEs, which are self-multitasking. This is the reason why other similar applications (such as Folding@home and many other multitasking apps) see huge gains over x86.


  • #57
    Then there is also another factor you have to consider. The PhysX codebase dates back to 2002, when SSE was found on only a handful of the processors in use. When Nvidia acquired them back in 2008, they of course wanted to get it running on their own product as efficiently as possible, as well as in the huge console markets, whose licensing agreements dwarf the PC's. So are they going to take extra time to convert legacy code on something they make very little on, or put those efforts towards the revenue makers? It is all for naught anyway, as the next version will have SSE optimization in it as well (see the dispatch sketch below).
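
    On "making it work on CPUs without SIMD": the usual way to ship SSE code while still supporting pre-SSE processors is runtime dispatch, not dropping SSE entirely. A minimal sketch, assuming a GCC recent enough to provide __builtin_cpu_supports (older compilers would query CPUID by hand); the saxpy_* functions are illustrative:

    Code:
        /* Pick an implementation once, based on what the CPU reports. */
        extern void saxpy_sse2(int n, float a, const float *x, float *y);
        extern void saxpy_x87(int n, float a, const float *x, float *y);

        void saxpy(int n, float a, const float *x, float *y)
        {
            if (__builtin_cpu_supports("sse2"))  /* GCC builtin over CPUID */
                saxpy_sse2(n, a, x, y);
            else
                saxpy_x87(n, a, x, y);
        }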


  • #58
    @deanjo - this would make sense if some huge port were necessary to support SSE, but that's the whole point: there isn't.

    Developers with access to the source code are able to tell the compiler to allow SSE, and everything gets magically faster. Lots of them have done exactly that and shipped it in games.

    NVidia just has to flip a compiler flag, but they refuse to do so.
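
    To make "flip a compiler flag" concrete: the same source tree can carry both paths, because the compiler advertises the flag through a predefined macro. A sketch, not NVidia's actual build; __SSE2__ is GCC/Clang's macro, and _M_IX86_FP is MSVC's (it is >= 2 under /arch:SSE2):

    Code:
        /* One source file, two builds; no porting effort involved. */
        #include <stdio.h>

        int main(void)
        {
        #if defined(__SSE2__) || (defined(_M_IX86_FP) && _M_IX86_FP >= 2)
            puts("built with SSE2 enabled (-msse2 or /arch:SSE2)");
        #else
            puts("built for x87 only");
        #endif
            return 0;
        }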


  • #59
    Originally posted by deanjo:
    Sure they did; it is a far less daunting task than trying to get it running on a CPU, even with SSE. Cell-based systems have the advantage of SPEs, which are self-multitasking. This is the reason why other similar applications (such as Folding@home and many other multitasking apps) see huge gains over x86.
    And I have to call BS on this. It's well known that SPEs are extremely limited in what they can do, and getting decent performance out of them takes a huge amount of optimization work. Using the much more general-purpose SSE hardware is far simpler, although I'm sure it's less important to NVidia's bottom line than getting it working on the consoles.


  • #60
    Originally posted by md1032:
    nvidia pointed out that it was up to the application to use more than one core, and that game developers specifically asked them to make sure it worked on CPUs without SIMD instructions.
    Yeah, yeah... it is shocking that Nvidia did not say that donkeys are flying...

    So now they are saying that game developers are "bad and ugly" because they deliberately chose to bring down the performance of their own software? It looks to me like they are just trying to shift the blame away from themselves.

    It's even more interesting that you can find the most fanciful excuses in this PhysX story, e.g. from "physics requires 80-bit FPU precision (lol?)" to "we're shipping PhysX 3.0 with automatic multicore and SSE support (lol again! here)"...
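
    On the "80-bit FPU precision" excuse: x87 does keep intermediates in 80-bit registers, while SSE evaluates at the declared type's width, and C99 exposes the difference as FLT_EVAL_METHOD. A minimal sketch, assuming GCC on 32-bit x86; whether any physics code actually needs the extra bits is exactly what is being disputed here:

    Code:
        #include <float.h>
        #include <stdio.h>

        int main(void)
        {
            /* 2: intermediates use x87 long double (80-bit) registers.
             * 0: intermediates use the declared width, e.g. when built
             *    with -msse2 -mfpmath=sse. */
            printf("FLT_EVAL_METHOD  = %d\n", (int)FLT_EVAL_METHOD);
            printf("long double size = %zu bytes\n", sizeof(long double));
            return 0;
        }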
