PhysX SDK Support Comes Back To Linux


  • deanjo
    replied
    Originally posted by nanonyme View Post
    Not the same capabilities. Part of what the software does is handle overclocking of AMD CPUs and ATi GPUs. This obviously can't be done if you have an Intel CPU and an Intel/nVidia GPU.
    LMFAO. Bullshit. NVCC and Overdrive make the EXACT same calls to the processor.

  • nanonyme
    replied
    Not the same capabilities. Part of what the software does is handle overclocking of AMD CPUs and ATi GPUs. This obviously can't be done if you have an Intel CPU and an Intel/nVidia GPU.

  • deanjo
    replied
    Originally posted by nanonyme View Post
    While this thread is spinning off into off-topic land (apparently now it's venturing into Windows software), alacritypc supposedly works for this kind of stuff. You don't need to rely on software provided by your CPU/GPU manufacturer to shut down processes.
    Oh, I could probably provide you proof of 20 or more applications just off the top of my head where other companies have actually crippled support for solutions outside their own. I'm not talking about a lack of development for the competition, but purposely crippling the software to make it run exclusively on their hardware, despite the competition having the same capabilities with zero needed code changes other than removing the check to see if their hardware is installed.

  • nanonyme
    replied
    While this thread is spinning off into off-topic land (apparently now it's venturing into Windows software), alacritypc supposedly works for this kind of stuff. You don't need to rely on software provided by your CPU/GPU manufacturer to shut down processes.

  • deanjo
    replied
    Here is another great example of AMD crippling their software.

    http://www.tomshardware.com/news/AMD...ntel,6883.html

    This isn't even an example of not updating code. This is a hard restriction purposely put in there.

  • deanjo
    replied
    Originally posted by energyman View Post
    Renaming old cards.
    AMD does this too. As well as Intel.

    Drivers crippling performance on certain configurations.
    See above about Overdrive.

    Putting hard work into PhysX to make it as slow as possible on a modern CPU.
    There is no benefit for them to do so, so why would they? Does Intel optimize any of their products for AMD procs? Nope. Does AMD optimize for Intel procs? Nope. Does ATI optimize for Nvidia cards? Nope.


    Why do people still give money to this corporation?
    Because they make products that work.

  • deanjo
    replied
    Seriously, think about it. There are millions of examples where companies don't develop for their competition. Look at AMD, for example, with their Overdrive utility. Can you use it on anything but an AMD platform? Nope. Does it have the capability? You betcha.

  • energyman
    replied
    Some people are fanboys.

    And some people see the truth. The truth is that for the last couple of years Nvidia has tried everything to shaft people. Renaming old cards. Bumpgate. Drivers crippling performance on certain configurations. Putting hard work into PhysX to make it as slow as possible on a modern CPU.

    Why do people still give money to this corporation?

  • deanjo
    replied
    Originally posted by blackshard View Post
    Or maybe, since the PS3/Xbox360 is not hardware coming from nvidia, nvidia is interested in making its physics engine as fast as possible because there is more competition on that specific platform?
    Or maybe you refuse to follow Occam's razor and think every half-baked conspiracy theory is the truth.

  • blackshard
    replied
    Originally posted by deanjo View Post
    Exactly, they just appeared. Having support for new instruction sets arrive well after those instruction sets appear is nothing new or unique. For example, there are literally thousands of applications out there still being actively developed that could benefit from SSE 4+ but still don't take advantage of it. Of course development is going to be concentrated on their revenue stream.
    Oh yeah, is this your defense?
    As I said in the above post, SSE has been out for 11 (eleven!) years. The first SSE version is enough to do single-precision float additions and multiplications.

    Originally posted by deanjo View Post
    Ya, so once again, Ageia started to address SSE in the 3.0 SDK, proving yet again this was not nVidia's doing.
    We'll see when SDK 3.0 is out. NVidia can say what it wants; I need proof, not marketing small talk.

    Originally posted by deanjo View Post
    You can say that about any software that supports special functions of a particular product. Again, that is not limited to nvidia. Havok apps, for example, could have been made to take advantage of GPUs as well, but that plan got squashed because the owner of the IP doesn't have a GPU that could exploit it. When Larrabee got killed, so did the efforts behind Havok FX.
    Yep, and in fact when Havok became Intel property there arose the real danger of it becoming processor-biased (see what happened with the Intel compiler).

    Originally posted by deanjo View Post
    There is nothing dirty about it. It's about allocating your R&D to take advantage of your product that brings in revenue.
    You can also use slaves to bring in revenue; do you think there would be nothing wrong with that?

    Originally posted by deanjo View Post
    PhysX isn't even done on the GPU of the PS3. It's done entirely on the processors, as it is on the Xbox 360, Wii, and iPod, and at a much lower scale than a dedicated GPU solution.
    Are you kidding me? It's obvious PhysX doesn't run on the G70; it has no unified shaders and very little computational power.
    I thought I was clear, but I meant that it is *useless* to compute physics if you don't *render* the effect. To render the additional effects, you need a faster graphics processor, so it is useless to compute lots of physics data if you then can't render all that data. This is a reply to your assertion that console games have reduced physics effects.

    Originally posted by deanjo View Post
    When it comes to handling massively parallel calculations, even those old Power chips still have a very large advantage over the x86 architecture. The fastest x86 CPUs available today, for example, can't keep up even in multiprocessor setups. Take a look at any of the mass computing projects: the PS3 with its Cell whoops ass on every CPU-only setup, and those projects are usually optimized to take advantage of the latest instruction sets that CPUs offer.
    On the other hand, Cell is really difficult to program and has just one general-purpose unit.
    We could discuss the pros and cons of processor architectures for days... but that's not the point.

    Originally posted by deanjo View Post
    Nothing special here either. Their revenue stream from those consoles dwarfs any revenue that could be accomplished by optimizing for products that produce next to no revenue for them. Every company out there concentrates their efforts on their sources of revenue; the market that brings in the most revenue is the market they concentrate their efforts on.
    Or maybe, since PS3/Xbox360 is not hardware coming from nvidia, nvidia is interested in making it's physics engine as fast as it is possible because there is more competition on that specific platform?
