
Looking At The OpenCL Performance Of ATI & NVIDIA On Linux


  • Looking At The OpenCL Performance Of ATI & NVIDIA On Linux

    Phoronix: Looking At The OpenCL Performance Of ATI & NVIDIA On Linux

    Recently we provided the first Linux-based review of the NVIDIA GeForce GTX 460 graphics card. Overall, this Fermi-based card was a great performer for its roughly $200 USD price, complemented by strong video playback capabilities with VDPAU acceleration and solid proprietary driver support. In that review we primarily looked at OpenGL performance under Linux, but with NVIDIA's Fermi architecture also bringing major GPGPU advancements for CUDA and OpenCL users, in this article we look more closely at the Open Computing Language performance of this GF104 graphics card as well as other NVIDIA and ATI graphics cards.

    http://www.phoronix.com/vr.php?view=15257

  • Beiruty
    replied
    Is there still no state tracker for OpenCL on Gallium3D?



  • Syke
    replied
    Originally posted by Syke View Post
    Can someone post a short walkthrough on how to install the dependencies so I can run these Phoronix tests?
    On Ubuntu 10.04 that is.
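
    A minimal sketch of how the setup usually goes on Ubuntu 10.04; the package and test-profile names here (`phoronix-test-suite`, `nvidia-current`, `juliagpu`) are assumptions based on that era's repositories and the OpenCL tests used in the article, so verify them on your own system:

```shell
# Install the Phoronix Test Suite from the universe repository.
sudo apt-get update
sudo apt-get install phoronix-test-suite

# NVIDIA's proprietary driver package ships the OpenCL/CUDA runtime;
# ATI users would install the Catalyst driver plus the Stream SDK instead.
sudo apt-get install nvidia-current

# Let PTS resolve each test's remaining dependencies, then run it.
phoronix-test-suite install-dependencies juliagpu
phoronix-test-suite benchmark juliagpu
```

    PTS can resolve most per-test dependencies itself via `install-dependencies`, so usually only the vendor driver needs manual installation.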



  • Syke
    replied
    Can someone post a short walkthrough on how to install the dependencies so I can run these Phoronix tests?



  • deanjo
    replied
    One thing as well: where NVIDIA is concerned, OpenCL is a layer on top of the CUDA driver interface. Without CUDA there is no OpenCL on NVIDIA; C for CUDA and OpenCL are simply two ways of accessing the same capabilities, as is DirectCompute, another vendor-specific API (from a software vendor this time), which AMD supports as well. So it looks like AMD is "feeding" another "evil empire" by supporting it too.
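
    The layering described above is visible from userspace: on systems with the NVIDIA proprietary driver, the OpenCL platform typically identifies itself as "NVIDIA CUDA". A sketch, assuming the separate `clinfo` utility is installed:

```shell
# Query the installed OpenCL runtime for its platform name.
# On an NVIDIA proprietary-driver system this usually reports
# "NVIDIA CUDA", i.e. the OpenCL implementation sits on the CUDA driver.
clinfo | grep -i "platform name"
```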



  • deanjo
    replied
    Originally posted by Qaridarium View Post
    "What the developers use for a physics engine is up to the developer."

    I don't think so. NVIDIA pays developers to use PhysX.

    Developers just use whatever brings in the most cash.


    "Nobody is blocking them from doing so."

    I think NVIDIA is blocking them.

    And they pay to do maximum damage to other companies.

    Here you go spouting off wild speculative theories again without any basis.

    "Nvidia also contributes to openCL and probably has the best implementation of it out there along with some of the best documentation."

    Documentation? Can you give me the specs of a GTX 480?
    Do you know what OpenCL even is? It's an API. Everything that needs to be known about how to use OpenCL is freely available to all and documented.

    "They are not forcing anybody to use Cuda or Physx,"

    They pay; that is a kind of force: the force of profit and money.

    "that is the choice of the developer."

    The choice? I don't think so; no other company pays for bullshit like NVIDIA does.

    This "choice" just hurts end users and forces them to buy NVIDIA hardware.
    Nvidia has a team that helps the developer implement it through their "TWIMTBP" program. AMD has the same thing with their "Gaming Evolved" program, which entails:

    • Technical engagement, including referenceable source code, and access to game builds for competing vendors
    • Developer tools
    • Product development
    • Lab testing
    • Marketing programs
    • Support and integration with our partners

    Same as nvidia.



  • Qaridarium
    replied
    Originally posted by deanjo View Post
    What the developers use for a physics engine is up to the developer. If a developer is willing to go through the "growing pains" of getting another physics engine going on GPU then they still have that option. Nobody is blocking them from doing so. Nvidia also contributes to openCL and probably has the best implementation of it out there along with some of the best documentation. They are not forcing anybody to use Cuda or Physx, that is the choice of the developer. If you don't like the developer using Physx then complain to the developer.
    "What the developers use for a physics engine is up to the developer."

    I don't think so. NVIDIA pays developers to use PhysX.

    Developers just use whatever brings in the most cash.

    "Nobody is blocking them from doing so."

    I think NVIDIA is blocking them.

    And they pay to do maximum damage to other companies.

    "Nvidia also contributes to openCL and probably has the best implementation of it out there along with some of the best documentation."

    Documentation? Can you give me the specs of a GTX 480?


    "They are not forcing anybody to use Cuda or Physx,"

    They pay; that is a kind of force: the force of profit and money.

    "that is the choice of the developer."

    The choice? I don't think so; no other company pays for bullshit like NVIDIA does.

    This "choice" just hurts end users and forces them to buy NVIDIA hardware.



  • deanjo
    replied
    Originally posted by Qaridarium View Post
    "Intel do doubt saw that allowing Havok FX to live would mean giving more substance to the value of a GPU over a CPU, a market they still can't really compete in."

    intel is evil i know...
    i never buy an intel cpu or product in the last 12 years.
    and in the future i will never buy any product of this company.

    but in my point of view nvidia fails on physX because an open standart like openCL and bulledphysik are much better also for nvidia thats because if there is more usage for an GPU nvidia will sell more GPUs and intel will lose more and more because no one need an fast CPU anymore.
    on an open Standard like openCL only intel is the loser.
    nvidia waste there time on CUDA and PhysX.
    What the developers use for a physics engine is up to the developer. If a developer is willing to go through the "growing pains" of getting another physics engine going on GPU then they still have that option. Nobody is blocking them from doing so. Nvidia also contributes to openCL and probably has the best implementation of it out there along with some of the best documentation. They are not forcing anybody to use Cuda or Physx, that is the choice of the developer. If you don't like the developer using Physx then complain to the developer.



  • Qaridarium
    replied
    Originally posted by deanjo View Post
    Intel is also the one that killed Havok FX, after both nVidia and ATi had demoed it on their hardware some two years earlier (GDC 2006) and before nVidia purchased PhysX. Chances are, if Intel hadn't purchased and killed Havok FX, nVidia would never have purchased Ageia a few years later to provide their own solution.
    "Intel do doubt saw that allowing Havok FX to live would mean giving more substance to the value of a GPU over a CPU, a market they still can't really compete in."

    intel is evil i know...
    i never buy an intel cpu or product in the last 12 years.
    and in the future i will never buy any product of this company.

    but in my point of view nvidia fails on physX because an open standart like openCL and bulledphysik are much better also for nvidia thats because if there is more usage for an GPU nvidia will sell more GPUs and intel will lose more and more because no one need an fast CPU anymore.
    on an open Standard like openCL only intel is the loser.
    nvidia waste there time on CUDA and PhysX.



  • deanjo
    replied
    Intel no doubt saw that allowing Havok FX to live would mean giving more substance to the value of a GPU over a CPU, a market they still can't really compete in.

    Leave a comment:
