Looking At The OpenCL Performance Of ATI & NVIDIA On Linux

  • #21
    Well, you're asking too *soon* anyway. Wait until the product launches, at least.

    Comment


    • #22
      Your test is not fair; where is at least a 5850?

      Comment


      • #23
        Originally posted by brent
        Michael, please keep in mind that SmallPtGPU contains a bug/incompatibility that seriously limits performance on NVidia hardware, especially pre-Fermi.

        Here's a diff that fixes it. This improves performance more than ten-fold on G80/GT200.
        I'm probably missing something here, but the patch that you linked only seems to correct things for Mac OS (#ifdef __APPLE__). The tests Michael ran were all in Ubuntu.

        Comment


        • #24
          Originally posted by Veerappan
          I'm probably missing something here, but the patch that you linked only seems to correct things for Mac OS (#ifdef __APPLE__). The tests Michael ran were all in Ubuntu.
          NVidia's implementation defines __APPLE__ on all OSes, for... whatever... reasons. I think the workaround is not needed on OS X anymore, either. Removing it completely should be fine.
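
          For readers who haven't opened the linked diff: the pattern being discussed looks roughly like the sketch below (the macro and kernel are made up for illustration and are not taken from the actual patch). An OpenCL C source guards an Apple-only workaround behind #ifdef __APPLE__; because NVIDIA's compiler pre-defines __APPLE__ on every OS, the workaround path is also compiled in on Linux, and removing it restores the intended fast path on GeForce hardware.

          /* Hypothetical OpenCL C fragment; not the real SmallPtGPU code. */
          #ifdef __APPLE__
          /* Workaround intended only for Apple's OpenCL stack.  NVIDIA's
           * compiler defines __APPLE__ under Linux as well, so pre-Fermi
           * GeForces end up taking this slower path too. */
          #define FAST_SQRT(x) sqrt(x)
          #else
          #define FAST_SQRT(x) native_sqrt(x)
          #endif

          __kernel void lengths(__global const float *in, __global float *out) {
              const size_t i = get_global_id(0);
              out[i] = FAST_SQRT(in[i]);
          }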

          Comment


          • #25
            Originally posted by brent
            NVidia's implementation defines __APPLE__ on all OSes, for... whatever... reasons. I think the workaround is not needed on OS X anymore, either. Removing it completely should be fine.
            You can just run the executable compiled with the ATI OpenCL SDK on any NVIDIA hardware, so you can be sure to run the benchmark under exactly the same conditions.
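
            That works because an OpenCL host program links against the generic OpenCL library, picks a platform and a device at run time, and hands the kernel source to whichever vendor driver it selected, which compiles it on the fly; nothing in the executable is tied to the SDK it was built with. A minimal host-side sketch in C (error handling omitted, kernel invented just for illustration):

            #include <stdio.h>
            #include <CL/cl.h>

            /* Trivial kernel, compiled at run time by the selected vendor's driver. */
            static const char *src =
                "__kernel void fill(__global float *out) {"
                "    out[get_global_id(0)] = 1.0f;"
                "}";

            int main(void)
            {
                cl_platform_id platforms[4];
                cl_uint n = 0;
                clGetPlatformIDs(4, platforms, &n);

                for (cl_uint i = 0; i < n; ++i) {
                    char name[128];
                    clGetPlatformInfo(platforms[i], CL_PLATFORM_NAME, sizeof(name), name, NULL);
                    /* Prints e.g. "ATI Stream" or "NVIDIA CUDA", whichever drivers are installed. */
                    printf("platform %u: %s\n", i, name);
                }

                /* A benchmark would let the user choose; here we just take the first GPU. */
                cl_device_id dev;
                clGetDeviceIDs(platforms[0], CL_DEVICE_TYPE_GPU, 1, &dev, NULL);

                cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
                cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
                clBuildProgram(prog, 1, &dev, "", NULL, NULL); /* vendor compiler runs here */

                clReleaseProgram(prog);
                clReleaseContext(ctx);
                return 0;
            }

            Build it against either SDK with the usual -lOpenCL link flag and the same binary will enumerate whatever OpenCL drivers are installed.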

            Please note, defining __APPLE__ under Linux is a (huge) NVIDIA bug.

            BTW, I'm the author of SmallPtGPU, MandelGPU, etc.; I have 2x 5870, 1x 5850 and a 5770; let me know if you need the benchmarks run on any of the above hardware.

            Michael, you may be interested in checking http://www.luxrender.net/wiki/index.php?title=SLG
            It is a larger/more complex OpenCL application than the small demos (i.e. SmallPtGPU, etc.) and it may provide more real-world numbers.

            You can find a small demo video about SLG here: http://vimeo.com/14290797

            Comment


            • #26
              Originally posted by Qaridarium
              on that point Apple and Nvidia are the good ones ;-) and Intel is the evil one.
              Intel is also the one that killed Havok FX, after having both nVidia and ATi demo it on their hardware some 2 years earlier (GDC 2006) and before nVidia purchased PhysX. Chances are that if Intel hadn't purchased and killed Havok FX, nVidia would never have purchased Ageia a few years later to provide their own solution.

              Comment


              • #27
                Intel no doubt saw that allowing Havok FX to live would mean giving more substance to the value of a GPU over a CPU, a market they still can't really compete in.

                Comment


                • #28
                  Originally posted by Qaridarium
                  "Intel do doubt saw that allowing Havok FX to live would mean giving more substance to the value of a GPU over a CPU, a market they still can't really compete in."

                  Intel is evil, I know...
                  I haven't bought an Intel CPU or product in the last 12 years,
                  and in the future I will never buy any product from this company.

                  But in my view nVidia fails with PhysX, because an open standard like OpenCL together with Bullet Physics would be much better, also for nVidia: if there is more use for a GPU, nVidia will sell more GPUs and Intel will lose more and more, because no one needs a fast CPU anymore.
                  With an open standard like OpenCL, only Intel is the loser.
                  nVidia is wasting its time on CUDA and PhysX.
                  What the developers use for a physics engine is up to the developer. If a developer is willing to go through the "growing pains" of getting another physics engine going on the GPU, then they still have that option. Nobody is blocking them from doing so. Nvidia also contributes to OpenCL and probably has the best implementation of it out there, along with some of the best documentation. They are not forcing anybody to use CUDA or PhysX; that is the choice of the developer. If you don't like a developer using PhysX, then complain to the developer.

                  Comment


                  • #29
                    Originally posted by Qaridarium
                    "What the developers use for a physics engine is up to the developer."

                    I don't think so... nVidia pays for the use of PhysX.

                    Developers just use whatever brings in the most cash.


                    "Nobody is blocking them from doing so."

                    I think nVidia is blocking them.

                    And they pay to do maximum damage to other companies.

                    Here you go spouting off wild speculative theories again, without any basis.

                    "Nvidia also contributes to openCL and probably has the best implementation of it out there along with some of the best documentation."

                    Documentation? Can you give me the spec of a GTX 480?
                    Do you know what OpenCL even is? It's an API. Every single item that needs to be known about how to use OpenCL is freely available to all and documented.

                    "They are not forcing anybody to use Cuda or Physx,"

                    they Pay this is a kind of force the force of the Profit and money-

                    "that is the choice of the developer."

                    The choice? I don't think so; no other company pays for bullshit like nVidia does.

                    This choice just hurts end users and forces them to buy nVidia hardware.
                    Nvidia has a team that helps developers implement it through their "TWIMTBP" program. AMD has the same thing with their "Gaming Evolved" program, which entails:

                    • Technical engagement, including referenceable source code, and access to game builds for competing vendors
                    • Developer tools
                    • Product development
                    • Lab testing
                    • Marketing programs
                    • Support and integration with our partners

                    Same as nvidia.

                    Comment


                    • #30
                      One thing as well: where nVidia is concerned, OpenCL is a layer on top of the CUDA driver interface. Without CUDA there is no OpenCL on nVidia; C for CUDA and OpenCL are simply two ways of accessing the same capabilities, as is DirectCompute, another vendor-specific API, albeit from a software vendor this time, and AMD supports it as well. So it looks like AMD is "feeding" another "evil empire" and supporting it too.
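
                      As a rough illustration of "two ways of accessing the capabilities": the same trivial GPU function can be written for either front end and runs on the same silicon through the same driver stack. The kernels below are invented for this example; they live in two separate source files, one compiled ahead of time by nvcc, the other handed to the OpenCL driver at run time.

                      /* scale.cu: C for CUDA version, compiled offline by nvcc. */
                      __global__ void scale(float *v, float k)
                      {
                          v[threadIdx.x] *= k;
                      }

                      /* scale.cl: OpenCL C version, compiled at run time by the driver. */
                      __kernel void scale(__global float *v, float k)
                      {
                          v[get_global_id(0)] *= k;
                      }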

                      Comment
