AMD Catalyst vs. NVIDIA OpenCL Performance

  • AMD Catalyst vs. NVIDIA OpenCL Performance

    Phoronix: AMD Catalyst vs. NVIDIA OpenCL Performance

    In the middle of doing a large AMD/Intel/NVIDIA multi-way closed-source vs. open-source Fedora-based Linux OpenGL performance comparison, I also ran a fresh round of OpenCL benchmarks on the GPUs backed by CL-capable proprietary drivers...

    http://www.phoronix.com/vr.php?view=MTQwMDI

  • #2
    Not that surprising. AMD has always been geared a bit more towards GPGPU. That's why people use them for Bitcoin mining.


  • #3
    Originally posted by blackout23 View Post
    Not that surprising. AMD has always been geared a bit more towards GPGPU. That's why people use them for Bitcoin mining.
    Yeah, AMD Radeons are really popular for that. So perhaps AMD delivers a better product than NVIDIA. I know this is regarded as heresy on these forums, but still.


  • #4
    Originally posted by Wilfred View Post
    Yeah, AMD Radeons are really popular for that. So perhaps AMD delivers a better product than NVIDIA. I know this is regarded as heresy on these forums, but still.
    OpenCL? I only use CUDA. For me, OpenCL is irrelevant, and the NVIDIA driver is faster in games and more stable on DEs, etc.


  • #5
    No APU again? I guess OpenCL will gain in importance once the new Kaveri APUs are released with their hUMA memory architecture.


  • #6
    Originally posted by pandev92 View Post
    OpenCL? I only use CUDA. For me, OpenCL is irrelevant, and the NVIDIA driver is faster in games and more stable on DEs, etc.
    There are also people who still use COBOL; that doesn't make it look any better.

    OpenCL's API might be more difficult to learn, but it is far better structured and has more potential on the GPU than what NVIDIA is offering. It is also an open standard, which makes it more adaptable to change and more attractive than closed alternatives.

    Now that OpenCL is gaining support in big commercial suites like Adobe's, it is only a matter of time before we hear about the death of CUDA, just as happened with PhysX.
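
    (As a rough illustration of the "harder to learn, better structured" point above, and not anything from the article: this is approximately the host-side boilerplate standard OpenCL requires just to list the platforms and devices on a system. The program structure is a sketch; the API calls themselves are standard OpenCL 1.x.)

      /* Sketch: enumerate OpenCL platforms and their devices.
       * The explicit platform model is what makes the API verbose
       * but vendor-neutral. Build with e.g.: gcc list.c -lOpenCL */
      #include <stdio.h>
      #include <CL/cl.h>

      int main(void)
      {
          cl_platform_id platforms[8];
          cl_uint num_platforms = 0;

          if (clGetPlatformIDs(8, platforms, &num_platforms) != CL_SUCCESS)
              return 1;

          for (cl_uint p = 0; p < num_platforms; p++) {
              char name[256];
              clGetPlatformInfo(platforms[p], CL_PLATFORM_NAME,
                                sizeof(name), name, NULL);
              printf("Platform: %s\n", name);

              cl_device_id devices[8];
              cl_uint num_devices = 0;
              if (clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL,
                                 8, devices, &num_devices) != CL_SUCCESS)
                  continue;

              for (cl_uint d = 0; d < num_devices; d++) {
                  char dev_name[256];
                  clGetDeviceInfo(devices[d], CL_DEVICE_NAME,
                                  sizeof(dev_name), dev_name, NULL);
                  printf("  Device: %s\n", dev_name);
              }
          }
          return 0;
      }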



  • #7
    The article doesn't mention whether this is doing double-precision FP or single-precision. This is a glaring omission. Please update the article, as many people care a lot about single precision, and it's well known that NVIDIA cripples double-precision FP on their consumer cards. Right now, we don't have a clue what we're actually looking at with those charts.


  • #8
    LuxMark and LuxRender do not use double-precision arithmetic; they are ray-tracing programs.
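
    (For what it's worth, whether a card exposes double precision at all can be checked from the OpenCL host API. A minimal sketch, assuming a single GPU device: the query itself, CL_DEVICE_DOUBLE_FP_CONFIG, is standard OpenCL, while the surrounding program is illustrative.)

      /* Sketch: check whether the first GPU device supports fp64.
       * CL_DEVICE_DOUBLE_FP_CONFIG is zero when doubles are unsupported. */
      #include <stdio.h>
      #include <CL/cl.h>

      int main(void)
      {
          cl_platform_id platform;
          cl_device_id device;
          cl_device_fp_config fp64 = 0;

          if (clGetPlatformIDs(1, &platform, NULL) != CL_SUCCESS ||
              clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU,
                             1, &device, NULL) != CL_SUCCESS)
              return 1;

          clGetDeviceInfo(device, CL_DEVICE_DOUBLE_FP_CONFIG,
                          sizeof(fp64), &fp64, NULL);
          printf("fp64 %s\n", fp64 ? "supported" : "not supported");
          return 0;
      }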



  • #9
    Originally posted by Yorgos View Post
    There are also people who still use COBOL; that doesn't make it look any better.

    OpenCL's API might be more difficult to learn, but it is far better structured and has more potential on the GPU than what NVIDIA is offering. It is also an open standard, which makes it more adaptable to change and more attractive than closed alternatives.

    Now that OpenCL is gaining support in big commercial suites like Adobe's, it is only a matter of time before we hear about the death of CUDA, just as happened with PhysX.
    I got a hearty laugh.


  • #10
    I've always been a fan of ATI, but the NVIDIA GTX 660 Ti has been a dream with Wine compared to my ATI 3870.


  • #11
    Originally posted by Wilfred View Post
    Yeah, AMD Radeons are really popular for that. So perhaps AMD delivers a better product than NVIDIA. I know this is regarded as heresy on these forums, but still.
    People spend money on graphics cards to play games. Only a few actually care about Bitcoin mining or Folding@home, etc. Picking a niche application where it performs better and calling it a better product because of that is a bit of an exaggeration. Let's not forget the better drivers, graphics performance, and power consumption NVIDIA has to offer.


  • #12
    Originally posted by blackout23 View Post
    People spend money on graphics cards to play games.
    It doesn't look any better with DirectCompute either. Games are picking that up now, and AMD outperforms NVIDIA there too. And I'm not talking about silly stuff like "TressFX" hair in Tomb Raider, but rather good stuff like DX11 global illumination, which uses DirectCompute.

    The balance is shifting towards AMD this year. They are now performing better in the latest DX11 games because of superior GPGPU performance.


  • #13
    The bigger problem I have is that NVIDIA's blob doesn't support power saving in dual-monitor mode.


  • #14
    Originally posted by RealNC View Post
    It doesn't look any better with DirectCompute either. Games are picking that up now, and AMD outperforms NVIDIA there too. And I'm not talking about silly stuff like "TressFX" hair in Tomb Raider, but rather good stuff like DX11 global illumination, which uses DirectCompute.

    The balance is shifting towards AMD this year. They are now performing better in the latest DX11 games because of superior GPGPU performance.
    If we had better Linux drivers, that would be a point in AMD's favor.


  • #15
    Originally posted by Thaodan View Post
    The bigger problem I have is that NVIDIA's blob doesn't support power saving in dual-monitor mode.
    They do if both monitors run at the same resolution and refresh rate, or if you have a Kepler card.
