Linux OpenCL Performance With The Newest AMD & NVIDIA Drivers


  • #16
    Originally posted by cutterjohn View Post
    Yep, given the way that nVidia INTENTIONALLY gimps gpgpu capabilities of their consumer cards I was incredibly surprised to see 780 TI perf so close to the R9 290X and exceeding even the more modest ATI cards.
    CUDA performs better than OpenCL on Nvidia cards, and once you try more complex kernels it surpasses AMD by a good margin, at least until AMD decides to fix its compiler... but after three years of it being broken, I'm not hoping for a real fix anymore.

    Comment


    • #17
      Originally posted by 0xBADCODE View Post
      I can propose a couple of other benchmarks:
      1) bfgminer --scrypt --benchmark (https://github.com/luke-jr/bfgminer.git) - massively parallel computation with incredibly heavy GPU memory demands. The GPU has to be both good at parallel computation and provide fast memory. It can be a bit tricky in the sense that the best results are obtained after tuning the parameters for the particular GPU.
      2) The clpeak utility (https://github.com/krrishnarraj/clpeak.git) - a GPU VRAM speed benchmark. While it sounds simple, it depends on both the GPU and the drivers, so it can be quite an interesting thing to compare. It is also a known-good way to crash the Mesa+LLVM OpenCL stack, at least on AMD cards.
      I also suggest "John The Ripper" with jumbo patches: https://github.com/magnumripper/JohnTheRipper
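A quick illustration of why the scrypt benchmark above is so memory-hungry: scrypt is deliberately memory-hard, which is why bfgminer's --scrypt mode stresses VRAM bandwidth as much as the ALUs. Here is a minimal CPU-side sketch of the same primitive using Python's standard-library hashlib.scrypt (the cost parameters below are illustrative, not bfgminer's defaults):

```python
import hashlib
import time

# scrypt's n parameter forces roughly 128 * n * r bytes of working
# memory per call, which is exactly what makes memory speed matter.
def scrypt_bench(n: int, r: int = 8, p: int = 1, rounds: int = 5) -> float:
    """Average seconds per scrypt call at the given cost parameters."""
    start = time.perf_counter()
    for i in range(rounds):
        hashlib.scrypt(b"benchmark-input-%d" % i, salt=b"fixed-salt",
                       n=n, r=r, p=p, dklen=64)
    return (time.perf_counter() - start) / rounds

# Doubling n doubles both the memory footprint and the work, so the
# time per call should roughly double too.
for n in (2**12, 2**13, 2**14):
    mib = 128 * n * 8 / 2**20
    print(f"n={n:5d}  ~{mib:.0f} MiB working set  {scrypt_bench(n) * 1e3:.1f} ms/call")
```

On a GPU the same trade-off shows up as the tuning the poster mentions: picking a thread count whose combined working sets still fit in VRAM.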

      Comment


      • #18
        Originally posted by drSeehas View Post
        What is the rationale behind testing HD 7850 AND R9 270X?
        It is the same chip (Pitcairn)!
        Instead why not test a card with a Cape Verde chip to complement the 740?
        The cards were tested for what I had in my possession...
        Michael Larabel
        http://www.michaellarabel.com/

        Comment


        • #19
          These are craptastic Linux OpenCL performance results, and still AMD wins out.

          The Windows-optimized OpenCL tests show AMD blowing the doors off of Nvidia.

          Comment


          • #20
            Originally posted by Michael View Post
            The cards were tested for what I had in my possession...
            I know, but why don't you simply buy a card with a Cape Verde chip (e.g. HD 7750, some R7 250)? These are not very expensive.

            Originally posted by Marc Driftmeyer View Post
            ... craptastic ...
            What's "craptastic"?

            Comment


            • #21
              Originally posted by johnc View Post
              Is there a danger that GPU-based computing (OpenCL, CUDA, HSA, etc.) is going to be replaced by FPGAs? Probably certainly not in the consumer space (where these technologies are rare anyway), but in HPC, which could lead to these technologies, in time, withering on the vine.

              I'm just thinking about how quickly GPU mining collapsed based on a market need to go further than what GPUs can do. Would the same pressures apply to typical HPC markets today?

              Some half-interesting viewpoints from an incorrigible crank: http://semiaccurate.com/2014/06/20/i...e-desperation/
              Things are heading in that direction for professional purposes. Altera and, I hear, Xilinx have adopted OpenCL for writing kernels and are making these devices easier to use, most likely in response to the market pressure GPGPUs are putting on them in the scientific, military, and embedded markets as well as in the financial sector. FPGAs give higher performance per watt and much better latency than GPUs, with about an order of magnitude less power draw. So for everyone with specialized tasks they're a no-brainer as the right way to go about acceleration, if only they could be made easier to use - which is what OpenCL is addressing (finally!).

              They've been creeping into the big data sector for a while now, to the point that we're finally getting them paired up at Intel - an interesting contrast to AMD's APUs:
              http://www.extremetech.com/extreme/1...formance-boost

              I think this place also speaks volumes:
              http://picocomputing.com/
              The thought that within a year a small machine the size of one of the longer ITX media PCs, running at 600 watts or so, could deliver 60 teraflops is pretty amazing, salivating even.

              PS: Finally, some benchmarks showing the 290's awesomeness over the bloated and inefficient Titan! Now do some benchmarks with double-precision floating point and you'll discover how badly Nvidia gimps their hardware.
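For the double-precision point, the usual back-of-the-envelope formula is peak FLOPS = 2 (a fused multiply-add counts as two ops) x shader count x clock, with FP64 capped at a fixed fraction of the FP32 rate on consumer cards. A rough sketch in Python - the clocks and ratios below are commonly cited figures, used here as illustrative assumptions rather than measured values:

```python
# Peak GFLOPS ~ 2 ops (FMA) * shader count * clock in GHz.
# fp64_ratio is the fraction of the FP32 rate the card may run FP64 at;
# consumer Kepler is commonly cited at 1/24 and Hawaii at 1/8.
def peak_gflops(shaders: int, clock_ghz: float, fp64_ratio: float):
    fp32 = 2 * shaders * clock_ghz
    return fp32, fp32 * fp64_ratio

cards = {
    # name: (shaders, clock in GHz, FP64:FP32 ratio) -- illustrative specs
    "GTX 780 Ti": (2880, 0.875, 1 / 24),
    "R9 290X":    (2816, 1.000, 1 / 8),
}
for name, spec in cards.items():
    fp32, fp64 = peak_gflops(*spec)
    print(f"{name}: ~{fp32:.0f} GFLOPS FP32, ~{fp64:.0f} GFLOPS FP64")
```

Under those assumptions the two cards land close in FP32 peak, but the 290X's FP64 peak comes out several times higher than the 780 Ti's, which is the product stratification being complained about.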

              Comment


              • #22
                Originally posted by johnc View Post
                Is there a danger that GPU-based computing (OpenCL, CUDA, HSA, etc.) is going to be replaced by FPGAs? Probably certainly not in the consumer space (where these technologies are rare anyway), but in HPC, which could lead to these technologies, in time, withering on the vine.

                I'm just thinking about how quickly GPU mining collapsed based on a market need to go further than what GPUs can do. Would the same pressures apply to typical HPC markets today?

                Some half-interesting viewpoints from an incorrigible crank: http://semiaccurate.com/2014/06/20/i...e-desperation/
                Repost of this as first got swallowed by the forums...

                Finally we have some benchmarks showing the 290s beating the bloated and inefficient Titan. Now do some benchmarks with double-precision floating point and you'll discover how gimped Nvidia makes their non-enterprise hardware - it's about stratifying products, not die savings, IMO, which is where it gets murky with the Titan Z, IIRC. Hmm... I do remember Nvidia gimped their OpenCL implementation too and chose to never update it or fix its problems - most likely to make people use proprietary CUDA for good performance so it would win the language wars. If we take that into consideration, it implies that at least the Titans have more room to go up in ops/s than these plots show.

                Comment
