Thread: Linux OpenCL Performance With The Newest AMD & NVIDIA Drivers

  1. #21
    Join Date: Dec 2010 | Posts: 9

    Quote Originally Posted by johnc
    Is there a danger that GPU-based computing (OpenCL, CUDA, HSA, etc.) will be replaced by FPGAs? Probably not in the consumer space (where these technologies are rare anyway), but perhaps in HPC, which could lead to these technologies withering on the vine over time.

    I'm just thinking about how quickly GPU mining collapsed once the market needed to go further than GPUs could. Would the same pressures apply to typical HPC markets today?

    Some half-interesting viewpoints from an incorrigible crank: http://semiaccurate.com/2014/06/20/i...e-desperation/
    Things are heading in that direction for professional purposes. Altera and, I hear, Xilinx have adopted OpenCL for their kernels and are making these devices easier to use, most likely in response to the market pressure GPGPUs are putting on them in the scientific, military, and embedded markets as well as in the financial sector. FPGAs give higher performance per watt and much better latency than GPUs, with roughly an order of magnitude less power draw. So for anyone with specialized tasks they're a no-brainer as the right way to do acceleration, if only they could be made easier to use - which is what OpenCL is (finally!) addressing.
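    For the curious, here's a rough sketch of what that looks like: a plain OpenCL C kernel (the vector-add workload and the name vec_add are mine, purely for illustration) of the sort the Altera and Xilinx SDKs compile to an FPGA bitstream, while a GPU driver compiles the very same source for its own hardware.

    Code:
        /* Minimal illustrative OpenCL C kernel: the same source can
         * target an FPGA (via an offline compiler) or a GPU. */
        __kernel void vec_add(__global const float *a,
                              __global const float *b,
                              __global float *c)
        {
            int i = get_global_id(0);   /* one work-item per element */
            c[i] = a[i] + b[i];
        }

    The appeal is that the genuinely hard part of FPGA work (describing and routing the logic) hides behind the same programming model GPU people already use.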

    They've been creeping into the big data sector for a while now, to the point that we're finally getting them paired up at Intel - an interesting contrast to AMD's APUs:
    http://www.extremetech.com/extreme/1...formance-boost

    I think this place also speaks volumes:
    http://picocomputing.com/
    The thought that within a year a small machine the size of one of the longer ITX media PCs, drawing 600 watts or so, could do 60 teraflops (that works out to 100 GFLOPS per watt) is pretty amazing, mouth-watering even.

    P.S. Finally, some benchmarks showing the 290's strength over the bloated and inefficient Titan! Now run some benchmarks with double-precision floating point and you'll discover how badly Nvidia gimps their hardware.

  2. #22
    Join Date: Dec 2010 | Posts: 9

    Quote Originally Posted by johnc
    Is there a danger that GPU-based computing (OpenCL, CUDA, HSA, etc.) will be replaced by FPGAs? Probably not in the consumer space (where these technologies are rare anyway), but perhaps in HPC, which could lead to these technologies withering on the vine over time.

    I'm just thinking about how quickly GPU mining collapsed once the market needed to go further than GPUs could. Would the same pressures apply to typical HPC markets today?

    Some half-interesting viewpoints from an incorrigible crank: http://semiaccurate.com/2014/06/20/i...e-desperation/
    Repost, as the first one got swallowed by the forums...

    Things are heading in that direction for professional purposes. Altera and, I hear, Xilinx have adopted OpenCL for their kernels and are making these devices easier to use, most likely in response to the market pressure GPGPUs are putting on them in the scientific, military, and embedded markets as well as in the financial sector. FPGAs give higher performance per watt and much better latency than GPUs. So for anyone with specialized tasks they're a no-brainer as the right way to do acceleration, if only they could be made easier to use - which is what OpenCL is (finally!) addressing.

    They've been creeping into the big data sector for a while now, to the point that we're finally getting them paired up at Intel - an interesting contrast to AMD's APUs:
    http://www.extremetech.com/extreme/1...formance-boost

    I think this place also speaks volumes:
    http://picocomputing.com/
    The thought that within a year a small machine the size of one of the longer ITX media PCs, drawing 600 watts or so, could do 60 teraflops is pretty amazing, mouth-watering even.

    Finally we have some benchmarks showing the 290 beating the bloated and inefficient Titan. Now run some benchmarks with double-precision floating point and you'll discover how badly Nvidia gimps their non-enterprise hardware - it's about stratifying products, not die savings IMO, which is where it gets murky with the Titan Z, IIRC. Hmm... I also remember Nvidia gimped their OpenCL implementation and chose never to update it or fix its problems - most likely to make people use proprietary CUDA for good performance so it would win the language wars. If we take that into consideration, it implies that at least the Titans have more room to go up in ops/s than these plots show.
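    If anyone wants to start down that road, here's a sketch in plain C against the standard OpenCL host API (the file name fp64_check.c and the build line are my assumptions) that lists every OpenCL device and whether it exposes double precision via the cl_khr_fp64 extension. Exposure alone doesn't tell you the FP64:FP32 throughput ratio - that still has to be benchmarked - but it's the first thing a DP benchmark needs to check.

    Code:
        /* Sketch: enumerate OpenCL devices and report whether each one
         * exposes double precision (the cl_khr_fp64 extension).
         * Assumed build line (Linux): gcc fp64_check.c -lOpenCL */
        #include <stdio.h>
        #include <string.h>
        #include <CL/cl.h>

        int main(void)
        {
            cl_platform_id platforms[8];
            cl_uint nplat = 0;
            clGetPlatformIDs(8, platforms, &nplat);

            for (cl_uint p = 0; p < nplat; p++) {
                cl_device_id devs[16];
                cl_uint ndev = 0;
                clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL, 16, devs, &ndev);

                for (cl_uint d = 0; d < ndev; d++) {
                    char name[256] = {0};
                    char ext[8192] = {0};
                    clGetDeviceInfo(devs[d], CL_DEVICE_NAME, sizeof name, name, NULL);
                    clGetDeviceInfo(devs[d], CL_DEVICE_EXTENSIONS, sizeof ext, ext, NULL);
                    printf("%-40s fp64: %s\n", name,
                           strstr(ext, "cl_khr_fp64") ? "yes" : "no");
                }
            }
            return 0;
        }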
