
Thread: Linux OpenCL Performance With The Newest AMD & NVIDIA Drivers

  1. #1
    Join Date
    Jan 2007
    Posts
    15,125

    Default Linux OpenCL Performance With The Newest AMD & NVIDIA Drivers

    Phoronix: Linux OpenCL Performance With The Newest AMD & NVIDIA Drivers

    The latest Linux GPU benchmarks at Phoronix for your viewing pleasure are looking at the OpenCL compute performance with the latest AMD and NVIDIA binary blobs while also marking down the performance efficiency and overall system power consumption.

    http://www.phoronix.com/vr.php?view=20731

  2. #2
    Join Date
    Jan 2009
    Posts
    1,708

    Default

    Is there any ETA on Clover?

  3. #3
    Join Date
    Sep 2012
    Posts
    289

    Default

    I love my GTX 750 Ti, one of the best purchases I've made so far.

  4. #4
    Join Date
    Jun 2012
    Posts
    328

    Default

    I can propose a couple of other benchmarks:
    1) bfgminer --scrypt --benchmark (https://github.com/luke-jr/bfgminer.git) - massively parallel computation with incredibly heavy GPU memory demands. The GPU has to be both good at parallel computation and provide fast memory. It can be a bit tricky in the sense that the best results are obtained after tuning parameters for the particular GPU.
    2) The clpeak utility (https://github.com/krrishnarraj/clpeak.git), a GPU VRAM speed benchmark. While it sounds simple, it depends on both the GPU and the drivers, so it can be quite an interesting thing to compare. It's also a known good way to crash the Mesa+LLVM OpenCL stack, at least on AMD cards.
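    For anyone curious what clpeak's global-memory bandwidth number actually represents, here's a hypothetical host-side sketch (plain Python, no OpenCL required) that times a large buffer copy and reports GB/s; clpeak does the analogous measurement with OpenCL buffers on the device. The function name and sizes are made up for illustration.

```python
# Rough illustration of a memory-bandwidth microbenchmark: time a large
# buffer copy and report GB/s. clpeak measures the same idea on the GPU
# with OpenCL buffers; this CPU version just shows the arithmetic.
import time

def copy_bandwidth_gbps(size_bytes=64 * 1024 * 1024, runs=3):
    src = bytearray(size_bytes)
    best = float("inf")
    for _ in range(runs):
        start = time.perf_counter()
        dst = bytes(src)                 # one full read + one full write
        best = min(best, time.perf_counter() - start)
    # Factor of 2: the copy both reads and writes size_bytes.
    return (2 * size_bytes) / best / 1e9

if __name__ == "__main__":
    print(f"host memcpy bandwidth: {copy_bandwidth_gbps():.1f} GB/s")
```

    Taking the best of several runs (rather than the average) is the usual trick for this kind of microbenchmark, since it filters out scheduler noise.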

  5. #5
    Join Date
    May 2011
    Posts
    1,558

    Default

    Is there a danger that GPU-based computing (OpenCL, CUDA, HSA, etc.) is going to be replaced by FPGAs? Almost certainly not in the consumer space (where these technologies are rare anyway), but perhaps in HPC, which could, in time, leave these technologies withering on the vine.

    I'm just thinking about how quickly GPU mining collapsed based on a market need to go further than what GPUs can do. Would the same pressures apply to typical HPC markets today?

    Some half-interesting viewpoints from an incorrigible crank: http://semiaccurate.com/2014/06/20/i...e-desperation/

  6. #6
    Join Date
    Mar 2009
    Posts
    210

    Default

    Quote Originally Posted by 0xBADCODE View Post
    I can propose a couple of other benchmarks:
    1) bfgminer --scrypt --benchmark (https://github.com/luke-jr/bfgminer.git) - massively parallel computation with incredibly heavy GPU memory demands. The GPU has to be both good at parallel computation and provide fast memory. It can be a bit tricky in the sense that the best results are obtained after tuning parameters for the particular GPU.
    2) The clpeak utility (https://github.com/krrishnarraj/clpeak.git), a GPU VRAM speed benchmark. While it sounds simple, it depends on both the GPU and the drivers, so it can be quite an interesting thing to compare. It's also a known good way to crash the Mesa+LLVM OpenCL stack, at least on AMD cards.
    Yep. Given the way that nVidia INTENTIONALLY gimps the GPGPU capabilities of their consumer cards, I was incredibly surprised to see 780 Ti performance so close to the R9 290X, and exceeding even the more modest ATI cards.

    Radeons have pretty much trashed NVIDIA consumer cards at compute since, what, the 600 series?

    Bought myself a 780 Ti as a Christmas present to myself last year, since on my prior desktop build I wussed out and bought a 670 FTW instead of the 680 I had originally planned. Thanks to a nearby Microcenter and massive GPU/CPU discounts (on the 670 FTW and i7-3930K) I saved about $700 just from those, plus maybe a few hundred dollars on other components. I ended up buying everything from them because (a) it was cheaper than Newegg et al., even counting sales tax against shipping costs (I bought a monitor and case as well, which are pricey to ship, and this was pre-Amazon-Prime days for me), and (b) I could, and did, just drive out one morning to get everything and build that day.

    This was the most I'd ever spent building a desktop system; usually I'd go even more mid-range, e.g. a 660 and probably an Ivy Bridge chip rather than LGA2011. (I don't know why they made the four-core part for LGA2011; it wasn't enough cheaper to truly be an option for anyone other than someone who couldn't immediately afford the 3930K but planned to upgrade later. Still a waste IMNHO, as quad-channel memory doesn't add enough, and now that I think of it I'm not even sure the 3820K(?) even supported quad-channel; IIRC it was pretty heavily gimped. Now I'm just waiting for Haswell-E, but I'll steal as many components from the 2011 build as I can and replace them with lower-end stuff as it gets demoted. The 4930K (Ivy Bridge) just didn't offer enough to bother looking at for c. $400. I love Microcenters...)
    MSI GT725-074US:
    Intel Core 2 Duo P8600, 4GB DDR2-800, 320GB 7200RPM WDC WD3200BEKT-22F3T0, ATI Mobility Radeon HD 4850 512MB GDDR3, 8x Super multi DVD+/-RW, 1680x1050 (17"), 9 cell battery

    Ubuntu 8.10 x86-64 (current updates) catalyst 9.3
    Windows Vista Home Premium 32b SP1 (current update) still on shipped catalyst(8.12 I think, MSI packed -- lazy)

  7. #7
    Join Date
    May 2011
    Posts
    1,558

    Default

    Quote Originally Posted by cutterjohn View Post
    Yep. Given the way that nVidia INTENTIONALLY gimps the GPGPU capabilities of their consumer cards, I was incredibly surprised to see 780 Ti performance so close to the R9 290X, and exceeding even the more modest ATI cards.
    NVIDIA GPUs aren't "gimped". They just aren't very good at some of those operations that they weren't designed for, like scrypt. It's not like you can grab a zillion-dollar Tesla card and all of a sudden get great scrypt performance.
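    (Background on why scrypt is the odd one out: it's deliberately memory-hard, with a working set of roughly 128 * r * N bytes per hash instance, so raw ALU throughput alone doesn't win. A quick sketch using Python's standard-library scrypt; the parameter values are illustrative, not a recommendation:)

```python
# scrypt's memory cost is ~128 * r * N bytes per hash instance, which is
# why shader count alone doesn't predict a GPU's scrypt throughput.
import hashlib

N, r, p = 2 ** 14, 8, 1          # cost factor, block size, parallelism (example values)
mem_bytes = 128 * r * N          # ~16 MiB working set for these parameters

key = hashlib.scrypt(
    b"password", salt=b"example-salt",
    n=N, r=r, p=p,
    maxmem=64 * 1024 * 1024,     # let OpenSSL allocate enough memory
    dklen=32,
)
print(f"working set: {mem_bytes // (1024 * 1024)} MiB, key: {key.hex()[:16]}...")
```

    A GPU running thousands of these instances in parallel needs thousands of those working sets resident at once, so memory capacity and bandwidth become the bottleneck instead of compute.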

    Bought myself a 780 Ti as a christmas present to self last year since when I did my prior desktop build I woosed out and bought a 670 FTW instead of the 680 that I had originally planned. Thanks to a nearby microcenter and massive GPU/CPU discounts(the 670 FTW/i7-3930k) I saved c. $700 just from those plus maybe a few $100 on other components(ended up just buying everything from them as (a) it was cheaper than newegg, et. al. even including sales tax v. shipping costs(bought monitor and case as well and those are pricey to ship and this was pre-Amazon prime days for me) and (b) I could(and did) just drive out one morning to get everything to build that day... (This weas the most that I'd ever spent building a desktop system, usually I'd go with even more mid range e.g. 660, probably an ivy bridge or whatever was available back then v. LGA2011(the 4 core I don't know why they made that for 2011 it wasn't enough cheaper to truly be an option to anyone other than someone who might not be able to afford the 3930k(or 60k) immediately but with plans to upgrade later... still a waste IMNHO as quad channel memory doesn't add enough and now that I think of it I'm not even sure that 3820k(?) was even able to support quad channel as IIRC it was pretty heavily gimped...now I'm just waiting for haswell-e... but will steal as many component from the 2011 as I can and replace those with lower end stuff as it gets demoted... 4930k just didn't offer enough(ivy bridge) to bother looking at for c. $400(I love microcenters...))
    Thanks for sharing.

  8. #8
    Join Date
    Sep 2010
    Posts
    701

    Default

    Quote Originally Posted by johnc View Post
    NVIDIA GPUs aren't "gimped". They just aren't very good at some of those operations that they weren't designed for, like scrypt. It's not like you can grab a zillion-dollar Tesla card and all of a sudden get great scrypt performance.

    Thanks for sharing.
    But they are. You can find quite a lot of "patches" for GeForce GPUs that restore Tesla-level capabilities.

    The part about neither of them targeting scrypt is correct, though. Same as with DX feature level 11_2 (or whatever MS calls the feature level for DX 11.2), NVIDIA skimped on a few rarely* used functions.

    * But quite useful in OpenCL and in coin mining.

    Anyway, if we were fair, those fingers should be pointed at...
    AMD....

    For their quite horrible OpenCL compiler, which has had trouble compiling complex kernels (OpenCL programs).
    IIRC AMD has already issued somewhat better drivers, but they still do not satisfy the needs of OpenCL renderers (right now the most demanding apps in terms of code complexity).

  9. #9
    Join Date
    Jan 2014
    Location
    Wonderland
    Posts
    93

    Default

    I highly doubt that FPGAs are going to be 'mainstream' anytime soon, but there is some research going on where people are trying to translate OpenCL code into VHDL automatically.
    It's going to take years before this is really useful, but it's going to happen (at least in some form).

  10. #10
    Join Date
    May 2011
    Posts
    1,558

    Default

    Quote Originally Posted by przemoli View Post
    But they are. You can find quite a lot of "patches" for GeForce GPUs that restore Tesla-level capabilities.
    such as...?
