NVIDIA CUDA 11.0 Released With Ampere Support, New Programming Features


  • DanL
    replied
    Originally posted by hax0r View Post
    Dammit, my GTX 680 4GB that I paid $600 for in 2012 is a paperweight now.
    Well, you can continue to run CUDA 10.x for a while.
    Also, if you've gotten 8 good years out of a GPU, be grateful.



  • Paradigm Shifter
    replied
    Yay, I can look forward to a raft of "I-updated-CUDA-and-now-my-GPGPU-programs-don't-work-but-they-used-to-work-why-don't-they-work-any-more" flooding my inbox...

    ...

    I had a FireGL 8800 back in the day. Loved that card. First ATi GPU (VPU) I had on a Linux box, and it just worked. For years I wondered why ATi had such a bad rep for Linux drivers. Then I had the pleasure of using an X1600M in Linux. Oh boy...



  • hax0r
    replied
    Dammit, my GTX 680 4GB that I paid $600 for in 2012 is a paperweight now. CUDA 11.0 and the included 450.51.05 driver install fine, but most of the samples won't run due to:
    Code:
    simpleTexture.cu(218) : getLastCudaError() CUDA error : Kernel execution failed : (209) no kernel image is available for execution on the device.
    during compilation nvcc says:
    Code:
    nvcc warning : The 'compute_35', 'compute_37', 'compute_50', 'sm_35', 'sm_37' and 'sm_50' architectures are deprecated
    I tried hacking the Makefile to include the sm_30 arch, but got:
    Code:
    nvcc fatal   : Unsupported gpu architecture 'compute_30'
    Thanks, leather jacket man.
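The two nvcc messages above pin down the situation: compute_30 is rejected outright in CUDA 11.0, while compute_35/37/50 merely warn. A minimal sketch of that split (the arch sets are inferred from the quoted nvcc output, and the helper name is made up, not a real nvcc API):

```python
# Sketch, based on the nvcc messages quoted above: CUDA 11.0's nvcc drops
# compute_30/sm_30 (Kepler GK104, e.g. the GTX 680) entirely, and only
# deprecates compute_35/37/50. A build script could classify the requested
# arch up front instead of letting the build fail halfway through.
REMOVED = {"30"}                 # "nvcc fatal : Unsupported gpu architecture"
DEPRECATED = {"35", "37", "50"}  # nvcc warns, but the build still succeeds

def classify_sm(sm: str) -> str:
    """Return how CUDA 11.0's nvcc treats an sm_XX target (illustrative)."""
    if sm in REMOVED:
        return "unsupported"
    if sm in DEPRECATED:
        return "deprecated"
    return "supported"

print(classify_sm("30"))  # GTX 680 (sm_30) -> unsupported
print(classify_sm("50"))  # GTX 750 (sm_50) -> deprecated
print(classify_sm("80"))  # Ampere A100 (sm_80) -> supported
```

In practice, the only way to keep building for sm_30 cards like the GTX 680 is to keep a CUDA 10.x toolchain installed alongside 11.0.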



  • Pranos
    replied
    Originally posted by bridgman View Post
    Actually neither of those are correct.
    Thank you for the clarification. Keep up the great work.
    Last edited by Pranos; 08 July 2020, 06:12 PM.



  • bridgman
    replied
    Originally posted by Slartifartblast View Post
    That's because AMD couldn't write their own for shit. Did you ever use fglrx?
    Originally posted by Pranos View Post
    fglrx was ATI's. As far as I know, AMD had to rewrite most of the driver, or all of it (even for Windows), because they didn't get the code (or most of it) from ATI after buying them, and ATI had little or poor documentation for its GPUs. So it's not AMD's fault.
    Actually neither of those are correct. ATI's initial Linux support was via open source drivers, working with VA Linux / Precision Insight, and all our Linux drivers were open source until 2001/2002, when we purchased FireGL from SonicBlue (aka Diamond Multimedia + S3).

    The fglrx driver was an attempt to use the FireGL workstation driver for both workstation and client/desktop users, so we ported the FireGL code from IBM HW to our GPUs. The resulting "fglrx" driver ended up being quite good for workstation but not so good for client/desktop, partly for architectural reasons and partly because the leveraging of Windows driver code during porting pretty much forced binary-only delivery.

    We re-started open source driver development in 2007 focusing first on client/desktop, and IIRC around 2011 started rebuilding the workstation stack around the same open source driver code. We had full access to fglrx source code and still do, although a lot of the code was shared with Windows which made it very difficult to use in an upstream driver.

    Supporting the workstation userspace drivers required some ioctl changes compared to what radeon had implemented, and at the same time we wanted to start getting ready for new generations of HW that were going to be built around a common data fabric, so we re-architected the driver to be organized around IP blocks (GFX, SDMA, UVD etc...) at the same time, resulting in the new amdgpu kernel driver and stack.

    The first fabric-based ("SOC15") GPU generation was Vega, but we were able to make amdgpu the primary driver starting with VI (Tonga).
    Last edited by bridgman; 08 July 2020, 06:57 PM.



  • DanL
    replied
    Originally posted by Setif View Post
    Poor ... Maxwell (deprecated).
    AFAICT, only sm_50 is deprecated, which is limited to Maxwell Gen1 (GTX 750 and equivalent Tesla/Quadro GPUs).

    (Linked: NVIDIA's CUDA GPUs page, which lists compute capabilities for CUDA-enabled products, and a guide to the nvcc gencode/arch flags used to compile for different GPUs.)
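The Maxwell split described above can be sketched as a small lookup (an illustrative summary of this post and the nvcc warning earlier in the thread; the card lists are examples, not exhaustive):

```python
# Illustrative sketch: Maxwell spans two compute capabilities, and per the
# nvcc deprecation warning quoted earlier only sm_50 (first-gen Maxwell)
# is on the list; sm_52 is not.
MAXWELL = {
    "sm_50": {"gen": 1, "deprecated": True,
              "examples": ["GTX 750", "GTX 750 Ti"]},          # GM107/GM108
    "sm_52": {"gen": 2, "deprecated": False,
              "examples": ["GTX 960", "GTX 970", "GTX 980"]},  # GM204/GM206
}

for arch, info in sorted(MAXWELL.items()):
    status = "deprecated" if info["deprecated"] else "still supported"
    print(f"{arch} (Maxwell Gen{info['gen']}): {status}")
```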



  • sdack
    replied
    It's actually now cuda_11.0.2 (with 450.51.05 as driver).

    Before it was cuda_11.0.1 (with 450.36.06 as driver).



  • Pranos
    replied
    Originally posted by Slartifartblast View Post

    That's because AMD couldn't write their own for shit. Did you ever use fglrx?
    fglrx was ATI's. As far as I know, AMD had to rewrite most of the driver, or all of it (even for Windows), because they didn't get the code (or most of it) from ATI after buying them, and ATI had little or poor documentation for its GPUs.
    So it's not AMD's fault.
    Last edited by Pranos; 08 July 2020, 09:57 AM.



  • Slartifartblast
    replied
    Originally posted by pieman
    Still hoping that one day NVIDIA opens up their drivers... at least the main kernel driver, and lets the community use Mesa with it, similar to how AMD did things. I understand the reasons NVIDIA has given for why they haven't; it's just hard to keep accepting them when we see what AMD was able to accomplish.
    That's because AMD couldn't write their own for shit. Did you ever use fglrx?
    Last edited by Slartifartblast; 08 July 2020, 07:44 AM.



  • bug77
    replied
    Originally posted by Setif View Post

    Poor Kepler (dropped) and Maxwell (deprecated).
    Do HPC setups still use those? (I honestly don't know.)
    Since they both lack tensor cores, I'm guessing they couldn't use CUDA 11's additions anyway.

