NVIDIA CUDA 11.0 Released With Ampere Support, New Programming Features

  • Phoronix: NVIDIA CUDA 11.0 Released With Ampere Support, New Programming Features

    NVIDIA appears to have quietly promoted CUDA 11.0 to its stable channel...
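
    The headline addition is the sm_80 target for Ampere. As a minimal sketch of building for it with the new toolkit (file and kernel names below are illustrative, not from the release notes):

    // scale.cu -- illustrative name; a trivial kernel built for Ampere:
    //   nvcc -arch=sm_80 scale.cu -o scale
    #include <cstdio>

    __global__ void scale(float *data, float factor, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            data[i] *= factor;
    }

    int main()
    {
        const int n = 1024;
        float *d = nullptr;
        cudaMallocManaged(&d, n * sizeof(float));  // unified memory keeps the demo short
        for (int i = 0; i < n; ++i) d[i] = 1.0f;

        scale<<<(n + 255) / 256, 256>>>(d, 2.0f, n);
        cudaDeviceSynchronize();

        printf("d[0] = %f\n", d[0]);               // expect 2.000000
        cudaFree(d);
        return 0;
    }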


  • #2
    Poor Volta.



  • #3
    Originally posted by eydee
    Poor Volta.
    Poor Kepler (dropped) and Maxwell (deprecated).


  • #4
    Still no GCC 10 / LLVM 10 support. What a shame.
    So you still need GCC 9 / LLVM 9 to make use of CUDA (or your distro has to provide 9 and 10 at the same time).
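
    In the meantime you can point nvcc at the older host compiler with -ccbin. A small sketch to confirm the pin took effect (the g++-9 name/path is distro-dependent, so treat the invocation as an assumption):

    // check_host_gcc.cu -- verify which host compiler nvcc actually wrapped.
    // Illustrative invocation; adjust the compiler name to your distro:
    //   nvcc -ccbin g++-9 check_host_gcc.cu -o check_host_gcc
    #include <cstdio>

    int main()
    {
        // __GNUC__ and friends come from the host compiler nvcc invokes,
        // so this confirms whether the -ccbin pin took effect.
    #if defined(__GNUC__) && !defined(__clang__)
        printf("host GCC: %d.%d.%d\n", __GNUC__, __GNUC_MINOR__, __GNUC_PATCHLEVEL__);
    #else
        printf("host compiler is not GCC\n");
    #endif
        return 0;
    }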

    Crappy proprietary Nvidia sh**. The best thing would be if distros were brave enough to drop the Nvidia CUDA packages.
    Last edited by Pranos; 08 July 2020, 05:12 AM.



  • #5
    Originally posted by Setif
    Poor Kepler (dropped) and Maxwell (deprecated).
    Do HPC setups still use those? (I honestly don't know.)
    Since they both lack tensor cores, I'm guessing they couldn't use CUDA 11's additions anyway.
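
    A quick way to check on any box: tensor cores first appeared in Volta (compute capability 7.0), so a simple capability query tells you whether a device predates them. A minimal sketch using the runtime API:

    // tc_check.cu -- list devices and whether they predate tensor cores.
    //   nvcc tc_check.cu -o tc_check
    #include <cstdio>

    int main()
    {
        int count = 0;
        cudaGetDeviceCount(&count);
        for (int dev = 0; dev < count; ++dev) {
            cudaDeviceProp prop;
            cudaGetDeviceProperties(&prop, dev);
            // Tensor cores arrived with sm_70 (Volta); Kepler (3.x) and
            // Maxwell (5.x) devices will report "no" here.
            bool tensor = prop.major >= 7;
            printf("GPU %d: %s, sm_%d%d, tensor cores: %s\n",
                   dev, prop.name, prop.major, prop.minor, tensor ? "yes" : "no");
        }
        return 0;
    }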



  • #6
    Originally posted by pieman
    Still hoping that one day Nvidia opens up their drivers... at least the main kernel driver, and lets the community use Mesa with it, similar to how AMD did things. I understand the reasons Nvidia has given for why they haven't; it's just hard to keep accepting them when we see what AMD was able to accomplish.
    That's because AMD couldn't write their own for shit. Did you ever use fglrx?
    Last edited by Slartifartblast; 08 July 2020, 07:44 AM.



  • #7
    Originally posted by Slartifartblast
    That's because AMD couldn't write their own for shit. Did you ever use fglrx?
    fglrx was ATI. As far as I know, AMD had to rewrite most of the driver, or all of it (even for Windows), because they didn't get the code (or most of it) from ATI after the acquisition, and ATI had little or poor documentation for their GPUs.
    So it's not AMD's fault.
    Last edited by Pranos; 08 July 2020, 09:57 AM.



  • #8
    It's actually now cuda_11.0.2 (with 450.51.05 as the driver).

    Before, it was cuda_11.0.1 (with 450.36.06 as the driver).
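
    If in doubt about which combination a machine ended up with, the runtime can report both sides. A minimal sketch (version values are encoded as 1000*major + 10*minor, so 11.0 prints from 11000):

    // version_check.cu -- report the installed runtime and the max CUDA
    // version the driver supports.
    //   nvcc version_check.cu -o version_check
    #include <cstdio>

    int main()
    {
        int runtime = 0, driver = 0;
        cudaRuntimeGetVersion(&runtime);  // installed CUDA runtime (e.g. 11000 for 11.0)
        cudaDriverGetVersion(&driver);    // latest CUDA version the driver supports
        printf("runtime: %d.%d, driver supports up to: %d.%d\n",
               runtime / 1000, (runtime % 1000) / 10,
               driver / 1000, (driver % 1000) / 10);
        return 0;
    }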



  • #9
    Originally posted by Setif
    Poor ... Maxwell (deprecated).
    AFAICT, only sm_50 is deprecated, which is limited to Maxwell Gen1 (GTX 750 and equivalent Tesla/Quadro GPUs).

    (Linked: NVIDIA's list of CUDA-enabled GPUs and their compute capabilities, and a guide to the supported nvcc gencode and arch flags for compiling GPU code for several different GPUs.)
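
    For example, a fat binary can keep a deprecated target alongside current ones. A sketch, assuming CUDA 11.0 (the exact deprecation-warning wording for sm_50 may vary by point release):

    // fatbin_demo.cu -- one source, several SASS targets. Illustrative build:
    //   nvcc fatbin_demo.cu -o fatbin_demo \
    //       -gencode arch=compute_50,code=sm_50 \
    //       -gencode arch=compute_70,code=sm_70 \
    //       -gencode arch=compute_80,code=sm_80
    // Under CUDA 11.0 the sm_50 target still builds, with a deprecation warning.
    #include <cstdio>

    __global__ void which_arch()
    {
        // __CUDA_ARCH__ is defined per compiled target (and only in the device
        // pass), so the kernel reports which variant the runtime selected.
    #if defined(__CUDA_ARCH__)
        printf("running device code built for sm_%d\n", __CUDA_ARCH__ / 10);
    #endif
    }

    int main()
    {
        which_arch<<<1, 1>>>();
        cudaDeviceSynchronize();
        return 0;
    }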



  • #10
    Originally posted by Slartifartblast
    That's because AMD couldn't write their own for shit. Did you ever use fglrx?
    Originally posted by Pranos
    fglrx was ATI. As far as I know, AMD had to rewrite most of the driver, or all of it (even for Windows), because they didn't get the code (or most of it) from ATI after the acquisition, and ATI had little or poor documentation for their GPUs. So it's not AMD's fault.
    Actually, neither of those is correct. ATI's initial Linux support was via open source drivers, working with VA Linux / Precision Insight, and all our Linux drivers were open source until 2001/2002, when we purchased FireGL from SonicBlue (aka Diamond Multimedia + S3).

    The fglrx driver was an attempt to use the FireGL workstation driver for both workstation and client/desktop users, so we ported the FireGL code from IBM HW to our GPUs. The resulting "fglrx" driver ended up being quite good for workstation but not so good for client/desktop, partly for architectural reasons and partly because the leveraging of Windows driver code during porting pretty much forced binary-only delivery.

    We re-started open source driver development in 2007, focusing first on client/desktop, and IIRC around 2011 started rebuilding the workstation stack around the same open source driver code. We had full access to the fglrx source code and still do, although a lot of the code was shared with Windows, which made it very difficult to use in an upstream driver.

    Supporting the workstation userspace drivers required some ioctl changes compared to what radeon had implemented, and at the same time we wanted to start getting ready for new generations of HW that were going to be built around a common data fabric, so we re-architected the driver to be organized around IP blocks (GFX, SDMA, UVD, etc.), resulting in the new amdgpu kernel driver and stack.

    The first fabric-based ("SOC15") GPU generation was Vega, but we were able to make amdgpu the primary driver starting with VI (Tonga).
    Last edited by bridgman; 08 July 2020, 06:57 PM.