NVIDIA 470 Series To Be The Last Supporting GTX 600/700 Series Kepler


  • #21
    Thanks for pointing that out.
    Something like this is definitely more accessible and convenient for most people.

    • #22
      ?!?!?!?
      Nvidia is basically saying that the long-term support driver will support the GTX 6xx and 7xx cards until 2024. That means we will have a driver with Wayland support officially maintained 12 years after the cards' original release.

      Back then I bought a laptop with an Intel Haswell 4xxx CPU and a GTX 850M (the Maxwell one). The Intel CPU already hasn't had support for quite some time, while the GTX 850M will probably have support until... 2027? How is that not proper long-term support?

      • #23
        So is "Long term" branch different than legacy branch?

        • #24
          I didn't know about this, I think I love you now. Thanks.

          • #25
            Glad I'm not a closed-source driver user.
            Congrats to AMD and Intel!

            • #26
              It is actually pretty sad news. The only Kepler card people should remember is the Tesla K80, the dual-GK210 GPU that launched the machine learning craze. Even if the 470 LTS branch is supported until 2024, newer CUDA versions that require a driver beyond the 470 series will be unusable. That means in 6 months or so, when TensorFlow starts requiring a CUDA version that needs a post-470 driver, the newest TensorFlow will be unusable on the K80.

              The K80 is now effectively dead, which means if you need double precision or TensorFlow 2.6+, the P100 is now the lowest-end card you can go with, and the P100/V100 aren't cheap. Oops. Nvidia just raised the entry cost of machine learning by about 400%+ (K80 -> P100) for people needing ECC GPUs, and Kaggle will probably be pissed because they have a ton of K80s.
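
              To see which side of that line a given box is on, here is a rough Python sketch (my own illustration, assuming nvidia-smi is on the PATH; the 470 ceiling is just the branch from this announcement, nothing official beyond that):

              import subprocess

              # Ask the installed driver for its version; nvidia-smi ships with the driver itself.
              out = subprocess.run(
                  ["nvidia-smi", "--query-gpu=driver_version", "--format=csv,noheader"],
                  capture_output=True, text=True, check=True,
              ).stdout
              driver_major = int(out.splitlines()[0].strip().split(".")[0])

              KEPLER_CEILING = 470  # assumed: Kepler tops out at the 470 LTS branch per this news

              if driver_major <= KEPLER_CEILING:
                  print(f"Driver branch {driver_major}: any CUDA toolkit (and TensorFlow build) "
                        "that demands a newer branch is off the table on this card.")
              else:
                  print(f"Driver branch {driver_major}: newer than 470, so this is not a "
                        "Kepler-supported branch in the first place.")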
              Last edited by phoronix_is_awesome; 23 May 2021, 11:44 AM.

              • #27
                Versatile Compute Acceleration for Mainstream Enterprise Servers.


                There's no reason to buy a P100 (compute capability 6.0) with 16 GB of HBM2 at all when the A30 (compute capability 8.0) has 24 GB of HBM2, roughly 200 GB/s more memory bandwidth, and higher FP64 performance (A30 5.2 TFLOPS vs. P100 4.7 TFLOPS), and Ampere is much better at compute than Pascal will ever be.

                • #28
                  Originally posted by GT220 View Post
                  https://www.nvidia.com/en-us/data-ce...ducts/a30-gpu/

                  There's no reason to buy a P100 (compute capability 6.0) with 16 GB of HBM2 at all when the A30 (compute capability 8.0) has 24 GB of HBM2, roughly 200 GB/s more memory bandwidth, and higher FP64 performance (A30 5.2 TFLOPS vs. P100 4.7 TFLOPS), and Ampere is much better at compute than Pascal will ever be.
                  How much is an A30? Exactly. Nvidia doesn't even list a price for it. The P100 SXM2 used to go for pretty little, although SXM2 boards/servers are still expensive, and a P100 PCIe 16GB could be had for $1000. The A30 is only 10.3 TFLOPS FP32 (5.1 TFLOPS double precision), so it is only faster than a P100 when you use the TF32 format or the tensor cores.
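
                  To make the TF32 caveat concrete, a rough sketch (PyTorch and its TF32 switches are just my example here, not anything from Nvidia's page): the headline Ampere matmul numbers only show up once you opt in, while on a P100 the same flags are simply no-ops and you get plain FP32.

                  import torch

                  # TF32 only exists on Ampere (compute capability 8.x); on Pascal these
                  # switches do nothing and matmuls run in ordinary FP32.
                  torch.backends.cuda.matmul.allow_tf32 = True   # let matmuls use TF32 tensor cores
                  torch.backends.cudnn.allow_tf32 = True         # same for cuDNN convolutions

                  a = torch.randn(4096, 4096, device="cuda")
                  b = torch.randn(4096, 4096, device="cuda")
                  c = a @ b  # uses the A30's tensor cores only with TF32 allowed; strict FP32 does not
                  print(torch.cuda.get_device_capability())  # (8, 0) on an A30, (6, 0) on a P100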
                  Last edited by phoronix_is_awesome; 24 May 2021, 03:07 AM.

                  • #29
                    As a 770 user this affects me.
                    But the timing is interesting, as I have been eyeing a new GPU, prolly an RTX 2060, which should become more available as stocks are replenished.

                    • #30
                      I am an nVidia GTX 660 user and I purchased my graphics card back in 2014. So far I have not had any issues with it; it works great, and the performance is still more than enough for me (I don't play games much, only thedarkmod from time to time and some retro stuff).

                      The most important thing for me was support for 3 monitors, and CUDA support in Blender is a big bonus. I am on "stock" Debian (testing) with whatever nVidia drivers are in the Debian repo, so I guess I will manage to run it for a couple more years without issues, but I wish I could use nouveau. Last time I tested nouveau it was a rather sad experience, with corruption and crashes all over the place, but I *really* wish I could use it.


                      http://www.dirtcellar.net
