NVIDIA 470.103.01 Linux Driver Brings RTX 2050 / MX 570 / MX 550 Support

  • NVIDIA 470.103.01 Linux Driver Brings RTX 2050 / MX 570 / MX 550 Support

    Phoronix: NVIDIA 470.103.01 Linux Driver Brings RTX 2050 / MX 570 / MX 550 Support

    While we are awaiting the stable debut of the new NVIDIA 510 Linux driver series, NVIDIA's long-lived 470 series driver production branch has been updated...


  • #2
NVIDIA keeps playing fast and loose with product names and their placement in the product stack. A single series will contain at least two different GPU generations, or worse, FIVE in the case of the 700 series.

Consumers have no easy way of ranking the products inside a series: the x80 should be less powerful than the x90, yet the 3080 Ti performs nearly as well as the 3090. Then there are sub-variants inside variants. The 3080 10GB and 12GB seem like they should be the same card apart from memory, yet the 3080 12GB is the better part, being closer to a cut-down 3080 Ti.
And there is no damn reason to bring back a discontinued series just to insert a new-generation product into it. While support for the rest of the Turing cards will eventually be discontinued, the RTX 2050 will still be there getting support. When the MX 550 gets dropped because it is a Turing card, the MX 570 will still be fine for another 2-3 years after that.

This is actively harmful to consumers, even if it is not as egregious as the GT 730, which was a gamble across 3 different GPU generations and 4 different memory generations on top of that. The RTX 2050 should have been called the RTX 3040, and the MX 550 should have been called the MX 460. It's just madness.

AND I'M NOT DONE YET

The MX 450 has FOUR different variants that are hidden from the consumer: you can get GDDR5 or GDDR6, at a 12W, 25W, or 28.5W TDP. The performance delta between these parts is huge, and the low-end part is mediocre on top of it. The top configuration is roughly twice as fast as the bottom one, and the bottom one is slower than an MX 350. You can genuinely end up with your older laptop being faster than your new one.
    Last edited by Namelesswonder; 31 January 2022, 05:15 PM.



    • #3
      Originally posted by Namelesswonder View Post
NVIDIA keeps playing fast and loose with product names and their placement in the product stack. A single series will contain at least two different GPU generations, or worse, FIVE in the case of the 700 series.
Consumers have no easy way of ranking the products inside a series: the x80 should be less powerful than the x90, yet the 3080 Ti performs nearly as well as the 3090. Then there are sub-variants inside variants. The 3080 10GB and 12GB seem like they should be the same card apart from memory, yet the 3080 12GB is the better part, being closer to a cut-down 3080 Ti.
And there is no damn reason to bring back a discontinued series just to insert a new-generation product into it. While support for the rest of the Turing cards will eventually be discontinued, the RTX 2050 will still be there getting support. When the MX 550 gets dropped because it is a Turing card, the MX 570 will still be fine for another 2-3 years after that.
This is actively harmful to consumers, even if it is not as egregious as the GT 730, which was a gamble across 3 different GPU generations and 4 different memory generations on top of that. The RTX 2050 should have been called the RTX 3040, and the MX 550 should have been called the MX 460. It's just madness.
AND I'M NOT DONE YET
The MX 450 has FOUR different variants that are hidden from the consumer: you can get GDDR5 or GDDR6, at a 12W, 25W, or 28.5W TDP. The performance delta between these parts is huge, and the low-end part is mediocre on top of it. The top configuration is roughly twice as fast as the bottom one, and the bottom one is slower than an MX 350. You can genuinely end up with your older laptop being faster than your new one.
I do not buy NVIDIA products; problem solved...

Why do you even care about Nvidia? Just ignore them...



      • #4
        Originally posted by qarium View Post

I do not buy NVIDIA products; problem solved...

Why do you even care about Nvidia? Just ignore them...
You read my mind. It is silly to buy Nvidia for anything desktop-related; there's no point.

(And yes, I know that for compute they have the best solution, but that's data center stuff.)



        • #5
I'm not a data center, yet Nvidia is the only trouble-free solution for the personal distributed computing I run for BOINC projects.



          • #6
            Originally posted by Keith Myers View Post
I'm not a data center, yet Nvidia is the only trouble-free solution for the personal distributed computing I run for BOINC projects.
But tell me, do you really need it? Or could you just buy a 32-64 core Threadripper instead and do the same without an Nvidia GPU?
I have a Threadripper 1920X right now, and before I would buy Nvidia I would buy a 24-core 2970WX, a 32-core 2990WX, or a newer 3000- or 4000-series Threadripper...

If you look at Blender rendering benchmarks, the fastest CPUs get better results than the Nvidia GPUs...



            • #7
              Originally posted by qarium View Post

But tell me, do you really need it? Or could you just buy a 32-64 core Threadripper instead and do the same without an Nvidia GPU?
I have a Threadripper 1920X right now, and before I would buy Nvidia I would buy a 24-core 2970WX, a 32-core 2990WX, or a newer 3000- or 4000-series Threadripper...

If you look at Blender rendering benchmarks, the fastest CPUs get better results than the Nvidia GPUs...
Yes. A 10,000-core Nvidia GPU destroys a 64-core AMD CPU for these tasks by like... 3 orders of magnitude.
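For context, a figure like "10,000 cores" can be estimated from what the CUDA runtime reports: SM count multiplied by CUDA cores per SM. The sketch below is a rough illustration rather than anything from the post; the cores-per-SM numbers are an assumption for Volta/Turing and consumer Ampere parts, since the runtime does not report that value directly.

#include <cstdio>
#include <cuda_runtime.h>

// Rough CUDA-core count per SM by compute capability.
// Assumption: 64 for Volta/Turing (sm_70/sm_75), 128 for consumer Ampere (sm_86+).
static int coresPerSM(int major, int minor) {
    if (major == 8 && minor >= 6) return 128;  // GA10x consumer Ampere
    if (major == 7) return 64;                 // Volta / Turing
    return 64;                                 // conservative fallback for other generations
}

int main() {
    cudaDeviceProp prop;
    if (cudaGetDeviceProperties(&prop, 0) != cudaSuccess) {
        fprintf(stderr, "No CUDA device found\n");
        return 1;
    }
    int cores = prop.multiProcessorCount * coresPerSM(prop.major, prop.minor);
    // e.g. an RTX 3090 reports 82 SMs, so this prints roughly 10496 cores.
    printf("%s: %d SMs, ~%d CUDA cores\n", prop.name, prop.multiProcessorCount, cores);
    return 0;
}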



              • #8
                Originally posted by qarium View Post

But tell me, do you really need it? Or could you just buy a 32-64 core Threadripper instead and do the same without an Nvidia GPU?
I have a Threadripper 1920X right now, and before I would buy Nvidia I would buy a 24-core 2970WX, a 32-core 2990WX, or a newer 3000- or 4000-series Threadripper...

If you look at Blender rendering benchmarks, the fastest CPUs get better results than the Nvidia GPUs...
But BOINC distributed computing is not about rendering. The work is compute-intensive for both CPU and GPU applications. Some projects are CPU-only, some are GPU-only, and some are a mix of both types of applications. I have hosts with at least 32 CPU threads available and working, mostly AMD Ryzen 3950X and 5950X along with a couple of 48-thread Epyc CPUs. Most projects use OpenCL GPU applications, which can run on Intel, Nvidia, or AMD hardware. Some projects are Nvidia-only. When a project offers both types of vendor-specific applications, the Nvidia apps normally wipe the floor with their CUDA performance compared to Intel and AMD running the OpenCL version of the application.



                • #9
                  Originally posted by mSparks View Post

Yes. A 10,000-core Nvidia GPU destroys a 64-core AMD CPU for these tasks by like... 3 orders of magnitude.
Ha ha ha, LOL. Yep. I have taken the CPU source code for a project's application and simply run it through the compiler with my Nvidia Jetson Nano's iGPU as the target; it processes the tasks in 10-11 minutes, compared to 9.5 hours for the same source code run as intended on the Nano's CPU cores. And the Nano has all of 128 CUDA cores in it.
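To make the "same source, GPU target" idea concrete, here is a minimal CUDA sketch of what that kind of port can look like. The crunch() function, kernel, and array size are made up for illustration and are not the actual BOINC project code; the point is only that the serial per-element loop on the CPU becomes one thread per element on the GPU. Built with nvcc, the same file compiles natively on a Jetson.

#include <cstdio>
#include <cmath>
#include <cuda_runtime.h>

// Hypothetical per-element workload standing in for the project's math.
__host__ __device__ float crunch(float x) {
    return sinf(x) * expf(-x * x);
}

// GPU version: each thread handles one element instead of the CPU
// walking the whole array serially.
__global__ void crunchKernel(const float* in, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = crunch(in[i]);
}

int main() {
    const int n = 1 << 20;
    float *in = nullptr, *out = nullptr;
    cudaMallocManaged(&in, n * sizeof(float));   // unified memory keeps the sketch short
    cudaMallocManaged(&out, n * sizeof(float));
    for (int i = 0; i < n; ++i) in[i] = i * 0.001f;

    // Even a 128-core Nano schedules thousands of these threads across its SM.
    crunchKernel<<<(n + 255) / 256, 256>>>(in, out, n);
    cudaDeviceSynchronize();

    printf("out[42] = %f\n", out[42]);
    cudaFree(in);
    cudaFree(out);
    return 0;
}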



                  • #10
                    Originally posted by mSparks View Post
Yes. A 10,000-core Nvidia GPU destroys a 64-core AMD CPU for these tasks by like... 3 orders of magnitude.
Maybe... but remember, it is maybe not about performance; the point of the matter is to avoid this evil company, "Nvidia".


