NVIDIA GeForce GTX 1060 To RTX 4060 GPU Compute & Renderer Performance On Linux

    Phoronix: NVIDIA GeForce GTX 1060 To RTX 4060 GPU Compute & Renderer Performance On Linux

    Earlier this month I provided some initial GeForce RTX 4060 vs. Radeon RX 7600 Linux gaming benchmarks for this new sub-$300 graphics card. For those considering this latest Ada Lovelace graphics card for 3D rendering or compute purposes, here are some benchmarks on that front, looking at the generational performance of the x060-series graphics cards from the GTX 1060 through to the RTX 4060.


  • #2
    Doesn't feel quite right. Leaving aside that compute benchmarks almost never pit team green against team red, the 6 GB 1060 was the faster of the mid-range cards; the 4060 is the (much) slower one.

    • #3
      Originally posted by bug77 View Post
      Doesn't feel quite right. Leaving aside that compute benchmarks almost never pit team green against team red, the 6 GB 1060 was the faster of the mid-range cards; the 4060 is the (much) slower one.
      Huh? Are you reading the charts upside-down? The 4060 is pretty much 2nd fastest, and is the most efficient.

      • #4
        It's crazy to see the 2060 doing so well.

        I remember Turing being ridiculed as overpriced and a small upgrade when it came out... and now it looks like a long-lived bargain.

        This is both a compliment to Nvidia (Turing was pretty forward-thinking) and a criticism (Turing was overpriced, and Ampere and Ada were increasingly stingy).
        Last edited by brucethemoose; 10 July 2023, 03:43 PM.

        • #5
          Originally posted by schmidtbag View Post
          Huh? Are you reading the charts upside-down? The 4060 is pretty much 2nd fastest, and is the most efficient.
          The Ada equivalent of the 6 GB 1060 would be the 4060 Ti; that's what I meant. Sorry if I was a bit vague.

          • #6
            Just the generational NVIDIA performance is being looked at for this article, with the AMD Radeon RX 7600 series not yet officially supported by the ROCm compute stack and some of these benchmarks only supporting NVIDIA hardware.
            AMD, are you listening? I'm a casual gamer whose first graphics card was a CGA thing. I've had S3, 3dfx, nvidia, radeon; I didn't care, I just went with Bang For The Buck and how much Fun I could get out of it.

            For the first time as a *user* I have a genuinely interesting reason to use my graphics accelerator for something other than <games> (blockchain doesn't count). CUDA is king, and ROCm still isn't really aimed at the consumer side or supported out-of-the-box by most ML projects. I have to set crazy env variables just to attempt to get ROCm working with torch or whatever, and I'm not a python/ML dev, just a user on this one (a rough sketch of that env variable dance is at the end of this post).

            Please, please give your power users more ROCm love, or hack out a way to get CUDA working with Radeons (don't say it can't be done, someone did it for Intel). I don't even care if ML apps aren't quite as fast as on an nvidia card; I just want it *to work* without hours or days of hacking and recompiling ROCm five times to find a version that works with both the software and my adapter... I'm really, really jealous of nvidia users' ability to Just Have ML Work (tm).

            I'm seriously considering a 4060 (hopefully a Ti or Super plus a good edition if out by then) for my next purchase... don't make me do it, please?
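
            For the curious, that env variable dance usually boils down to something like this. A minimal sketch, assuming the ROCm build of PyTorch and a consumer RDNA2 Radeon; the wheel index URL and the override value depend on your GPU and ROCm release, so treat them as placeholders:

            Code:
            # Hypothetical sketch: making PyTorch's ROCm build see a consumer Radeon.
            # Install the ROCm wheel first, e.g. (exact ROCm version varies):
            #   pip install torch --index-url https://download.pytorch.org/whl/rocm5.6
            import os

            # Consumer cards are often not on the official support list, so users
            # commonly spoof a supported ISA. 10.3.0 is the usual value for RDNA2
            # (gfx103x) parts; RDNA3 owners typically use 11.0.0 instead.
            # Must be set before torch initializes HIP.
            os.environ.setdefault("HSA_OVERRIDE_GFX_VERSION", "10.3.0")

            import torch

            # The ROCm build reuses the torch.cuda API, so the usual checks apply.
            if torch.cuda.is_available():
                print("GPU:", torch.cuda.get_device_name(0))
                x = torch.randn(1024, 1024, device="cuda")
                print("matmul ok:", (x @ x).shape)
            else:
                print("No ROCm device visible; check the override value and ROCm install.")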

            • #7
              Originally posted by panikal View Post

              AMD, are you listening? I'm a casual gamer whose first graphics card was a CGA thing. I've had S3, 3dfx, nvidia, radeon; I didn't care, I just went with Bang For The Buck and how much Fun I could get out of it.

              For the first time as a *user* I have a genuinely interesting reason to use my graphics accelerator for something other than <games> (blockchain doesn't count). CUDA is king, and ROCm still isn't really aimed at the consumer side or supported out-of-the-box by most ML projects. I have to set crazy env variables just to attempt to get ROCm working with torch or whatever, and I'm not a python/ML dev, just a user on this one.

              Please, please give your power users more ROCm love, or hack out a way to get CUDA working with Radeons (don't say it can't be done, someone did it for Intel). I don't even care if ML apps aren't quite as fast as on an nvidia card; I just want it *to work* without hours or days of hacking and recompiling ROCm five times to find a version that works with both the software and my adapter... I'm really, really jealous of nvidia users' ability to Just Have ML Work (tm).

              I'm seriously considering a 4060 (hopefully a Ti or Super plus a good edition if out by then) for my next purchase... don't make me do it, please?
              The crazy thing is that eager-mode PyTorch isn't really meant for production deployment... It's for training, research, and experimentation, while various other frameworks can import pytorch models for performant inference (rough sketch below). But alas, that's where we are today: researchers make all the projects people use and race to the next paper, and no one is around to port and optimize their projects.

              Anyway, for ML you should skip the 4060. VRAM/bus width is everything, so save up for a 4060 Ti 16GB or grab a 3060 instead.

              And keep an eye out for 24GB+ cards/big APUs from Intel.
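
              A minimal sketch of what that hand-off typically looks like, using ONNX as the interchange format; the tiny model and shapes are made up purely for illustration, and onnxruntime is just one of several runtimes that can consume the export:

              Code:
              # Experiment in eager PyTorch, then hand the trained model to a dedicated
              # inference runtime. Toy model and shapes are illustrative only.
              import numpy as np
              import torch
              import torch.nn as nn

              model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10)).eval()
              example = torch.randn(1, 128)

              # Export the eager-mode model to ONNX, a format many inference engines import.
              torch.onnx.export(
                  model, example, "toy.onnx",
                  input_names=["input"], output_names=["logits"],
                  dynamic_axes={"input": {0: "batch"}},  # allow variable batch size
              )

              # Run it with ONNX Runtime (pip install onnxruntime or onnxruntime-gpu).
              import onnxruntime as ort

              sess = ort.InferenceSession("toy.onnx", providers=["CPUExecutionProvider"])
              out = sess.run(["logits"], {"input": np.random.randn(4, 128).astype(np.float32)})
              print(out[0].shape)  # (4, 10)
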
              Last edited by brucethemoose; 10 July 2023, 04:42 PM.

              • #8
                Originally posted by brucethemoose View Post
                Anyway, for ML you should skip the 4060. VRAM/bus width is everything, so save up for a 4060 Ti 16GB or grab a 3060 instead.

                And keep an eye out for 24GB+ cards/big APUs from Intel.
                I won't be gaming, and from previous experience bus width isn't the performance destroyer for my use case that it is for other things (e.g. gaming), so the 16GB 4060 Ti looks rather appealing if the price isn't insane. The 4060 is "cheap" here (not cheap, just less insane than the more powerful options), but I learned long ago that prices in the US bear little to no resemblance to prices in Japan, so I'll wait and see. A "cheap" 16GB card would be nice (that extra 4GB over the 4070 Ti helps a lot; see the back-of-the-envelope numbers below), but if it's too expensive then the 4080 becomes a more attractive option. And if I go there, I might as well go 4090 and have done with it.

                I hope Intel does something with more VRAM. The end of the 16GB A770 is rather daft. A 24 or 32GB card would be nice.
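
                To put rough numbers on the "extra 4GB helps a lot" point, here is a back-of-the-envelope sketch of weight-only memory footprints versus card VRAM. It is only a sketch: the model sizes and card list are illustrative, and real usage adds activations, KV cache and framework overhead on top of the weights:

                Code:
                # Weight-only VRAM math; real requirements are somewhat higher because of
                # activations, KV cache and runtime overhead. Cards and models illustrative.
                BYTES_PER_PARAM = {"fp32": 4, "fp16": 2, "int8": 1, "int4": 0.5}
                CARDS_GB = {"RTX 4060": 8, "RTX 3060": 12, "RTX 4070 Ti": 12,
                            "RTX 4060 Ti 16GB": 16, "RTX 4090": 24}

                def weight_footprint_gib(params_billion: float, dtype: str) -> float:
                    """Approximate size of the weights alone, in GiB."""
                    return params_billion * 1e9 * BYTES_PER_PARAM[dtype] / 2**30

                for name, params in [("7B model", 7), ("13B model", 13)]:
                    for dtype in ("fp16", "int8", "int4"):
                        need = weight_footprint_gib(params, dtype)
                        fits = [c for c, gb in CARDS_GB.items() if gb > need]
                        print(f"{name} @ {dtype}: ~{need:.1f} GiB -> fits on: {fits}")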

                • #9
                  Originally posted by bug77 View Post
                  Doesn't feel quite right. Leaving aside that compute benchmarks almost never pit team green against team red, the 6 GB 1060 was the faster of the mid-range cards; the 4060 is the (much) slower one.
                  DLSS 2 + 3, and the red mid-range cards get crushed.

                  • #10
                    Originally posted by HEL88 View Post

                    DLSS 2 + 3, and the red mid-range cards get crushed.
                    So are your image quality and your latency. Fuck fake frames.
