
Blender CUDA Benchmarks On The GeForce GTX 1050/1060/1070/1080


  • #11
    Originally posted by rudl View Post
    As much as I like this benchmark, it's useless: it's expected that a 1050 is slower than a 1060, and a 1070 slower than a 1080, and so on. I want at least Fermi, Kepler, Maxwell, and Pascal for it to be useful.
    Yes, but this allows people to see how much faster or slower each card is relative to the others. That said, throwing other-generation cards into the mix, as well as a CPU-based render, would be useful.



    • #12
      To improve Cycles renderer performance, especially for GPU compute, scale the tile size with VRAM (see the sketch at the end of this post):
      1 GB VRAM = 128 x 140
      3 GB VRAM = 384 x 420 (128*3 x 140*3)
      8 GB VRAM = 1024 x 1120 (128*8 x 140*8)


      Originally posted by falstaff View Post
      Which renderer is this benchmark using?
      Cycles renderer
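
      A minimal sketch of that tile rule using Blender's Python API (as of the 2.7x series; vram_gb is a placeholder for your card's memory, and picking the CUDA device in the user preferences is assumed to be done separately):

          import bpy

          scene = bpy.context.scene
          scene.cycles.device = 'GPU'          # render with Cycles on the GPU

          # Scale the 128 x 140 base tile by VRAM in GB, per the rule above.
          vram_gb = 3                          # placeholder: your card's VRAM in GB
          scene.render.tile_x = 128 * vram_gb  # 384 for a 3 GB card
          scene.render.tile_y = 140 * vram_gb  # 420 for a 3 GB card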



      • #13
        Originally posted by GraysonPeddie View Post
        Question: Couldn't I use Radeon RX 480 for games and GTX 1070 for CUDA and as a passthrough to play games in Windows through KVM?
        Personally, I found using an AMD card as my passthrough card easier and more reliable. But that's just me.



        • #14
          It should have compared against older cards, like the Nvidia 580 and Titan, which excelled at Blender rendering performance; the 6xx series was useless in comparison.



          • #15
            Well, I'm certainly late to find this.
            But thank you for doing a Blender Cycles benchmark. You should do Blender benchmarks more often; I guess I never got around to requesting it.
            I also agree that you should have included 900-series cards for comparison, and maybe a 700-series one. It would be nice to put a CPU in for comparison as well, as I was disappointed when I got my GTX 960 and found it only marginally faster than CPU rendering.



            • #16
              Just for reference, I did some comparison benchmarks on Ubuntu 14.04 with MSI GeForce GTX 580 3GB cards (no OC), first with a single card and then with two cards (not in SLI). All results are render times in seconds, so lower is better:

              pts/blender-1.1.0 [Blend File: BMW27 - Compute: CUDA]
              single: 327.39
              dual: 184.45

              pts/blender-1.1.0 [Blend File: Classroom - Compute: CUDA]
              single: 867.58
              dual: 529.43

              pts/blender-1.1.0 [Blend File: Fishy Cat - Compute: CUDA]
              single: 916.98
              dual: 543.83

              pts/blender-1.1.0 [Blend File: Pabellon Barcelona - Compute: CUDA]
              single: 1180.45
              dual: 693.73
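
              The dual-card scaling works out to roughly 1.6-1.8x across the four scenes. A quick Python sketch to compute it from the numbers above (the scene names are just labels for this post's results):

                  # Render times in seconds from this post (single vs. dual GTX 580).
                  times = {
                      "BMW27":              (327.39, 184.45),
                      "Classroom":          (867.58, 529.43),
                      "Fishy Cat":          (916.98, 543.83),
                      "Pabellon Barcelona": (1180.45, 693.73),
                  }

                  for name, (single, dual) in times.items():
                      speedup = single / dual         # >1 means two cards are faster
                      efficiency = speedup / 2 * 100  # percent of ideal 2x scaling
                      print(f"{name}: {speedup:.2f}x speedup, {efficiency:.0f}% efficiency")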



              • #17
                Nice test. It would be even better if a test with an Nvidia Titan and Titan X could be added later.



                • #18
                  I purchased three 980 Tis a couple of weeks ago for $945 total.

                  Got 195s in the Pabellon Barcelona test.
                  Seems I was correct in choosing these over the 1080s.

