Blender CUDA Benchmarks On The GeForce GTX 1050/1060/1070/1080


  • webeindustry
    replied
    I purchased three 980 Tis a couple of weeks ago for $945 total.

    Got 195s in the Pabellon Barcelona scene.
    Seems I was correct in choosing these over the 1080s.

  • alelinuxbsd
    replied
    Nice test. It would be even better if a test with an NVIDIA Titan and an NVIDIA Titan X could be added later.

  • andyZ
    replied
    Just for reference, I did some comparison benchmarks on Ubuntu 14.04 with MSI GeForce GTX 580 3GB cards (no OC), first with a single card and then with dual cards (not in SLI):

    pts/blender-1.1.0 [Blend File: BMW27 - Compute: CUDA]
    single: 327.39
    dual: 184.45

    pts/blender-1.1.0 [Blend File: Classroom - Compute: CUDA]
    single: 867.58
    dual: 529.43

    pts/blender-1.1.0 [Blend File: Fishy Cat - Compute: CUDA]
    single: 916.98
    dual: 543.83

    pts/blender-1.1.0 [Blend File: Pabellon Barcelona - Compute: CUDA]
    single: 1180.45
    dual: 693.73
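
    For a rough sense of how well the second card scales, here is a small Python sketch that just works through the arithmetic on the single/dual times above, treating them as render seconds:

    # Speedup and scaling efficiency of the second GTX 580, computed
    # from the single-card and dual-card render times quoted above (seconds).
    times = {
        "BMW27":              (327.39, 184.45),
        "Classroom":          (867.58, 529.43),
        "Fishy Cat":          (916.98, 543.83),
        "Pabellon Barcelona": (1180.45, 693.73),
    }

    for scene, (single, dual) in times.items():
        speedup = single / dual            # >1.0 means the dual setup is faster
        efficiency = speedup / 2 * 100     # 100% would be perfect two-card scaling
        print(f"{scene}: {speedup:.2f}x speedup, {efficiency:.0f}% scaling efficiency")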

  • Electric-Gecko
    replied
    Well, I'm certainly late to find this.
    But thank you for doing a Blender Cycles benchmark. You should do Blender benchmarks more often; I guess I never got around to requesting it.
    I agree that you should have included 900-series cards for comparison, and maybe one 700-series card. I also think it would be nice to include a CPU for comparison, as I was disappointed when I got my GTX 960 and found it only marginally faster than CPU rendering.

  • Hextremist
    replied
    Should have compared against older cards, like the Nvidia 580 and Titan, which excelled at Blender rendering performance. The 6xx series was useless in comparison.

  • Niarbeht
    replied
    Originally posted by GraysonPeddie View Post
    Question: Couldn't I use Radeon RX 480 for games and GTX 1070 for CUDA and as a passthrough to play games in Windows through KVM?
    I, personally, found that using an AMD card as my passthrough card was easier and more reliable. But that's just me.

  • bored-demon
    replied
    To improve Cycles renderer performance, especially with GPU compute, scale the tile size with the card's VRAM:
    1 GB VRAM: 128 x 140
    3 GB VRAM: 384 x 420 (128*3 x 140*3)
    8 GB VRAM: 1024 x 1120 (128*8 x 140*8)
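
    As a rough illustration of that rule of thumb in Python (the 128 x 140 base tile and the linear scaling with VRAM come from the figures above; the function name is purely illustrative):

    # Rule-of-thumb tile size for Cycles GPU rendering: scale a
    # 128 x 140 base tile linearly with the card's VRAM in GB.
    def suggested_tile_size(vram_gb):
        base_x, base_y = 128, 140
        return base_x * vram_gb, base_y * vram_gb

    for vram in (1, 3, 8):
        x, y = suggested_tile_size(vram)
        print(f"{vram} GB VRAM -> {x} x {y}")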


    Originally posted by falstaff View Post
    Which renderer is this benchmark using?
    Cycles renderer

  • Enverex
    replied
    Originally posted by rudl View Post
    As much as I like this benchmark, it is useless: it is to be expected that a 1050 is slower than a 1060, and a 1070 slower than a 1080, and so on. I want at least Fermi, Kepler, Maxwell and Pascal for it to be useful.
    Yes, but this allows people to see HOW much faster or slower each card is relative to the others. That said, cards from other generations thrown into the mix, as well as a CPU-based render, would be useful.

  • bug77
    replied
    Originally posted by rudl View Post
    As much as I like this benchmark, it is useless: it is to be expected that a 1050 is slower than a 1060, and a 1070 slower than a 1080, and so on. I want at least Fermi, Kepler, Maxwell and Pascal for it to be useful.
    I'm not disagreeing, but at least it shows that everything works as expected.
    There have been instances where a card was inexplicably slower than lower-tier cards, usually pointing to a bug somewhere.
    I also agree that having some reference (be it a previous-generation card or CPU rendering) would have added a lot of value. Then again, we know how Michael does the testing.

  • riklaunim
    replied
    Originally posted by Michael View Post
    But then the added value is being able to compare your system's performance now directly to these results with PTS.
    You can compare that, but it's hard to compare other graphics cards to these results, as the CPU, RAM, storage and other components may affect the comparison. IMHO, CPU-only/integrated-GPU rendering should be the baseline, and over time the test could be extended with additional GPUs: team red for the next part, older team green for a third, and we would end up with a valuable result set.

    Linus Media Group did something similar with benchmarks for their video editing pipeline: they had CPU-only as the baseline and then a few GPU options.
