
GeForce GTX 1080 Ti Announced: 3584 CUDA Cores, 11 GB vRAM, 11 Gbps


  • coder
    replied
    Some people in this thread really need to check out proper coverage of Nvidia's announcement. There are lots of details in here regarding GPU compute and the so-called "crippling" of the GTX 1080 Ti's memory subsystem:
    If you check the table, you will note that:
    • GTX 1080 Ti has more memory bandwidth than Pascal Titan X and slightly more FP32 TFLOPS
    • Pascal Titan X has crippled FP64 support (as does every Titan after the original)
    • All consumer Pascal GPUs, including Titan X, have crippled FP16 performance
    • GTX 1080 Ti shares Pascal Titan X's fast int8 performance (about the only advantage over the P100)
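    A quick back-of-the-envelope check of the first bullet. The boost clocks (~1582 MHz for the 1080 Ti, ~1531 MHz for the Titan X) are my assumptions from the published spec sheets, not from the article:

```python
# Theoretical peak FP32 throughput: 2 FLOPs (one FMA) per CUDA core per clock.
def fp32_tflops(cuda_cores, boost_clock_ghz):
    return 2 * cuda_cores * boost_clock_ghz / 1000.0

# Memory bandwidth: bus width in bits / 8 bytes, times per-pin data rate in Gbps.
def bandwidth_gbs(bus_width_bits, gbps_per_pin):
    return bus_width_bits / 8 * gbps_per_pin

gtx_1080_ti = fp32_tflops(3584, 1.582)     # ~11.3 TFLOPS
titan_x_pascal = fp32_tflops(3584, 1.531)  # ~11.0 TFLOPS

bw_1080_ti = bandwidth_gbs(352, 11)  # 484 GB/s (352-bit bus at 11 Gbps)
bw_titan_x = bandwidth_gbs(384, 10)  # 480 GB/s (384-bit bus at 10 Gbps)
```

    So the 1080 Ti really does edge out the Titan X on both paper FLOPS and bandwidth, despite the narrower bus.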

    They go on to say:
    Speaking of the Titan, on an interesting side note, it doesn’t look like NVIDIA is going to be doing anything to hurt the compute performance of the GTX 1080 Ti to differentiate the card from the Titan, which has proven popular with GPU compute customers. Crucially, this means that the GTX 1080 Ti gets the same 4:1 INT8 performance ratio of the Titan, which is critical to the cards’ high neural networking inference performance. As a result the GTX 1080 Ti actually has slightly greater compute performance (on paper) than the Titan. And NVIDIA has been surprisingly candid in admitting that unless compute customers need the last 1GB of VRAM offered by the Titan, they’re likely going to buy the GTX 1080 Ti instead.
    Now, considering that it's comparable or better in every way except having only 11/12ths as much memory, the card's $700 list price is a steal compared with the Titan X's $1200 list. Also, consider that the launch price of the GTX 1080 FE cards was $700, so getting all the added performance for the same price as the original GTX 1080 is really quite surprising. Someone is definitely worried about AMD's Vega.

    Yes, $700 is a lot of money for a graphics card. However, it's one of the best deals out there, in terms of GFLOPS/W and GFLOPS/$.
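    To put rough numbers on that claim, using theoretical peak FP32 figures (~11.3 TFLOPS for the 1080 Ti, ~11.0 for the Titan X; paper specs, not measurements) and the list prices above:

```python
# (theoretical peak GFLOPS, TDP in watts, launch list price in USD)
cards = {
    "GTX 1080 Ti":      (11340, 250, 700),
    "Titan X (Pascal)": (10974, 250, 1200),
}

for name, (gflops, tdp_w, price) in cards.items():
    print(f"{name}: {gflops / tdp_w:.1f} GFLOPS/W, {gflops / price:.1f} GFLOPS/$")
# 1080 Ti lands around ~45 GFLOPS/W and ~16 GFLOPS/$,
# vs ~9 GFLOPS/$ for the Titan X at the same 250 W TDP.
```

    Same power envelope, nearly half the price per theoretical FLOP.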

    Here's some more education for you guys:
    One thing you can see is that all their top-end GPUs have been 250 W for a while now. This is nearing the limit of the PCIe spec (yes, that's including the auxiliary power connectors), so it's a pretty firm ceiling. Some of the factory-overclocked cards will push this to 275 W or even 300 W.
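    That 300 W figure is just the per-source limits added up. A quick sanity check, using the connector limits from the PCIe CEM spec:

```python
# Per-source board power limits from the PCIe CEM spec:
SLOT_W = 75       # power drawn through the x16 slot itself
AUX_6PIN_W = 75   # one 6-pin auxiliary connector
AUX_8PIN_W = 150  # one 8-pin auxiliary connector

# Typical flagship layout (6-pin + 8-pin), like most 250 W cards:
ceiling = SLOT_W + AUX_6PIN_W + AUX_8PIN_W
print(ceiling)  # 300 W in-spec maximum for that connector layout
```

    Which is why 250 W stock with 275-300 W factory overclocks is about as far as these cards go without adding a second 8-pin.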
    Last edited by coder; 05 March 2017, 09:38 AM.



  • Zan Lynx
    replied
    It's a free Nvidia 980. What am I going to do with it, eBay it? I'd rather throw it away.

    None of the Xeons I was considering for the server job had built-in graphics, and I didn't want IPMI, just a workstation board with some ECC support. Most of those boards expect a Quadro or FirePro to be installed and don't have any onboard graphics.



  • chrisq
    replied
    Originally posted by Zan Lynx View Post
    Heh. Well, I am planning to get a 1080 Ti as soon as I can. I've been waiting for it so I can replace my pair of 980s. They can't quite drive a 4K display at 60 FPS in all the games, although they can do it for some. Not enough video RAM. Their 4 GB each is too little.

    My little brother is probably getting a Ryzen and a 980 GPU for a birthday present this year. The other 980 will make a good display card for my NAS server upgrade later. Ryzen if it does ECC, Xeon if not.
    A 980 in a NAS makes no sense unless it's not actually a pure NAS but serves multiple roles.
    No serious NAS build I have seen has ever had a 3D card in it.

    Just finished my own build, and a GPU wasn't even on the list of what to buy; all NAS motherboards have built-in graphics.



  • L_A_G
    replied
    Originally posted by schmidtbag View Post
    I don't see how that changes anything. A Quadro or Tesla (with an Intel IGP) would still get the job done.
    Never said that it wouldn't get the job done, so I get the feeling you've reverted to arguing for the sake of arguing rather than actually trying to make a point. In the researcher's case, he was tied to whatever the company his organization's IT purchases had been contracted out to (Fujitsu) could provide, rather than being able to go out and buy whatever he wanted.

    Uh... There are Pascal based Quadros, so there certainly are new ones:

    Teslas have been around since 2012. The first Titan (to my knowledge) was 2013.
    IIRC the Quadros were bought and installed in 2011 (or 2010, I'm not 100% sure on that), and thus they're unfortunately not exactly spring chickens anymore. I personally haven't needed to use them since 2015, so they may have been replaced since then, as there have been a whole lot of hardware upgrades made to the university's cluster machines (though I think those have been focused on CPU and APU machines).



  • efikkan
    replied
    Originally posted by smitty3268 View Post

    Source?

    Genuinely curious, I haven't looked at any Vega leaks yet.
    AMD has demonstrated Vega slightly outperforming the GTX 1080 in an AMD-favoring game. And AMD is managing expectations for Vega instead of driving the hype to extremes.
    Think about it, too: GP106 is ~55%, GP104 ~80%, and GP102 ~85% more efficient than Polaris. There is no way they are going to double efficiency overnight to compete with GP102.



  • schmidtbag
    replied
    Originally posted by L_A_G View Post
    ... specifically wants a desktop so that he can do these computations with very short turnaround times, as he's making loads of iterative changes to the environmental factors he's using in his computations (i.e. loads of hand-crafted "what if" scenarios).
    I don't see how that changes anything. A Quadro or Tesla (with an Intel IGP) would still get the job done.
    On top of that the Quadros are a bit old by now and I think when they got the things Nvidia hadn't even introduced the Tesla cards.
    Uh... There are Pascal based Quadros, so there certainly are new ones:

    Teslas have been around since 2012. The first Titan (to my knowledge) was 2013.



  • L_A_G
    replied
    Originally posted by schmidtbag View Post
    The description you gave for Tesla seems to fit the needs of your researcher pretty well... You don't need to SLI a Tesla, so you could use some crappy Intel GPU to operate your display and then use a Tesla as the OpenCL/CUDA workhorse, which would likely return better results than a Titan since the GPU would be left in a pristine environment where it doesn't have other tasks trying to mooch off its resources, such as your display.
    Some Quadros are sold for the same market. Quadros are like Xeons - some are designed specifically for workstations, some are designed for servers, some are designed for mainframes. In nvidia's case, the mainframe Quadros have mostly been replaced by Teslas, since having all those display connectors and SLI bridges are useless expenses.

    Agreed.
    The cluster machine with the Quadros is owned by the university I used to study at, while the researcher works for a local government research and regulatory organization and specifically wants a desktop so that he can do these computations with very short turnaround times, as he's making loads of iterative changes to the environmental factors he's using in his computations (i.e. loads of hand-crafted "what if" scenarios). Mind you, he's not a computing researcher, he's an agricultural researcher. On top of that, the Quadros are a bit old by now, and I think when they got them Nvidia hadn't even introduced the Tesla cards.



  • schmidtbag
    replied
    Originally posted by efikkan View Post
    Titan cards have never been consumer/gaming cards; that's why Nvidia removed the "GeForce" branding on it, to avoid more confusion. It's of course excellent for gaming, but it's targeted at (semi)professionals doing CUDA development, "AI" research, game development, etc. The demand for the Titan X (Pascal) has been enormous, particularly due to the new fp16 support; so great that the product has been sold out for long periods of time.
    Nvidia disagrees:

    Notice the giant header saying "latest gaming technologies"; it doesn't really say a whole lot about workstations, servers, or OpenCL, and only mentions the number of CUDA cores (which they do for all of their products, including low-end models).
    Other sites also only compare the Titans to gaming GPUs:
    We've run hundreds of GPU benchmarks on Nvidia, AMD, and Intel graphics cards and ranked them in our comprehensive hierarchy, with over 80 GPUs tested.


    I'm not saying the Titans can't be used for research or are bad at it (I understand they're a pretty decent value if you're not using professional workstation software), but that isn't their specific targeted market.



  • schmidtbag
    replied
    Originally posted by L_A_G View Post
    Tesla cards are specifically tailored to sit in cluster machines used remotely via SSH or similar methods, not in something that sits on your desk. Quadro cards have also been sold for that same market (the university I used to study at had a small cluster I've worked with a few times), but in this use they have to some extent been supplanted by the Titan cards, as you have OEMs selling machines with Titans and Xeons (my old boss had a dual Xeon + Titan workstation). Besides, I never claimed that the Quadro line didn't exist.
    The description you gave for Tesla seems to fit the needs of your researcher pretty well... You don't need to SLI a Tesla, so you could use some crappy Intel GPU to operate your display and then use a Tesla as the OpenCL/CUDA workhorse, which would likely return better results than a Titan since the GPU would be left in a pristine environment where it doesn't have other tasks trying to mooch off its resources, such as your display.
    Some Quadros are sold for the same market. Quadros are like Xeons - some are designed specifically for workstations, some are designed for servers, some are designed for mainframes. In nvidia's case, the mainframe Quadros have mostly been replaced by Teslas, since having all those display connectors and SLI bridges are useless expenses.
    Sure, you can use the 1080 Ti in compute tasks and it'll perform well as long as you're only working on data in FP32 format; again, I never said you couldn't.
    Agreed.



  • boltronics
    replied
    Originally posted by davidbepo View Post

    4K is absurd. The human eye can't see pixels at such a high DPI.
    Resolution != DPI. Pixels can be easier or harder to see on a 4k screen depending on the screen size. It also depends how close you sit to the screen, and generally people sit much closer to a computer screen than a TV.

    Also consider that while some people will have difficulty seeing pixels on a largish screen at a 4K resolution, they could probably still see the individual pixels at 2K (assuming a reasonable screen size, good eyesight, etc.). 4K is the next logical step up that avoids that problem. It's not absurd to want that at all.
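    To put numbers on the resolution-vs-DPI point (the 27-inch diagonal here is just an arbitrary example size):

```python
import math

def ppi(width_px, height_px, diagonal_inches):
    """Pixel density from the resolution and the physical diagonal size."""
    return math.hypot(width_px, height_px) / diagonal_inches

# The same hypothetical 27" panel at two resolutions:
ppi_4k = ppi(3840, 2160, 27)     # ~163 PPI
ppi_1440p = ppi(2560, 1440, 27)  # ~109 PPI
```

    Same panel size, very different pixel density, which is exactly why "4K" alone says nothing about whether you can see the pixels.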

