
GeForce GTX 1080 Ti Announced: 3584 CUDA Cores, 11 GB vRAM, 11 Gbps


  • #41
    Originally posted by L_A_G View Post
    ... specifically wants a desktop so that he can do these computations with very short turnaround times, as he's making loads of iterative changes to the environmental factors he's using in his computation (i.e. loads of hand-crafted "what if" scenarios).
    I don't see how that changes anything. A Quadro or Tesla (with an Intel IGP) would still get the job done.
    On top of that, the Quadros are a bit old by now, and I think when they got them Nvidia hadn't even introduced the Tesla cards.
    Uh... There are Pascal based Quadros, so there certainly are new ones:
    http://www.nvidia.com/object/quadro-...th-pascal.html
    Teslas have been around since 2007. The first Titan (to my knowledge) was 2013.



    • #42
      Originally posted by smitty3268 View Post

      Source?

      Genuinely curious, I haven't looked at any Vega leaks yet.
      AMD has demonstrated Vega slightly outperforming the GTX 1080 in an AMD-favoring game, and AMD is managing expectations for Vega instead of driving the hype to extremes.
      Think about it, too: if GP106 is ~55%, GP104 ~80%, and GP102 ~85% more efficient than Polaris, there is no way AMD is going to double efficiency overnight to compete with GP102.



      • #43
        Originally posted by schmidtbag View Post
        I don't see how that changes anything. A Quadro or Tesla (with an Intel IGP) would still get the job done.
        Never said it wouldn't get the job done, so I get the feeling you've reverted to arguing for the sake of arguing rather than actually trying to make a point. In the researcher's case, he was tied to whatever the company to which their IT purchases had been contracted out (Fujitsu) could provide, rather than being able to go out and buy whatever he wanted.

        Uh... There are Pascal based Quadros, so there certainly are new ones:
        http://www.nvidia.com/object/quadro-...th-pascal.html
        Teslas have been around since 2007. The first Titan (to my knowledge) was 2013.
        IIRC the Quadros were bought and installed in 2011 (or 2010, I'm not 100% sure), and thus they're unfortunately not exactly spring chickens anymore. I personally haven't needed to use them since 2015, so they may have been replaced since then, as there have been a whole lot of hardware upgrades to the university's cluster machines (though I think those have focused on CPU and APU machines).
        "Why should I want to make anything up? Life's bad enough as it is without wanting to invent any more of it."



        • #44
          Originally posted by Zan Lynx View Post
          Heh. Well, I am planning to get a 1080 Ti as soon as I can. I've been waiting for it so I can replace my pair of 980s. They can't quite drive a 4K display at 60 FPS in all the games, although they can do it for some. Not enough video RAM. Their 4 GB each is too little.

          My little brother is probably getting a Ryzen and a 980 GPU for a birthday present this year. The other 980 will make a good display card for my NAS server upgrade later. Ryzen if it does ECC, Xeon if not.
          A GTX 980 in a NAS makes no sense unless it's not actually a pure NAS but serves multiple roles.
          No serious NAS build I have seen has ever had a 3D card in it.

          Just finished my own, and a GPU wasn't even on the list of what to buy; NAS motherboards all have built-in graphics.



          • #45
            It's a free Nvidia 980. What am I going to do with it, eBay it? I'd rather throw it away.

            None of the Xeons I was considering for the server job had built-in graphics, and I didn't want IPMI, just a workstation board with ECC support. Most of those boards expect a Quadro or FirePro to be installed and don't have any onboard graphics.



            • #46
              Some people in this thread really need to check out proper coverage of Nvidia's announcement. There are lots of details regarding GPU compute and the so-called "crippling" of the GTX 1080 Ti's memory subsystem:
              If you check the table, you will note that:
              • GTX 1080 Ti has more memory bandwidth than Pascal Titan X and slightly more FP32 TFLOPS
              • Pascal Titan X has crippled FP64 support (as does every Titan after the original)
              • All consumer Pascal GPUs, including Titan X, have crippled FP16 performance
              • GTX 1080 Ti shares Pascal Titan X's fast INT8 performance (about the only advantage over the P100)
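The FP32 and INT8 figures in that table follow directly from core count and clock. A quick sanity check (the reference boost clocks used here are my assumption, not taken from the post):

```python
# Theoretical peak: each CUDA core executes 2 FLOPs (one FMA) per clock.
def fp32_tflops(cores, boost_mhz):
    return 2 * cores * boost_mhz * 1e6 / 1e12

gtx_1080_ti = fp32_tflops(3584, 1582)     # ~11.3 TFLOPS
titan_x_pascal = fp32_tflops(3584, 1531)  # ~11.0 TFLOPS

# The 4:1 INT8 ratio (via the DP4A instruction) quadruples that figure:
int8_tops = 4 * gtx_1080_ti               # ~45 TOPS
```

This matches the claim above that the GTX 1080 Ti has slightly more FP32 throughput than the Pascal Titan X despite sharing its core count: the difference is entirely the higher boost clock.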

              They go on to say:
              Speaking of the Titan, on an interesting side note, it doesn’t look like NVIDIA is going to be doing anything to hurt the compute performance of the GTX 1080 Ti to differentiate the card from the Titan, which has proven popular with GPU compute customers. Crucially, this means that the GTX 1080 Ti gets the same 4:1 INT8 performance ratio of the Titan, which is critical to the cards’ high neural networking inference performance. As a result the GTX 1080 Ti actually has slightly greater compute performance (on paper) than the Titan. And NVIDIA has been surprisingly candid in admitting that unless compute customers need the last 1GB of VRAM offered by the Titan, they’re likely going to buy the GTX 1080 Ti instead.
              Now, considering that it's comparable or better in every way except having only 11/12ths as much memory, the card's list price of $700 is a steal compared with the Titan X's $1200 list. Also, consider that the launch price of the GTX 1080 FE cards was $700. So, getting all the added performance for the same price as the original GTX 1080 is really quite surprising. Someone is definitely worried about AMD's Vega.

              Yes, $700 is a lot of money for a graphics card. However, it's one of the best deals out there, in terms of GFLOPS/W and GFLOPS/$.
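A back-of-the-envelope version of that GFLOPS/W and GFLOPS/$ claim. The 250 W and $700/$1200 figures come from the post above; the TFLOPS values are theoretical FP32 peaks, so treat the results as paper numbers only:

```python
tflops = 11.34           # GTX 1080 Ti theoretical FP32 peak
board_power_w = 250      # reference-card TDP
price_usd = 700          # launch list price

gflops_per_watt = tflops * 1000 / board_power_w   # ~45 GFLOPS/W
gflops_per_dollar = tflops * 1000 / price_usd     # ~16 GFLOPS/$

# Titan X (Pascal) at $1200 list with ~11.0 TFLOPS, for comparison:
titan_gflops_per_dollar = 11.0 * 1000 / 1200      # ~9 GFLOPS/$
```

On these assumptions the GTX 1080 Ti delivers roughly 75% more compute per dollar than the Titan X, which is the substance of the "steal" argument.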

              Here's some more education, for you guys:
              One thing you can see is that all their top-end GPUs have been 250 W for a while. This is nearing the limits of the PCIe spec (yes, that's including the auxiliary power connectors), so it's a pretty firm ceiling. Some of the factory-overclocked cards will push this to 275 W or even 300 W.
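That ceiling comes from adding up the PCIe power delivery budgets. The per-connector limits below are from the PCIe CEM specification, not from the post: 75 W from the slot, 75 W per 6-pin, and 150 W per 8-pin connector.

```python
SLOT_W, SIX_PIN_W, EIGHT_PIN_W = 75, 75, 150

# Reference GTX 1080 Ti power config: slot + one 6-pin + one 8-pin
reference_limit_w = SLOT_W + SIX_PIN_W + EIGHT_PIN_W  # 300 W in-spec total
headroom_w = reference_limit_w - 250                  # 50 W left at stock TDP

# A 275-300 W factory overclock eats that entire budget, which is why
# such cards often replace the 6-pin with a second 8-pin connector:
dual_8pin_limit_w = SLOT_W + 2 * EIGHT_PIN_W          # 375 W
```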
              Last edited by coder; 03-05-2017, 09:38 AM.
