
GeForce GTX 1080 Ti Announced: 3584 CUDA Cores, 11 GB vRAM, 11 Gbps


  • #31
    Originally posted by Evil Penguin View Post
    700 bucks for a further crippled GP102? Yeah, no!
    It's 700 bucks for the most efficient, best-priced high-end model ever. How can this be a bad thing?

    Talking of crippled, how about the 120W 1280-core 6 GB GPU beating the 150W 2304-core 8 GB GPU called RX 480. People need to stop worrying about the "specs" and start caring about real performance.

    Originally posted by davidbepo View Post
    it consumes less than an R9 390X or R9 Fury X
    although it's still a lot, and it has more power than necessary for anything (except OpenCL/CUDA things)
    For anyone running the latest games at 4K wanting a stable 60 FPS, or at 1440p wanting >120 FPS, there is no current GPU fast enough for the games we have now. As a matter of fact, the demand for higher resolutions and frame rates is the reason why Pascal has been the best seller ever.

    Originally posted by drohm View Post
    $700 is ridiculous though, no question they are gouging. Most mobo/cpu combos aren't that expensive, think about it. Vega can't come soon enough. I plan on building a new rig this summer, all AMD, can't wait.
    Vega 10 will compete with GP104, not GP102. Vega 10 with HBM2 is a big expensive chip, so don't expect AMD to be pushing the prices a lot lower.

    Originally posted by schmidtbag View Post
    Actually, it's the Quadro and Tesla cards that are meant for heavy computing. Titans are capable of it, but they're marketed as gaming cards. When writing your own software, a Titan may be a better choice, but if you're using any real-world professional applications, it's likely a poor choice. Nvidia doesn't have the same driver optimizations for Titans as they do for Quadros.

    The 1080Ti definitely is not intended for OpenCL/CUDA (though again, they're capable of it).
    Titan cards have never been consumer/gaming cards; that's why Nvidia removed the "GeForce" branding from it, to avoid more confusion. It's of course excellent for gaming, but it's targeted at (semi-)professionals doing CUDA development, "AI" research, game development, etc. The demand for the Titan X (Pascal) has been enormous, particularly due to the new fp16 support; it's been so great that the product has been sold out for long periods of time.
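    To illustrate what the fp16 trade-off actually costs in precision, here's a quick NumPy sketch (NumPy's float16 is the same IEEE half-precision format the GPUs implement):

    ```python
    import numpy as np

    # IEEE half precision (fp16) has a 10-bit significand, so above 2048
    # the gap between representable values grows to 2 -- 2049 silently
    # rounds to 2048. Single precision (fp32) is still exact here.
    print(np.float16(2049) == np.float16(2048))   # True
    print(np.float32(2049) == np.float32(2048))   # False

    # Relative precision: fp16 keeps ~3 decimal digits, fp32 ~7.
    print(np.finfo(np.float16).eps)   # 0.000977
    print(np.finfo(np.float32).eps)   # 1.1920929e-07
    ```

    That's why fp16 is attractive for neural-network work (half the memory and bandwidth per value) but a poor fit for simulations that actually need fp32 or fp64 accuracy.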


    Comment


    • #32
      Originally posted by schmidtbag View Post
      Actually, it's the Quadro and Tesla cards that are meant for heavy computing. Titans are capable of it, but they're marketed as gaming cards. When writing your own software, a Titan may be a better choice, but if you're using any real-world professional applications, it's likely a poor choice. Nvidia doesn't have the same driver optimizations for Titans as they do for Quadros.

      The 1080Ti definitely is not intended for OpenCL/CUDA (though again, they're capable of it).
      Tesla cards are specifically tailored to sit in cluster machines used remotely via SSH or similar methods, not in something that sits on your desk. Quadro cards have also been sold for that same market (the university I used to study at had a small cluster I've worked with a few times), but in this use they have to some extent been supplanted by the Titan cards, as you have OEMs selling machines with Titans and Xeons (my old boss had a dual-Xeon + Titan workstation). Besides, I never claimed that the Quadro line didn't exist.

      Sure, you can use the 1080 Ti for compute tasks and it'll perform well as long as you're only working on data in FP32 format; again, I never said you couldn't.

      Originally posted by efikkan View Post
      Talking of crippled, how about the 120W 1280-core 6 GB GPU beating the 150W 2304-core 8 GB GPU called RX 480. People need to stop worrying about the "specs" and start caring about real performance.
      Talk about being defensive... The RX 480 was always intended as a mid- to lower-mid-range GPU, and the 6 GB GTX 1060 is a more expensive card, so it's to be expected to perform slightly better. The difference between the two really isn't anything earth-shattering on Windows, where drivers are much more even.
      Last edited by L_A_G; 01 March 2017, 05:36 PM.

      Comment


      • #33
        Originally posted by efikkan View Post
        Vega 10 will compete with GP104, not GP102.
        Source?

        Genuinely curious, I haven't looked at any Vega leaks yet.

        Comment


        • #34
          Most high-end video cards sell for around $1000 or more here in Australia; we are used to that premium, and this time around it's kinda needed for 4K, which is what I run at. But I will likely wait until Vega to see if they have competitive pricing.

          I know many people assume people in Australia are rich, but this really isn't the truth. Of all the people I have met here, NONE earned over 40k a year. So yes, there is a select minority of people in Australia that bolsters our yearly earnings average, but overall we are not wealthy! So paying $1000+ for a video card REALLY hurts!

          The radio silence on the actual performance of the Vega cards is a little concerning. Are they rushing in last-minute tweaks to better match the 1080 cards? I suspect so.
          Last edited by theriddick; 01 March 2017, 09:40 PM.

          Comment


          • #35
            Originally posted by theriddick View Post
            The radio silence on the actual performance of the VEGA cards is a little concerning, are they rushing in last minute tweaks to better match 1080 cards? I suspect so.
            I think they're just trying to give Ryzen its time in the spotlight right now. If we still haven't heard anything by the end of March, then I'll be concerned.

            Comment


            • #36
              Originally posted by efikkan View Post

              For anyone running the latest games at 4K wanting a stable 60 Hz or wanting 1440p at >120 Hz, there is no current GPU fast enough for the games we have now. As a matter of fact, the demand for higher resolutions and frame rates is the reason why Pascal has been the best seller ever.
              4K is absurd. The human eye can't see pixels at such a high DPI.

              Comment


              • #37
                Originally posted by davidbepo View Post

                4K is absurd. The human eye can't see pixels at such a high DPI.
                Resolution != DPI. Pixels can be easier or harder to see on a 4K screen depending on the screen size. It also depends on how close you sit to the screen, and generally people sit much closer to a computer screen than to a TV.

                Also consider that while some people will have difficulty seeing pixels at a largish screen size at 4K resolution, they probably could still see the individual pixels at 2K (assuming a reasonable screen size, good eyesight, etc.). 4K is the next logical step up that avoids that problem. It's not absurd to want that at all.
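                To put rough numbers on that, here's an illustrative sketch (the 27-inch panel size and ~24-inch desk viewing distance are just example figures):

                ```python
                import math

                def ppi(width_px, height_px, diagonal_in):
                    """Pixel density from resolution and diagonal screen size."""
                    return math.hypot(width_px, height_px) / diagonal_in

                def pixels_per_degree(density, distance_in):
                    """How many pixels fit in one degree of visual angle."""
                    inches_per_degree = 2 * distance_in * math.tan(math.radians(0.5))
                    return density * inches_per_degree

                # 27-inch monitor viewed from 24 inches
                for name, w, h in [("1440p", 2560, 1440), ("4K", 3840, 2160)]:
                    d = ppi(w, h, 27)
                    print(f"{name}: {d:.0f} PPI, {pixels_per_degree(d, 24):.0f} px/deg")
                # -> 1440p: 109 PPI, 46 px/deg
                # -> 4K: 163 PPI, 68 px/deg
                ```

                20/20 acuity resolves on the order of 60 pixels per degree, so at desk distances a 27-inch 4K panel sits right around the limit of what the eye can distinguish, not far past it.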

                Comment


                • #38
                  Originally posted by L_A_G View Post
                  Tesla cards are specifically tailored to sit in cluster machines used remotely via SSH or similar methods and in something that sits on your desk. Quadro cards have also been sold for that same market (the university I used to study at had a small cluster I've worked with a few times), but in this use they have been to some extent supplanted by the Titan cards as you have OEM selling machines with Titans and Xeons (my old boss had a dual Xeon + Titan workstation). Besides, I never claimed that the Quadro line didn't exist.
                  The description you gave for Tesla seems to fit the needs of your researcher pretty well... You don't need to SLI a Tesla, so you could use some crappy Intel GPU to operate your display and then use a Tesla as the OpenCL/CUDA workhorse, which would likely return better results than a Titan since the GPU would be left in a pristine environment where it doesn't have other tasks trying to mooch off its resources, such as your display.
                  Some Quadros are sold for the same market. Quadros are like Xeons - some are designed specifically for workstations, some are designed for servers, some are designed for mainframes. In nvidia's case, the mainframe Quadros have mostly been replaced by Teslas, since having all those display connectors and SLI bridges are useless expenses.
                  Sure you can use the 1080 Ti in compute tasks and they'll perform well as long as your're only working on data in FP32 format, again I never said you couldn't.
                  Agreed.

                  Comment


                  • #39
                    Originally posted by efikkan View Post
                    Titan cards have never been consumer/gaming cards; that's why Nvidia removed the "GeForce" branding from it, to avoid more confusion. It's of course excellent for gaming, but it's targeted at (semi-)professionals doing CUDA development, "AI" research, game development, etc. The demand for the Titan X (Pascal) has been enormous, particularly due to the new fp16 support; it's been so great that the product has been sold out for long periods of time.
                    Nvidia disagrees:

                    Notice the giant header saying "latest gaming technologies"; it doesn't really say a whole lot about workstations, servers, or OpenCL, and only mentions the number of CUDA cores (which they do for all of their products, including low-end models).
                    Other sites also only compare the Titans to gaming GPUs:
                    We've run hundreds of GPU benchmarks on Nvidia, AMD, and Intel graphics cards and ranked them in our comprehensive hierarchy, with over 80 GPUs tested.


                    I'm not saying the Titans can't be used for research or are bad at it (I understand they're a pretty decent value if you're not using professional workstation software), but that isn't their specific targeted market.

                    Comment


                    • #40
                      Originally posted by schmidtbag View Post
                      The description you gave for Tesla seems to fit the needs of your researcher pretty well... You don't need to SLI a Tesla, so you could use some crappy Intel GPU to operate your display and then use a Tesla as the OpenCL/CUDA workhorse, which would likely return better results than a Titan since the GPU would be left in a pristine environment where it doesn't have other tasks trying to mooch off its resources, such as your display.
                      Some Quadros are sold for the same market. Quadros are like Xeons - some are designed specifically for workstations, some are designed for servers, some are designed for mainframes. In nvidia's case, the mainframe Quadros have mostly been replaced by Teslas, since having all those display connectors and SLI bridges are useless expenses.

                      Agreed.
                      The cluster machine with the Quadros is owned by the university I used to study at, while the researcher works for a local government research and regulatory organization. He specifically wants a desktop so that he can do these computations with very short turnaround times, as he's making loads of iterative changes to the environmental factors he's using in his computation (i.e. loads of hand-crafted "what if" scenarios). Mind you, he's not a computing researcher, he's an agricultural researcher. On top of that, the Quadros are a bit old by now; I think Nvidia hadn't even introduced the Tesla cards when they got them.

                      Comment
