
NVIDIA Announces Turing-Based Quadro RTX GPUs As The "World's First Ray-Tracing GPU"


  • #21
    Originally posted by starshipeleven View Post
    FYI, it's actually GPU hardware. A GPU is a massively parallel processor, by its own nature. This is just bigger than most.

    CPUs with GPUs exist already. BOOM! We are living in the future, kid.

Err, no. I'm talking about a second specialized processor, or instruction unit if you will, similar to the approach we see on some of the latest ARM processors.



    • #22
      Originally posted by wizard69 View Post
Err, no. I'm talking about a second specialized processor, or instruction unit if you will, similar to the approach we see on some of the latest ARM processors.
      A GPU is a second specialized parallel processor, AIs are software that runs on highly parallel processors. iGPUs are specialized processors embedded in the SoC or die.

      There are neuromorphic processors like TrueNorth https://en.wikipedia.org/wiki/TrueNorth but they are much more specialized and more likely to be the leading processor of the device, while the conventional CPU on their board will be tiny and will be doing "system/thermal management" or other menial jobs.
      Last edited by starshipeleven; 14 August 2018, 11:16 AM.



      • #23
        Originally posted by discordian View Post
        HBM2 needs to sit on top of the GPU,
It doesn't sit on top of the GPU, but rather beside it, on the same substrate (a silicon interposer).

        Originally posted by discordian View Post
        If your target market is fine with 8-16GB memory
        Nvidia sells GP100 (Pascal) with 16 GB and Volta with 12, 16, or 32 GB.

        Originally posted by discordian View Post
        a 256bit bus then HBM2 makes sense (costs ignored), if you want more you need more chips and/or a wider bus at which point you run out of space.
Here, you must be confusing it with GDDR. HBM/HBM2 typically uses data-bus widths of 1024 to 4096 bits, which would be impractical for off-package memory like GDDR.
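As a rough illustration of why the two memory types use such different bus widths, here's some back-of-the-envelope bandwidth arithmetic (the data rates below are typical figures for HBM2 and GDDR6 of that era, not numbers from this thread):

```python
def bandwidth_gbps(bus_width_bits, data_rate_gtps):
    """Peak memory bandwidth in GB/s: bus width in bits, times
    per-pin data rate in GT/s, divided by 8 bits per byte."""
    return bus_width_bits * data_rate_gtps / 8

# HBM2: very wide on-package bus, modest per-pin rate.
hbm2 = bandwidth_gbps(4096, 2.0)    # 1024.0 GB/s
# GDDR6: narrow off-package bus, much higher per-pin rate.
gddr6 = bandwidth_gbps(256, 14.0)   # 448.0 GB/s
```

The wide bus is only feasible because the HBM stacks sit on the same interposer as the GPU; routing thousands of traces to off-package chips on a PCB is what makes that impractical for GDDR.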



        • #24
          Originally posted by Weasel View Post
          Why are gamers commenting on a non-Geforce card? It's an (overpriced) workstation GPU.
Because Nvidia hasn't yet announced a new gaming card. If they had announced the GTX version first, gamers would be ignoring this (as usual for Quadros).

          But gamers can smell the GTX announcement coming and are hungry for any news, clues, rumors, or hints.



          • #25
            Originally posted by Weasel View Post
            Because all Quadro cards are overpriced relative to their specs. Just because this is the "first Ray-tracing" GPU doesn't mean it will be any different.

            NVIDIA were mad about the fact people were opting for GeForce cards even on workstations so they made a clause in the driver license that it's not allowed to use them that way (artificial restriction) to force them to buy Quadro cards. You can piece the rest of the stuff together yourself.
            Depends on what you do with them. NVIDIA typically nerfs consumer video cards on FP16 and FP64 computation. The AI folks use FP16 a lot and FP64 is used by people who need high precision like finite-element simulation software or other high-end applications.



            • #26
              Originally posted by vasc View Post
              Depends on what you do with them. NVIDIA typically nerfs consumer video cards on FP16 and FP64 computation.
              Quadros are no different.

              AFAIK, all consumer-level Pascal-generation chips shipped without nerfing anything - it's just that the only chip that physically had the packed fp16 support and denser fp64 units was the GP100, which never shipped in a consumer SKU.

With Quadro, you're just paying for certification with professional applications and certain proprietary driver optimizations. And the lower-end versions use only a single slot. That's it. They really are a monumental rip-off, traditionally.

              Oh, and they run at lower clock speeds, to compensate for the inferior, single-slot coolers. So, that's another "benefit" you're paying for.

That said, the $9000 Quadro GV100 does offer several advantages over the $3000 Titan V:
              • ECC memory (but the rest of the Quadros don't)
              • 32 GB instead of 12
              • 33% faster memory, due to the 4th stack being enabled
              • NVLink connection to one other card
              But the Quadro GP100 and GV100 should probably be thought of as special cases. Regarding the GP100, there never was a lower-priced "Titan" version, as the Titan Xp used the GP102 GPU from the GTX 1080 Ti.
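The "33% faster" figure follows directly from the stack counts, assuming peak bandwidth scales linearly with the number of enabled HBM2 stacks:

```python
# Titan V ships with 3 HBM2 stacks enabled; the Quadro GV100
# enables all 4, so peak memory bandwidth scales by 4/3.
titan_v_stacks = 3
gv100_stacks = 4
speedup = gv100_stacks / titan_v_stacks
percent_faster = round((speedup - 1) * 100)  # 33
```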
              Last edited by coder; 14 August 2018, 11:07 PM.



              • #27
                Originally posted by FireBurn View Post
                Any idea why they went for GDDR6 rather than HBM2?
I'd wager it's because SK Hynix is part of the Apple-led consortium of patent pooling that acquired Toshiba's NAND portfolio, with Apple ponying up $10 billion of the $18 billion alone. It seems a safe bet that Apple wants partnerships in place to have first dibs on its partners' offerings.



                • #28
                  Originally posted by starshipeleven View Post
                  A GPU is a second specialized parallel processor, AIs are software that runs on highly parallel processors. iGPUs are specialized processors embedded in the SoC or die.

                  There are neuromorphic processors like TrueNorth https://en.wikipedia.org/wiki/TrueNorth but they are much more specialized and more likely to be the leading processor of the device, while the conventional CPU on their board will be tiny and will be doing "system/thermal management" or other menial jobs.
This sounds like an interesting discussion. wizard69, do you mind giving a follow-up here or, perhaps, conceding some of the points made by starshipeleven?

                  Thank you, both.



                  • #29
                    Originally posted by Marc Driftmeyer View Post

I'd wager it's because SK Hynix is part of the Apple-led consortium of patent pooling that acquired Toshiba's NAND portfolio, with Apple ponying up $10 billion of the $18 billion alone. It seems a safe bet that Apple wants partnerships in place to have first dibs on its partners' offerings.
                    Sounds intriguing, albeit incomplete. Do you have any sources for your hypothesis?



                    • #30
                      Originally posted by azdaha View Post
This sounds like an interesting discussion. wizard69, do you mind giving a follow-up here or, perhaps, conceding some of the points made by starshipeleven?

                      Thank you, both.
                      While I can't claim he has conceded the point, I can explain my point in a bit less tongue-in-cheek way.

                      The "second specialized processor or instruction unit" he wants to integrate in CPUs for running AI programs is imho going to be the integrated graphics, the iGPU, because of the reasons I stated above. A GPU is a generalist massively multithreaded coprocessor, by design it's generic enough to be a decent target for consumer-grade AI programs.

Dedicated AI hardware can be much better, but it requires software written for its specific architecture, so it's not that great for general computing in the consumer space, where you can't afford to write software that runs on only SOME of the hardware your customers may have, nor is the hardware itself particularly affordable. It's good for supercomputers/mainframes, or for embedded devices, where the development cost of the software is less relevant to the job or is paid for by hardware sales of the embedded device.

AMD especially is pushing in this direction with HSA, by having the CPU and GPU (or any other parallel coprocessor; it's not just an AMD thing, although while other companies signed on, NVIDIA and Intel of course did not, so it didn't gain much steam for PC usage afaik) actually share the same RAM (whereas other integrated graphics use a separate "partition" of system RAM). That way they only pass pointers when a process moves between CPU and GPU, and don't need to copy over all the data.
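To make the pointer-passing point concrete, here's a toy sketch (not a real GPU API; the "device" is just a Python object) contrasting the copy model with an HSA-style shared-address-space model:

```python
class ToyGPU:
    """Stand-in for a GPU; counts host<->device transfers."""
    def __init__(self):
        self.copies = 0

    def run_kernel(self, buf):
        # Stand-in "kernel": double every element in place.
        for i in range(len(buf)):
            buf[i] *= 2

def dispatch_with_copy(gpu, data):
    """Copy model: data is duplicated into a separate device buffer."""
    device_buf = list(data)   # copy host -> device
    gpu.copies += 1
    gpu.run_kernel(device_buf)
    gpu.copies += 1           # copy device -> host
    return device_buf

def dispatch_shared(gpu, data):
    """HSA-style model: CPU and GPU share one address space, so only
    a reference (a pointer, on real hardware) is handed over."""
    gpu.run_kernel(data)      # operates on the very same memory
    return data
```

In the shared version the transfer count stays at zero and the result is the same object the CPU side already holds, which is the whole appeal for large working sets.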

