NVIDIA Announces Turing-Based Quadro RTX GPUs As The "World's First Ray-Tracing GPU"


  • #11
    Originally posted by FireBurn View Post
    Any idea why they went for GDDR6 rather than HBM2?
    Cost & performance.
    HBM2 needs to sit on an interposer right next to the GPU die, so the number of stacks you can place there is rather limited. GDDR6 (on a similar bus) is not far behind in performance, but you can fit more chips (= more memory) on a wider bus (= more performance).

    If your target market is fine with 8-16 GB of memory and a 256-bit bus, then HBM2 makes sense (costs ignored); if you want more, you need more chips and/or a wider bus, at which point you run out of space.
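    A rough back-of-the-envelope sketch of that trade-off (the data rates and chip counts below are my own assumptions, not the Quadro RTX specs):

    ```python
    # Peak-bandwidth comparison: a wide GDDR6 bus vs. a couple of HBM2 stacks.
    # All figures are illustrative assumptions, not official specifications.

    def gddr6_bandwidth_gb_s(bus_width_bits: int, gbps_per_pin: float) -> float:
        """Peak bandwidth in GB/s for a GDDR6 memory bus."""
        return bus_width_bits * gbps_per_pin / 8

    def hbm2_bandwidth_gb_s(stacks: int, gbps_per_pin: float = 2.0,
                            bus_bits_per_stack: int = 1024) -> float:
        """Peak bandwidth in GB/s for a given number of HBM2 stacks."""
        return stacks * bus_bits_per_stack * gbps_per_pin / 8

    # 256-bit GDDR6 at 14 Gbps/pin: 8 chips, and the bus can be made wider.
    print(gddr6_bandwidth_gb_s(256, 14.0))   # 448.0 GB/s

    # Two HBM2 stacks at 2 Gbps/pin: faster per stack, but the number of stacks
    # is limited by the interposer area next to the die.
    print(hbm2_bandwidth_gb_s(2))            # 512.0 GB/s
    ```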
    Last edited by discordian; 14 August 2018, 06:48 AM.



    • #12
      Why are gamers commenting on a non-Geforce card? It's an (overpriced) workstation GPU.



      • #13
        Originally posted by Weasel View Post
        Why are gamers commenting on a non-Geforce card? It's an (overpriced) workstation GPU.
        So how do you know it's overpriced? Did you measure its performance in ray tracing and AI in particular?
        Or are you a gamer, and is your meta-post referring to yourself?



        • #14
          Originally posted by discordian View Post
          So how do you know it's overpriced? Did you measure its performance in ray tracing and AI in particular?
          Or are you a gamer, and is your meta-post referring to yourself?
          Because all Quadro cards are overpriced relative to their specs. Just because this is the "first Ray-tracing" GPU doesn't mean it will be any different.

          NVIDIA was annoyed that people were opting for GeForce cards even in workstations, so they added a clause to the driver license saying they may not be used that way (an artificial restriction) to force people to buy Quadro cards. You can piece the rest together yourself.



          • #15
            Originally posted by coder View Post
            Imagination did sell PCIe-based raytracing accelerator cards for pro users. I have no idea how well they sold.

            Before that, there was at least one prior raytracing hardware vendor, with products on the market, but I don't know if that's the same company bought by Imagination.


            MS has already added support for it in DirectX:

            https://blogs.msdn.microsoft.com/dir...tx-raytracing/
            TBH I didn't read the entire blog, but I've seen some news about DX ray tracing. To me, the main question is: is this a vendor-neutral DX API that anyone can support, is it Nvidia technology that only Nvidia cards can use (like PhysX), or is it like GameWorks, which IIRC can run most features anywhere but is of course optimized for Nvidia only?



            • #16
              Originally posted by Weasel View Post
              Because all Quadro cards are overpriced relative to their specs. Just because this is the "first Ray-tracing" GPU doesn't mean it will be any different.

              NVIDIA was annoyed that people were opting for GeForce cards even in workstations, so they added a clause to the driver license saying they may not be used that way (an artificial restriction) to force people to buy Quadro cards. You can piece the rest together yourself.
              It is a mixed calculation that all vendors do. By the same logic, I could argue that GeForce cards are too cheap and that Quadro and Tesla are necessary to make the entire development profitable in the end. The only difference is the expected ROI, and what ROI to expect is highly subjective.



              • #17
                Originally posted by discordian View Post
                So how do you know it's overpriced? Did you measure its performance in ray tracing and AI in particular?
                Or are you a gamer, and is your meta-post referring to yourself?
                Count up the number of transistors or chips on the card and you come to the same conclusion: grossly overpriced. As for gamers, how many would be expected to shell out $10,000 for a GPU card? A few, possibly, but this isn't a gaming card by a long shot.

                AI performance is another interesting thing, but it is a hardware service that doesn't belong on the GPU card. For such hardware to get any long-term traction it needs to be embedded in the CPU as a standard feature. ML/AR/AI acceleration hardware then becomes a feature like the floating-point unit or the vector processor.



                • #18
                  Originally posted by GruenSein View Post
                  True, but that never gained much traction, did it? With Nvidia, one major player is pushing this so it might have more success. Unfortunately, I suspect this will be another vendor-centric technology.
                  NVIDIA is 100% guaranteed to make it a vendor-locked technology.



                  • #19
                    Originally posted by Weasel View Post
                    Because all Quadro cards are overpriced relative to their specs. Just because this is the "first Ray-tracing" GPU doesn't mean it will be any different.

                    NVIDIA was annoyed that people were opting for GeForce cards even in workstations, so they added a clause to the driver license saying they may not be used that way (an artificial restriction) to force people to buy Quadro cards. You can piece the rest together yourself.
                    FYI: it's not "overpricing", it's different cost distribution.

                    The cost of developing a new gen of GPUs is a single large expense, and from that generation they create products for gaming or for professional use.

                    The products for gaming have their cost reduced in exchange for removing features that consumer cards don't need (for example, they are artificially hobbled in floating-point performance).

                    The products for professional use get the full cost of their development added in.

                    (and anyone in the IC design business does the same, for that matter)

                    Similar to tiered software features, with "basic" licenses being cheaper than "pro" ones.
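                    A toy sketch of that mixed calculation, with entirely made-up numbers, just to show how one shared development cost can land very differently on the two tiers:

                    ```python
                    # Hypothetical illustration: one shared R&D bill, recovered unevenly
                    # across a high-volume consumer tier and a low-volume professional tier.
                    # Every number here is invented purely for illustration.

                    rnd_cost = 3_000_000_000      # shared cost of developing the GPU generation
                    consumer_units, pro_units = 10_000_000, 500_000
                    unit_cost = 300               # per-card manufacturing cost (same silicon)

                    rnd_share_pro = 0.6           # recover most of the R&D from the pro tier

                    pro_price = unit_cost + rnd_cost * rnd_share_pro / pro_units
                    consumer_price = unit_cost + rnd_cost * (1 - rnd_share_pro) / consumer_units

                    print(f"consumer: ~${consumer_price:,.0f}, pro: ~${pro_price:,.0f}")
                    # consumer: ~$420, pro: ~$3,900 -- same chip, different cost distribution
                    ```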



                    • #20
                      Originally posted by wizard69 View Post
                      AI performance is another interesting thing, but it is a hardware service that doesn't belong on the GPU card. For such hardware to get any long-term traction it needs to be embedded in the CPU as a standard feature. ML/AR/AI acceleration hardware then becomes a feature like the floating-point unit or the vector processor.
                      FYI, it's actually GPU hardware. A GPU is a massively parallel processor, by its own nature. This is just bigger than most.

                      CPUs with GPUs exist already. BOOM! We are living in the future, kid.


