NVIDIA Announces Turing-Based Quadro RTX GPUs As The "World's First Ray-Tracing GPU"
Originally posted by wizard69: Err no, I'm talking about a second specialized processor, or instruction unit if you will, similar to the approach we see on some of the latest ARM processors.
There are neuromorphic processors like TrueNorth (https://en.wikipedia.org/wiki/TrueNorth), but they are much more specialized and more likely to be the leading processor of the device, while the conventional CPU on their board will be tiny and will handle "system/thermal management" or other menial jobs.
Last edited by starshipeleven; 14 August 2018, 11:16 AM.
Originally posted by discordian: HBM2 needs to sit on top of the GPU. If your target market is fine with 8-16 GB of memory on a 256-bit bus, then HBM2 makes sense (costs ignored). If you want more, you need more chips and/or a wider bus, at which point you run out of space.
Originally posted by Weasel: Why are gamers commenting on a non-GeForce card? It's an (overpriced) workstation GPU.
But gamers can smell the GTX announcement coming and are hungry for any news, clues, rumors, or hints.
Originally posted by Weasel: Because all Quadro cards are overpriced relative to their specs. Just because this is the "first ray-tracing" GPU doesn't mean it will be any different.
NVIDIA were mad that people were opting for GeForce cards even in workstations, so they added a clause to the driver license forbidding that use (an artificial restriction) to force them to buy Quadro cards instead. You can piece the rest together yourself.
Originally posted by vasc: Depends on what you do with them. NVIDIA typically nerfs consumer video cards on FP16 and FP64 computation.
AFAIK, all consumer-level Pascal-generation chips shipped without anything nerfed; it's just that the only chip that physically had packed FP16 support and denser FP64 units was GP100, which never shipped in a consumer SKU.
With Quadro, you're just paying for certification with professional applications and certain proprietary driver optimizations. And the lower-end versions use only a single slot. That's it. Traditionally, they really are a monumental rip-off.
Oh, and they run at lower clock speeds to compensate for the inferior single-slot coolers. So that's another "benefit" you're paying for.
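The FP64 point above is easy to quantify: peak FP64 throughput is just FP32 throughput scaled by the chip's FP64:FP32 rate ratio. A sketch with illustrative figures (the TFLOPS numbers and ratios here are assumptions for the sake of the arithmetic, not quoted specs):

```python
def fp64_tflops(fp32_tflops: float, fp64_ratio: float) -> float:
    """Peak FP64 throughput given FP32 throughput and the FP64:FP32 rate ratio."""
    return fp32_tflops * fp64_ratio

# Assumed figures: a consumer Pascal part at ~11 FP32 TFLOPS executes FP64
# at 1/32 rate, while GP100's denser FP64 units run at 1/2 rate.
consumer_fp64 = fp64_tflops(11.0, 1 / 32)  # ~0.34 TFLOPS
gp100_fp64 = fp64_tflops(10.6, 1 / 2)      # ~5.3 TFLOPS
```

The order-of-magnitude gap is why the distinction is about which units the silicon physically has, not a driver-level cut.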
That said, the $9000 Quadro GV100 does offer several advantages over the $3000 Titan V:
- ECC memory (the rest of the Quadros lack it)
- 32 GB of memory instead of 12 GB
- 33% faster memory, due to the 4th stack being enabled
- an NVLink connection to one other card
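The "33% faster" figure follows directly from the stack count. A quick sketch (the three-vs-four-stack configurations are the widely reported ones, stated here as assumptions):

```python
# Each HBM2 stack contributes a 1024-bit slice of the memory interface,
# so going from 3 enabled stacks to 4 widens the bus from 3072 to 4096
# bits at the same memory clock.
bus_titan_v = 3 * 1024   # assumed Titan V config: one stack disabled
bus_gv100 = 4 * 1024     # assumed Quadro GV100 config: all four enabled
speedup = bus_gv100 / bus_titan_v  # 1.333..., i.e. the "33% faster" figure
```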
Last edited by coder; 14 August 2018, 11:07 PM.
Originally posted by FireBurn: Any idea why they went for GDDR6 rather than HBM2?
Originally posted by starshipeleven: A GPU is a second specialized parallel processor; AIs are software that runs on highly parallel processors. iGPUs are specialized processors embedded in the SoC or die.
There are neuromorphic processors like TrueNorth (https://en.wikipedia.org/wiki/TrueNorth), but they are much more specialized and more likely to be the leading processor of the device, while the conventional CPU on their board will be tiny and will handle "system/thermal management" or other menial jobs.
Thank you, both.
Originally posted by Marc Driftmeyer: Given that SK Hynix is part of the Apple consortium that pooled patents to acquire Toshiba's NAND portfolio, with Apple alone ponying up $10 billion of the $18 billion, I'd wager it's a safe bet that Apple wants partnerships in place to have first dibs on its partners' offerings.
Originally posted by azdaha: This sounds like an interesting discussion. wizard69, do you mind giving a follow-up here or, perhaps, conceding some of the points made by starshipeleven?
Thank you, both.
The "second specialized processor or instruction unit" he wants to integrate into CPUs for running AI programs is, IMHO, going to be the integrated graphics, the iGPU, for the reasons I stated above. A GPU is a generalist, massively multithreaded coprocessor; by design it's generic enough to be a decent target for consumer-grade AI programs.
Dedicated AI hardware can be much better, but it requires software written for its specific architecture, so it's not that great for general computing in the consumer space, where you can't afford to write software that runs on only SOME of the hardware your customers may have, and where the hardware itself doesn't come at a particularly affordable price. It's good for supercomputers/mainframes, or for embedded devices where the development cost of the software matters less for the job or is paid for by hardware sales of the device.
AMD especially is pushing in this direction with HSA, by having the CPU and GPU (or any other parallel coprocessor; it's not just an AMD thing, although beyond the companies that showed up, NVIDIA and Intel of course did not, so it didn't catch much steam for PC usage, AFAIK) actually share the same RAM (whereas other integrated graphics use a separate "partition" of system RAM). That way, when a process switches between CPU and GPU, they only pass pointers over and don't need to copy all the data.
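The "pass a pointer instead of copying the data" idea can be illustrated with an everyday analogue: shared memory between processes, where only a small handle crosses the boundary and both sides read the same physical bytes. A minimal sketch in Python's standard library (an analogy for HSA-style shared virtual memory, not an HSA API):

```python
from multiprocessing import shared_memory

# "Producer" (think: CPU side) places data in shared memory once.
shm = shared_memory.SharedMemory(create=True, size=4)
shm.buf[:4] = bytes([1, 2, 3, 4])

# Only this small name string is handed over -- the pointer-passing step.
# Imagine gigabytes of textures instead of 4 bytes.
handle = shm.name

# "Consumer" (think: GPU side) attaches to the same physical memory by
# name and reads it without any copy of the payload.
view = shared_memory.SharedMemory(name=handle)
result = bytes(view.buf[:4])

view.close()
shm.close()
shm.unlink()
```

The design win is the same in both cases: the cost of handing work to the coprocessor stops scaling with the size of the data.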