Think Silicon Shows Off First RISC-V 3D GPU
Originally posted by coder:
Here's where I think the Pi is really being held back by Broadcom. If Broadcom didn't have their VideoCore IP, which they seem to keep trying to push, then I'll bet the R.Pi Foundation could swing a good deal on a much better-performing Mali.
Originally posted by Developer12:
Honestly, none of that is all that special, particularly per-core RAM.
"So.....they glued a bunch of small RISC-V CPU cores together and called it a GPU? Yeah, intel tried that one too."
Intel's Larrabee didn't have local, directly-addressable SRAM. And the x86 ISA has more baggage than RISC-V, which means it doesn't scale down as well.
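To make that first point concrete (a minimal sketch, with a made-up base address and size, not anything from Think Silicon's documentation): local, directly-addressable SRAM is simply a region of the address space that a core can load and store into with fixed latency, with no tags or miss handling in the way, unlike a cache hierarchy.

```c
/* Sketch of a per-core scratchpad SRAM exposed as a memory-mapped region.
 * SCRATCHPAD_BASE and SCRATCHPAD_SIZE are hypothetical values for
 * illustration; a real part documents these in its memory map. */
#include <stddef.h>
#include <stdint.h>

#define SCRATCHPAD_BASE 0x10000000u     /* hypothetical local-SRAM base */
#define SCRATCHPAD_SIZE (16u * 1024u)   /* hypothetical 16 KiB per core */

static volatile uint32_t *const scratchpad =
    (volatile uint32_t *)SCRATCHPAD_BASE;

/* Stage a tile of working data into local SRAM: every access hits on-die
 * memory deterministically -- no cache tags, no miss penalty. */
static void stage_tile(const uint32_t *src, size_t words)
{
    if (words > SCRATCHPAD_SIZE / sizeof(uint32_t))
        words = SCRATCHPAD_SIZE / sizeof(uint32_t);
    for (size_t i = 0; i < words; i++)
        scratchpad[i] = src[i];
}
```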
The fact of the matter is that, in the key areas, this thing has a lot more similarities with modern GPUs than differences. This looks a lot closer to the mark than Intel ever got. No, it's not going to take over the world, but that's not the point.
So, internet tough guy, if you're so convinced it's rubbish, state your case. So far, all you've done is attack it as a Larrabee-derivative, which is neither very accurate nor very informative. Tell us what's bad about it, why it's bad, and how much each negative will contribute to its overall deficit. Bonus points for citing any disadvantages I haven't already listed.
Originally posted by coder:
The fact of the matter is that, in the key areas, this thing has a lot more similarities with modern GPUs than differences. ...
Calling it in any way comparable to the microarchitecture of a conventional (e.g. Nvidia/Intel/AMD/PowerVR/Apple/Mali/Adreno) GPU is a gross mischaracterization. It probably won't perform nearly as well.
Originally posted by Developer12:
It's an unoriginal and low-effort idea.
Originally posted by Developer12:
Calling it in any way comparable to the microarchitecture of a conventional (e.g. Nvidia/Intel/AMD/PowerVR/Apple/Mali/Adreno) GPU is a gross mischaracterization. It probably won't perform nearly as well.
We agree that being tied to the RISC-V ISA puts them at a nonzero disadvantage, but it can also act as a selling point. I know this sounds like a Larrabee play, but this isn't Larrabee and Think Silicon doesn't have the same ambitions for it as Intel had for their effort. The question that needs to be considered is whether they realistically could've had a competitive offering by directly trying to beat ARM, PowerVR, and others at their own game.
Something else that's interesting to ponder is to what extent the cores in this cluster could act as the "little" cores in a big.LITTLE-style RISC-V SoC. That approach could lead to an interesting place down the road.
Originally posted by coder:
Good question.
Modern GPUs all combine in-order cores with wide SIMD and heavy SMT. At some superficial level, it seems there's no reason you couldn't. However, a closer look shows a few more distinguishing characteristics:
...
In summary, I think GPUs using a standard CPU ISA will never take the crown in perf/area or perf/W. However, it's certainly possible to be well within the same order of magnitude. At that point, other factors could drive adoption.
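As a toy illustration of the quoted point (nobody's actual microarchitecture, just the execution model): a GPU "core" issues one instruction across a wide batch of lanes under a per-lane execution mask, and hides memory latency by interleaving many such batches (the heavy SMT part).

```c
/* Toy SIMT model: one instruction stream applied to many lanes at once,
 * with an execution mask to handle divergent branches. Illustration only. */
#include <stdint.h>
#include <stdio.h>

#define LANES 32

static void warp_saxpy(float a, const float x[LANES], float y[LANES],
                       uint32_t mask)
{
    /* The "instruction" is issued once; each active lane applies it. */
    for (int lane = 0; lane < LANES; lane++)
        if (mask & (1u << lane))
            y[lane] = a * x[lane] + y[lane];
}

int main(void)
{
    float x[LANES], y[LANES];
    for (int i = 0; i < LANES; i++) { x[i] = (float)i; y[i] = 1.0f; }

    /* Divergent branch: only even-numbered lanes take this path. */
    warp_saxpy(2.0f, x, y, 0x55555555u);

    printf("y[2]=%g (active), y[3]=%g (masked off)\n", y[2], y[3]);
    return 0;
}
```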
Would the same also hold true if the hardware was optimized for CUDA-style scientific computing rather than graphics processing?
What are the major implementation differences between CUDA-optimized vs. graphics-optimized architectures?
Could RISC-V + extensions make more sense there?
Edit:
My thinking is that there's very limited win in using the RISC-V architecture for graphics processing, since any gains from having an open architecture are mostly abstracted away behind the Vulkan or DX12 APIs.
Whereas on the number-crunching side, there's a lot going on: many communities are writing different libraries, compilers, and such... so there are gains to be made if the architecture is open for research and exploitation.
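To illustrate why the ISA gets abstracted away (standard Vulkan calls, error handling omitted for brevity): the application only ever hands the driver portable SPIR-V, and the driver's shader compiler lowers it to whatever native ISA the GPU uses, so an open ISA is invisible at this layer.

```c
/* The application submits portable SPIR-V; the driver compiles it to the
 * GPU's native ISA behind this call, so the ISA never surfaces here. */
#include <vulkan/vulkan.h>

VkShaderModule load_shader(VkDevice device,
                           const uint32_t *spirv, size_t size_bytes)
{
    VkShaderModuleCreateInfo info = {
        .sType    = VK_STRUCTURE_TYPE_SHADER_MODULE_CREATE_INFO,
        .codeSize = size_bytes,   /* size of the SPIR-V blob in bytes */
        .pCode    = spirv,        /* portable IR, not machine code */
    };
    VkShaderModule module = VK_NULL_HANDLE;
    vkCreateShaderModule(device, &info, NULL, &module); /* check result in real code */
    return module;
}
```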
Originally posted by pkese:
Would the same also hold true if the hardware was optimized for CUDA-style scientific computing rather than graphics processing?
What are the major implementation differences between CUDA-optimized vs. graphics-optimized architectures?
Could RISC-V + extensions make more sense there?
Apart from cache & memory model semantics, these aren't really ISA-level details.
Originally posted by pkese:
My thinking is that there's very limited win in using the RISC-V architecture for graphics processing, since any gains from having an open architecture are mostly abstracted away behind the Vulkan or DX12 APIs.
Originally posted by pkese:
Whereas on the number-crunching side, there's a lot going on: many communities are writing different libraries, compilers, and such... so there are gains to be made if the architecture is open for research and exploitation.
Originally posted by coder:
This is a non-sequitur. From what I can see, Libre SoC has a single-core, dual-issue, in-order implementation at ~300 MHz, with no mention of hardware texture or raster units. That's at least two orders of magnitude below what this product seems to be targeting.
It's ridiculous to compare this to Libre SoC. They're two very different projects, with very different sets of goals, resources, and organizations. The only point of intersection was Libre's prior focus on RISC-V, and I think Michael merely mentioned it to avoid folks confusing the two.
BTW, I don't want to detract from what the Libre folks are doing. Full marks to them, for all their progress!
As jacob mentioned, that test ASIC is really basic: it answered the question "can an entirely new team, who've literally never done VLSI or HDL before in their lives, actually produce an ASIC?" and the answer was "yes".
A bit of background on this: Atif from Pixilica was originally talking with the Think Silicon team a couple of years ago. Atif's initiative was to get a working group together to create an entirely *properly* open 3D GPU standard (https://www.pixilica.com/graphics), because he recognises that collaboration reduces effort and helps avoid costly mistakes. His vision is to see the entire project be FOSS. The RISC-V Foundation undermined his initiative and promoted Think Silicon's custom, proprietary, secretive, closed-doors effort instead. Think Silicon received EU funding to develop their secretive and proprietary solution, and was bought by a U.S. company. No source code or specifications are publicly available.
As people have noticed, Think Silicon's primary focus is on ultra-low-power embedded use-cases: even before they started this proprietary GPU effort, they had some astonishingly good silicon and compiler technology. It's just a real pity that they'll be in the same category as PowerVR: proprietary drivers and proprietary silicon, because you and I are not their customers.
Originally posted by pkese:
Would the same also hold true if the hardware was optimized for CUDA-style scientific computing rather than graphics processing?
What are the major implementation differences between CUDA-optimized vs. graphics-optimized architectures?
The Vulkan spec specifically recognises that absolute accuracy is not crucial in 3D. Therefore commercial GPUs cut down the silicon needed by 75%, so that you can get 4x as much work done for the same power consumption.
... and many more things, but this is the biggest reason why you can't just take a vector ISA spec such as RVV, add a bunch of extra opcodes, and declare it ready for commercial GPU usage in today's markets (embedded *or* high-end).
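For a concrete feel of that precision trade-off (a standalone sketch, unrelated to any particular GPU): graphics APIs tolerate small errors in operations like inverse square root, which is what lets hardware ship small approximation units instead of fully rounded IEEE ones. The classic bit-trick below is in that spirit: roughly 0.2% relative error after one Newton-Raphson step, fine for shading, unacceptable for most scientific computing.

```c
/* Cheap approximate 1/sqrt(x) vs. the fully rounded libm result.
 * After one Newton-Raphson step the error is ~0.2% -- good enough for
 * 3D shading, far too loose for HPC-style number crunching. */
#include <math.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static float approx_rsqrt(float x)
{
    float half = 0.5f * x;
    uint32_t bits;
    memcpy(&bits, &x, sizeof bits);    /* reinterpret bits without UB */
    bits = 0x5f3759df - (bits >> 1);   /* exponent trick: initial guess */
    float y;
    memcpy(&y, &bits, sizeof y);
    return y * (1.5f - half * y * y);  /* one Newton-Raphson refinement */
}

int main(void)
{
    for (float x = 0.5f; x < 100.0f; x *= 3.7f) {
        float exact = 1.0f / sqrtf(x);
        float fast  = approx_rsqrt(x);
        printf("x=%8.3f exact=%.7f approx=%.7f rel.err=%.2e\n",
               x, exact, fast, fabsf(fast - exact) / exact);
    }
    return 0;
}
```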