Think Silicon Shows Off First RISC-V 3D GPU


  • #31
    Originally posted by coder View Post
    Like ARM, "Think Silicon" is an IP vendor. They sell designs, not physical chips.
    Look at their roadmap. They want to, but can't.



    • #32
      Originally posted by coder View Post
      Here's where I think the Pi is really being held back by Broadcom. If Broadcom didn't have their VideoCore IP, which they seem to keep trying to push, then I'll bet the R.Pi Foundation could swing a good deal on a much better-performing Mali.
      Agreed: Broadcom's inability (well, "refusal" really, I suspect) to build a "GPU" that's actually even remotely competent at anything other than digital signage absolutely crippled the Pi4 for anything other than headless-server use. The problem the RPF has is simply that Broadcom has the volume to provide the CPU side of things cheaper than anyone else, and a secondary aspect of that is likely that they can discount their own graphics IP to zero cost on the SoCs if that's what it takes to win a contract away from their competitors.



      • #33
        Originally posted by Developer12 View Post
        honestly none of that is all that special, particularly per-core ram.
        You said:

        "So.....they glued a bunch of small RISC-V CPU cores together and called it a GPU? Yeah, intel tried that one too."

        Intel's Larrabee didn't have local, directly-addressable SRAM. And the x86 ISA has more baggage than RISC-V, which means it doesn't scale down as well.

        The fact of the matter is that this thing shares a lot more similarities with modern GPUs, in key areas, than there are differences. This looks a lot closer to the mark than Intel ever got. No, it's not going to take over the world, but that's not the point.

        So, internet tough guy, if you're so convinced it's rubbish, state your case. So far, all you've done is attack it as a Larrabee-derivative, which is neither very accurate nor very informative. Tell us what's bad about it, why it's bad, and how much each negative will contribute to its overall deficit. Bonus points for citing any disadvantages I haven't already listed.



        • #34
          Originally posted by coder View Post
          You said:
          "So.....they glued a bunch of small RISC-V CPU cores together and called it a GPU? Yeah, intel tried that one too."

          Intel's Larrabee didn't have local, directly-addressable SRAM. And the x86 ISA has more baggage than RISC-V, which means it doesn't scale down as well.

          The fact of the matter is that this thing shares a lot more similarities with modern GPUs, in key areas, than there are differences. This looks a lot closer to the mark than Intel ever got. No, it's not going to take over the world, but that's not the point.

          So, internet tough guy, if you're so convinced it's rubbish, state your case. So far, all you've done is attack it as a Larrabee-derivative, which is neither very accurate nor very informative. Tell us what's bad about it, why it's bad, and how much each negative will contribute to its overall deficit. Bonus points for citing any disadvantages I haven't already listed.
          It's an unoriginal and low-effort idea. It's also strange enough that it's unlikely to gain traction. Dunno why you need to make such a fuss about it.

          Calling it in any way comparable to the microarchitecture of a conventional (eg nvidia/intel/amd/powerVR/apple/mali/adreno) GPU is a gross mischaracterization. Probably won't perform nearly as well.
          Last edited by Developer12; 22 June 2022, 11:04 AM.



          • #35
            Originally posted by Developer12 View Post
            It's an unoriginal and low-effort idea.
            It doesn't strike me as low-effort. I doubt they had much in the way of IP to use as a starting point, for this. Any RISC-V core they could've used would have needed extremely heavy reworking.

            Originally posted by Developer12 View Post
            Calling it in any way comparable to the microarchitecture of a conventional (eg nvidia/intel/amd/powerVR/apple/mali/adreno) GPU is a gross mischaracterization. Probably won't perform nearly as well.
            So, here's where I think they did something interesting. They have experience in creating low-power display controllers, 2D accelerators, and simple GPUs. Bigger GPUs were the next logical step, for them. However, rather than try to beat the big, established players in the mobile GPU market (ARM, Qualcomm, PowerVR, Verisilicon/Vivante), and probably a few Chinese GPU upstarts at their own game, the idea of adapting RISC-V both saves them a bit of time (mostly on software tooling) and gives customers more flexibility. It's a way to carve out a niche for themselves, in a market that's already crowded enough for PowerVR to have fallen on hard times, after Apple stopped directly licensing their designs.

            We agree that being tied to the RISC-V ISA puts them at a nonzero disadvantage, but it can also act as a selling point. I know this sounds like a Larrabee play, but this isn't Larrabee and Think Silicon doesn't have the same ambitions for it as Intel had for their effort. The question that needs to be considered is whether they realistically could've had a competitive offering by directly trying to beat ARM, PowerVR, and others at their own game.

            Something else that's interesting to ponder is to what extent the cores in this cluster can act as the "Little" cores, in a Big.Little RISC-V SoC. This approach could lead to an interesting place, down the road.



            • #36
              Originally posted by Developer12 View Post
              It's literally a bunch of RISC-V cores in a NoC fabric, with some SIMD extensions and shader buffers slapped on for good measure.
              It's literally just a bunch of transistors slapped together.
              Who cares about architecture anyway.



              • #37
                Originally posted by coder View Post
                Good question.

                Modern GPUs all combine in-order cores with wide SIMD and heavy SMT. At some superficial level, it seems there's no reason you couldn't. However, a closer look shows a few more distinguishing characteristics:
                ...
                In summary, I think GPUs using a standard CPU ISA will never take the crown in perf/area or perf/W. However, it's certainly possible to be well within the same order of magnitude. At that point, other factors could drive adoption.
                Would the same hold true also if the hardware was optimized for CUDA style scientific computing rather than graphics processing?
                What are major implementation differences between CUDA vs. graphics optimized architectures?
                Could RISC-V + extensions make more sense there?

                Edit:
                My thinking is that there's very limited win in using the RISC-V architecture for graphics processing, since any gains from having an open architecture are mostly abstracted away behind the Vulkan or DX12 APIs.
                Whereas on the number crunching side, there's lots of stuff going on and lots of communities are writing different libraries, compilers and such... so there are gains to be made, if the architecture is open for research and exploitation.
                Last edited by pkese; 23 June 2022, 07:48 AM.



                • #38
                  Originally posted by pkese View Post
                  Would the same hold true also if the hardware was optimized for CUDA style scientific computing rather than graphics processing?
                  What are major implementation differences between CUDA vs. graphics optimized architectures?
                  Could RISC-V + extensions make more sense there?
                  GPU Compute APIs are designed to work well on GPU-type architectures. So, that means exposing more information about your workload's memory behaviour to the runtime, and also minimizing tight coupling between threads that would stress the cache hierarchy in ways it's not optimized for. Basically, GPUs have throughput-oriented architectures, and GPU compute APIs facilitate & nudge programmers into structuring their code in a form that runs efficiently on that kind of machine.

                  Apart from cache & memory model semantics, these aren't really ISA-level details.
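
To make the throughput-oriented point concrete, here's a minimal CUDA sketch (purely illustrative; it has nothing to do with Think Silicon's cores or toolchain). It shows the two things compute APIs push you toward: staging data explicitly in on-chip shared memory, and keeping blocks fully independent so the hardware can schedule them across however many cores it happens to have.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical sketch: per-block partial sums.
// Each block copies its slice of the input into on-chip shared memory
// (explicit memory-hierarchy control), reduces it locally, and writes one
// result. Blocks never communicate, so there is no cross-core coupling for
// the cache hierarchy to absorb. Launch with 256 threads per block.
__global__ void block_sum(const float* in, float* out, int n) {
    __shared__ float tile[256];              // software-managed on-chip SRAM
    int gid = blockIdx.x * blockDim.x + threadIdx.x;
    tile[threadIdx.x] = (gid < n) ? in[gid] : 0.0f;
    __syncthreads();                         // synchronization only *within* a block

    for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
        if (threadIdx.x < stride)
            tile[threadIdx.x] += tile[threadIdx.x + stride];
        __syncthreads();
    }
    if (threadIdx.x == 0)
        out[blockIdx.x] = tile[0];           // one partial sum per block
}

int main() {
    const int n = 1 << 20, threads = 256, blocks = (n + threads - 1) / threads;
    float *in, *out;
    cudaMallocManaged(&in, n * sizeof(float));
    cudaMallocManaged(&out, blocks * sizeof(float));
    for (int i = 0; i < n; i++) in[i] = 1.0f;

    block_sum<<<blocks, threads>>>(in, out, n);
    cudaDeviceSynchronize();

    float total = 0.0f;                      // tiny final reduction on the host
    for (int b = 0; b < blocks; b++) total += out[b];
    printf("sum = %.0f (expected %d)\n", total, n);

    cudaFree(in); cudaFree(out);
    return 0;
}
```

The arithmetic is beside the point; what matters is that the API makes the data movement and the independence structure explicit, which is what lets a throughput machine hide memory latency with lots of cheap threads instead of big caches and out-of-order logic.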

                  Originally posted by pkese View Post
                  My thinking is that there's very limited win in using the RISC-V architecture for graphics processing, since any gains from having an open architecture are mostly abstracted away behind the Vulkan or DX12 APIs.
                  Agreed. RISC-V isn't going to be an asset for graphics rendering, unless someone does something like I said: porting a CPU-based renderer (e.g. LavaPipe) to utilize the hardware features of these special cores. Such an effort would be an easy way to achieve conformance with a complex API like Vulkan while running at reasonable speed, but probably still at some non-trivial performance penalty relative to a fully-customized implementation.

                  Originally posted by pkese View Post
                  Whereas on the number crunching side, there's lots of stuff going on and lots of communities are writing different libraries, compilers and such... so there are gains to be made, if the architecture is open for research and exploitation.
                  It adds a lot of flexibility for AI, for instance. In that case, there's still value in blurring the line between a general-purpose CPU and a purpose-built accelerator. And here, there could be real value to having the ability to simply link into an existing C/C++ library compiled for RISC-V, rather than having to rewrite it in CUDA, OpenCL, etc.
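
As a purely hypothetical sketch of that contrast (the gelu() routine below just stands in for "some existing C/C++ library function"; it isn't anything Think Silicon has shown): on accelerator cores that execute a standard CPU ISA, the idea is that a scalar routine like this could simply be recompiled and linked, whereas a conventional GPU needs it adapted and wrapped in a kernel, e.g. in CUDA:

```cuda
#include <cmath>
#include <cstdio>
#include <cuda_runtime.h>

// Stand-in for an existing library routine: plain scalar C/C++.
// The __host__ __device__ annotation is the CUDA-side accommodation
// needed before it can even be reused inside a kernel.
__host__ __device__ float gelu(float x) {
    return 0.5f * x * (1.0f + tanhf(0.7978845608f * (x + 0.044715f * x * x * x)));
}

// The conventional-GPU path: wrap the routine in a kernel and launch it
// through the runtime, rather than simply calling it.
__global__ void gelu_kernel(const float* in, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = gelu(in[i]);
}

int main() {
    const int n = 1024;
    float *in, *out;
    cudaMallocManaged(&in, n * sizeof(float));
    cudaMallocManaged(&out, n * sizeof(float));
    for (int i = 0; i < n; i++) in[i] = (i - n / 2) * 0.01f;

    gelu_kernel<<<(n + 255) / 256, 256>>>(in, out, n);
    cudaDeviceSynchronize();
    printf("gelu(%f) = %f\n", in[n - 1], out[n - 1]);

    cudaFree(in); cudaFree(out);
    return 0;
}
```

The wrapper is trivial for one small function; the pain shows up when the library is large, leans on the standard library, or was never written with a data-parallel launch model in mind.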



                  • #39
                    Originally posted by coder View Post
                    This is a non-sequitur. From what I can see, Libre SoC has a single-core, dual-issue, in-order implementation at ~300 MHz, with no mention of hardware texture or raster units. That's at least 2 orders of magnitude below what this product seems to be targeting.

                    It's ridiculous to compare this to Libre SoC. They're two very different projects, with very different sets of goals, resources, and organizations. The only point of intersection was Libre's prior focus on RISC-V, and I think Michael merely mentioned it to avoid folks confusing the two.

                    BTW, I don't want to detract from what the Libre folks are doing. Full marks to them, for all their progress!
                    really appreciated, i think from quick-reading the comments you make here that you and others reaaaaalllly get how much frickin work is involved. we're coming up to four (4!) years just for the Specification alone https://libre-soc.org/openpower/sv/ and the binutils upstream patches have only just been submitted over the past couple of months.

                    as jacob mentioned, that test ASIC is real basic: it answered the question "can an entirely new team who've literally never done VLSI or HDL before in their lives actually produce an ASIC" and the answer was "yes".

                    a bit of background on this: Atif from Pixilica was originally talking with the ThinkSilicon team, a couple of years ago. Atif's initiative was to get a Working Group together to create an entirely *properly* open 3D GPU Standard https://www.pixilica.com/graphics because he recognises that collaboration reduces effort and helps avoid costly mistakes. his vision is to see the entire project be FOSS. the RISC-V Foundation undermined his initiative and promoted ThinkSilicon's custom proprietary secretive and closed-doors efforts instead. ThinkSilicon received EU Funding to develop their secretive and proprietary solution, and was bought by a U.S. company. no source code or specifications are publicly available.

                    as people have noticed, thinksilicon's primary focus is on ultra-low-power embedded use-cases: even before they started this proprietary GPU effort they already had some astonishingly good silicon and compiler technology. it's just a real pity that they'll be in the same category as PowerVR: proprietary drivers, proprietary silicon, because you and i are not their customers.
                    Last edited by lkcl; 24 June 2022, 08:56 AM.



                    • #40
                      Originally posted by pkese View Post

                      Would the same hold true also if the hardware was optimized for CUDA style scientific computing rather than graphics processing?
                      no.

                      What are major implementation differences between CUDA vs. graphics optimized architectures?
                      accuracy on the FP32 transcendentals and the need for FP64 transcendentals in CUDA but not 3D is a maaaajor power-drain / area difference. given that GPUs have an astonishing 30% die area dedicated to FP, and given that it takes *FOUR* times the amount of silicon to get the final digit accuracy needed for FP32 for "scientific" purposes, there is neverrrr going to be a scenario where full-IEEE754-compliance of CUDA-style scientific computing is commercially acceptable for 3D GPUs.

                      the Vulkan Spec specifically recognises that absolute accuracy is not crucial in 3D. therefore commercial GPUs cut down the silicon needed by 75% so that you can get 4x as much work done for the same power consumption.

                      ... and many more things but this is the biggest reason why you can't just take a Vector ISA Spec such as RVV, add a bunch of extra opcodes and declare it ready for commercial GPU usage in today's markets (embedded *or* high-end).
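
For a feel of how that accuracy trade surfaces at the programming level, here's an illustrative CUDA snippet (the intrinsics are real CUDA; the framing is mine, not anything from the Vulkan spec or Think Silicon). The relaxed single-precision transcendentals trade a looser error bound for a much cheaper hardware path, which is the kind of relaxation 3D rendering tolerates and scientific code generally doesn't; note also that the fast intrinsics are single-precision only.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// sinf()   : full-accuracy single-precision sine (tighter error bound).
// __sinf() : hardware special-function approximation (also what you get
//            indirectly from the --use_fast_math compiler flag).
__global__ void sin_compare(const float* x, float* precise, float* fast, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        precise[i] = sinf(x[i]);
        fast[i]    = __sinf(x[i]);
    }
}

int main() {
    const int n = 8;
    float *x, *p, *f;
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&p, n * sizeof(float));
    cudaMallocManaged(&f, n * sizeof(float));
    for (int i = 0; i < n; i++) x[i] = 0.4f * i;

    sin_compare<<<1, n>>>(x, p, f, n);
    cudaDeviceSynchronize();
    for (int i = 0; i < n; i++)
        printf("x=%.2f  sinf=%.8f  __sinf=%.8f  diff=%g\n", x[i], p[i], f[i], p[i] - f[i]);

    cudaFree(x); cudaFree(p); cudaFree(f);
    return 0;
}
```

The two results typically differ only in the low-order bits, a difference nobody can see in a rendered frame; that gap is exactly the silicon-versus-accuracy budget being argued about above.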

