Think Silicon Shows Off First RISC-V 3D GPU


  • #21
    Originally posted by Developer12 View Post
    Something that the article failed to pick up: they're waaaaaaay behind LibreSoC. All they have is a design, a simulator, and on their TODO list an FPGA prototype.

    Meanwhile LibreSoC has actual test chips on real silicon, going onto dev boards. https://twitter.com/lkcl/status/1539231319166689281
    True, but that test chip is closer to barely working than it is to high performance... it gets much less than 1 IPC. Compared with x86, you'd have to go all the way back to the 80386 to see IPC values that low.

    Basically, we haven't implemented the superscalar out-of-order execution yet, so it's stuck running one instruction every few cycles.
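
    For the curious: IPC is simply instructions retired divided by clock cycles elapsed. A quick sketch in Python with purely illustrative numbers (nothing here is a measured figure):

    # IPC = instructions retired / clock cycles elapsed
    def ipc(instructions, cycles):
        return instructions / cycles

    # Illustrative only: an in-order core retiring one instruction every
    # ~4 cycles vs. a wide out-of-order core averaging ~4 per cycle.
    print(ipc(1_000_000, 4_000_000))  # 0.25 IPC, roughly 80386-era territory
    print(ipc(4_000_000, 1_000_000))  # 4.0 IPC, modern out-of-order core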



    • #22
      Originally posted by dragon321 View Post
      That's almost twice the PlayStation 3 GPU (230 GFLOPS). Or less than twice the iPhone 7 GPU (260 GFLOPS).
      As Dawn pointed out, 409 GFLOPS is an fp16 figure and applies only to the maximal implementation.
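
      For a rough sense of where a number like that can come from, here's some back-of-the-envelope math. Every parameter below is a guess on my part rather than a published spec; they just happen to reproduce the quoted figure:

      # Peak FLOPS = cores * SIMD lanes * ops per lane per cycle * clock (GHz).
      # Using 2 ops/cycle assumes fused multiply-add. All values hypothetical.
      def peak_gflops(cores, lanes, ops_per_cycle, clock_ghz):
          return cores * lanes * ops_per_cycle * clock_ghz

      # A maximal 64-core configuration, 4-wide fp16 SIMD, FMA, 800 MHz:
      print(peak_gflops(64, 4, 2, 0.8))  # 409.6 GFLOPS fp16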

      Originally posted by dragon321 View Post
      It's also comparable to some older Intel integrated GPUs perfectly capable of doing lighter work or even some older and lighter games.
      The likely targets of this IP would seem to be embedded applications, like kiosks and devices with touch-screen GUIs that might need to render 3D maps or something.



      • #23
        Originally posted by Developer12 View Post
        Gee, maybe you shouldn't assume what I read after only reading the title yourself.
        I didn't assume - I made a qualified observation, based on your reductive and simplistic assessment. You should decide whether you care more about your cynicism or your esteem.

        On the other hand, what you just said is blatantly inaccurate.

        Originally posted by Developer12 View Post
        It's literally a bunch of lightly-modified RISC-V cores in a mesh network-on-chip.
        No, it's not just that. I won't repeat what I've already posted in this thread, but it should be clear that they've put more thought and effort into it than that.

        I clearly think it's (potentially) fit for purpose and you don't. Since we have no empirical measurements, we can leave it at that.



        • #24
          Originally posted by Developer12 View Post
          Something that the article failed to pick up: they're waaaaaaay behind LibreSoC. All they have is a design, a simulator, and on their TODO list an FPGA prototype.

          Meanwhile LibreSoC has actual test chips on real silicon, going onto dev boards. https://twitter.com/lkcl/status/1539231319166689281
          This is a non-sequitur. From what I can see, Libre SoC has a single-core, dual-issue, in-order implementation at ~300 MHz, with no mention of hardware texture or raster units. That's at least 2 orders of magnitude below what this product seems to be targeting.
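
          To put rough numbers on that: the Libre SoC figures are from above, and the NEOX numbers reuse the same hypothetical maximal configuration I sketched earlier (guesses, not specs):

          # Crude peak-throughput ratio behind the "2 orders of magnitude" estimate.
          libre_ops = 1 * 2 * 0.3e9        # 1 core, dual-issue, 300 MHz
          neox_ops = 64 * 4 * 2 * 0.8e9    # 64 cores, 4-wide FMA, 800 MHz (hypothetical)
          print(neox_ops / libre_ops)      # ~683x, between 2 and 3 orders of magnitude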


          It's ridiculous to compare this to Libre SoC. They're two very different projects, with very different sets of goals, resources, and organizations. The only point of intersection was Libre's prior focus on RISC-V, and I think Michael merely mentioned it to avoid folks confusing the two.

          BTW, I don't want to detract from what the Libre folks are doing. Full marks to them, for all their progress!



          • #25
            Originally posted by -MacNuke- View Post
            Yet it drives a complete and huge ecosystem. And now we are "crying" about something that can be over 10 times faster?
            I love the Pi for all that it is and has done, but that doesn't mean it has no weaknesses or room for criticism. I was not crying, but merely pointing out that its GPU has limited relevance as any sort of benchmark. It doesn't represent the state of the art, even for what it's trying to be.



            • #26
              Originally posted by coder View Post
              This is a non-sequitur. From what I can see, Libre SoC has a single-core, dual-issue, in-order implementation at ~300 MHz, with no mention of hardware texture or raster units. That's at least 2 orders of magnitude below what this product seems to be targeting.


              It's ridiculous to compare this to Libre SoC. They're two very different projects, with very different sets of goals, resources, and organizations. The only point of intersection was Libre's prior focus on RISC-V, and I think Michael merely mentioned it to avoid folks confusing the two.

              BTW, I don't want to detract from what the Libre folks are doing. Full marks to them, for all their progress!
              I'm comparing them because a direct comparison is made at the bottom of the article, which you did not bother to read.

              Michael asserts that LibreSoC is behind, but they're the only ones with any silicon at all.



              • #27
                Originally posted by coder View Post
                I didn't assume - I made a qualified observation, based on your reductive and simplistic assessment. You should decide whether you care more about your cynicism or your esteem.

                On the other hand, what you just said is blatantly inaccurate.


                No, it's not just that. I won't repeat what I've already posted in this thread, but it should be clear that they've put more thought and effort into it than that.

                I clearly think it's (potentially) fit for purpose and you don't. Since we have no empirical measurements, we can leave it at that.
                And I quote:

                "NEOX™ is a parallel multicore and multithreaded GPU architecture based on the RISC-V RV64C ISA instruction set with adaptive NoC. The number of cores varies from 4 to 64 organized in 1-16 cluster elements, each configured for cache sizes and thread counts ."

                It's literally a bunch of RISC-V cores in a NoC fabric, with some SIMD extensions and shader buffers slapped on for good measure.



                • #28
                  Originally posted by Developer12 View Post
                  Michael asserts that LibreSoC is behind, but they're the only ones with any silicon at all.
                  Like ARM, "Think Silicon" is an IP vendor. They sell designs, not physical chips.



                  • #29
                    Originally posted by Developer12 View Post
                    "NEOX™ is a parallel multicore and multithreaded GPU architecture based on the RISC-V RV64C ISA instruction set with adaptive NoC. The number of cores varies from 4 to 64 organized in 1-16 cluster elements, each configured for cache sizes and thread counts ."

                    It's literally a bunch of RISC-V cores in a NoC fabric, with some SIMD extensions and shader buffers slapped on for good measure.
                    Sure, you can cherry-pick some subset of what they've published to support your case. I didn't say it's not a mesh of RISC-V cores, just that it's not simply a mesh of RISC-V cores.

                    Again, I'm not going to repeat what I've already posted in this thread. If you care what I think about their approach, you can read it here.

                    For the sake of adding some value to this exchange, here's a closeup of the diagram you (or others) might not have examined:

                    [image: closeup diagram of a NEOX cluster]

                    There are several elements you wouldn't find in a typical cluster of RISC-V cores:
                    • Cluster control unit & task scheduler
                    • Hardware ROP & Texture engines
                    • Local SRAM (i.e. other than cache)

                    Also, SIMD isn't yet a common feature of RISC-V cores and this is probably one of the first to feature SMT - both very important features for GPU cores.
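
                    To illustrate why SMT in particular matters: when one hardware thread stalls on a memory or texture fetch, another can issue in its place, keeping the pipeline busy. A toy model in Python, entirely schematic and not based on anything NEOX has published:

                    STALL_CYCLES = 4  # hypothetical fetch latency, in cycles

                    def utilization(threads, work_items):
                        # Round-robin issue: each issued instruction stalls its
                        # thread for STALL_CYCLES; count busy vs. total cycles.
                        cycles, issued = 0, 0
                        remaining = [work_items] * threads
                        ready_at = [0] * threads
                        while any(remaining):
                            for t in range(threads):
                                if remaining[t] and ready_at[t] <= cycles:
                                    remaining[t] -= 1
                                    ready_at[t] = cycles + STALL_CYCLES
                                    issued += 1
                                    break
                            cycles += 1
                        return issued / cycles

                    print(utilization(1, 100))  # ~0.25: one thread leaves the core mostly idle
                    print(utilization(4, 100))  # ~1.0: four threads hide the stall latency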

                    I'd imagine the custom ISA is mostly for accessing the dedicated hardware engines, but perhaps they've also added hardware implementations of common shading operations.



                    • #30
                      Originally posted by coder View Post
                      Sure, you can cherry-pick some subset of what they've published to support your case. I didn't say it's not a mesh of RISC-V cores, just that it's not simply a mesh of RISC-V cores.

                      Again, I'm not going to repeat what I've already posted in this thread. If you care what I think about their approach, you can read it here.

                      For the sake of adding some value to this exchange, here's a closeup of the diagram you (or others) might not have examined:

                      [image: closeup diagram of a NEOX cluster]

                      There are several elements you wouldn't find in a typical cluster of RISC-V cores:
                      • Cluster control unit & task scheduler
                      • Hardware ROP & Texture engines
                      • Local SRAM (i.e. other than cache)

                      Also, SIMD isn't yet a common feature of RISC-V cores and this is probably one of the first to feature SMT - both very important features for GPU cores.

                      I'd imagine the custom ISA is mostly for accessing the dedicated hardware engines, but perhaps they've also added hardware implementations of common shading operations.
                      Oh, I did see that diagram, but honestly none of that is all that special, particularly per-core RAM. Where have you been?

