There's A New Libre GPU Effort Building On RISC-V, Rust, LLVM & Vulkan


  • There's A New Libre GPU Effort Building On RISC-V, Rust, LLVM & Vulkan

    Phoronix: There's A New Libre GPU Effort Building On RISC-V, Rust, LLVM & Vulkan

    Over the past decade and a half of covering the Linux graphics scene, there have been many attempts at providing a fully open-source GPU (or even just display adapter) down to the hardware level, but none of them have really panned out, from Project VGA to the various FPGA designs. There's a new, very ambitious project trying to create a "libre 3D GPU" built atop RISC-V, leveraging Rust and LLVM on the software side, that would also support Vulkan...


  • #2
    They missed the video out part, though.

    This may work well if the processor has a lot of cores...



    • #3
      ...and wide vectors with good gather/scatter support and predication. That's the first source of parallelism in modern GPUs (though "real" cores, like what NVidia calls an SM, are slowly getting closer).
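
      In scalar Rust, a masked ("predicated") gather amounts to the loop below; a wide vector unit executes it across all lanes in a single instruction. A minimal sketch, with names mine rather than from any real ISA binding:

      Code:
      // Masked (predicated) gather: one scalar loop standing in for a
      // single wide-vector instruction. Lanes whose mask bit is clear
      // keep their old value instead of loading, so divergent control
      // flow neither faults nor wastes bandwidth.
      fn masked_gather(table: &[f32], indices: &[usize], mask: &[bool], out: &mut [f32]) {
          for lane in 0..indices.len() {
              if mask[lane] {
                  out[lane] = table[indices[lane]];
              }
          }
      }
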
      Last edited by HadrienG; 28 September 2018, 01:11 AM.



      • #4
        This is a similar setup to something that Esperanto Technologies said they had prototyped. There's really nothing about RISC-V that prevents you from using it as a GPU base ISA.

        What I think would be interesting is an architecture that isn't quite like most GPUs: basically a set of full system cores with some graphics functionality, alongside non-graphics cores. If your process uses the graphics functionality, it traps and is scheduled onto a graphics-optimized core. These cores would have special features such as MMU views for interpolation, texture swizzling, and decoding, plus graphics-appropriate datatypes available to vector code. You'd still want fixed-function hardware for rasterization, fragment blending, etc., but maybe a middle ground has some value.
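
        As one concrete example of the texture swizzles those MMU views might expose, here is a sketch of a Morton (Z-order) address swizzle, a classic GPU texture layout; purely illustrative, not taken from any real design:

        Code:
        // Morton (Z-order) swizzle: interleave the low 16 bits of x and y
        // so texels that are close in 2D stay close in memory. GPUs use
        // layouts like this instead of row-major to keep texture cache
        // hit rates high.
        fn morton_swizzle(x: u32, y: u32) -> u32 {
            // Spread the low 16 bits of v into the even bit positions.
            fn spread(mut v: u32) -> u32 {
                v &= 0x0000_ffff;
                v = (v | (v << 8)) & 0x00ff_00ff;
                v = (v | (v << 4)) & 0x0f0f_0f0f;
                v = (v | (v << 2)) & 0x3333_3333;
                v = (v | (v << 1)) & 0x5555_5555;
                v
            }
            spread(x) | (spread(y) << 1)
        }
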
        Last edited by microcode; 28 September 2018, 02:15 AM.



        • #5
          I'm not quite sure what to think of this, after the big failure that EOMA68 turned out to be. The basic idea doesn't sound that great either: GPUs are efficient because of the substantial amount of fixed-function, special-purpose hardware they carry to accelerate common tasks like rasterization, texture sampling, geometry processing, and now even raytracing. An array of general-purpose CPUs won't cut it.
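
          To see what even the simplest of those fixed functions buys, here is a rough sketch of bilinear texture filtering done in software, assuming a single-channel f32 texture with clamped addressing (assumptions mine); a hardware sampler performs these four loads and three lerps, plus addressing and format decode, in dedicated logic per sample:

          Code:
          // Software bilinear filter: four texel loads and three lerps per
          // sample, before addressing modes and format decode are counted.
          fn bilinear(tex: &[f32], w: usize, h: usize, u: f32, v: f32) -> f32 {
              // Map normalized coordinates onto the texel grid.
              let x = u * (w - 1) as f32;
              let y = v * (h - 1) as f32;
              let (x0, y0) = (x as usize, y as usize);
              let (x1, y1) = ((x0 + 1).min(w - 1), (y0 + 1).min(h - 1));
              let (fx, fy) = (x - x0 as f32, y - y0 as f32);
              let lerp = |a: f32, b: f32, t: f32| a + (b - a) * t;
              let top = lerp(tex[y0 * w + x0], tex[y0 * w + x1], fx);
              let bot = lerp(tex[y1 * w + x0], tex[y1 * w + x1], fx);
              lerp(top, bot, fy)
          }
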



          • #6
            Originally posted by microcode View Post
            This is a similar setup to something that Esperanto Technologies said they had prototyped. There's really nothing about RISC-V that prevents you from using it as a GPU base ISA.

            What I think would be interesting is an architecture that isn't quite like most GPUs: basically a set of full system cores with some graphics functionality, alongside non-graphics cores. If your process uses the graphics functionality, it traps and is scheduled onto a graphics-optimized core. These cores would have special features such as MMU views for interpolation, texture swizzling, and decoding, plus graphics-appropriate datatypes available to vector code. You'd still want fixed-function hardware for rasterization, fragment blending, etc., but maybe a middle ground has some value.
            Forgive my ignorance but isn't this what Xeon Phi was kinda like? 64c256t "general" cores with lots and lots of AVX thrown in?



            • #7
              Originally posted by vegabook View Post
              Forgive my ignorance but isn't this what Xeon Phi was kinda like? 64c256t "general" cores with lots and lots of AVX thrown in?
              Well, sorta, but no. Larrabee/Phi did not have rasterization and blending hardware, and as such was severely disadvantaged in standard rasterized pipelines before we even get to the question of shaders. In addition, this "GPU"-type hardware was generally still separate from your host application processor, whereas what I'm describing makes the same process capable of being scheduled onto and off of the graphics-oriented cores, without any fiddling about.

              Also, I don't know and can't comment on swizzles and texture compression on Larrabee/Phi, but I suspect neither is supported, which further limits usefulness for realtime graphics.
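
              For reference, the per-pixel coverage test that rasterization hardware evaluates many times per clock looks roughly like the half-space (edge function) sketch below; assumptions are mine, and this is not a description of Larrabee's actual pipeline. Larrabee ran this kind of loop in software across its vector lanes, which is exactly the work a dedicated rasterizer hides.

              Code:
              // Cross product (B - A) x (P - A): positive when P lies to the
              // left of the directed edge A -> B.
              fn edge(ax: f32, ay: f32, bx: f32, by: f32, px: f32, py: f32) -> f32 {
                  (bx - ax) * (py - ay) - (by - ay) * (px - ax)
              }

              // A pixel is covered when it lies on the same side of all three
              // edges (counter-clockwise winding assumed; fill rules omitted).
              fn inside_triangle(tri: [(f32, f32); 3], px: f32, py: f32) -> bool {
                  let [(ax, ay), (bx, by), (cx, cy)] = tri;
                  edge(ax, ay, bx, by, px, py) >= 0.0
                      && edge(bx, by, cx, cy, px, py) >= 0.0
                      && edge(cx, cy, ax, ay, px, py) >= 0.0
              }
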
              Last edited by microcode; 28 September 2018, 03:43 AM.



              • #8
                Originally posted by microcode View Post
                There's really nothing about RISC-V that prevents you from using it as a GPU base ISA.
                Quite the understatement, considering one of the first, if not the first, RISC-V implementations was nVidia's NV-RISCV, used in their GPUs from ~2016: https://riscv.org/wp-content/uploads...V_Story_V2.pdf https://riscv.org/wp-content/uploads...ijstermans.pdf



                • #9
                  Originally posted by c117152 View Post

                  Quite the understatement, considering one of the first, if not the first, RISC-V implementations was nVidia's NV-RISCV, used in their GPUs from ~2016: https://riscv.org/wp-content/uploads...V_Story_V2.pdf https://riscv.org/wp-content/uploads...ijstermans.pdf
                  Well, NV-RISCV was used just as a Falcon replacement in their control processor, not really for the graphics ISA.
                  Michael Larabel
                  https://www.michaellarabel.com/



                  • #10
                    Originally posted by Michael View Post

                    Well, NV-RISCV was used just as a Falcon replacement in their control processor, not really for the graphics ISA.
                    True. But that's most of the silicon logic anyhow, considering the rest of the cores just do raw compute... no? I mean, there's a reason they insist on keeping the microcode closed and don't care about the rest being out there. I think?

