
Gallium3D-LLVM, MicroBlaze and the Xilinx Zynq-7000 Dual-Core Cortex A9 FPGA Hybrid


  • Gallium3D-LLVM, MicroBlaze and the Xilinx Zynq-7000 Dual-Core Cortex A9 FPGA Hybrid

    folks, i recently encountered the new Xilinx Zynq-7000, which is a hybrid dual-core 800MHz Cortex-A9 with an on-board Series 7 FPGA, all in 28nm. whilst discussing this on a forum thread for an openpandora v2.0, i suddenly realised that it might be possible to use the MicroBlaze FPGA target option in the LLVM 2.7 compiler to compile the Gallium3D OpenGL 3D driver for the on-board FPGA of the Zynq-7000.

    i've asked the llvm developers if they could kindly look into that, here:

    but the upshot, basically, is that the Free Software Community, through this superb Xilinx ARM Cortex-A9 / FPGA hybrid, would quite literally get the world's first truly Free-Software-compliant 3D-accelerated GPU, entirely by accident: no reverse-engineering, no proprietary silicon IC development, and no dependence on any 3rd-party proprietary vendor of any kind, with the exception of Xilinx.

    i wonder if Xilinx themselves are aware of this?

    Last edited by lkcl; 08-20-2011, 07:43 PM. Reason: make link bigger (put openpandora v2 in it)

  • #2
    Originally posted by lkcl
    anyway, yes: what's possible, and where can people find out more
    about how gallium3d uses LLVM? and (for those people not familiar
    with 3D), why is the shader program not "static" i.e. why is a
    compiler needed at runtime at _all_? (if there's an answer already
    somewhere on a wiki, already, that'd be great).
    The Gallium3D stack currently uses LLVM to generate optimized CPU code for running shaders on the CPU via llvmpipe. The LunarGLASS implementation uses LLVM as a more traditional GPU shader compiler, with additional layers and tools that let even GPU-specific optimizations be kept somewhat separate from GPU code generation (which is nice).

    In both cases, the LLVM processing results in hardware instructions that get executed on a separate piece of hardware, i.e. you would still need FPGA logic corresponding to actual GPU hardware. If you had FPGA space left over then you could look at fitting some of the compiler logic in as well, but my guess is you wouldn't fit much, if any. That said, it's still a really interesting device.

    Shader programs go through a JIT compile stage so that the application code can be hardware independent. The most important part of the JIT compile is generating code specific to the hardware the app is running on right now. Most systems allow shader programs to be saved and reloaded in hardware-specific form (this is common for safety-critical apps such as avionics) but on a typical PC you don't save much time by skipping the compile step. On a small low power device the potential savings are higher, of course.
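The runtime-specialization idea above can be shown with a toy sketch. This is not real Gallium3D or LLVM code; the shader "language", the `jit_compile` helper, and the pretend SIMD width are all made up for illustration. The point is that one hardware-independent source program gets turned into code tailored to whatever the hardware turns out to be at run time.

```python
# Toy illustration (NOT real Gallium3D/llvmpipe code): the same shader
# source is specialized at runtime for a "hardware" property we only
# discover when the app runs -- here, a pretend SIMD lane count.

SHADER_SOURCE = "out = (a * b) + c"  # hardware-independent program

def jit_compile(source, simd_width):
    """Generate and compile a Python function that applies the shader
    across simd_width lanes -- standing in for a JIT back end."""
    expr = source.split("=", 1)[1].strip()
    lines = ["def shader(a, b, c):",
             "    out = [0.0] * %d" % simd_width]
    for lane in range(simd_width):
        # unroll one "instruction" per lane, indexed for this width
        lines.append("    out[%d] = %s" % (
            lane,
            expr.replace("a", "a[%d]" % lane)
                .replace("b", "b[%d]" % lane)
                .replace("c", "c[%d]" % lane)))
    lines.append("    return out")
    namespace = {}
    exec("\n".join(lines), namespace)  # the "JIT compile" step
    return namespace["shader"]

# the app ships only SHADER_SOURCE; the width is chosen at runtime
shader = jit_compile(SHADER_SOURCE, simd_width=4)
print(shader([1, 2, 3, 4], [10, 10, 10, 10], [0.5] * 4))
# -> [10.5, 20.5, 30.5, 40.5]
```

Saving the generated `shader` function would correspond to the hardware-specific caching bridgman mentions: skip `jit_compile` next time, at the cost of being tied to one width.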

    Originally posted by lkcl
    and: would moving this compiler onto the FPGA (so that it interprets
    the shader program), making it an interpreter instead, be a viable
    option? just throwing ideas out, here. btw i'm familiar with the
    dynamic architecture of 3D engines, but many people who might be
    reading this (from the archives) are not. i've seen the hardware
    block diagram for the PowerVR SGX engine, for example and it's...
    interesting, to say the least
    I think you would still want to keep syntax & semantic checking outside the hardware. In general the hardware design tends to be pretty basic in order to fit in the available die space and run fast, so the compiler's job is arguably "translating something complicated into code that runs on something pretty simple-minded".
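That "complicated in, simple-minded out" split can be sketched in a few lines. This is a hypothetical toy, not any real driver's compiler: a front end lowers an arithmetic expression tree into flat three-operand register ops, and the "hardware" side just executes them one by one with no parsing, no checking, and no control flow.

```python
# Toy sketch of a compiler front end targeting simple-minded hardware:
# lower an expression tree into flat (op, dst, a, b) register ops.
import ast

def lower(expr_src):
    """Lower a Python arithmetic expression into three-operand tuples."""
    ops, counter = [], [0]

    def emit(node):
        if isinstance(node, ast.Name):
            return node.id                      # input "register"
        if isinstance(node, ast.Constant):
            return node.value                   # immediate operand
        if isinstance(node, ast.BinOp):
            a, b = emit(node.left), emit(node.right)
            dst = "r%d" % counter[0]
            counter[0] += 1
            op = {ast.Add: "add", ast.Mult: "mul",
                  ast.Sub: "sub"}[type(node.op)]
            ops.append((op, dst, a, b))
            return dst
        raise ValueError("unsupported node: %r" % node)

    emit(ast.parse(expr_src, mode="eval").body)
    return ops

def execute(ops, env):
    """The 'simple-minded hardware': a bare three-operand ALU loop."""
    regs = dict(env)
    fetch = lambda x: regs[x] if isinstance(x, str) else x
    alu = {"add": lambda a, b: a + b,
           "mul": lambda a, b: a * b,
           "sub": lambda a, b: a - b}
    for op, dst, a, b in ops:
        regs[dst] = alu[op](fetch(a), fetch(b))
    return regs[ops[-1][1]]  # result lives in the last destination reg

ops = lower("a * b + c")
print(ops)                                    # flat instruction list
print(execute(ops, {"a": 2, "b": 3, "c": 1}))  # -> 7
```

All the complicated work (parsing, operator precedence, register assignment) happens in `lower`; `execute` is deliberately dumb, which is roughly the shape of logic you would want to fit in scarce FPGA space.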


    • #3
      it's possible to try

      bridgman, thank you - that's very informative.

      i think... the most important thing here is that this platform is actually affordable, and thus would give a wider audience of free software contributors the opportunity to even try. even if the current incarnations of the Zynq-7000 series aren't good enough, better versions will come: 22nm, for example.