A New Open-Source GPU Comes About
After writing last month that the open-source graphics card is dead and why the open-source graphics card failed, this weekend I received an email that begins with: "Open Graphics! Here we go again! As our master's thesis work we have implemented an open-source graphics accelerator."
Aha, I don't think this will really do much for open-source graphics. It appears to be a basic fixed-point, non-programmable pipeline, unless I'm missing a lot by just skimming the code. An actually good set of floating-point units would probably help out far more.
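For anyone unfamiliar with the term, "fixed-point" means the pipeline represents fractional values as scaled integers rather than IEEE floats. A minimal sketch of Q16.16 fixed-point arithmetic (a common format; this is illustrative only, not taken from the thesis code):

```python
# Minimal Q16.16 fixed-point arithmetic: 16 integer bits, 16 fractional
# bits, stored in a plain integer. Hypothetical sketch for illustration.
FRAC_BITS = 16
ONE = 1 << FRAC_BITS  # the value 1.0 in Q16.16

def to_fixed(x: float) -> int:
    return int(round(x * ONE))

def from_fixed(q: int) -> float:
    return q / ONE

def fx_mul(a: int, b: int) -> int:
    # The raw product carries 32 fractional bits; shift back to 16.
    return (a * b) >> FRAC_BITS

a = to_fixed(1.5)
b = to_fixed(2.25)
print(from_fixed(fx_mul(a, b)))  # 3.375
```

The appeal for hardware is that this needs only integer adders, multipliers, and shifters; the cost is limited range and precision compared to a proper floating-point unit.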
Kudos for finishing the master's thesis and making something neat, though.
"An actually good set of floating-point units would probably help out far more."
Very true, but that will be a lot more work... you do what you can with the time you have. This is still better than nothing.
So they've implemented a GPU using a CPU... isn't that essentially what you get by mixing LLVMpipe + GMA500?
The pipeline is implemented in hardware; also in the hardware there is a CPU, the OpenRISC processor, which can send instructions to the graphics accelerator. If it's still unclear, please read this: http://en.wikipedia.org/wiki/Field-p...ble_gate_array
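To make the CPU/accelerator split above concrete: the OpenRISC core acts as a host that queues commands, and the hardware pipeline drains and executes them. A hedged software sketch of that pattern (all names here, `CMD_*` and `GfxAccel`, are invented for illustration and are not from the actual thesis design):

```python
# A CPU core (standing in for the OpenRISC processor) pushes command
# words into a FIFO; the "hardware" pipeline consumes whole commands.
from collections import deque

CMD_SET_COLOR = 0x01      # followed by 1 word: packed RGB color
CMD_DRAW_TRIANGLE = 0x02  # followed by 6 words: x0,y0,x1,y1,x2,y2

class GfxAccel:
    def __init__(self):
        self.fifo = deque()
        self.color = 0
        self.triangles_drawn = 0

    def write(self, word):
        """CPU side: a memory-mapped register write pushes one word."""
        self.fifo.append(word)

    def step(self):
        """Accelerator side: consume and execute one whole command."""
        op = self.fifo.popleft()
        if op == CMD_SET_COLOR:
            self.color = self.fifo.popleft()
        elif op == CMD_DRAW_TRIANGLE:
            verts = [self.fifo.popleft() for _ in range(6)]
            self.triangles_drawn += 1  # real hardware would rasterize verts

# The "driver" on the CPU issues a small command stream:
accel = GfxAccel()
accel.write(CMD_SET_COLOR); accel.write(0xFF0000)
accel.write(CMD_DRAW_TRIANGLE)
for w in (0, 0, 10, 0, 0, 10):
    accel.write(w)
while accel.fifo:
    accel.step()
print(accel.triangles_drawn)  # 1
```

The point of the split is that the CPU only formats and queues work; the fixed-function pipeline does the per-pixel heavy lifting in parallel with it.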
Originally Posted by droidhacker
They've implemented a GPU using an FPGA...
Hmmm, I think this GPU will not be getting me 100 FPS in Crysis any time soon.
They just need some good driver optimizations.
Originally Posted by hoohoo
That's it: prerendered scenes! FPGAs are just a sea of lookup tables, right? This one will only be a little bigger...
Originally Posted by smitty3268
This is close to my proposal but has some mistakes:
1) We already have a good software rasterizer (llvmpipe); just add some 3D instructions to OpenRISC (like MIPS-3D) to accelerate the rasterizer, and write an LLVM backend. Don't create ASIC circuits; they are difficult even for companies like Nvidia, who want to phase them out by adding more 3D instructions to the shaders (general cores).
2) Use little.BIG-style processing: 2-4 cores for general computing at 7 DMIPS/MHz (20M transistors each), and 32, 64, or 128 cores for graphics at 2.5 DMIPS/MHz (1M transistors each). Each mini core has a 512-bit FMAC, i.e. 64 GFLOPS @ 2 GHz per core; that also gives many TFLOPS per watt on the latest lithography.
3) Add emulation instructions like Godson (MIPS) has; then you will be able to run "qemu wine" at full speed.
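The 64 GFLOPS per-core figure above can be sanity-checked, assuming 32-bit (single-precision) lanes and a fused multiply-add counted as two floating-point ops per lane per cycle:

```python
# Throughput of one 512-bit FMAC unit at 2 GHz, under the stated
# assumptions (single-precision lanes, FMA = 2 flops/lane/cycle).
vector_bits = 512
lane_bits = 32
flops_per_fma = 2
clock_hz = 2e9

lanes = vector_bits // lane_bits            # 16 lanes
flops_per_cycle = lanes * flops_per_fma     # 32 flops per cycle
gflops = flops_per_cycle * clock_hz / 1e9
print(gflops)  # 64.0
```

So the arithmetic is consistent, though sustaining that rate would of course depend on feeding the unit with data every cycle.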