Have people seen the Parallella architecture from Adapteva, currently kickstarting at http://www.kickstarter.com/projects/...r-for-everyone
They have a scalable multicore architecture, and are aiming to make 16- and 64-core chips with very low power consumption (~2 W for 16 cores) and reasonable CPU power. Their long-term goal is 1024- and 4096-core models. It's not a SIMD architecture (like a GPU, or SSE, AVX, or NEON on a CPU), so it can achieve higher computational efficiency. The patches have been in GCC since 4.7, and the dev boards run Ubuntu on the host dual-core ARM A9.
There are spec sheets, manuals, and some demos available.
The 64-core dev board might be quite competitive with a desktop for image and video processing if your app has OpenCL or OpenMP support (GEGL/GIMP, Blender, etc.).
The 16-core version is cheaper than a PandaBoard, and has a similar dual-core ARM A9 as the host (so if you were thinking of buying a PandaBoard, you may as well get this and get a free 16-core coprocessor).
A 1024-core version would be something like a Xeon Phi. It could also make a very nice open-source graphics card via LLVMpipe.