I think there are quite a few graphics and multimedia tasks that will run pretty well on it. So while the Raspberry Pi is very cool and cheap, it's a bit disappointing as a desktop (speaking as a Pi owner); the Parallella should be as fast as a normal desktop for some things (its ARM CPU is already several times faster than the Pi's ARMv6).
Of course the 16- and 64-core versions are not actual supercomputers by today's standards (I work with a 48-core Opteron machine and would not call that a supercomputer). It's a step on the way to 1024- and 4096-core chips. The limit in big supercomputing is power usage. I have seen HPC clusters where thousands of 3-year-old servers are thrown out because it's cheaper to replace them with a few hundred new machines than to pay for the electricity to run them.
There are many approaches to improving flops/watt. You can assume that standard CPUs will get a bit better every year by themselves, or you can try to come up with a whole new way of doing things. One of these is GPUs, which make huge use of SIMD (single instruction, multiple data); that's great when you want to do exactly the same operation to each data value, but hopeless when you don't. Another is to put lots of full x86 cores on a single die, like Intel's MIC. Epiphany is sort of a halfway point: lots of simple but still capable independent cores on a chip. They also think that a network-like memory system will be more efficient than a cache hierarchy. It's hard to know who's right. Time will tell.
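To illustrate the SIMD point, here's a minimal sketch (my own toy example, using NumPy's vectorized ops as a stand-in for SIMD lanes): a uniform operation maps cleanly onto lockstep lanes, while a data-dependent branch means a SIMD unit effectively has to execute both paths and mask out the unwanted results.

```python
import numpy as np

data = np.arange(8, dtype=np.float64)

# SIMD-friendly: the same multiply is applied to every element in lockstep.
scaled = data * 2.0

# SIMD-hostile: each element takes a data-dependent branch. On real SIMD
# hardware all lanes would run BOTH paths, with results masked per lane,
# so you pay for both branches. (per_element is a hypothetical helper.)
def per_element(x):
    return x * 2.0 if x < 4 else x + 100.0

branched = np.array([per_element(x) for x in data])

print(scaled)    # every lane did identical work
print(branched)  # lanes diverged: half multiplied, half added
```

Independent cores (like Epiphany's) avoid this penalty because each core follows its own branch, at the cost of more control logic per core.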