
Thread: Parallella: Low-Cost Linux Multi-Core Computing

  1. #11

    Default

    I think there are quite a few graphics and multimedia tasks that will run pretty well on it. While the Raspberry Pi is very cool and cheap, it's a bit disappointing as a desktop (speaking as a Pi owner); the Parallella should be as fast as a normal desktop for some things (its ARM CPU is already several times faster than the Pi's ARMv6).

    Of course the 16- and 64-core versions are not actual supercomputers by today's standards (I work with a 48-core Opteron machine and would not call that a supercomputer); they're a step on the way to 1024- and 4096-core chips. The limit in big supercomputing is power usage. I have seen HPC clusters where thousands of 3-year-old servers are thrown out because it's cheaper to replace them with a few hundred new machines than to pay for the electricity to run them.

    There are many approaches to improving flops/watt. You can assume that standard CPUs will get a bit better every year by themselves, or you can try to come up with a whole new way of doing things. One of these is GPUs, which make heavy use of SIMD (single instruction, multiple data); that is great when you want to do exactly the same operation to every data value, but hopeless when you don't. Another is putting lots of full x86 cores on a single die, like Intel's MIC. Epiphany is sort of a halfway point: lots of simple but still capable independent cores on one chip. They also think a network-like memory system will be more efficient than a cache hierarchy. It's hard to know who's right; time will tell.
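    The SIMD-versus-independent-cores point is easy to see in code. Below is a minimal C sketch of my own (not tied to the Epiphany or to any GPU API): the first loop maps cleanly onto SIMD lanes, the second has data-dependent branches that SIMD handles badly but independent cores handle naturally.

        /* Minimal illustration of SIMD-friendly vs. branch-divergent loops.
           Compile with e.g. `gcc -O3 -fopt-info-vec` to see which loop the
           compiler can auto-vectorize. */
        #include <stddef.h>

        /* Same operation on every element: maps cleanly onto SIMD lanes (or a GPU). */
        void scale_all(float *x, float a, size_t n)
        {
            for (size_t i = 0; i < n; i++)
                x[i] = a * x[i];
        }

        /* Data-dependent control flow: each element may take a different path,
           which SIMD handles poorly but independent (MIMD) cores handle naturally. */
        float divergent_sum(const float *x, size_t n)
        {
            float s = 0.0f;
            for (size_t i = 0; i < n; i++) {
                if (x[i] > 0.0f)
                    s += x[i] * x[i];
                else
                    s -= 1.0f / (1.0f - x[i]);
            }
            return s;
        }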

  2. #12
    Join Date
    Feb 2008
    Location
    Linuxland
    Posts
    4,752

    Default

    Quote Originally Posted by ssam View Post
    Of course the 16- and 64-core versions are not actual supercomputers by today's standards (I work with a 48-core Opteron machine and would not call that a supercomputer); they're a step on the way to 1024- and 4096-core chips. The limit in big supercomputing is power usage. I have seen HPC clusters where thousands of 3-year-old servers are thrown out because it's cheaper to replace them with a few hundred new machines than to pay for the electricity to run them.
    Dear sir, may you divulge the location of that particular dumpster?

  3. #13
    Join Date
    Mar 2012
    Posts
    106

    Default

    Quote Originally Posted by YannV View Post
    It is certainly a moving target. I remember Apple marketing some PowerPC computers with the term as well, but likely the best comparison we can bring up is flops/W against actual current-day supercomputers. Those seem to hover around 2.1 Gflops/W (and that model is #1 on the Top500 as well). The Epiphany IV 64-core processor is advertised as 100 Gflops (peak) with 2 W power consumption (max), placing it at nearly 50 Gflops/W; but obviously you need a power supply, memory and so on, so it's more appropriate to check the whole board. That has a quoted typical consumption of 5 W, so if we bump that up by 2 W and then factor in a pessimistic 80%-efficient power supply, we get 11.4 Gflops/W. It's certainly in the ballpark, though you'd need thousands of them to make a true supercomputer. The base model (16 cores) has the same power consumption, so just divide by 4.
    Communication overhead will swallow all the advantages.

    I prefer GPUs or Intel's MIC, even for personal use.
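    As an aside, the Gflops/W arithmetic in the quoted post can be checked directly. A minimal C sketch reproducing it, using only the figures quoted above (the 2 W margin and 80% supply efficiency are that post's assumptions, not measurements):

        /* Back-of-envelope check of the Gflops/W figures in the quoted post. */
        #include <stdio.h>

        int main(void)
        {
            double peak_gflops = 100.0;             /* Epiphany IV advertised peak (Gflops) */
            double chip_watts  = 2.0;               /* advertised max chip consumption      */
            double board_watts = 5.0 + 2.0;         /* quoted typical board draw + 2 W      */
            double wall_watts  = board_watts / 0.80; /* 80%-efficient power supply          */

            printf("chip only  : %.1f Gflops/W\n", peak_gflops / chip_watts);        /* ~50.0 */
            printf("whole board: %.1f Gflops/W\n", peak_gflops / wall_watts);        /* ~11.4 */
            printf("16-core est: %.1f Gflops/W\n", peak_gflops / 4.0 / wall_watts);  /* ~2.9  */
            return 0;
        }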

  4. #14
    Join Date
    Oct 2012
    Posts
    4

    Default

    Quote Originally Posted by zxy_thf View Post
    Communication overhead will swallow all the advantages.

    I prefer GPUs or Intel's MIC, even for personal use.
    It depends on the application. The external links (four, one on each side) each operate at 2 GB/s, while internal rates are higher ("64GB/s Network-On-Chip Bisection Bandwidth"). As every processor in the Epiphany has a DMA unit, moving data around may not be all that costly.

    There are mainly three reasons for me to prefer the Parallella: firstly the openness, secondly the cost, and thirdly the low power requirements. The architectural differences may come into play later. This doesn't mean the GPU loses its place; but if the Parallella succeeds, it may eventually lower the price of that MIC.
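    To put a rough number on "it depends on the application": comparing the advertised 100 Gflops peak against the quoted link bandwidth gives the arithmetic intensity a kernel needs before the external links, rather than the cores, become the bottleneck. A minimal sketch, assuming the advertised figures and that all four links can be driven at once:

        /* Rough estimate of when off-chip bandwidth, not compute, is the limit.
           Uses only the figures quoted in this thread; not a measurement. */
        #include <stdio.h>

        int main(void)
        {
            double peak_flops      = 100e9;            /* advertised peak, flops/s   */
            double link_bytes      = 2e9;              /* one external link, bytes/s */
            double all_links_bytes = 4.0 * link_bytes; /* all four edges in use      */

            /* Flops needed per byte moved off-chip to keep the cores busy;
               workloads below this ratio are link-bound. */
            printf("one link  : %.1f flops/byte\n", peak_flops / link_bytes);       /* 50.0 */
            printf("four links: %.1f flops/byte\n", peak_flops / all_links_bytes);  /* 12.5 */
            return 0;
        }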

  5. #15
    Join Date
    Mar 2008
    Posts
    199

    Default

    For the cost it might be fun to play with. It really ends up depending on how easy it is to port existing code over to run on it. 90 Gflops does seem kind of low considering there are teraflop boards out there (at dramatically higher cost).

    How well it handles double precision is of most interest to me, but that does push the 1 GB of RAM a bit. Single precision would be more interesting to those wanting to try to leverage this thing as a 3D video accelerator.

  6. #16
    Join Date
    Sep 2010
    Posts
    568

    Default

    Quote Originally Posted by bnolsen View Post
    For the cost it might be fun to play with. It really ends up depending on how easy it is to port existing code over to run on it. 90 Gflops does seem kind of low considering there are teraflop boards out there (at dramatically higher cost).

    How well it handles double precision is of most interest to me, but that does push the 1 GB of RAM a bit. Single precision would be more interesting to those wanting to try to leverage this thing as a 3D video accelerator.

    No double precision. Only single precision, and not all operations are performed in hardware (division is not handled).

    What everyone here misses is the application in mobile, where low power and low cost can bring real benefits. This board will be one of (as the company hopes) many devices with that technology.

    I can see it as an "add-on" to, or "replacement" for, current FPU coprocessors in mobile devices.

  7. #17
    Join Date
    Aug 2012
    Posts
    245

    Default

    Guys, don't forget that the 16- and 64-core versions are only to get started. They plan for ~1,000 cores in two years.

  8. #18
    Join Date
    Mar 2012
    Posts
    106

    Default

    Quote Originally Posted by Rigaldo View Post
    Guys, don't forget that the 16- and 64-core versions are only to get started. They plan for ~1,000 cores in two years.
    This design would clearly challenge current OS kernels if they want to run a single kernel across 1,000 cores.
    Solutions that are practical (in theory) are still in the labs.

    Quote Originally Posted by YannV View Post
    It depends on the application. The external links (four, one on each side) each operate at 2 GB/s, while internal rates are higher ("64GB/s Network-On-Chip Bisection Bandwidth"). As every processor in the Epiphany has a DMA unit, moving data around may not be all that costly.

    There are mainly three reasons for me to prefer the Parallella: firstly the openness, secondly the cost, and thirdly the low power requirements. The architectural differences may come into play later. This doesn't mean the GPU loses its place; but if the Parallella succeeds, it may eventually lower the price of that MIC.
    Sounds feasible, if the application doesn't need much communication.
    But I still have doubts about the price of switches for the 2 GB/s links; clearly there won't be many buyers for that kind of switch.

    Gigabit Ethernet switches are much cheaper, but they are not sufficient for large-scale clusters.

  9. #19

    Default

    Quote Originally Posted by przemoli View Post
    Anyone tested Gallium3D on that thing?
    According to Adapteva, the board doesn't have a GPU.

  10. #20
    Join Date
    Mar 2011
    Posts
    357

    Default

    Quote Originally Posted by RussianNeuroMancer View Post
    According to Adapteva, the board doesn't have a GPU.
    Why do they use the term GPU then?
    The Epiphany chips are much smaller than high-end CPUs and GPUs.
    BTW: they are not as open as they claim to be. From the FAQ:
    Will you open source the Epiphany chips?

    Not initially, but it may be considered in the future.
