Parallella: Low-Cost Linux Multi-Core Computing



    Phoronix: Parallella: Low-Cost Linux Multi-Core Computing

    Parallella is an attempt to make Linux parallel computing easier and is advertised as a "supercomputer for everyone", but will it come to fruition?...

    http://www.phoronix.com/vr.php?view=MTIxNTQ

  • #2
    Sounds very interesting. I'd really like to try this as my NAS server, which would also transcode some videos.



    • #3
      Thanks for the heads up.



      • #4
        You keep using that word...

        Yeah... "Expected to be on the Parallella computer is a Zynq-7010 Dual-core ARM A9 CPU, an Epiphany Multi-core Accelerator, 1GB of RAM, USB 2.0 support, Gigabit Ethernet,"

        Supercomputing... you keep using that word. I do not think it means what you think it means.



        • #5
          I imagine the readers here will like the detailed reference manuals.
          http://www.kickstarter.com/projects/...e/posts/323691

          There is also this, which is a bit lighter to read: http://www.adapteva.com/news/adaptev...ps-less-watts/



          • #6
            Has anyone tested Gallium3D on that thing?



            • #7
              Originally posted by chuckula View Post
              Yeah... "Expected to be on the Parallella computer is a Zynq-7010 Dual-core ARM A9 CPU, an Epiphany Multi-core Accelerator, 1GB of RAM, USB 2.0 support, Gigabit Ethernet,"

              Supercomputing... you keep using that word. I do not think it means what you think it means.
              It is certainly a moving target. I remember Apple marketing some PowerPC computers with the term as well, but likely the best comparison we can bring up is in flops/W to actual current day supercomputers. Those seem to hover about 2.1Gflops/W (and that model is #1 of top500 as well). The Epiphany IV 64-core processor is advertised as 100Gflops (peak) with 2W power consumption (max), placing it at nearly 50Gflops/W; but obviously you need a power supply, memory and so on, so it's more appropriate to check the whole board. That has a quoted typical consumption of 5W, so if we bump that up by 2W, and then factor in a pessimistic 80% efficient power supply, we get 11.4Gflops/W. It's certainly in the ballpark, though you'd need thousands of them to make a true supercomputer. The base model (16 cores) has the same power consumption, so just divide by 4.
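              The arithmetic above can be double-checked with a quick script, using the figures as quoted (100 Gflops peak, 2 W chip max, 5 W typical board draw plus a 2 W margin, 80% efficient supply):

```python
# Gflops/W back-of-envelope, numbers as quoted in the post.
peak_gflops = 100.0              # Epiphany IV advertised peak
chip_watts = 2.0                 # advertised max chip power
board_watts = 5.0 + 2.0          # quoted typical board draw, bumped by 2 W
wall_watts = board_watts / 0.80  # pessimistic 80% efficient power supply

chip_ratio = peak_gflops / chip_watts    # chip alone
board_ratio = peak_gflops / wall_watts   # whole board at the wall
print(chip_ratio, round(board_ratio, 1))  # 50.0 11.4
```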



              • #8
                I predict failure

                I don't see what this product is good for. Is it parallel computing just for the sake of it? 90 GFLOPS @ 5 watts is good, I suppose, but it's not enough if you really need to do heavy calculations, and in that case the 5 watts doesn't matter that much. Also, the 1 GB of RAM is a joke for anything but the most trivial tasks. Then what good are 64 cores?
                Speaking of 64 cores, few problems scale linearly with cores. This means that the actual performance will be way less than the theoretical 90 GFLOPS.
                This is a solution in search of a problem.
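                The scaling caveat is essentially Amdahl's law; a toy calculation (my numbers, purely illustrative) shows how far below peak a 64-core part can land even with a mostly parallel workload:

```python
# Amdahl's law: speedup on n cores when a fraction p of the work parallelizes.
def speedup(n, p):
    return 1.0 / ((1.0 - p) + p / n)

# Even at 95% parallel, 64 cores give only ~15x, not 64x.
print(round(speedup(64, 0.95), 1))  # 15.4
```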



                • #9
                  I guess you're going to be able to mine bitcoins with it quite efficiently.



                  • #10
                    Originally posted by Staffan View Post
                    I don't see what this product is good for. Is it parallel computing just for the sake of it? 90 GFLOPS @ 5 watts is good, I suppose, but it's not enough if you really need to do heavy calculations, and in that case the 5 watts doesn't matter that much. Also, the 1 GB of RAM is a joke for anything but the most trivial tasks. Then what good are 64 cores?
                    Speaking of 64 cores, few problems scale linearly with cores. This means that the actual performance will be way less than the theoretical 90 GFLOPS.
                    This is a solution in search of a problem.
                    Something I can agree on; it is in search of problems. That's why we want it priced at a point where people start thinking about using it.

                    You seem to be stating that there is a class of "heavy calculations" that consistently requires more than 1 GB of RAM and where power is not an issue. I disagree with the implicit claim that this is the only set of problems where a considerably parallel processor is an advantage; for instance, it rules out real-time, low-latency video processing.



                    • #11
                      I think there are quite a few graphics and multimedia tasks that will run pretty well on it. So while the Raspberry Pi is very cool and cheap, if a bit disappointing as a desktop (speaking as a Pi owner), the Parallella will be as fast as a normal desktop for some things (its ARM CPU is already several times faster than the Pi's ARMv6).

                      Of course the 16- and 64-core versions are not actual supercomputers by today's standards (I work with a 48-core Opteron machine and would not call that a supercomputer). It's a step on the way to 1024- and 4096-core chips. The limit in big supercomputing is power usage. I have seen HPC clusters where thousands of 3-year-old servers are thrown out because it's cheaper to replace them with a few hundred new machines than to pay for the electricity to run them.

                      There are many approaches to improving flops/watt. You can assume that standard CPUs will get a bit better every year by themselves, or you can try to come up with a whole new way of doing things. One of these is GPUs, where you make heavy use of SIMD (single instruction, multiple data), which is great when you want to do exactly the same operation on each data value, but hopeless when you don't. Another is to put lots of full x86 cores on a single die, like Intel's MIC. Epiphany is sort of a halfway point: lots of simple but still capable independent cores on a chip. They also think that a network-like memory system will be more efficient than a cache hierarchy. It's hard to know who's right; time will tell.
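                      The SIMD point can be sketched with a toy Python stand-in (plain lists, not real vector code): a uniform operation maps straight onto SIMD lanes, while per-element branching forces a SIMD unit to execute both paths and mask out lanes.

```python
data = [0.0, 1.0, 2.0, 3.0]

# SIMD-friendly: the same operation on every element; a vector unit
# (or a GPU warp) can apply this to all lanes in one instruction.
uniform = [x * 2.0 + 1.0 for x in data]

# SIMD-hostile: each element takes a different path, so a SIMD unit
# must execute both branches and mask off lanes, wasting throughput.
branchy = [x * 2.0 if x % 2 == 0 else x - 1.0 for x in data]

print(uniform)  # [1.0, 3.0, 5.0, 7.0]
print(branchy)  # [0.0, 0.0, 4.0, 2.0]
```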



                      • #12
                        Originally posted by ssam View Post
                        Of course the 16- and 64-core versions are not actual supercomputers by today's standards (I work with a 48-core Opteron machine and would not call that a supercomputer). It's a step on the way to 1024- and 4096-core chips. The limit in big supercomputing is power usage. I have seen HPC clusters where thousands of 3-year-old servers are thrown out because it's cheaper to replace them with a few hundred new machines than to pay for the electricity to run them.
                        Dear sir, may you divulge the location of that particular dumpster?



                        • #13
                          Originally posted by YannV View Post
                          It is certainly a moving target. I remember Apple marketing some PowerPC computers with the term as well, but likely the best comparison we can bring up is in flops/W to actual current day supercomputers. Those seem to hover about 2.1Gflops/W (and that model is #1 of top500 as well). The Epiphany IV 64-core processor is advertised as 100Gflops (peak) with 2W power consumption (max), placing it at nearly 50Gflops/W; but obviously you need a power supply, memory and so on, so it's more appropriate to check the whole board. That has a quoted typical consumption of 5W, so if we bump that up by 2W, and then factor in a pessimistic 80% efficient power supply, we get 11.4Gflops/W. It's certainly in the ballpark, though you'd need thousands of them to make a true supercomputer. The base model (16 cores) has the same power consumption, so just divide by 4.
                          Communication overhead will swallow all the advantages.

                          I prefer GPU or Intel MIC, even for personal usage.



                          • #14
                            Originally posted by zxy_thf View Post
                            Communication overhead will swallow all the advantages.

                            I prefer GPU or Intel MIC, even for personal usage.
                            It depends on the application. The external links (4, one on each side) each operate at 2GB/s, while internal rates are higher ("64GB/s Network-On-Chip Bisection Bandwidth"). As every processor in the Epiphany has a DMA unit, moving data around may not be all that costly.

                            There are three main reasons for me to prefer the Parallella: first the openness, second the cost, and third the low power needs. The architectural differences may come into play later. This doesn't mean the GPU loses its place, but if the Parallella succeeds it may eventually lower the price of that MIC.
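                            One way to frame the communication question is arithmetic intensity; a rough figure from the link numbers quoted above (my back-of-envelope arithmetic, not from the board documentation):

```python
peak_gflops = 100.0       # Epiphany IV advertised peak
offchip_gb_s = 4 * 2.0    # four external links at 2 GB/s each

# Flops needed per byte of off-chip traffic to stay compute-bound
# rather than bandwidth-bound:
intensity = peak_gflops / offchip_gb_s
print(intensity)  # 12.5 flops/byte
```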



                            • #15
                              For the cost it might be fun to play with. It really comes down to how easy it is to port existing code over to run on it. 90 GFLOPS does seem kind of low considering there are teraflop boards out there (at dramatically higher cost).

                              How well it handles double precision is of most interest to me, but that does push the 1 GB of RAM a bit. Single precision would be more interesting to those wanting to leverage this thing as a 3D video accelerator.

