Parallella: Low-Cost Linux Multi-Core Computing


  • frantaylor
    replied
    Originally posted by chuckula:
    Supercomputing... you keep using that word. I do not think it means what you think it means.
    The SGI Onyx was sold as a "supercomputer".

    The promotional materials still exist.

    Does that mean it's not a "supercomputer" today? When did the promotional materials pass from truth to non-truth?

    Today any Android device will blow its doors off.

    YOU are the one who thinks "Supercomputer" has some magical mysterious meaning that you apparently refuse to share with the rest of us.

    Or perhaps you are the ultimate arbiter of the English language, here to correct all our speling errors and grammar misteaks.
    Last edited by frantaylor; 29 October 2012, 12:00 PM.



  • V10lator
    replied
    Originally posted by YannV:
    As for the "much smaller" quote, that's simply a missing word. It should read "much smaller than high end CPUs and GPUs", as is clear if you include the context of the sentence. They also misspelled envelope in the same paragraph, suggesting a bit of a rush job.
    Thanks for the clarification; I'm not a native English speaker.

    But I still think they used the term GPU for a reason. As far as I can see, the primary CPU is the Zynq-7010 dual-core ARM Cortex-A9, so the Epiphany chip could be used for (small) GPU tasks; its parallelism speaks for itself.
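
    Purely to illustrate that structure, here is a host-side sketch in plain C with pthreads: a toy "shader" applied to strips of an image, one strip per worker, the way such a task would be split across the 16 Epiphany cores. The pthreads stand-in and the toy kernel are my own illustration, not Adapteva's SDK:

        #include <pthread.h>
        #include <stdio.h>

        /* GPU-style work is "run the same small kernel over every element".
           On the Parallella the strips below would be farmed out to the
           Epiphany cores through the SDK instead of to host threads. */

        #define WIDTH  256
        #define HEIGHT 256
        #define NCORES 16                      /* one strip per core */

        static float image[HEIGHT][WIDTH];

        static void *shade_strip(void *arg)
        {
            long core = (long)arg;
            int rows = HEIGHT / NCORES;

            for (int y = core * rows; y < (core + 1) * rows; y++)
                for (int x = 0; x < WIDTH; x++)
                    image[y][x] = (float)(x ^ y) / 255.0f;  /* toy "shader" */
            return NULL;
        }

        int main(void)
        {
            pthread_t workers[NCORES];

            for (long i = 0; i < NCORES; i++)
                pthread_create(&workers[i], NULL, shade_strip, (void *)i);
            for (int i = 0; i < NCORES; i++)
                pthread_join(workers[i], NULL);

            printf("center pixel: %f\n", image[HEIGHT / 2][WIDTH / 2]);
            return 0;
        }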



  • YannV
    replied
    Originally posted by TAXI:
    BTW: They are not as open as they claim to be; from the FAQ:
    They are exactly as open as they seem to be; they're up front about this, and it's there because it is the exception (literally everything else they've been asked for, they are opening up). The source this particular question refers to is that of the processor itself, i.e. like the OR1200 or openMSP430 on OpenCores. That is only useful to hardware developers (like me), who could use it to make their own compatible processor chips more easily (i.e. by simply copying). For board designers it's the datasheets and board design files that matter (which they will publish), and for software developers the architecture documents (already published) and the development tools (already free, to be published in the SDK). They are at least as open as any chip producer I've heard of.

    As for the "much smaller" quote, that's simply a missing word. It should read "much smaller than high end CPUs and GPUs", as is clear if you include the context of the sentence. They also misspelled envelope in the same paragraph, suggesting a bit of a rush job.
    Last edited by YannV; 27 October 2012, 04:43 PM.



  • V10lator
    replied
    Originally posted by RussianNeuroMancer:
    According to Adapteva, the board doesn't have a GPU.
    Why do they use the term GPU then?
    The Epiphany chips are much smaller high end CPUs and GPUs
    BTW: They are not as open as they claim to be; from the FAQ:
    Will you open source the Epiphany chips?

    Not initially, but it may be considered in the future.



  • RussianNeuroMancer
    replied
    Originally posted by przemoli:
    Anyone tested Gallium3D on that thing?
    According to Adapteva, the board doesn't have a GPU.



  • zxy_thf
    replied
    Originally posted by Rigaldo:
    Guys, don't forget that the 16 and 64 cores are only to get started. They plan for ~1,000 cores in two years.
    This design would clearly challenge current OS kernels if they want to run a single kernel across 1,000 cores.
    Practical (in theory) solutions are still in the labs.

    Originally posted by YannV:
    It depends on the application. The external links (4, one on each side) each operate at 2GB/s, while internal rates are higher ("64GB/s Network-On-Chip Bisection Bandwidth"). As every processor in the Epiphany has a DMA unit, moving data around may not be all that costly.

    There are three main reasons for me to prefer the Parallella: first the openness, second the cost, and third the low power needs. The architectural differences may come in later. This won't mean the GPU loses its place; but if it succeeds it may lower the price for that MIC eventually.
    Sounds feasible, as long as the application doesn't need much communication.
    But I still doubt the price of the switches for the 2GB/s links; clearly there won't be many buyers for that kind of switch.

    Gigabit Ethernet switches are much cheaper, but they are not sufficient for large-scale clusters.



  • Rigaldo
    replied
    Guys, don't forget that the 16 and 64 cores are only to get started. They plan for ~1,000 cores in two years.



  • przemoli
    replied
    Originally posted by bnolsen:
    For the cost it might be fun to play with. It really ends up depending on how easy it is to port existing code over to run on it. 90 GFLOPS does seem kind of low considering there are teraflop boards out there (at dramatically higher cost).

    How well it handles double precision is of most interest to me, but that does push the 1GB of RAM a bit. Single precision would be more interesting to those wanting to try to leverage this thing as a 3D video accelerator.

    NO double precision. Only single precision, and not all operations are performed in hardware (division is not handled).
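
    To make that concrete: where divide isn't wired into the FPU, the compiler's runtime has to build it out of the multiplies and adds that are there. A minimal sketch in C of the classic Newton-Raphson reciprocal approach (the magic constant and iteration count are the textbook trick, not Adapteva's actual library routine):

        #include <stdint.h>
        #include <stdio.h>
        #include <string.h>

        /* Software float division: estimate 1/b with a bit trick, then
           refine with Newton-Raphson steps, r' = r * (2 - b*r), which
           need only multiply and add. Ignores zero/inf/NaN handling. */
        static float soft_div(float a, float b)
        {
            uint32_t i;
            float r;

            memcpy(&i, &b, sizeof i);
            i = 0x7EF311C3u - i;              /* crude guess at 1/b  */
            memcpy(&r, &i, sizeof r);

            r = r * (2.0f - b * r);           /* each step roughly   */
            r = r * (2.0f - b * r);           /* doubles the correct */
            r = r * (2.0f - b * r);           /* bits of precision   */
            return a * r;
        }

        int main(void)
        {
            printf("%f vs %f\n", soft_div(355.0f, 113.0f), 355.0f / 113.0f);
            return 0;
        }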

    What everyone here misses is the application in mobile, where low power and low cost can bring some benefits. This board will be one of (as the company hopes) many devices with that technology.

    I can see it as an "add-on" to or a "replacement" for current FPU coprocessors in mobile.



  • bnolsen
    replied
    For the cost it might be fun to play with. It really ends up depending on how easy it is to port existing code over to run on it. 90 GFLOPS does seem kind of low considering there are teraflop boards out there (at dramatically higher cost).

    How well it handles double precision is of most interest to me, but that does push the 1GB of RAM a bit. Single precision would be more interesting to those wanting to try to leverage this thing as a 3D video accelerator.
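
    As a rough back-of-envelope on how doubles push the 1GB (my own arithmetic, ignoring OS and buffer overhead):

        #include <math.h>
        #include <stdio.h>

        /* How many elements, and how large a square matrix, fit in 1GB
           at single vs double precision. Link with -lm. */
        int main(void)
        {
            const double ram = 1024.0 * 1024.0 * 1024.0;   /* 1GB */
            const int sizes[] = { 4, 8 };                  /* float, double */

            for (int i = 0; i < 2; i++) {
                double elems = ram / sizes[i];
                double side  = floor(sqrt(elems));
                printf("%d-byte elements: ~%.0fM fit, ~%.0f x %.0f matrix\n",
                       sizes[i], elems / 1e6, side, side);
            }
            return 0;
        }

    So a dense double-precision matrix tops out around 11,585 x 11,585 before the RAM is full, versus roughly 16,384 x 16,384 in single precision.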



  • YannV
    replied
    Originally posted by zxy_thf:
    Communication overhead will swallow all the advantages.

    I prefer a GPU or Intel MIC, even for personal use.
    It depends on the application. The external links (4, one on each side) each operate at 2GB/s, while internal rates are higher ("64GB/s Network-On-Chip Bisection Bandwidth"). As every processor in the Epiphany has a DMA unit, moving data around may not be all that costly.
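
    A quick back-of-envelope on those figures (my own arithmetic, using the 2GB/s and 64GB/s numbers quoted above):

        #include <stdio.h>

        /* Transfer time for a 16MB working set over one external eLink
           versus the quoted on-chip bisection bandwidth. Illustrative
           peak numbers only; real transfers see protocol overhead. */
        int main(void)
        {
            const double mb     = 16.0;
            const double elink  = 2.0 * 1024.0;    /* 2GB/s in MB/s  */
            const double onchip = 64.0 * 1024.0;   /* 64GB/s in MB/s */

            printf("16MB over one 2GB/s link: %.2f ms\n", mb / elink  * 1000.0);
            printf("16MB at 64GB/s on-chip:   %.2f ms\n", mb / onchip * 1000.0);
            return 0;
        }

    A 16MB working set crosses one link in under 8 ms at peak, so for workloads that keep reusing data on-chip the external links need not dominate.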

    There are three main reasons for me to prefer the Parallella: first the openness, second the cost, and third the low power needs. The architectural differences may come in later. This won't mean the GPU loses its place; but if it succeeds it may lower the price for that MIC eventually.

