Calxeda Shows Off 192-Core ARM Ubuntu Server

  • Calxeda Shows Off 192-Core ARM Ubuntu Server

    Phoronix: Calxeda Shows Off 192-Core ARM Ubuntu Server

    Kicking off the Ubuntu 12.10 Developer Summit is a keynote by Mark Shuttleworth, where he and Calxeda just showed off a 192-core ARM server...


  • #2
    "Being shown off publicly for the first time today is a 48 node, 192-core Calxeda ARM server in a 2U form factor."

    So this is 48 quad-core ARM chips inside the same box?

    Damn, I was looking forward to a massive-core-count shared-memory machine.

    • #3
      Wow, this sounds really nice for all sorts of cluster/hpc tasks. I recently saw a demo of a "cluster in a box" built with a bunch of mATX Atom boards, but this sounds a lot nicer!

      Now we just need one with Tegra chips that support CUDA/OpenCL!
      Last edited by TechMage89; 07 May 2012, 03:31 PM.

      • #4
        Slightly off topic, and a noob question (you've been warned).


        Why can't multicore chips like ARM (or, better, a bunch of them like in the server here) be used in desktop environments and compete with x86? Does it have to do with apps not being multithreaded, or are there other technical reasons?

        To give a better example: Tilera offers a 64-core low-power chip that is pitched as good for servers, but it won't cut it for desktop use.

        • #5
          Because most (non-Linux) desktop software *only* runs on x86, and good luck getting hundreds of companies to port their stuff. Microsoft is releasing an ARM version of Windows 8 for the tablet market, I believe, but it remains to be seen how much adoption it will get.

          Also, taking advantage of more than 4 cores (or in some cases, more than 2) is not easy for some tasks; it involves rethinking, and rewriting, algorithms for that kind of parallelism (rough sketch below).

          It would be great for stuff like video editing and gaming (once the games catch up to using that many threads, there's plenty of work to divide up), though.
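
          Not from the original post, just a minimal sketch of what dividing work across cores can look like, in C with OpenMP; the array, the loop body, and the gcc -fopenmp build flag are all assumptions made for the illustration:

          #include <stdio.h>
          #include <stdlib.h>
          #include <omp.h>                        /* build with: gcc -fopenmp split.c -o split */

          int main(void)
          {
              enum { N = 1 << 20 };
              float *frame = malloc(N * sizeof *frame);   /* stand-in for e.g. video samples */
              if (!frame)
                  return 1;
              for (int i = 0; i < N; i++)
                  frame[i] = (float)i;

              /* Each core gets a slice of the loop. This only works because the
               * iterations are independent of each other -- finding (or creating)
               * that independence is the "rethinking the algorithm" part. */
              #pragma omp parallel for
              for (int i = 0; i < N; i++)
                  frame[i] = frame[i] * 0.5f + 1.0f;

              printf("max threads: %d, frame[42] = %.1f\n", omp_get_max_threads(), frame[42]);
              free(frame);
              return 0;
          }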

          • #6
            no pics, no proof.

            also, nerd porn?!

            • #7
              32-bit

              I guess it is 32-bit, not 64-bit.

              We need ARMv8 with 64-bit support.
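
              Not from the post, just a quick sketch: one way to check whether the userland you're running is 32-bit or 64-bit is to look at the pointer size.

              #include <stdio.h>

              int main(void)
              {
                  /* 4 bytes on a 32-bit ARM userland (ARMv7 and earlier),
                   * 8 bytes on a 64-bit one (ARMv8 / AArch64). */
                  printf("pointer size: %zu bytes\n", sizeof(void *));
                  return 0;
              }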

              • #8
                Originally posted by 89c51 View Post
                Slightly off topic, and a noob question (you've been warned).


                Why can't multicore chips like ARM (or, better, a bunch of them like in the server here) be used in desktop environments and compete with x86? Does it have to do with apps not being multithreaded, or are there other technical reasons?

                To give a better example: Tilera offers a 64-core low-power chip that is pitched as good for servers, but it won't cut it for desktop use.
                Reasons why:

                1. Cost: having a huge number of cores like this is much more expensive than a desktop. If you can afford it, great; it'll be blazing fast. If not, sucks to be you (i.e. 99.9% of the population).

                2. Serial performance: Machines like this are often designed without much regard for serial performance. The assumption is that there just won't be a whole lot of serial work required, so it's OK to make each core about as fast (or slow) as a Pentium III @ 500 MHz, give or take. Some supercomputers do have hundreds or thousands of individual cores that are as serially powerful as the highest-end desktop cores we have today, but then see point 1 about cost.

                3. Processor optimization: There are fixed-function capabilities in desktop x86 processors (especially recently) that accelerate common CPU-intensive tasks that most (or at least, many) people will want. It's simply (much) more efficient to put these things into a CPU instruction and run them on bare metal than to do them at the software layer. Examples include encryption, video encoding/decoding, virtualization, and media processing acceleration with streaming vector instruction sets like SSE in all of its incarnations (there's a small SSE sketch after this list); and let's not forget the most recent one, having a GPU inside the walls of the CPU itself. Some *very* high-end CISC or SoC many-core boxes do have similar fixed-function features as well, but I refer you back to the first point if you think you can get 48 cores and up with comparable features at the same cost as a desktop.

                Also on the topic of processor optimization, you often see desktop processors with things like "Turbo Mode", where they can consume vastly larger quantities of power for short bursts when real-time responsiveness is needed on the desktop. Again, the server / cluster workloads that these many-CPU systems are designed for don't demand "real-time desktop responsiveness in under 10 ms". They're just looking for throughput, which means processing lots and lots of calculations, without as much emphasis on getting *certain* instructions through the pipeline faster than others (that requirement adds a scheduling complexity that is undesirable for workloads where real-time response is unnecessary).

                4. Form factor: In case you didn't catch it in the article, this uses a 2U server chassis. Holy crap that's big -- at least compared to a desktop. Most of the space within a typical desktop case is occupied by a large 5.25" DVD drive, a large power supply, several large hard drives, a large graphics card, and a little bitty tiny CPU on a tiny motherboard at the back of the case. The 48-, 64-, 96-core servers are almost solid silicon with CPUs; they don't have these big bulky components. The power supply in those cases is surprisingly small considering how much power it draws. Miniaturization is expensive, and people aren't going to want to carry around an 80-pound laptop or have a desktop with a motherboard as big as a 2U's (that increases the height and depth of the case and requires at least as much depth as a full ATX board, if you use a desktop-shaped chassis).

                Power draw might also be an interesting factor... some people consider their energy bill when shopping for a computer. Even 48 "low power" chips are going to be much more power hungry than a standard desktop quad-core. 5 watts per CPU across 48 CPUs is 240 watts just for the CPUs, whereas a "huge" desktop CPU (Intel "enthusiast performance" line, overclocked) tops out around 150 watts.

                OK, that's it. I could probably come up with more if I got into the software side, compilers, instruction set architecture, cache coherency, etc. but I'll leave it at these few points which I feel are the most convincing.
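
                To make point 3 a bit more concrete (my own sketch, not anything from the article or Calxeda): with SSE, one instruction operates on four packed floats at once instead of one. The function and array names below are made up for the illustration; gcc enables SSE by default on x86-64.

                #include <stdio.h>
                #include <xmmintrin.h>   /* SSE intrinsics */

                /* Scale n floats, four per instruction where possible. */
                static void scale_floats_sse(float *dst, const float *src, float scale, int n)
                {
                    __m128 s = _mm_set1_ps(scale);                /* broadcast scale into all 4 lanes */
                    int i = 0;
                    for (; i + 4 <= n; i += 4) {
                        __m128 v = _mm_loadu_ps(src + i);         /* load 4 floats */
                        _mm_storeu_ps(dst + i, _mm_mul_ps(v, s)); /* multiply 4 at once, store 4 */
                    }
                    for (; i < n; i++)                            /* scalar tail for leftovers */
                        dst[i] = src[i] * scale;
                }

                int main(void)
                {
                    float in[8]  = {1, 2, 3, 4, 5, 6, 7, 8};
                    float out[8] = {0};
                    scale_floats_sse(out, in, 0.5f, 8);
                    printf("out[7] = %.1f\n", out[7]);   /* 8 * 0.5 = 4.0 */
                    return 0;
                }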

                • #9
                  Thanks for the replies.

                  I have no idea about the cost of the Tilera I mentioned in my first post, but when I read through the specs it had a typical power consumption of 25 watts (for the 36-core one; the 16-core had even lower power requirements), and the first thing that popped into my mind was "wow, you could build a nice laptop with that". It runs Linux, and with all those cores you can probably virtualize anything you want.

                  But probably that's not the case.

                  • #10
                    Originally posted by 89c51 View Post
                    Why can't multicore chips like ARM (or, better, a bunch of them like in the server here) be used in desktop environments and compete with x86? Does it have to do with apps not being multithreaded, or are there other technical reasons?
                    You absolutely can; see the TrimSlice, for example.
