HP Launches Their Low-Power Moonshot Servers


  • HP Launches Their Low-Power Moonshot Servers

    Phoronix: HP Launches Their Low-Power Moonshot Servers

    For the better part of two years now, HP has been working on "Project Moonshot", which the company hopes will be revolutionary: a new, ultra-energy-efficient server architecture. Moonshot began with Calxeda-based ARM SoCs, but in the end HP settled for Intel Atom processors. Released today was HP's Moonshot system based on the Intel Atom S1200...

    http://www.phoronix.com/vr.php?view=MTM0NjA

  • #2
    I much prefer the AMD offerings from HP's MicroServer line. While not 1U (that would be awesome though), the N36L, N40L and now N54L are awesome servers.



    • #3
      I don't fully understand the point of using ARM in servers. Home servers I can understand, because most handle tasks that a PIII could do; nearly all modern ARM processors are faster than a PIII at about 1/6th the power draw. But what's the point in mainframe or company servers? ARM doesn't have the grunt to outperform an Opteron or Xeon (and if it did, it would consume more power), and it doesn't have the multithreaded capabilities of a GPU cluster. Also, unless you virtualize the ARM server, setting up ARM devices is considerably more tedious than x86.

      I like ARM, I hope it becomes a major contender in the PC market, but I just don't understand what gain it has in server situations.



      • #4
        They use Atom CPUs, not ARM:
        Moonshot began with Calxeda-based ARM SoCs, but in the end HP settled for Intel Atom processors.



        • #5
          Originally posted by Nuc!eoN View Post
          They use Atom CPUs, not ARM:
          I know, I was just asking in general.



          • #6
            Because we always have to visualize, here are a few pics explaining it a little better.

            [attached images not preserved]
            Apparently, they want to have this rack where you can slide in all sorts of modules, be it x86, ARM, etc., and build a cluster. It does make sense: ARM CPUs for light loads with low power usage, x86 CPUs for heavy lifting and intensive tasks. Obviously you can't run one process and switch it between various architectures, but you could have one enclosure with an ARM storage 'bit', an ARM firewall, and x86 for your VMs.



            • #7
              Originally posted by schmidtbag View Post
              I don't fully understand the point of using ARM in servers. Home servers I can understand, because most handle tasks that a PIII could do; nearly all modern ARM processors are faster than a PIII at about 1/6th the power draw. But what's the point in mainframe or company servers? ARM doesn't have the grunt to outperform an Opteron or Xeon (and if it did, it would consume more power), and it doesn't have the multithreaded capabilities of a GPU cluster. Also, unless you virtualize the ARM server, setting up ARM devices is considerably more tedious than x86.
              Actually I wonder why they decided not to use ARM. ARM should be the better choice compared to Atom CPUs.
              What I generally don't understand is why they can use such low-power CPUs at all. Normally servers are full of Opterons/Xeons, as you've already mentioned.



              • #8
                Originally posted by Nuc!eoN View Post
                Actually I wonder why they decided not to use ARM. ARM should be the better choice compared to Atom CPUs.
                What I generally don't understand is why they can use such low-power CPUs at all. Normally servers are full of Opterons/Xeons, as you've already mentioned.
                There are server farms out there that usually have 1 CPU to monitor everything in a cluster (I believe IBM did this once, with each rack having maybe 8 or so POWER7 servers and 1 quad-core Opteron to control them all), in which case either ARM or Atom would probably do a good job. The only reason I see Atom being used over ARM is compatibility, because otherwise ARM would be not only the faster but also the cheaper choice.



                • #9
                  Originally posted by schmidtbag View Post
                  I don't fully understand the point of using ARM in servers. Home servers I can understand, because most handle tasks that a PIII could do; nearly all modern ARM processors are faster than a PIII at about 1/6th the power draw. But what's the point in mainframe or company servers? ARM doesn't have the grunt to outperform an Opteron or Xeon (and if it did, it would consume more power), and it doesn't have the multithreaded capabilities of a GPU cluster. Also, unless you virtualize the ARM server, setting up ARM devices is considerably more tedious than x86.

                  I like ARM, I hope it becomes a major contender in the PC market, but I just don't understand what gain it has in server situations.
                  There are workloads that use large numbers of threads/processes, don't need much performance per thread/process, but are not suited to run on GPUs. Webservers, for example, would benefit from running on a lot of small ARM cores instead of a few huge Xeon/Opteron cores, AFAIK.



                  • #10
                    Originally posted by Vim_User View Post
                    There are workloads that use large numbers of threads/processes, don't need much performance per thread/process, but are not suited to run on GPUs. Webservers, for example, would benefit from running on a lot of small ARM cores instead of a few huge Xeon/Opteron cores, AFAIK.
                    Yep, this is designed to go into large farms that mostly need to run lightweight webservers. That runs much more power-efficiently on these low-power CPUs than on typical server CPUs.

                    However, it's a very specialized field. You certainly wouldn't want to run a database, or any app that needs good single-thread performance.
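The trade-off above is easy to put into numbers. A minimal perf-per-watt sketch, where every figure (requests/sec, wattages) is a made-up assumption for illustration, not a measurement of any real Xeon, Atom, or ARM part:

```python
# Illustrative back-of-the-envelope perf/watt comparison for a
# request-serving workload. All figures are hypothetical assumptions.

def requests_per_watt(requests_per_sec: float, watts: float) -> float:
    """Throughput per watt for a node serving lightweight requests."""
    return requests_per_sec / watts

# Assumed: one big two-socket box vs. one small low-power cartridge.
big_node = requests_per_watt(requests_per_sec=40_000, watts=400)
small_node = requests_per_watt(requests_per_sec=2_500, watts=15)

print(f"big node:   {big_node:.1f} req/s per watt")    # -> 100.0
print(f"small node: {small_node:.1f} req/s per watt")  # -> 166.7
```

With these assumed figures the small nodes win on perf/watt, but the argument only holds while each request is light enough that single-thread speed never becomes the bottleneck, which is exactly the "very specialized field" caveat.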



                    • #11
                      @vim_user and smitty

                      Those ideas did cross my mind, but I guess it comes down to how much more cost effective ARM is compared to using a retired high-workload server or just simply a low-end AMD server.

                      As another quick question - do any of the 64 bit ARM systems allow replaceable memory, or is it all SoC? I could see a use for ARM in a server market if the 64 bit models allow regular DDR3 DIMMs or SO-DIMMs - sometimes tasks are very memory demanding but not so CPU demanding. I suppose ARM would also be handy if it were used as a central backup system, assuming some of the server models have bundles of SATA/SAS ports.



                      • #12
                        Originally posted by schmidtbag View Post
                        Those ideas did cross my mind, but I guess it comes down to how much more cost effective ARM is compared to using a retired high-workload server or just simply a low-end AMD server.
                        Major costs in running a server nowadays are not the price of the hardware, but the price of the electricity needed to run it and the energy needed to cool the system. Low-power ARM systems are almost unbeatable here for workloads that don't cope well with a few heavy x86 cores.

                        As another quick question - do any of the 64 bit ARM systems allow replaceable memory, or is it all SoC? I could see a use for ARM in a server market if the 64 bit models allow regular DDR3 DIMMs or SO-DIMMs - sometimes tasks are very memory demanding but not so CPU demanding. I suppose ARM would also be handy if it were used as a central backup system, assuming some of the server models have bundles of SATA/SAS ports.
                        As you can see in this article, especially the picture of the machine, those machines seem to use standard RAM and have at least one implementation as a storage server:
                        Gopi also unveiled three server reference designs that AppliedMicro has come up with, to show server makers what they can build. They’re dubbed X-Memory, X-Compute and X-Storage, depending on the target application.

                        The X-Storage system is aimed at Hadoop-type analytics applications, and combines a sea of hard disks with a single X-Gene server board. It had a total 36TB of storage, Gopi said.
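The "electricity outweighs hardware" point is easy to sanity-check with rough numbers. A sketch, where the wattages, lifetime, electricity price, and PUE are all assumptions chosen for illustration:

```python
# Rough lifetime electricity cost for a server, including cooling
# overhead via PUE (power usage effectiveness). All inputs are
# assumed example values, not data for any real machine.

HOURS_PER_YEAR = 24 * 365  # 8760

def lifetime_energy_cost(watts: float, years: float,
                         price_per_kwh: float, pue: float) -> float:
    """Dollars spent on electricity over the server's service life."""
    kwh = watts / 1000 * HOURS_PER_YEAR * years
    return kwh * pue * price_per_kwh

# Assumed: 400 W x86 box vs. 60 W low-power node, 4-year life,
# $0.12/kWh, PUE of 1.6.
x86 = lifetime_energy_cost(400, 4, 0.12, 1.6)  # -> ~$2691
arm = lifetime_energy_cost(60, 4, 0.12, 1.6)   # -> ~$404
print(f"x86 node over 4y:       ${x86:,.0f}")
print(f"low-power node over 4y: ${arm:,.0f}")
```

Under these assumptions the 400 W box burns electricity on the order of its own purchase price over its lifetime, which is why the per-node wattage matters so much at scale.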



                        • #13
                          Originally posted by Vim_User View Post
                          Major costs in running a server nowadays are not the price of the hardware, but the price of the electricity needed to run it and the energy needed to cool the system. Low-power ARM systems are almost unbeatable here for workloads that don't cope well with a few heavy x86 cores.
                          It's not just the price of the energy directly - most data centers are limited by the amount of power they can supply internally, which means increasing power efficiency lets you stick a lot more servers in one location.

                          Running 1 data center is a lot, lot cheaper than running 2 completely separate ones.

                          None of this particularly matters for your average company that just runs a dozen servers to power everything. It's the major data centers that have thousands upon thousands that are interested in these types of systems.
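The power-envelope argument comes down to one division. A sketch, with an assumed facility feed and assumed per-node wattages (not Moonshot specs):

```python
# How many servers fit under a fixed power feed. The budget and
# per-node draws below are assumptions for illustration only.

def nodes_per_budget(budget_watts: int, node_watts: int) -> int:
    """Whole nodes that fit under a fixed power budget."""
    return budget_watts // node_watts

BUDGET = 1_000_000  # assumed 1 MW of usable IT power in one facility

print(nodes_per_budget(BUDGET, 500))  # conventional 500 W servers -> 2000
print(nodes_per_budget(BUDGET, 50))   # 50 W low-power nodes -> 20000
```

Ten times the nodes in the same building is the difference between one data center and several, which is the consolidation point made above.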



                          • #14
                            Originally posted by smitty3268 View Post
                            It's not just the price of the energy directly - most data centers are limited by the amount of power they can supply internally, which means increasing power efficiency lets you stick a lot more servers in one location.

                            Running 1 data center is a lot, lot cheaper than running 2 completely separate ones.

                            None of this particularly matters for your average company that just runs a dozen servers to power everything. It's the major data centers that have thousands upon thousands that are interested in these types of systems.
                            That said, it would be fun if it were possible to buy a tiny consumer version of that backplane box - scaling it down to e.g. four slots should limit the amount of switching hardware and such to a more manageable price, and the blades themselves ought to be fairly cheap. Not going to happen, but it would have been neat.



                            • #15
                              Originally posted by dnebdal View Post
                              That said, it would be fun if it were possible to buy a tiny consumer version of that backplane box - scaling it down to e.g. four slots should limit the amount of switching hardware and such to a more manageable price, and the blades themselves ought to be fairly cheap. Not going to happen, but it would have been neat.
                              I would like to have one of these in an even smaller version, with one or two of those SoCs; it should be a powerful but cheap and energy-efficient home server, and maybe even fanless. It would be a nice improvement over my Atom home server.
