Calxeda Claims 15x Advantage Over Intel Xeon


  • Calxeda Claims 15x Advantage Over Intel Xeon

    Phoronix: Calxeda Claims 15x Advantage Over Intel Xeon

    Calxeda has put out its first benchmark of their forthcoming Calxeda ARM Server. The company is claiming a 15x performance-per-Watt advantage over a recent Intel Xeon CPU...


  • #2
    Originally posted by phoronix View Post
    Phoronix: Calxeda Claims 15x Advantage Over Intel Xeon

    Calxeda has put out its first benchmark of their forthcoming Calxeda ARM Server. The company is claiming a 15x performance-per-Watt advantage over a recent Intel Xeon CPU...

    http://www.phoronix.com/vr.php?view=MTEyMzc
    Well, that's a joke. Let's see what happens when both servers are approximately the same performance.

    I fully expect ARM to win, but as badly as this test was setup to favor ARM I'm starting to have my doubts.



    • #3
      Originally posted by smitty3268 View Post
      Well, that's a joke. Let's see what happens when both servers are approximately the same performance.

      I fully expect ARM to win, but as badly as this test was setup to favor ARM I'm starting to have my doubts.
      What exactly do you mean by the same performance? Micro servers are being built for specific workloads and are showing nearly the same performance for a fraction of the power usage. A Pandaboard ES or Cotton Candy has excellent performance as a general-purpose computer running Ubuntu, all for a fraction of the cost and power. Once Cortex-A15 based systems start coming out at 2 GHz in quad- and eight-core varieties, along with the new 64-bit ARMv8 parts, ARM's advantage will be even greater. Also take into account that process nodes keep shrinking: 22 nm parts, and within a couple of years 18 nm and 16 nm parts, will be shipping, tilting the performance/power ratio even further in ARM's favor. Couple this with high-end GPUs like the NVIDIA Kepler or Mali T6xx series and an ARM laptop/desktop becomes even more of a reality.



      • #4
        Originally posted by smitty3268 View Post
        Well, that's a joke. Let's see what happens when both servers are approximately the same performance.

        I fully expect ARM to win, but as badly as this test was setup to favor ARM I'm starting to have my doubts.
        They're talking about performance PER WATT, which scales in a fairly linear manner once you're adding lots of CPU cores.
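A toy model makes that scaling point concrete (all per-core numbers below are invented for illustration, not Calxeda's or Intel's figures): with a fixed platform power plus a per-core draw, performance per watt rises toward the per-core perf/power ratio as cores are added.

```python
# Toy model: throughput scales with cores, power = fixed base + per-core draw.
# All numbers are made up for illustration only.
BASE_POWER_W = 10.0    # fixed platform power: chipset, memory, etc.
CORE_POWER_W = 5.0     # active power per core
CORE_PERF_RPS = 100.0  # requests/sec per core

def perf_per_watt(cores: int) -> float:
    perf = cores * CORE_PERF_RPS
    power = BASE_POWER_W + cores * CORE_POWER_W
    return perf / power

for n in (1, 4, 16, 64):
    print(n, round(perf_per_watt(n), 1))
# The ratio climbs toward CORE_PERF_RPS / CORE_POWER_W = 20 as cores grow,
# because the fixed base power gets amortized across more cores.
```

The limit depends only on the per-core figures, which is why many-core, low-power parts can post strong perf/W numbers even when each core is individually slow.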



        • #5
          Originally posted by Darkseider View Post
          What exactly do you mean by the same performance? Micro servers are being built for specific workloads and are showing nearly the same performance for a fraction of the power usage. A Pandaboard ES or Cotton Candy has excellent performance as a general-purpose computer running Ubuntu, all for a fraction of the cost and power. Once Cortex-A15 based systems start coming out at 2 GHz in quad- and eight-core varieties, along with the new 64-bit ARMv8 parts, ARM's advantage will be even greater. Also take into account that process nodes keep shrinking: 22 nm parts, and within a couple of years 18 nm and 16 nm parts, will be shipping, tilting the performance/power ratio even further in ARM's favor. Couple this with high-end GPUs like the NVIDIA Kepler or Mali T6xx series and an ARM laptop/desktop becomes even more of a reality.
          Hmm, I somewhat take back my comment. The performance of that server is much better than I realized.

          However, this gives me pause:

          The Intel (Sandybridge) platform is based on published TDP values for the CPU and I/O chipset, along with an estimate for DDR memory. Unfortunately, at the time of this blog post, we didn't have a way to measure actual power consumption with the same level of fine detail.
          I don't think the standard TDP numbers are generally very accurate. At least, I know they aren't on the desktop. Maybe it's better for Xeon servers, but I doubt it.

          Also, Apache is pretty much the best case for these servers; at least that was always true for those old low-power, many-core servers Sun used to make (I can't remember what they were called). But since Apache is actually a decent use case, that's valid enough.



          • #6
            They seem to be serving static content; no, Apache is far from the top in that. But if they're maxing the pipe either way...



            • #7
              Originally posted by Darkseider View Post
              What exactly do you mean by the same performance? Micro servers are being built for specific workloads and are showing nearly the same performance for a fraction of the power usage. A Pandaboard ES or Cotton Candy has excellent performance as a general-purpose computer running Ubuntu, all for a fraction of the cost and power. Once Cortex-A15 based systems start coming out at 2 GHz in quad- and eight-core varieties, along with the new 64-bit ARMv8 parts, ARM's advantage will be even greater. Also take into account that process nodes keep shrinking: 22 nm parts, and within a couple of years 18 nm and 16 nm parts, will be shipping, tilting the performance/power ratio even further in ARM's favor. Couple this with high-end GPUs like the NVIDIA Kepler or Mali T6xx series and an ARM laptop/desktop becomes even more of a reality.
              OK, I'm not the original person calling the comparison bogus, but here are some issues:
              1. They compare a measured value against the TDP of the Xeon, not its actual draw. The Xeon was at 15% load, and on top of that TDP != power draw.
              2. They have 4 GB in the ARM box and 16 GB in the Xeon. Using the same bogus TDP math, that extra 12 GB adds roughly an extra 12 W. Unfair!
              3. They exclude the hard drive and the power supply, which is not logical.
              4. The Xeon saturated the link at 15% load. You would never use that chip for that workload with a single NIC; you would have a 4-port board at least.
              5. The V2 Xeons are out, which bring performance increases and power savings.

              So here's my equally bogus, made-up comparison, assumptions in parentheses:
              Xeon E3-L-V2 (17 W at 100%) with a 4-NIC board (linear scaling), 4 GB RAM (4 W), HD (7 W), and PSU (~90% efficient):
              (6950 × 4) ÷ ((4 + 6.7 + (17 × 1) + 7) × 1.1) = 728.3 req/W

              Calxeda:
              5500 ÷ ((5.26 + 7) × 1.1) = 407.8 req/W

              Well, my-oh-my! Calxeda loses! Even if some of my assumptions and figures are off, it would certainly be nowhere near a 15x advantage.
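For what it's worth, that back-of-the-envelope math can be checked directly (every wattage and request rate below is the poster's stated assumption, not a measured value):

```python
# Reproducing the poster's made-up comparison; all figures are assumptions.
PSU_OVERHEAD = 1.1  # ~90%-efficient power supply modeled as a 1.1x multiplier

# Xeon E3-L-V2 with a 4-NIC board, assuming linear scaling across NICs.
xeon_reqs = 6950 * 4                                  # req/s across 4 NICs
# 4 W RAM + 6.7 W (unexplained in the post, perhaps chipset) + 17 W CPU + 7 W HD
xeon_watts = (4 + 6.7 + (17 * 1) + 7) * PSU_OVERHEAD
print(round(xeon_reqs / xeon_watts, 1))               # -> 728.3 req/W

# Calxeda node plus the same 7 W hard drive.
calxeda_reqs = 5500
calxeda_watts = (5.26 + 7) * PSU_OVERHEAD
print(round(calxeda_reqs / calxeda_watts, 1))         # -> 407.8 req/W
```

Under these assumptions the Xeon comes out ahead by under 2x either way, nowhere near a 15x gap in either direction.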



              • #8
                Originally posted by droidhacker View Post
                They're talking about performance PER WATT, which scales in a fairly linear manner once you're adding lots of CPU cores.
                But not with speed per core, which was my point. That ARM server is fast enough that I take back my original argument, though.

                But my point was that if a single core provides 15x more speed, it's no surprise that it would be 15x less efficient per watt.

                Edit - and again I take this back, if what the above person is saying is true. (Yep, it is.)
                The Sandybridge system saturated the single 1Gb NIC with less than 15% CPU utilization.
                I'll just give up now, since I don't have the time to actually look into this myself.

                But if the Xeon is saturating the link without maxing out the CPU, then it seems likely that the ARM CPU really does suck compared to it performance-wise.

                So my original argument is back: cut down that Intel CPU or beef up the ARM one, and things are likely to look very different. And it's just stupid to include the full TDP for one platform when you aren't even maxing it out.
                Last edited by smitty3268; 20 June 2012, 03:13 PM.
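The saturation argument can be made concrete with a rough extrapolation (assuming request throughput scales linearly with CPU utilization, which real servers only approximate; the 6950 req/s figure is the one assumed earlier in the thread):

```python
# Rough extrapolation, assuming req/s scales linearly with CPU utilization.
nic_limited_rps = 6950   # assumed request rate when the 1Gb NIC saturated
cpu_utilization = 0.15   # reported CPU load at that point

# If the NIC were not the bottleneck, the CPU could in principle handle:
cpu_limited_rps = nic_limited_rps / cpu_utilization
print(round(cpu_limited_rps))  # -> 46333
```

In other words, the benchmark is measuring the network link, not the Xeon, which is why charging the Xeon its full TDP makes the comparison look so lopsided.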



                • #9
                  Originally posted by kiwi_kid_aka_bod View Post
                  OK, I'm not the original person calling the comparison bogus, but here are some issues:
                  No need to go into details like you did. It's obviously such a bogus comparison that their press release just did them more harm than good. It actually gave me a chuckle.

                  I really hope Microsoft and Oracle invest heavily in their technology.




                  • #10
                    And now it's hit Engadget! Cue the misplaced fawning over the misrepresented, bogus results.

                    As an aside, the guy at http://www.servethehome.com/ has measured Xeon systems' power usage at idle and under load: typically a 70 W delta. So at 15% load that chip would have been drawing around a quarter of the value they put in their chart.

