Page 2 of 2
Results 11 to 15 of 15

Thread: Canonical Announces The Orange Box $12k USD Ubuntu Cluster Suitcase

  1. #11
    Join Date
    May 2014
    Posts
    2

    Default

    I love the idea of using NUCs to build a cluster, but I'm disappointed the price is so high. A single board with RAM and SSD retails at ~$600 (it could have been cheaper; I'm not sure how useful AMT is in this case). The GigE NIC is $150. Six grand I'd pay in a heartbeat, but twice that is hard to justify.
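
    Run the numbers and the complaint holds up. A rough back-of-envelope sketch, assuming the 10 nodes discussed elsewhere in the thread and the prices quoted above:

    ```python
    # Back-of-envelope DIY cost vs. the $12k Orange Box.
    # Assumes 10 nodes, as mentioned elsewhere in the thread.
    NODES = 10
    BOARD_RAM_SSD = 600   # one NUC board with RAM and SSD, ~$600 retail
    NETWORKING = 150      # GigE networking, per the post

    diy_total = NODES * BOARD_RAM_SSD + NETWORKING
    markup = 12_000 / diy_total
    print(diy_total)          # 6150 -> the "six grand" figure
    print(round(markup, 2))   # 1.95, i.e. nearly twice the DIY cost
    ```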

  2. #12
    Join Date
    Oct 2007
    Posts
    121

    Default Expensive

    I did read all the comments, and only the last one speaks loudly about PRICE.
    Perhaps it is because I am an economist, but PRICE / PERFORMANCE is my favorite benchmark, and it is usually lost here on Phoronix.

    The reply before this saved me from running the numbers: twice as expensive as building it yourself - worse than Apple.

    I would like to see ONE x86 Ubuntubook on the new Intel SoCs - instead of Chromebooks.

    ONE DEAL for web sales, perhaps on Amazon; you can get good pricing starting from 10,000 units.
    Canonical could spend 2 M USD or less on an Acer-Chromebook-alike and sell them as Ubuntubooks.
    Or even do it better: with 4 GB of RAM, a 300 to 500 GB HDD, and a 200 to 250 USD price, the Ubuntubook would be a best-selling netbook - with more than 20 Chromebook models to compete against, probably the best-selling notebook.

    Add to this a CHEAP Ubuntubox for 150 USD or even less, and they will begin to make some real hardware money.
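
    As a sanity check on the figures above (a sketch; the unit cost is just the low end of the post's 200 to 250 USD range):

    ```python
    # Does "2 M USD or less" cover 10,000 units? Using the post's own numbers.
    UNITS = 10_000
    UNIT_COST_USD = 200   # low end of the suggested 200-250 USD price range

    outlay = UNITS * UNIT_COST_USD
    print(outlay)  # 2000000 -> matches the "2 M USD or less" figure
    ```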

  3. #13
    Join Date
    May 2014
    Posts
    2

    Default

    I'm sure there's a lot of cost in the specialized custom case. It's well suited to their usage, trade shows and training seminars, but perhaps overkill for the rest of us. It's hard to tell from the pictures if the boards are easily replaceable or upgradeable. Something like Build-a-Blade for the NUC would be less expensive and more flexible. Call it "Cloud for the IT Guy/Gal"?

  4. #14
    Join Date
    Dec 2010
    Location
    MA, USA
    Posts
    1,385

    Default

    Quote Originally Posted by Imroy View Post
        Because this is based on Intel NUC boards. There is no i7 NUC. There is no Xeon NUC.
    Yes, I know they're NUCs, which can't be upgraded beyond an i5, but it still comes down to "why not find something else?". NUCs are small, but they're not the ONLY choice.

    Quote Originally Posted by Imroy View Post
        I guess because there are no good, affordable, server-class ARM boards. Most are either lowish-end Cortex-A7's or hobbled by USB-only I/O. The only server-class boards are far too expensive. Something based on i.MX6 (quad-core Cortex-A9 + SATA + GigE) could be half-decent - although it's limited to 4 GB of RAM and the GigE is limited to around 480 Mb/s.
    What you said really only applies if you're expecting an x86 replacement, which ARM is not capable of, nor should it be. ARM is good for handling many instances of simple tasks, which would be a waste of power and resources on an x86 platform. The only 2 reasons why this server is better as x86 are virtualization and Hadoop. It all depends on what you intend to use it for, but like I said, Canonical has made a dent in the ARM market, so I think they'd attract more interest making this an ARM-based server. Besides, you said yourself that ARM isn't perfect - if Canonical made their own ARM server that came prepared with everything you need out of the box, that could take away a lot of the "will this work?" questions and make a more attractive product.

    On a side note, only AArch64 has major software issues. 32-bit ARM, in my experience, handles almost everything I need it to.

    Quote Originally Posted by Imroy View Post
        There is a 16 port GigE switch in there. Can't you read?
    I didn't see that until after I had already posted.

    Quote Originally Posted by Imroy View Post
        As for storage, it's better for each node to have fast access to local storage. You don't want to use the network for everything, there's too much latency.
    Again, it depends on what you're doing. If you have an ARM server with gigabit Ethernet where all devices work toward a common goal, there wouldn't be much of an issue. However, if you can really take advantage of 10 SSDs with their own dedicated SATA ports, then yeah, a NAS is a HORRIBLE idea.

    Quote Originally Posted by Imroy View Post
        Maybe someone using this as a portable training or demo device?
    Hefty price for a demo. The wifi poses more of a risk than anything.

    Quote Originally Posted by Imroy View Post
        Seriously... have you never used a Linux system remotely? SSH and/or X11.
    Yes, I'm aware of remote access (I'm using it right now), but that's not my point... What I'm getting at is: what good will only ONE HDMI port do if you've got 10 systems? That's like having a house with 10 rooms where only 1 has a window. But apparently you can access the other 9 HDMI ports from the bottom of this device, so I guess bringing this up doesn't matter.
    Last edited by schmidtbag; 05-14-2014 at 09:53 AM.

  5. #15
    Join Date
    Apr 2013
    Posts
    103

    Default

    Quote Originally Posted by schmidtbag View Post
    Are you suggesting you get more performance on a quad core vs a quad core with HT? But suppose that isn't what you meant - Xeons are largely specialized in VMs. If HT were so ineffective, AMD would be dominating the server market with their TRUE 8, 10, 12, and 16 core opterons.
    Yes, you get more performance on a quad core vs. a quad core with HT. This assumes a hypervisor+VM workload. The same holds true with other CPU-bound operations like video transcoding.

    Opteron is the preferred chip for the virtualized server market due to its superior core density. AMD has a true 16 cores per socket; Intel maxes out at 10, I believe. For a standard 4-socket server, that means 64 real AMD cores per hypervisor vs. only 40 with Intel. All of our VMware servers use 16-core Opterons.
    Last edited by torsionbar28; 05-19-2014 at 01:31 PM.
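
    The core-density claim above works out as follows (a sketch using the post's per-socket figures, not verified spec numbers):

    ```python
    # Cores per 4-socket hypervisor host, per the post's figures.
    SOCKETS = 4
    OPTERON_CORES_PER_SOCKET = 16  # the post's "true 16 cores per socket"
    XEON_CORES_PER_SOCKET = 10     # the post's "maxes out at 10" estimate

    opteron_total = SOCKETS * OPTERON_CORES_PER_SOCKET
    xeon_total = SOCKETS * XEON_CORES_PER_SOCKET
    print(opteron_total, xeon_total)  # 64 40
    ```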
