Low(er) power server issues

  • curaga
    replied
    Originally posted by Adarion View Post
    I also wonder when there will be encryption chips like the VIA PadLock for insertion onboard or in PCI slots etc.
    There are, and there were before the PadLock. They have weak points, the biggest being the limited transfer rate of the PCI bus.
    Some of the first ones didn't even support DMA/bus-mastering, so they could actually increase the CPU load.


  • Adarion
    replied
    Originally posted by stevea View Post
    Thanks for your comments and especially the power figures, Adarion.
    No problem.
    By the way, on the HDDs: I don't know about all the other manufacturers, but Seagate's 7200.11/12 do quite well on power consumption, especially the single-platter ones (often 320 GB, maybe even 500 GB). Not as low as a 2.5" drive, but good anyway, with nice idle values.
    Maybe with the increasing data density they already use fewer platters on the 1 TB drive you mentioned.
    I have some of these 320/500 GB drives and so far they seem quite sturdy (as all my Seagate drives have been). (I had a loose power or data cable a few times, but no real horror, unlike some other models that died from one minute to the next.)

    That's an interesting read.
    Well, I found a recently published book (first edition) on embedded Linux, in German (I don't know whether it has also been translated to English), and I guess I'll get myself a copy.
    What makes me wonder, too, are these boards priced beyond all reason. But I saw a few offers in Germany, Switzerland and maybe Austria, both headless boards and boards with GPUs, for... well... more reasonable prices (maybe 150-300 EUR; you could probably convert that 1:1 into US$, since electronic equipment is far cheaper overseas). So there should be boards that are buyable.
    Thing is, only a few offer many storage options. Which sucks, of course.
    Then there was that WD external storage solution I mentioned, which runs Linux on a headless... ARM? Acting to the outside as a NAS or something. Still, there is probably only one disk attached. (But at least you can get them with >1 TB if you need it.)
    When you look around there are tons of cheap Linux-on-non-x86 hardware parts available; the problem is that most of them are routers, designed to fit exactly that purpose. So they often have only a storage chip (not even a slotted CF card or the like) and no mass storage or USB connectors. Otherwise they sell for 30-80 bucks and should have everything else necessary. Just the missing storage. :/

    >to make some off-the-shelf SATA card play with these.
    Don't all the in-kernel drivers work on all the arches the kernel is ported to? Maybe not if PCI buses/slots are absolutely uncommon on the specific arch, but I would expect (or hope) them to run.

    >These are not realistic solutions unless some mass-market
    >product contains the features I need.
    I fear there won't be a mass market any time soon. Look at these Mini-ITX boards invented by VIA. They're still not cheap at all, even though the hardware is lame when it comes to computing power. I've been messing with this VIA stuff for some years now, and the offers are slowly getting better; more to choose from. Still, the small cases for a Mini-ITX HTPC are very expensive, as much as a full big tower.
    The only things I see at the moment are these already-finished black boxes meant as storage solutions, where you must file a Harald-Welte-GPL-violations lawsuit before you get any access to the device. And these things will often be limited to their one purpose, so you may not have the best conditions for encryption, networking/filtering and so on.
    I also wonder when there will be encryption chips like the VIA PadLock for insertion onboard or in PCI slots etc. That would be a really nice addition, taking a lot of load off the CPU.


    >Laptop drives have little or no power advantage on a "per
    >Gigabyte" basis.
    *nods* But then it depends on how much storage you need. For a file server it's of course not enough, but for an HTPC that is used for playing music and a DVD from time to time it should be fine.

    >Also I'm not seeking a truly silent system, but I would prefer
    >to offload the cooling to 120mm case fans.
    Scythe (a Japanese firm, I think) offers some nice fans, quiet but still moving air in/out. Not the cheapest, though, and you still have to compare models, since not all of their products will fit perfectly.


    >Another approach is the use of a solid state drive.
    Nah, as you mentioned they're very expensive, and they draw about the same power all the time (a 2.5" laptop drive in sleep mode draws less power, I read some weeks/months ago). They may be a fine speedup as a system/boot drive, though, if one has the money.

    Generally I wonder when the principles of design will turn 180° in terms of power usage. Until now, systems were meant/built to be always on, with some parts allowed to go to sleep when not needed. But why not start designs where the default is off, and parts are only woken when needed?
    Prof. Luithardt told me that there is a long way to go, both for hardware manufacturers and for the Linux kernel.


    >even a simple RAID1 improves the reliability dramatically
    Hmm, I always counter this.
    The cons:
    a) twice the price, but only the storage once
    b) twice the power, heat, noise
    c) it only covers pure hardware failure. If a program runs amok, some malware screws up your system, or an ext4 chooses to keep your data around in RAM for 10 hours even though you closed your apps... *cough* well. Then nothing is gained with a mirror RAID: the defective data (or no data at all) is written nice and synchronously to both drives.

    I don't understand why everybody packs these RAID-capable chips on every board.
    Furthermore, I know a guy with a RAID5 whose three disks all failed after a few months, within weeks of each other. One by one.
    So I've come not to put too much trust in any RAID system.
    Okay, maybe the conditions he ran his HDDs under weren't the best; I don't know exactly what he did.

    >using the remainder for non-RAID backup
    Yes, that's what I do. Except that I have no RAID at all and tons of new, old and even older PATA disks (from 320 GB down to the 250 MB one from the 486). Since they became so cheap and are quicker than a CD-R(W)/DVD-R(W), I use them for backups. And with good handling they should do well for many more years.

    Well, feel free to share your experiences if you find a suitable system.


  • stevea
    replied
    You are right, but OTOH if I have a couple of disks (say RAID1) only spun up daily to perform an archival backup, then I don't care greatly if this is a fair bit slower.

    That doesn't solve the fundamental problem - the embedded parts have too few interfaces. I need a minimum of 4x SATA and 2x Ethernet (one GigE, the other at least 100Mbit), and it should really have an IDE too. I don't see this on embedded cards, but it's easy to find a <$100 AMD mobo with all these and more.


  • movieman
    replied
    Originally posted by Adarion View Post
    Well, if you can find at least one board with a PCI/PCIe slot, you could buy a controller with lots of IDE or SATA (or mixed?) storage and you should be OK with that.
    I thought about that, but if you care about disk throughput and want to slap on a bunch of drives in a RAID configuration, then two modern hard drives can saturate a PCI bus, and one modern hard drive can saturate a poorly implemented PCI bus (one getting maybe 60% of theoretical throughput).
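    To put rough numbers on that (a back-of-the-envelope sketch; the 133 MB/s figure is classic shared 32-bit/33 MHz PCI, and the ~80 MB/s sustained drive rate is my assumption for a then-modern 7200rpm disk):

        # Shared 32-bit/33MHz PCI tops out at 133 MB/s in theory.
        PCI_THEORETICAL_MBS = 133
        PCI_POOR_MBS = 0.60 * PCI_THEORETICAL_MBS   # a poorly implemented bus (~60%)
        DRIVE_SEQ_MBS = 80                          # assumed sustained rate of one drive

        print(PCI_POOR_MBS)        # ~80 MB/s: one drive alone can fill a bad bus
        print(2 * DRIVE_SEQ_MBS)   # 160 MB/s: two drives exceed even the 133 MB/s limit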


  • stevea
    replied
    Thanks for your comments and especially the power figures, Adarion.

    I develop driver/kernel software for embedded systems (Linux & other) of various sorts for a living, so I'm quite familiar with the non-x86 issues. The embedded ARM NAS boards mostly use Marvell chipsets that do not meet my needs. Most of the development boards are shockingly expensive ($3000 for the Freescale MPC8641D, for example), and even tho' Linux has been ported, it will likely require much driver work to make some off-the-shelf SATA card play with these. There are also more powerful ARM CPUs than the Marvell, but these aren't available on any modestly priced boards I'm aware of. Perhaps on a $1500 6U PCI board or a similarly priced VME card. These are not realistic solutions unless some mass-market product contains the features I need.

    If you read my initial post you'll see that I am not building a NAS; I'm building a router/server that will maintain a VPN connection, perform DNS forwarding, routing, firewalling, and a number of modestly compute-intensive apps, and especially some performance-challenging crypto tasks. These will not fit well in a 128MB DRAM footprint. I think 1GB DRAM should be sufficient and leave some good space for disk and network buffers.

    Originally posted by Adarion View Post
    On the HDDs: if you really want solid, stable, rugged drives you will have to invest tons of money into SCSI. Still the best and most robust solution. SCSI was built for 24/7 operation - most IDE/SATA drives are not. I know that Seagate (my preference in HDDs) offers some labeled for 24/7 operation. They publish a lot of specs on their pages, so you might want to check the MTBF etc. Of course these 3.5" drives draw more energy than a 2.5", but the latter are not built for file servers. They're probably more for laptops, maybe with a high spin-up/down count.

    I am not impressed with SCSI reliability claims ... see the Google and Usenix disk-failure papers discussed below.
    The conclusions of the second (Usenix) paper state, "In our data sets, the replacement rates of SATA disks are not worse than the replacement rates of SCSI or FC disks". It was probably once true that SCSI & FC were superior, but today the difference is marginal.

    Both papers (my POV) indicate that there are several failure mechanisms not captured in the manufacturer's MTTF calculation. So yes, use the mfgr data, but expect the failure rate to be higher than this. In the Google paper the range of measured failure rates indicates the best disks have ~500k hour MTTF, the worst (a defective lot) ~67k hr, by measurement.
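    For anyone wanting to check those numbers, the annualized failure rate (AFR) and MTTF are related by a simple approximation (a quick sketch; the 13% AFR comes from the papers, and the ~1.75% is back-solved from the 500k figure):

        HOURS_PER_YEAR = 24 * 365   # 8760

        def afr_to_mttf(afr):
            # Approximate mean time to failure (hours) from annualized failure rate.
            return HOURS_PER_YEAR / afr

        print(afr_to_mttf(0.13))    # ~67k hr  (the defective lot)
        print(afr_to_mttf(0.0175))  # ~500k hr (the best disks measured)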

    I've read the data on Seagate's Momentus 7200.4 laptop drive (600k hr MTTF, and low power - mostly under 2.2W while operating). There are a lot of power-thrifty 3.5" drives, like the Seagate 7200.12 1TB, which uses 6.6/5.4/5.0/0.79W during seek/rw/idle/standby (750k hr MTTF). I too prefer Seagates, and they are very good about providing data on their drives.

    The 500GB laptop drive costs as much as a good-quality 3.5" 1.5TB drive, so *IF* you need the disk space, then 3x laptop drives will consume about the same power as a decent 1.5TB, will cost 3x as much, and the 3x laptop drives have 1/3rd the reliability of a single laptop drive, which is already inferior to the 3.5" drive. Laptop drives have little or no power advantage on a "per gigabyte" basis.
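    A quick sketch of that trade-off, using the operating-power and MTTF figures quoted above (the equal per-drive prices follow from the "costs as much as" observation, and the MTTF of N independent drives is taken as MTTF/N):

        # 3x 500GB laptop drives vs. one 3.5" 1.5TB drive, rough 2009 numbers.
        laptop  = {"gb": 500,  "watts": 2.2, "mttf_hr": 600_000}
        desktop = {"gb": 1500, "watts": 6.6, "mttf_hr": 750_000}

        n = 3  # laptop drives needed to match the capacity
        print(n * laptop["watts"], "vs", desktop["watts"])      # 6.6 W vs 6.6 W: same power
        print(laptop["mttf_hr"] / n, "vs", desktop["mttf_hr"])  # 200k hr vs 750k hr MTTF
        # ...and at equal per-drive prices, the laptop option costs 3x as much.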

    So I can appreciate the use of a laptop drive where the primary goal is low power, single drive, and cost and reliability aren't design drivers. That's not my situation. Also I'm not seeking a truly silent system, but I would prefer to offload the cooling to 120mm case fans.

    ----------
    Another approach is the use of a solid state drive. These are currently very pricey, and one 64GB drive will consume ~2W when active, just like a laptop drive. The MTTF is reported at 1500k hours, but the price per gigabyte is currently about 20X that of a 3.5" 1TB rotating drive. These flash-based drives have a limited number of write cycles, but their I/O can be very fast. So they might fill a niche for frequently accessed data - perhaps holding a Linux XIP (execute-in-place) capable root file system, mounted mostly read-only to avoid flash wear-out.

    ==========

    Although I object to lower-reliability drives, it must be recognized that even a simple RAID1 improves the reliability dramatically - to the "don't care" point, even for crummy drives. One of the papers mentions a bad batch of disks with a 13% AFR (~67k hr MTTF). The other paper suggests that failure of a disk in certain RAIDs increases the probability of another RAID disk failing by a factor of 39. So a RAID1 of these terrible disks might still have a combined MTTF of 20 million hours - far better than any single commercial drive. Of course RAID1 means approximately double the power.
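    The 20-million-hour figure falls out of the classic independent-failure RAID1 model (a sketch; the ~112-hour replace-and-rebuild window is my assumption, and the factor-of-39 correlation mentioned above would shrink the result considerably):

        def raid1_mttf(drive_mttf_hr, repair_hr):
            # The array dies only if the second drive fails during the repair
            # window: classic estimate MTTF^2 / (2 * MTTR), independent failures.
            return drive_mttf_hr ** 2 / (2 * repair_hr)

        print(raid1_mttf(67_000, 112))   # ~2.0e7 hr, even for the 13%-AFR drives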

    I have concerns about the current reliability & MTTF of my legacy disks, since some have many hours (years, actually) of wear; mostly 160GB to 300GB SATAs. I'm thinking of using the oldest drives in a RAID (non-0) config, and using the remainder for non-RAID backup, mostly kept in standby. That way if I lose a RAID drive (the most likely case, I think) I can rebuild the RAID with a new disk. If a backup drive fails I can reconstruct a current backup from the live copy.

    Note there are still single points of failure - your PC might be hit by lightning - so RAID does not replace an offsite backup scheme, but it nearly eliminates the fear of data loss from a single drive failure.
    Last edited by stevea; 26 May 2009, 01:40 AM.


  • deanjo
    replied
    Originally posted by Adarion View Post
    Me neither, but I didn't really look out for it, at least on the x86 side. coreboot can probably implement a headless boot without breaking. But the IGPs shouldn't suck up too much power for a normal system. Then again, if one really wants to go down in power usage, it's even better without any of that. (But less easy to set up.)
    Why would you need coreboot? Like I said, most boards (if not all; the ones within the last 3-4 years that did need a card were using AGP slots, IIRC) don't need any graphics at all to boot. An IGP won't really draw any appreciable power if it's not being used.


  • Adarion
    replied
    Originally posted by deanjo View Post
    I haven't run across a board in a while that would not boot without a display adapter.
    Me neither, but I didn't really look out for it, at least on the x86 side. coreboot can probably implement a headless boot without breaking. But the IGPs shouldn't suck up too much power for a normal system. Then again, if one really wants to go down in power usage, it's even better without any of that. (But less easy to set up.)


  • deanjo
    replied
    Originally posted by Adarion View Post
    Yep, I saw some. Back in the day it was even just an 80x25 (or similar) text adapter. But you know that most x86 board vendors have a BIOS POST check for some GPU, either on ISA, AGP or PCI/PCIe, and they will beep and squeak all kinds of errors if there is no GPU card. You would have to look closely or use a coreboot-capable board.
    I haven't run across a board in a while that would not boot without a display adapter. Then again, almost all boards have an IGP built into the chipset nowadays anyway.


  • Adarion
    replied
    Originally posted by stevea View Post
    I'm estimating that an AMD 5050e would expend 45W/7W at full/idle.
    I can only tell for a whole system with a 4850e (including a 3.5" disk, onboard GPU and so on), and it was about 45W idle (disk spinning IIRC, CPU at 2x1000MHz, no undervolting) and 75W with both cores compiling and such. It's been some time since I measured that, so I could check again, but I'm too lazy to pull out the plug for the measurement device right now.

    I think you underestimate the Linux offerings...
    Well, as I said, there are some, though most distributions stick to x86 and maybe PPC. But of course if you look around you'll find some for the other arches as well. Gentoo especially will probably work everywhere, like NetBSD (which would also be a choice). Gentoo also offers cross-compiling, so you can compile on a big AMD/Intel box for a slower ARM/... embedded system.
    But yes, it will be hard to find one that can work as a storage controller. I know of the WD something: just a case with a non-x86 CPU and some storage, with Linux already on it. (But I don't trust WD on hard disks.)
    Well, if you can find at least one board with a PCI/PCIe slot, you could buy a controller with lots of IDE or SATA (or mixed?) storage and you should be OK with that. You just need to make sure the Linux kernel driver for the controller will compile/work on your specific arch.
    I don't know what kind of throughput you expect on the file server, but 64-128M could be enough if you don't need a big disk cache.

    The problem I have is getting a non-x86 system with
    sufficient power & mem & peripherals to do the job.
    Mhm, well, as you said there are some chips around that can do quite a job at computing. From the Chemnitz Linux Days I know that some people are dealing with extremely low-power Linux environments (Prof. Luithardt from Switzerland), and he told me that there are vendors offering complete boards with e.g. an ARM CPU, RAM (slotted or soldered on), network, sometimes even a GPU, RS232 for a serial console (or via FireWire, IIRC), and often with an option for CF or SD cards as mass storage. I wanted to get myself into that topic, including headless stuff that works only via serial console, but alas, since I started working on my chemistry back at university there has been a severe lack of time. And I'm a total beginner on headless and/or non-x86 machines.

    Many x86 servers like the IBM X-series have a very low-end video card on-board because of this. This way you only need an attached display for configuration or diagnostics.
    Yep, I saw some. Back in the day it was even just an 80x25 (or similar) text adapter. But you know that most x86 board vendors have a BIOS POST check for some GPU, either on ISA, AGP or PCI/PCIe, and they will beep and squeak all kinds of errors if there is no GPU card. You would have to look closely or use a coreboot-capable board.

    On the HDDs: if you really want solid, stable, rugged drives you will have to invest tons of money into SCSI. Still the best and most robust solution. SCSI was built for 24/7 operation - most IDE/SATA drives are not. I know that Seagate (my preference in HDDs) offers some labeled for 24/7 operation. They publish a lot of specs on their pages, so you might want to check the MTBF etc. Of course these 3.5" drives draw more energy than a 2.5", but the latter are not built for file servers. They're probably more for laptops, maybe with a high spin-up/down count.


  • stevea
    replied
    Sadly I don't see much good data on CPU (not system) idle power. One review of the QX9650 notes:
    "We were amazed - in fact, couldn't believe our eyes - when we first measured the power consumption of the QX9650. We repeated our measurements several times to ensure their veracity and rule out error."
    That review has a nice but small chart, and Wikipedia has a nice list of total power figures.
    I'm estimating that an AMD 5050e would expend 45W/7W at full/idle.
    My old P4/2.6GHz might be 65W/42W.

    Using some performance estimates and actual numbers for part of the server tasks (via "sar"), I think the CPU (of the class I'm looking at) will see a 4%-6% load. The idle CPU power figure therefore dominates the overall power consumption. For example, if we make the simplifying assumption that the CPU will expend max power 5% of the time and be idle the other 95%, the AMD 5050e would average 8.9W, and the P4 ~43.1W. If the power expended by some hypothetical AMD part were 100W/7W, then the average would be 11.6W. So a radical increase in MAX power makes little difference for my server, since the duty cycle is so low. My goal is to seek out the lowest average power that supplies sufficient performance, so the idle power required is the big issue.
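    Spelled out, that averaging is just a duty-cycle weighting (a sketch using the figures above):

        def avg_power(max_w, idle_w, duty=0.05):
            # Busy `duty` of the time at max power, idle the rest.
            return duty * max_w + (1 - duty) * idle_w

        print(avg_power(45, 7))    # AMD 5050e:               8.9 W
        print(avg_power(65, 42))   # old P4:                 ~43.1 W
        print(avg_power(100, 7))   # hypothetical 100W part: ~11.6 W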

    Of course the main goal is cost - with a 5-yr estimated server lifespan and my decent cost of power, I expect that 1 watt of average power saving is worth ~$5.50 (minimum) over the life of the server. I can probably save ~34W by using the 5050e in place of an ancient P4, and this alone justifies the price of CPU+mobo+DRAM (ignoring the time cost of money).
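    That ~$5.50/watt figure checks out with an electricity rate around $0.125/kWh (my assumption, back-solved from his number):

        HOURS = 24 * 365 * 5              # always-on over a 5-year lifespan
        kwh_per_avg_watt = HOURS / 1000   # 43.8 kWh per watt of average draw
        RATE = 0.125                      # assumed $/kWh

        print(kwh_per_avg_watt * RATE)        # ~$5.48 per watt saved
        print(34 * kwh_per_avg_watt * RATE)   # ~$186 for dropping 34 W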
