Low(er) power server issues

  • #16
    Originally posted by deanjo View Post
    I haven't run across a board in a while that would not boot without a display adapter.
    Me neither, but I haven't really looked out for it, at least on the x86 side. coreboot could probably implement a headless boot without breaking anything. An IGP shouldn't draw too much power on a normal system anyway, but if you really want to push power usage down it's even better to go without one entirely (though less easy to set up).



    • #17
      Originally posted by Adarion View Post
      Me neither, but I haven't really looked out for it, at least on the x86 side. coreboot could probably implement a headless boot without breaking anything. An IGP shouldn't draw too much power on a normal system anyway, but if you really want to push power usage down it's even better to go without one entirely (though less easy to set up).
      Why would you need coreboot? Like I said, most boards from the last 3-4 years (if not all; the ones that did need graphics were using AGP slots, IIRC) don't need any graphics at all to boot. An IGP won't draw any appreciable power if it isn't being used.



      • #18
        Thanks for your comments, and especially the power figures, Adarion.

        I develop driver/kernel software for embedded systems (Linux and others) of various sorts for a living, so I'm quite familiar with the non-x86 issues. The embedded ARM NAS boards mostly use Marvell chipsets that do not meet my needs. Most of the development boards are shockingly expensive ($3000 for the Freescale MPC8641D, for example), and even though Linux has been ported, it will likely require a lot of driver work to make some off-the-shelf SATA card play with these. There are also more powerful ARM CPUs than the Marvell parts, but these aren't available on any modestly priced boards I'm aware of. Perhaps on a $1500 6U PCI board or a similarly priced VME card. These are not realistic solutions unless some mass-market product contains the features I need.

        If you read my initial post you'll see that I am not building a NAS; I'm building a router/server that will maintain a VPN connection and perform DNS forwarding, routing, firewalling, a number of modestly compute-intensive apps, and especially some performance-challenging crypto tasks. These will not fit well in a 128KB DRAM footprint. I think 1GB DRAM should be sufficient and leave good space for disk and network buffers.

        Originally posted by Adarion View Post
        On the HDDs: if you really want solid, stable, rugged drives you will have to invest tons of money in SCSI. Still the best and most robust solution. SCSI was built for 24/7 operation; most IDE/SATA drives are not. I know that Seagate (my preference in HDDs) offers some labeled as 24/7. They publish a lot of specs on their pages, so you might want to check the MTBF etc. Of course these 3.5" drives use more energy than a 2.5", but the latter are not built for file servers. They're probably more for laptops, maybe with a high spin up/down count.

        I am not impressed with SCSI reliability claims ... see these ...
        http://labs.google.com/papers/disk_failures.pdf
        http://www.usenix.org/events/fast07/...tml/index.html

        The conclusions of the second (Usenix) paper state, "In our data sets, the replacement rates of SATA disks are not worse than the replacement rates of SCSI or FC disks". It was probably once true that SCSI and FC were superior, but today the difference is marginal.

        Both papers (from my POV) indicate that there are several failure mechanisms not captured in the manufacturer's MTTF calculation. So yes, use the manufacturer data, but expect the real failure rate to be higher. In the Google paper the range of measured failure rates indicates the best disks have ~500k hour MTTF by measurement, the worst (a defective lot) ~67k hours.
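
        For reference, the AFR-to-MTTF arithmetic behind those figures is simple. Here is a minimal Python sketch, assuming the usual constant-failure-rate approximation (which both papers show is optimistic):

        HOURS_PER_YEAR = 8766  # 365.25 days * 24 h

        def afr_to_mttf(afr):
            # Annualized failure rate (as a fraction) -> MTTF in hours.
            return HOURS_PER_YEAR / afr

        def mttf_to_afr(mttf_hours):
            # MTTF in hours -> annualized failure rate (as a fraction).
            return HOURS_PER_YEAR / mttf_hours

        print(f"13% AFR     -> MTTF ~{afr_to_mttf(0.13):,.0f} h")     # ~67k h (the bad lot)
        print(f"500k h MTTF -> AFR  ~{mttf_to_afr(500_000):.2%}/yr")  # ~1.75%/yr (the best disks)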

        I've read the data on Seagate's Momentus 7200.4 laptop drive (600k hr MTTF, and low power: mostly under 2.2W while operating). There are a lot of power-thrifty 3.5" drives, like the Seagate 7200.12 1TB, which uses 6.6/5.4/5.0/0.79W during seek/read-write/idle/standby (750k hr MTTF). I too prefer Seagates, and they are very good about providing data on their drives.

        The 500GB laptop drive costs as much as a good quality 3.5" 1.5TB drive, so *IF* you need the disk space, then 3x laptop drives will consume about the same power as a decent 1.5TB drive, will cost 3x as much, and will have 1/3rd the reliability of a single laptop drive, which is already inferior to the 3.5" drive. Laptop drives have little or no power advantage on a "per gigabyte" basis; see the rough numbers below.
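
        Roughly, using the operating-power figures quoted above (the 3x row simply triples the Momentus numbers, an assumption rather than a measured spec):

        # Operating watts and capacity (GB) from the figures quoted above.
        drives = {
            'Momentus 7200.4 laptop, 500GB': (2.2, 500),
            '7200.12 3.5", 1TB':             (6.6, 1000),
            '3x Momentus, 1.5TB total':      (3 * 2.2, 1500),
        }
        for name, (watts, gb) in drives.items():
            print(f"{name:32s} {1000 * watts / gb:.1f} mW/GB")

        # Reliability of the 3-drive set: the first failure comes ~3x as often.
        print(f"3x 600k h drives -> first failure expected ~{600_000 / 3:,.0f} h")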

        So I can appreciate the use of a laptop drive where the primary goals are low power and a single drive, and cost and reliability aren't design drivers. That's not my situation. Also, I'm not seeking a truly silent system, but I would prefer to offload the cooling to 120mm case fans.

        ----------
        Another approach is to use a solid state drive. These are currently very pricey, and one 64GB drive will consume ~2W when active, just like a laptop drive. The MTTF is reported at 1500k hours, but the price per gigabyte is currently about 20X that of a 3.5" 1TB rotating drive. These flash-based drives have a limited number of write cycles, but their I/O can be very fast. So they might fill a niche for frequently accessed data, perhaps holding a Linux XIP (execute in place) capable root file system mounted mostly read-only to avoid flash wear-out.

        ==========

        Although I object to lower-reliability drives, it must be recognized that even a simple RAID1 improves reliability dramatically, to the "don't care" point even for crummy drives. One of the papers mentions a bad batch of disks with a 13% AFR (~67k hr MTTF). The other paper suggests that failure of one disk in certain RAIDs increases the probability of another disk in the array failing by a factor of 39. Even so, a RAID1 of these terrible disks might still have a combined MTTF of 20 million hours, far better than any single commercial drive. Of course RAID1 means approximately double the power.

        I have concerns about the current reliability and MTTF of my legacy disks, since some have many hours (years, actually) of wear; mostly 160GB to 300GB SATAs. I'm thinking of using the oldest drives in a RAID (non-0) config, and using the remainder for non-RAID backup kept mostly in standby. That way if I lose a RAID drive (the most likely case, I think) I can rebuild the RAID with a new disk. If a backup drive fails I can reconstruct a current backup from the live copy.

        Note there are still single points of failure; your PC might be hit by lightning, so RAID does not replace an offsite backup scheme, but it nearly eliminates the fear of data loss from single-drive failures.
        Last edited by stevea; 05-26-2009, 01:40 AM.



        • #19
          Originally posted by Adarion View Post
          Well, if you can find at least one board with a PCI/PCIe slot, you could buy a controller with lots of IDE or SATA (or mixed?) ports and you should be OK with that.
          I thought about that, but if you care about disk throughput and want to attach a bunch of drives in a RAID configuration, then two modern hard drives can saturate a PCI bus, and one modern hard drive can saturate a poorly implemented PCI bus (one getting maybe 60% of theoretical throughput). The arithmetic is below.
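
          A quick sanity check in Python, with a rough 2009-era sustained transfer rate assumed for the drive:

          # Classic PCI: 32 bits * 33 MHz / 8 = ~133 MB/s, shared by the whole bus.
          pci_theoretical = 133              # MB/s
          pci_poor = 0.60 * pci_theoretical  # ~80 MB/s on a weak implementation
          hdd = 100                          # assumed sustained MB/s for one modern drive

          print(f"one drive:  {hdd} MB/s vs {pci_poor:.0f} MB/s poor PCI           -> saturated")
          print(f"two drives: {2 * hdd} MB/s vs {pci_theoretical} MB/s theoretical PCI -> saturated")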



          • #20
            You are right, but OTOH if I have a couple of disks (say RAID1) only spun up daily to perform an archival backup, then I don't care greatly if this is a fair bit slower.

            That doesn't solve the fundamental problem, though: the embedded parts have too few interfaces. I need a minimum of 4x SATA and 2x ethernet (one GigE, the other at least 100Mbit), and should really have an IDE port too. I don't see this on any embedded card, but it's easy to find a <$100 AMD mobo with all these and more.



            • #21
              Originally posted by stevea View Post
              Thanks for your comments and especially power figures Adarion.
              No problem.
              BTW, on the HDDs: I don't know about all the other manufacturers, but Seagate's 7200.11/12 do quite well on power consumption, especially the models with only one platter (often 320GB, maybe even 500GB). Not as low as a 2.5" drive, but good anyway, with nice idle values.
              Maybe with the increasing data density they already use fewer platters on the 1TB drive you mentioned.
              I have some of these 320/500GB models and so far they seem quite sturdy (like all my Seagate drives have been). (I had a loose power or data cable a few times, but no real horror. Unlike some other models that died from one minute to the next.)

              That's an interesting read.
              Well, I found a recently published book (1st printing) on embedded Linux in German (dunno if it's also translated to English), and I guess I'll get myself a copy.
              What makes me wonder, too, are these boards priced beyond all reason. But I saw a few offers in Germany, Switzerland and maybe Austria, both headless and with-GPU boards, for more reasonable prices (150-300 EUR maybe; you could probably convert that 1:1 to US$ since electronic equipment is far cheaper overseas). So there should be buyable boards.
              Thing is, only a few offer a lot of storage options. Which sucks, of course.
              Then there was that external storage solution by WD I mentioned, which runs Linux on a headless... ARM? It acts to the outside as a NAS or something. Still, there is probably only one disc attached. (But at least you can get them with >1TB if you need it.)
              When you look around there are tons of cheap Linux-on-non-x86 hardware parts out there; the problem is that most of them are routers designed to fit exactly that purpose. So they often have only a storage chip (not even a slotted CF card or something) and no mass storage or USB connectors. Otherwise they sell for 30-80 bucks and should have everything else necessary. Just the missing storage. :/

              >to make some off-the-shelf SATA card play with these.
              Aren't all the kernel-internal drivers supposed to work on all arches the kernel is ported to? Maybe not where PCI buses/slots are absolutely uncommon on the specific arch, but I would expect (or hope) them to run.

              >These are not realistic solutions unless some mass-market
              >product contains the features I need.
              I fear there won't be a mass market any time soon. Look at the mini-ITX form factor invented by VIA. Those boards are still not cheap at all even though the HW is lame when it comes to computing power. I've been messing with this VIA stuff for some years now and the offers are slowly getting better; more to choose from. Still, the small cases for a mini-ITX HTPC are very expensive, like a full big tower.
              The only things I see atm are those finished black boxes meant as storage solutions, where you'd have to file a Harald-Welte-style GPL-violations lawsuit before you get any access to the device. And then these things are often limited to their single purpose, so you may not have the best conditions for encryption, networking/filtering and so on.
              I also wonder when there will be encryption chips like the VIA PadLock available for insertion onboard or in PCI slots etc. That would be a really nice addition, taking a lot of load off the CPU.


              >Laptop drives have little or no power advantage on a "per
              >Gigabyte" basis.
              *nods* But then it depends on how much storage you need. For a file server it's of course not enough, but for an HTPC used for playing music and a DVD from time to time it should be fine.

              >Also I'm not seeking a truly silent system, but I would prefer
              >to offload the cooling to 120mm case fans.
              Scythe (a Japanese firm, I think) offers some nice fans, quiet but still moving air in/out. Not the cheapest, though, and you still have to compare models since not all of their products will fit perfectly.


              >Another approach is the use of a solid state drive.
              Nah, as you mentioned they're very expensive, and then they draw about the same power all the time (a 2.5" laptop drive in sleep mode draws less power, I read some weeks/months ago). They may be a fine speedup as a system/boot drive, though, when one has the money.

              Generally I wonder when design principles will turn 180 degrees in terms of power usage. Until now systems were meant and built to be always on, with some parts allowed to sleep when not needed. But why not start the design with the default being off, waking things only when needed?
              Prof. Luithard told me that there is a long way to go there, both for HW manufacturers and for the Linux kernel.


              >even a simple RAID1 improves the reliability dramatically
              Hmm, I always counter this.
              The cons:
              a) twice the price, but only one time the storage
              b) twice the power, heat, noise
              c) it only protects against pure HW failure. If a program runs amok, some malware wrecks your system, or an ext4 chooses to keep your data around in RAM for 10 hours even though you closed your apps... *cough* well. Then nothing is gained with a mirror RAID; the defective data (or no data) is written nice and synchronously to both drives.

              I don't understand why everybody packs these RAID-capable chips onto every board.
              Furthermore, I know a guy whose RAID5 lost all 3 discs after a few months, one by one within a few weeks.
              So I've come not to trust any RAID system too much.
              Okay, maybe the conditions he used his HDDs under weren't the best; I dunno exactly what he did.

              >using the remainder for non-RAID backup
              Yes, that's what I do. Only I have no RAID at all and tons of new, old and even older PATA discs (from 320GB down to a 250MB one from a 486). Since they were once so cheap and are quicker than a CD-R(W)/DVD-R(W), I use them for backups. And with good handling they should do well for many more years.

              Well, feel free to share your experiences if you find a suitable system.



              • #22
                Originally posted by Adarion View Post
                I also wonder when there will be encryption chips like the VIA PadLock available for insertion onboard or in PCI slots etc.
                There are, and there were before the PadLock. They have weak points, the biggest being the limited bandwidth of the PCI bus.
                Some of the first ones didn't even support DMA/bus-mastering, so they could actually increase the CPU load.



                • #23
                  Seagate is my preference too, but ...

                  http://www.seagate.com/staticfiles/s...100452348g.pdf
                  http://www.seagate.com/staticfiles/s...100529369b.pdf
                  The 7200.11 uses 11.16/7.96 Watt operating/idle
                  The 7200.12 uses 6.57/5.0 Watt operating/idle
                  In standby they are 0.99 & 0.79 W respectively.
                  BTW most vendors do not publish such complete specs.

                  So the 7200.12 is low power but the 7200.11 is not.

                  Also, there have been numerous firmware problems reported against the 7200.11, while the 7200.12 has a clean record. You'd be well advised to read storagereview.com and also look for customer reviews before buying any disk.
                  ====

                  Yes, there are less expensive development boards, but usually with fewer features, and features that are more difficult to support. I've used products like these before:
                  http://www.emacinc.com/servers/embeddedservers.htm
                  but you will find that if these are built to any standard, like PC/104, the price will far exceed a PC mobo. If it's a "one-off" board it will not have the features and expandability I need. Yes, there are often (usually) still problems integrating PCI parts onto MIPS, ARM or PPC architectures. You'd be surprised at some of the headaches. To start with, all x86 PCs integrate PCI into the support chips, but most embedded parts use one of a number of off-chip parts. The utility and performance of PCI is quite dependent on the hardware and software implementation.

                  If I had the luxury of selecting the CPU and chipset and the board features this could be a good solution. Finding an appropriate board offered a la carte isn't likely.
                  =====

                  Your complaint against RAID1 is like complaining that milk is white.
                  >a) twice the price, but only one time the storage
                  >b) twice the power, heat, noise
                  Yes, double the power and cost buys you a data-loss rate of e' = e^2, since both drives must fail before data is lost. That is why we consider RAID. Your typical ~1.5% failures/year translates to 0.0225% data-loss events/year in RAID1; the 750k hr MTTF would become roughly 23M hr. (Note you will have about double the component replacement rate, 1-(1-e)^2 ~ 2e, but only a microscopic chance of data loss.) It costs double for this boon.
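
                  A minimal Python sketch of that arithmetic. This is the naive independence model; real arrays rebuild after the first failure and failures correlate (the factor of 39 below), so the implied MTTF is only a ballpark that depends heavily on those assumptions:

                  HOURS_PER_YEAR = 8766

                  e = 0.015                   # ~1.5% single-drive failures/year
                  data_loss = e ** 2          # naive RAID1: both drives fail in the same year
                  replace = 1 - (1 - e) ** 2  # but drives get replaced ~twice as often

                  print(f"data-loss rate:   {data_loss:.4%}/yr")  # 0.0225%/yr
                  print(f"implied MTTF:     ~{HOURS_PER_YEAR / data_loss / 1e6:.0f}M h")
                  print(f"replacement rate: ~{replace:.1%}/yr")   # ~2x one drive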

                  >c) it only protects against pure HW failure. If a program runs amok, ...
                  It is not news that RAID *IS NOT* a backup. A RAID provides reliable storage, nothing more. No one should think otherwise.

                  Yes, one of the disk reliability papers notes that after one disk in a RAID fails, there is a 39-times-greater probability that another disk in the same array fails. That is still a major decrease in data-loss probability overall. They think the correlation may be due to problem systems: bad power or frequent power losses.


                  To save power, perhaps I should instead create a RAID1 for an archival backup disk pair. These disks would be in standby (very low power) except during backups, perhaps daily, with periodic offsite backups from there. This means the local archival backup is stored with great reliability, and I can afford to lose anything else. The backup pair will consume very little average power due to the low duty cycle.

                  Also consider the real price. Two 1TB Seagate 7200.12s in a fully active RAID1 cost $180 in capital and consume 10.32W on average (at 10% active time). At my current rate, 5 years of 24x7 operation adds $53.25 for power (a little more for added cooling); the arithmetic is sketched below. So I can operate a 1TB RAID1 for 5 years at ~$225. That's quite a bit cheaper than a non-RAID 1TB across two 500GB disks from 18 months ago.
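
                  The power-cost arithmetic, for the record, assuming ~$0.118/kWh (a rate back-computed to match the $53.25 figure above, not a quoted tariff):

                  hours = 24 * 365.25 * 5                     # five years, 24x7
                  avg_w = 0.9 * (2 * 5.0) + 0.1 * (2 * 6.57)  # 90% idle / 10% active, two drives
                  kwh = avg_w * hours / 1000
                  print(f"{avg_w:.2f} W average, {kwh:.0f} kWh, ~${kwh * 0.118:.2f} over 5 years")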
                  ======

                  I'm not a fan of crypto hardware generally either. Several times there have been implementation bugs, or a once-secure cipher has become no longer safe. You consume power constantly for an intermittent task. I might feel differently if there were an extremely low power elliptic-curve crypto part at a low price... but software seems cheaper in the long run.

                  It is true that I am rejecting some low-end processors, even the Intel Atom, based on performance. Yes, the crypto task is one major reason, but not the only one. If you read the smallnetbuilder website (link given before) you will see that many NAS boxes have low performance despite adequate disks and GigE networks. It appears that basic I/O performance, perhaps interrupt handling, is a limiting factor. I know there is a significant difference in GigE performance under Linux depending on the specific driver/chipset. I wish there were a good analysis of what this bottleneck is.



                  • #24
                    Beyond the normal chipset differences, with the higher-cost "server" ethernet chips being orders of magnitude faster, the better class also supports DMA offloading.



                    • #25
                      Well, the VIA boards with PadLock draw little power for x86, so if the implemented (hardware-supported) algorithm is fine for you then it could be the best solution: the PadLock should do very well on encryption, and the VIA C7 CPUs are capable of normal tasks at a few watts, though not good for more intensive computing.
                      It still needs a comparison of the different vendors/boards, and they're not all great.
                      If you dislike the HW crypto chips, then the AMD 4850e/5050e should be the best, reliable and still cheap solution on today's market.
                      Power supplies are still an issue, since most normal PC PSUs tend to be available only in horrible sizes of 450W and far above, which no sensible box will ever use, while the micro PSUs are rare and expensive. I personally go with some 350/400W units with 80+/85+ ratings. There was a test some time ago in Germany's probably best all-round computer mag (c't, by Heise) which showed efficiencies under low, medium and high load, so I took their results as the base for my decisions. And my PSUs show quite good efficiency even at low load.

                      Of course you could wait some time until something better arrives or the non-x86 embedded boards get better in terms of storage and expandability, but in my experience you will wait far longer than you expected and wanted, and you still won't have your machine.
                      There will always be some sour grapes in the bunch.

                      On the Seagates: I was referring to the 320GB 7200.11 model (one platter spinning), which should still be low on energy consumption (lower than the average 7200.11 and most earlier series). Otherwise, I just put a reiser3.6 on a 500GB 7200.12 today, since this one shall become my main storage drive. Sure, the 7200.12s are better on power usage below the line. But that's what they're supposed to do: they're the newer models, and vendors (besides GPU vendors... :/ ) finally go for energy efficiency.
                      But it's awesome to see so many specs published. I mean, that's truly something to base a decision on.



                      • #26
                        To curaga: *ALL* GigE chips use DMA, nearly all offload checksumming, and some offload IP & multicast subscription too. It's been two decades since I've seen a LANCE-style design where the CPU moves the ethernet data itself.

                        Originally posted by Adarion View Post
                        Well, the VIA boards with PadLock draw little power for x86, so if the implemented (hardware-supported) algorithm is fine for you then it could be the best solution...
                        I'm not completely satisfied with hardware crypto, since the crypto required may change over time. Also, these low-end CPUs do not perform well in NAS applications (see smallnetbuilder.com for comparisons). Maybe a better VIA mobo would help, but they seem unavailable here.

                        I agree that waiting for a great embedded NAS board may be an intolerable wait.

                        Originally posted by Adarion View Post
                        Power supplies are still an issue, since most normal PC PSUs tend to be available only in horrible sizes of 450W and far above
                        I completely agree. Who needs these 850W PSUs? Most PC PSUs operate most efficiently around 65-85% of stated capacity, and in most systems the startup draw is the maximum draw. So the goal is to choose the minimum PSU capacity sufficient to get past startup, with of course a little headroom for upgrades.

                        I recently purchased a 'Kill A Watt' power meter and applied it to two systems. My current server (3x disks, 2 in standby, 2.66GHz P4HT with 65W TDP, 1GB DRAM) idles around 66W, draws ~112W at CPU load, and adds about 4 extra watts if I hit the disk hard. I expected this system to idle over 100W, so this was a shock. My estimate is that the idle CPU (no SpeedStep or hibernate) uses only around 20-25W; the CPU power at load is obviously limited to around the 65W TDP. So I don't believe switching to the AMD part will offer much idle power saving, though it will save a lot more at load. Since my current system power is surprisingly low, I will implement the full functionality on this system as-is, measure the average power, and only then decide whether to replace the mobo & CPU.

                        The other surprise is that my workstation (3x disks in a RAID0, 2.66GHz E6700 Core2Duo, 6GB DRAM) uses ~132W at idle and ~210W at full load, peaking around 240W at startup. I could easily use a 300W PSU on this heavy-duty system.
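
                        Putting that sizing rule into a toy Python calculation (the 20% headroom margin and the list of capacities are illustrative assumptions, not figures from any PSU test):

                        def pick_psu(peak_watts, sizes=(200, 250, 300, 350, 400, 450), margin=1.20):
                            # Startup peak sets the floor; add headroom, take the smallest size.
                            need = peak_watts * margin
                            return next((s for s in sizes if s >= need), None)

                        print(pick_psu(240))  # workstation, ~240W startup peak -> 300
                        print(pick_psu(116))  # server, ~112W load + disk hit   -> 200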

                        Originally posted by Adarion View Post
                        On the Seagates: I was referring to the 320GB 7200.11 model (one platter spinning)
                        Yes, these run about 8/5.0W at active/idle, but I have fears about the 7200.11 firmware problems and would prefer the 7200.12, which is lower power still.



                        • #27
                          Originally posted by stevea View Post
                          Yes, these run about 8/5.0W at active/idle, but I have fears about the 7200.11 firmware problems and would prefer the 7200.12, which is lower power still.
                          I would stay away from the 7200.12s as well. I've personally had a couple of 1.5TB 7200.12 drives, bought from two different suppliers, die (bad sectors) within 48 hours of use. I was asking my local supplier about them too, and I gather the RMA rate on them is so high that they are going to drop the Seagate line altogether.



                          • #28
                            What exactly are you going to be doing with your server? That will determine whether or not you need a 5050e. While the low-power parts from AMD paired with a decent IGP motherboard (780G) are excellent, you're still probably looking at a total system idle of 45-65 watts (depending on the efficiency of your power supply).

                            Have you looked at the BeagleBoard? It's a fanless single-board computer based on the ARM Cortex-A8 core. It costs $149, idles at 1 watt (5 watts at max CPU) and is probably about as fast as an Atom-based computer. If it has enough power for your needs you might consider it, as it is silent (passively cooled). You can run Ubuntu, OpenEmbedded or Android on it. It has a DSP core; people are watching 1080p video on it.
                            Last edited by gururise; 07-01-2009, 02:27 PM.



                            • #29
                              Originally posted by gururise View Post
                              It's a fanless single-board computer based on the ARM Cortex-A8 core. It costs $149, idles at 1 watt (5 watts at max CPU) and is probably about as fast as an Atom-based computer.
                              In my experience with ARM embedded systems, I doubt a 600MHz ARM plus attached DSP is as fast as a dual-core Ion system. Certainly if you want CPU performance, the ARM is going to struggle to keep up with a dual-core, quad-thread x86 running at nearly three times the clock.

                              I did look at the BeagleBoard at one stage, but from what I remember it doesn't have enough I/O to make a viable server.



                              • #30
                                Yeah, it doesn't appear to have any ATA or LAN interface, which isn't exactly good news for a server.

