Low(er) power server issues

  • #11
    Thanks for sharing the HDD data. I still think I will go with one if I make the next changes. They are just so quiet, and if I heat-pipe my motherboard in a custom all-in-one case I'll probably want the silence. As for those $10 IDE CF disk drives, I really hope people will create RAIDed versions where you plug in, say, four 32GB cards to create your disk. Even with the slow access times, RAIDed they would be fairly nice: 100MB/s to 190MB/s.
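
    A quick sketch of where that throughput range comes from, assuming per-card CF read speeds (my guess, not measured figures):

    ```python
    # Rough aggregate sequential throughput of a 4-card CF stripe (RAID0-style).
    # Per-card rates below are assumed typical CF speeds of the era, not measured figures.
    cards = 4
    per_card_mb_s = (25, 47)   # a slow card vs. a fast card (assumed)

    low, high = (cards * rate for rate in per_card_mb_s)
    print(f"~{low}-{high} MB/s aggregate from {cards} striped cards")   # ~100-188 MB/s
    ```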

    • #12
      Originally posted by Adarion View Post
      Mainboards to choose from (ASUS M2A-VM is quite nice and cheap, too)
      Sorry, but that motherboard is the BIGGEST piece of shit around. Buggy chipset, unstable as hell, slow disk I/O, a really poor NIC, and really crappy USB. ASUS has been trying to fix that board for a long time: 19 BIOS updates and counting.

      • #13
        Sadly I don't see much good data on CPU (not system) idle power. One QX9650 review states:

        "We were amazed - in fact, couldn't believe our eyes - when we first measured the power consumption of the QX9650. We repeated our measurements several times to ensure their veracity and rule out error."

        That review has a nice but small chart. Wikipedia has a nice list of total power figures.


        I'm estimating that an AMD 5050e would be about 45W/7W at full/idle.
        My old P4/2.6GHz might be 65W/42W.

        Using some performance estimates and actual numbers for part of the server tasks (via "sar"), I think a CPU of the class I'm looking at will see a 4%-6% load. The idle CPU power consumption therefore dominates the average. For example, if we make the simplifying assumption that the CPU will expend max power 5% of the time and be idle the other 95%, the AMD 5050e would average 8.9W, and the P4 ~43.1W. If the power expended by some hypothetical AMD part were 100W/7W then the average would be 11.6W. So radically increasing the MAX power makes little difference for my server, since the duty cycle is so low. My goal is to seek out the lowest average power that supplies sufficient performance, so the idle power required is the big issue.
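
        Here's a minimal sketch of that duty-cycle arithmetic (the wattage pairs are my rough estimates from above, not measured values):

        ```python
        # Duty-cycle average: avg = load_fraction * P_full + (1 - load_fraction) * P_idle
        # Full/idle wattages are the rough per-CPU estimates above, not vendor measurements.

        def avg_power(p_full_w, p_idle_w, load_fraction=0.05):
            """Average CPU power, assuming it is either flat-out or idle."""
            return load_fraction * p_full_w + (1.0 - load_fraction) * p_idle_w

        for name, full, idle in [("AMD 5050e", 45, 7),
                                 ("P4 2.6GHz", 65, 42),
                                 ("hypothetical 100W part", 100, 7)]:
            print(f"{name}: {avg_power(full, idle):.1f} W average")
        # roughly 8.9 W, 43.1 W and 11.6 W, matching the figures above
        ```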

        Of course the main goal is cost - with a 5-yr estimated server lifespan and my cost of power, I expect that 1 watt of average power saving is worth ~$5.50 (minimum) over the life of the server. I can probably save ~34W by using the 5050e in place of an ancient P4, and this alone justifies the price of CPU+MOBO+DRAM (ignoring the time cost of money).
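
        The per-watt figure works out like this (the electricity rate here is an assumed placeholder, not my actual tariff):

        ```python
        # Lifetime cost of 1 W of continuous average draw over the server's life.
        # The $/kWh rate is an assumed round number for illustration.
        hours_per_year = 24 * 365
        years = 5
        rate_usd_per_kwh = 0.125                        # assumed electricity price

        kwh_per_watt = 1e-3 * hours_per_year * years    # 1 W for 5 years = 43.8 kWh
        cost_per_watt = kwh_per_watt * rate_usd_per_kwh
        print(f"1 W average over {years} years = {kwh_per_watt:.1f} kWh ~= ${cost_per_watt:.2f}")
        print(f"Saving 34 W ~= ${34 * cost_per_watt:.0f} over the server's life")
        # about $5.48/W and ~$186 for 34 W at that rate
        ```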

        • #14
          Originally posted by stevea View Post
          I'm estimating that an AMD 5050e would be about 45W/7W at full/idle.
          I can only tell for a whole system with a 4850e (including a 3.5" disc, onboard GPU and so on), and it was about 45W idle (disk spinning IIRC, CPU @ 2x1000MHz, no undervolting) and about 75W with both cores compiling and stuff. It's been some time since I measured that, so I should check it again. But I'm too lazy to pull the plug for the measurement device right now.

          I think you underestimate the Linux offerings...
          Well, as I said, there are some, though most distributions stick to x86 and maybe PPC. But of course if you look around you'll find some for the other arches as well. Especially Gentoo will probably work everywhere, like NetBSD (which would also be a choice). Gentoo also offers cross-compiling, so you can compile on a big AMD/Intel box for a slower ARM/... embedded system.
          But yes, it will be hard to find one that can work as a storage controller. I know of the WD something, just a case with a non-x86 chip and some storage, with Linux already on it. (But I don't trust WD on hard disks.)
          Well, if you can find at least one board with a PCI/PCIe slot you could buy a controller with lots of IDE or SATA (or mixed?) storage and you should be ok with that. Just need to make sure the Linux kernel driver for the controller will compile/work on your specific arch.
          I don't know what kind of throughput you expect on the file server, but 64-128MB could be enough if you don't need a big disc cache.

          The problem I have is getting a non-x86 system with
          sufficient power & mem & peripherals to do the job.
          Mhm, well, as you said there are some chips around that can do quite a job at computing. From the Chemnitz Linux Days I know that some people were dealing with extremely low-power Linux environments (Prof. Luithardt from Switzerland), and he told me there are vendors offering complete boards with e.g. an ARM CPU, RAM (slotted or soldered on), network, sometimes even a GPU, RS232 for serial console (or via FireWire, IIRC), and often an option for CF or SD cards as mass storage. I wanted to get myself into that topic, also with headless stuff that works only via serial console, but alas, since I started working on my chemistry back at university there has been a severe lack of time. And I'm a total beginner on headless and/or non-x86 machines.

          Many x86 servers like the IBM X-series have a very low-end video card on-board because of this. That way you only need an attached display for configuration or diagnostics.
          Yep, I saw some. Back in the day it was even just an 80x25 (or similar) text adapter. Well, but you know that most x86 board vendors have a BIOS POST check for some GPU, either ISA, AGP or PCI/PCIe, and they would beep and squeak all kinds of errors if there is no GPU card. You would have to look closely or use a coreboot-capable board.

          On the HDDs: if you really want solid, stable, rugged drives you will have to invest tons of money into SCSI. Still the best and most robust solution. SCSI was built for 24/7 operation; most IDE/SATA drives are not. I know that Seagate (my preference in HDDs) offers some labeled as 24/7. They publish a lot of specs on their pages, so you might want to check the MTBF etc. Of course these 3.5" drives suck more energy than a 2.5", but the latter are not built for file servers. They're prolly more for laptops, maybe with a high spin up/down count.
          Stop TCPA, stupid software patents and corrupt politicians!

          • #15
            Originally posted by Adarion View Post
            Yep, I saw some. Back in the day it was even just an 80x25 (or similar) text adapter. Well, but you know that most x86 board vendors have a BIOS POST check for some GPU, either ISA, AGP or PCI/PCIe, and they would beep and squeak all kinds of errors if there is no GPU card. You would have to look closely or use a coreboot-capable board.
            I haven't run across a board in a while that would not boot without a display adapter. Then again, almost all boards have an IGP built into the chipset nowadays anyway.

            • #16
              Originally posted by deanjo View Post
              I haven't run across a board in a while that would not boot without a display adapter.
              Me neither, but I didn't really look out for it, at least on the x86 side. Probably coreboot can implement a headless boot without breaking. But the IGPs shouldn't suck up too much power for a normal system. Then again, if one really wants to go down in power usage it's even better without any of that (but less easy to set up).
              Stop TCPA, stupid software patents and corrupt politicians!

              • #17
                Originally posted by Adarion View Post
                Me neither, but I didn't really look out for it, at least on the x86 side. Probably coreboot can implement a headless boot without breaking. But the IGPs shouldn't suck up too much power for a normal system. Then again, if one really wants to go down in power usage it's even better without any of that (but less easy to set up).
                Why would you need coreboot? Like I said, most boards (if not all, within the last 3-4 years; the ones that did need it were AGP-era boards, IIRC) don't need any graphics at all to boot. An IGP won't really draw any appreciable power if not being used.

                • #18
                  Thanks for your comments, and especially the power figures, Adarion.

                  I develop driver/kernel software for embedded systems (Linux & other) of various sorts for a living, so I'm quite familiar with the non-x86 issues. The embedded ARM NAS boards mostly use Marvell chipsets that do not meet my needs. Most of the development boards are shockingly expensive ($3000 for the Freescale MPC8641D, for example), and even tho' Linux has been ported, it will likely require a lot of driver work to make some off-the-shelf SATA card play with these. There are also more powerful ARM CPUs than the Marvell, but these aren't available on any modestly priced boards I'm aware of. Perhaps on a $1500 6U PCI board or a similarly priced VME card. These are not realistic solutions unless some mass-market product contains the features I need.

                  If you read my initial post you'll see that I am not building a NAS; I'm building a router/server that will maintain a VPN connection, perform DNS forwarding, routing and firewall duties, and run a number of modestly compute-intensive apps, especially some performance-challenging crypto tasks. These will not fit well in a 128MB DRAM footprint. I think 1GB of DRAM should be sufficient and leave some good space for disk and network buffers.

                  Originally posted by Adarion View Post
                  On the HDDs: if you really want solid, stable, rugged drives you will have to invest tons of money into SCSI. Still the best and most robust solution. SCSI was built for 24/7 operation; most IDE/SATA drives are not. I know that Seagate (my preference in HDDs) offers some labeled as 24/7. They publish a lot of specs on their pages, so you might want to check the MTBF etc. Of course these 3.5" drives suck more energy than a 2.5", but the latter are not built for file servers. They're prolly more for laptops, maybe with a high spin up/down count.

                  I am not impressed with SCSI reliability claims ... see these ...



                  The conclusions of the second (USENIX) paper state, "In our data sets, the replacement rates of SATA disks are not worse than the replacement rates of SCSI or FC disks". It was probably once true that SCSI & FC were superior, but today the difference is marginal.

                  Both papers (my POV) indicate that there are several failure mechanisms not captured in the manufacturer's MTTF calculation. So yes, use the mfgr data, but expect the failure rate to be higher than this. In the Google paper the range of measured failure rates indicates the best disks have ~500k hour MTTF, and the worst (a defective lot) ~67k hr by measurement.
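
                  For a rough feel for those numbers, MTTF and annualized failure rate (AFR) relate approximately as AFR ~= hours-per-year / MTTF (a simple approximation, not the papers' exact methodology):

                  ```python
                  # Convert an MTTF in hours to an approximate annualized failure rate (AFR).
                  HOURS_PER_YEAR = 24 * 365   # 8760

                  def afr_from_mttf(mttf_hours):
                      """Approximate AFR assuming a constant failure rate and AFR << 1."""
                      return HOURS_PER_YEAR / mttf_hours

                  for mttf in (500_000, 67_000, 750_000, 600_000):
                      print(f"MTTF {mttf:>9,} h -> AFR ~{afr_from_mttf(mttf) * 100:.1f}%")
                  # ~1.8% for the best disks, ~13.1% for the defective lot,
                  # ~1.2% and ~1.5% for the 3.5" and laptop drives mentioned below
                  ```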

                  I've read the data on Seagate's Momentus 7200.4 laptop drive (600k hr MTTF, and low power - mostly under 2.2W while operating). There are a lot of power-thrifty 3.5" drives, like the Seagate 7200.12 1TB which uses 6.6/5.4/5.0/0.79W during seek/rw/idle/standby (750k hr MTTF). I too prefer Seagates, and they are very good about providing data on their drives.

                  The 500GB laptop drive costs as much as a good-quality 3.5" 1.5TB drive, so *IF* you need the disk space, then 3x laptop drives will consume about the same power as a decent 1.5TB drive, will cost 3x as much, and will have 1/3rd the reliability of a single laptop drive, which is already inferior to the 3.5" drive. Laptop drives have little or no power advantage on a "per gigabyte" basis.
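
                  A rough sketch of that comparison using the figures above (the equal prices are placeholders, per the "costs as much" observation):

                  ```python
                  # Compare 3x 2.5" laptop drives vs one 3.5" drive for the same ~1.5TB capacity.
                  # Specs are the ones quoted above; prices are illustrative placeholders.
                  laptop = {"capacity_gb": 500, "active_w": 2.2, "mttf_h": 600_000, "price_usd": 100}
                  desktop = {"capacity_gb": 1500, "active_w": 6.6, "mttf_h": 750_000, "price_usd": 100}

                  n = desktop["capacity_gb"] // laptop["capacity_gb"]   # 3 laptop drives needed

                  # Time to first failure of n independent drives is roughly MTTF / n.
                  combined_mttf = laptop["mttf_h"] / n

                  print(f'{n}x laptop: {n * laptop["active_w"]:.1f} W, ${n * laptop["price_usd"]}, ~{combined_mttf:,.0f} h to first failure')
                  print(f'1x 3.5":   {desktop["active_w"]:.1f} W, ${desktop["price_usd"]}, {desktop["mttf_h"]:,} h MTTF')
                  # -> about the same power, 3x the cost, and ~1/3 the reliability of even one laptop drive
                  ```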

                  So I can appreciate the use of a laptop drive where the primary goal is low power, single drive, and cost and reliability aren't design drivers. That's not my situation. Also I'm not seeking a truly silent system, but I would prefer to offload the cooling to 120mm case fans.

                  ----------
                  Another approach is a solid state drive. These are currently very pricey, and one 64GB drive will consume ~2W when active, just like a laptop drive. The MTTF is reported at 1500k hours, but the price per gigabyte is currently about 20X that of a 3.5" 1TB rotating drive. These flash-based drives have a limited number of write cycles, but their I/O can be very fast, so they might fill a niche for frequently accessed data - perhaps holding a Linux XIP (execute-in-place) capable root file system mounted mostly read-only to avoid flash wear-out.

                  ==========

                  Although I object to lower-reliability drives, it must be recognized that even a simple RAID1 improves reliability dramatically - to the "don't care" point even for crummy drives. One of the papers mentions a bad batch of disks with a 13% AFR (~67k hr MTTF). The other paper suggests that the failure of a disk in certain RAID configurations increases the probability of another RAID disk failing by a factor of 39. So a RAID1 of these terrible disks might still have a combined MTTF of 20 million hours - far better than any single commercial drive. Of course RAID1 means approximately double the power.
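
                  One way to get a number in that ballpark is the usual mirrored-pair approximation MTTDL ~= MTTF^2 / (2 x MTTR), optionally derated by that factor-39 correlated-failure penalty; the rebuild time below is an assumption of mine:

                  ```python
                  # Mean time to data loss (MTTDL) for a two-disk mirror with repair.
                  # mttf_h: per-disk MTTF; mttr_h: assumed time to replace and rebuild;
                  # correlation: how much likelier the 2nd disk is to fail during the rebuild.
                  def raid1_mttdl(mttf_h, mttr_h, correlation=1.0):
                      return mttf_h ** 2 / (2.0 * correlation * mttr_h)

                  bad_disk_mttf = 67_000      # the 13% AFR "defective lot" figure
                  rebuild_hours = 24          # assumption: a day to notice, swap and resync

                  print(f"independent 2nd failure:     {raid1_mttdl(bad_disk_mttf, rebuild_hours):,.0f} h")
                  print(f"39x correlated 2nd failure:  {raid1_mttdl(bad_disk_mttf, rebuild_hours, 39):,.0f} h")
                  # Even the pessimistic case lands in the millions of hours,
                  # far beyond any single commercial drive.
                  ```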

                  I have concerns about the current reliability & MTTF of my legacy disks, since some have many hours (years, actually) of wear; they are mostly 160GB to 300GB SATAs. I'm thinking of using the oldest drives in a RAID (non-0) config, and using the remainder for non-RAID backup, mostly kept in standby. That way if I lose a RAID drive (the most likely case, I think) I can rebuild the RAID with a new disk. If a backup drive fails I can reconstruct a current backup from the live copy.

                  Note there are still single points of failure - your PC might be hit by lightning - so RAID does not replace an offsite backup scheme, but it nearly eliminates the fear of data loss from a single drive failure.
                  Last edited by stevea; 26 May 2009, 01:40 AM.

                  • #19
                    Originally posted by Adarion View Post
                    Well, if you can find at least one board with a PCI/PCIe slot you could buy a controller with lots of IDE or SATA (or mixed?) storage and you should be ok with that.
                    I thought about that, but if you care about disk throughput and want to slap a bunch of drives into a RAID configuration, then two modern hard drives can saturate a PCI bus, and one modern hard drive can saturate a poorly-implemented PCI bus (one getting maybe 60% of theoretical throughput).
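
                    The arithmetic behind that, with an assumed sustained rate for a drive of that era:

                    ```python
                    # Classic 32-bit/33MHz PCI: 133 MB/s theoretical, shared by the whole bus.
                    pci_theoretical_mb_s = 133
                    pci_weak_mb_s = 0.60 * pci_theoretical_mb_s   # a poorly-implemented bridge

                    hdd_sustained_mb_s = 90   # assumption: outer-track rate of a 7200rpm drive of the time

                    print(f"drives to saturate ideal PCI: {pci_theoretical_mb_s / hdd_sustained_mb_s:.1f}")
                    print(f"drives to saturate weak PCI:  {pci_weak_mb_s / hdd_sustained_mb_s:.1f}")
                    # -> roughly 1.5 and 0.9: two drives swamp a good PCI bus, one swamps a bad one
                    ```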

                    • #20
                      You are right, but OTOH if I have a couple of disks (say RAID1) only spun up daily to perform an archival backup, then I don't care greatly if this is a fair bit slower.

                      That doesn't solve the fundamental problem, though - the embedded parts have too few interfaces. I need a minimum of 4x SATA and 2x Ethernet (one GigE, the other at least 100Mbit), and should really have an IDE port too. I don't see this on any embedded card, but it's easy to find a <$100 AMD mobo with all these and more.
