AMD processors/-based systems power hunger


  • AMD processors/-based systems power hunger

    I'm a big AMD supporter, in large part because of their open-source policy. However, no matter which processor comparison test I look at, AMD systems are always drawing a lot more power.

    For desktops: about 15-20 watts more at idle, and about 20-40 watts more under load.


    For example, let us take the Athlon II X4 630 and the Intel Core i5-750.
    The almost cache-less 630 consumes way more juice than the 750, while also performing worse. In my country electricity is not cheap, so over a long 2-3 year window the i5-750 ends up cheaper in total cost than the Athlon II X4.
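
    Roughly, the math behind that claim; a minimal Python sketch where the purchase prices, extra draw, usage hours, and electricity rate are all assumptions for illustration (plug in your own numbers):
    -----------
    # All numbers here are assumptions, not real market prices.
    amd_price, intel_price = 100.0, 200.0   # assumed CPU purchase prices, EUR
    extra_watts = 30.0                      # assumed average extra draw of the AMD box
    hours_per_day = 12
    rate = 0.20                             # assumed EUR per kWh

    for years in (1, 2, 3):
        extra_kwh = extra_watts * hours_per_day * 365 * years / 1000.0
        extra_cost = extra_kwh * rate
        print(f"{years} yr: AMD total {amd_price + extra_cost:6.2f} EUR "
              f"vs Intel {intel_price:6.2f} EUR")
    -----------
    Whether and when the totals cross over depends entirely on your usage hours and your rate.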

    I found a way to undervolt my Athlon II X4 630 from the ridiculously high 1.40 V Vcore (a value otherwise found on full-blown Phenom IIs) to 1.25 V, leading to a consumption drop of around 25 watts under load (120 W instead of 145 W) and about 10 watts at idle (100 W -> 90 W), with zero impact on stability. My mainboard's logic let me reduce the voltage by a percentage rather than by an absolute value, so the reduction scales down very well when the CPU drops into Cool'n'Quiet mode.
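
    To illustrate why a percentage offset plays nicer with Cool'n'Quiet than a fixed offset, here is a small Python sketch with an assumed P-state voltage table (not the exact values of any real board):
    -----------
    # Assumed P-state voltages, for illustration only.
    pstates = {2800: 1.40, 2100: 1.30, 1600: 1.20, 800: 1.10}  # MHz -> stock Vcore

    pct = 1.25 / 1.40         # ~10.7% reduction, applied as a ratio
    fixed = 1.40 - 1.25       # 0.15 V absolute offset

    for mhz, v in sorted(pstates.items(), reverse=True):
        print(f"{mhz:4d} MHz: stock {v:.2f} V | "
              f"percentage {v * pct:.2f} V | fixed {v - fixed:.2f} V")

    # A fixed 0.15 V offset bites proportionally harder at the already-low
    # idle voltage, so it is more likely to destabilize the low P-states.
    -----------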

    Prior to my switch from an Intel E5300/GF 9800GT to a full AMD system, I had an opportunity to play with Intel's SpeedStep, which basically reduced the CPU multiplier to 6. Both cores still ran at 1.2 GHz, whereas on the Athlon II X4, thanks to Cool'n'Quiet (and the ondemand governor), all four cores run at just 0.8 GHz with the Vcore reduced too.

    My main questions are:

    1) Why does the AMD K10 drain so much more, in performance per watt, than Core or even Core 2 Duo? What's the reason behind such a big difference?
    2) Will there be any change with Bulldozer?
    3) Why is the Athlon II X4 spec'ed at 1.40 V when it runs just fine at 1.25 V (or even 1.20 V, if you do an internet search)?

    Please, no Intel fanboyism. Thanks.

  • #2
    Originally posted by crazycheese View Post
    For example, let us take the Athlon II X4 630 and the Intel Core i5-750.
    The almost cache-less 630 consumes way more juice than the 750, while also performing worse. In my country electricity is not cheap, so over a long 2-3 year window the i5-750 ends up cheaper in total cost than the Athlon II X4.

    I found a way to undervolt my Athlon II X4 630 from the ridiculously high 1.40 V Vcore (a value otherwise found on full-blown Phenom IIs) to 1.25 V, leading to a consumption drop of around 25 watts under load (120 W instead of 145 W) and about 10 watts at idle (100 W -> 90 W), with zero impact on stability. My mainboard's logic let me reduce the voltage by a percentage rather than by an absolute value, so the reduction scales down very well when the CPU drops into Cool'n'Quiet mode.
    Your numbers don't make sense. That is a 125-watt chip. It can't eat 145 W unless you are overclocking/overvolting. If it is, then something else is really nuts.

    Idle power consumption of that chip should be around 10-15 watts, not 90-100. That's just crazy.

    Unless you're measuring full-system power consumption... in which case you have other things to think about than just the CPU... the chipset and graphics card, for example. An Intel system will typically have an Intel GPU, which is super weak and probably doesn't eat much power... so on a wall-socket power measurement, an Intel system, even with a CPU that eats more power, might still have a lower "full system" power consumption.

    Note: according to this: http://www.behardware.com/articles/7...0-and-630.html --- that Athlon 630 draws 12.6 W at idle and 80.4 W flat out.

    1) Why does the AMD K10 drain so much more, in performance per watt, than Core or even Core 2 Duo? What's the reason behind such a big difference?
    The fact that there is more to the story than the CPU.

    2) Will there be any change with Bulldozer?
    Again, there is more to the story than just the CPU!!!

    3) Why is the Athlon II X4 spec'ed at 1.40 V when it runs just fine at 1.25 V (or even 1.20 V, if you do an internet search)?
    Error margin, to ensure maximum stability for everyone, including chips that are a little "out"... it improves yields and saves money (for them).



    • #3
      Originally posted by droidhacker View Post
      Your numbers don't make sense. That is a 125-watt chip. It can't eat 145 W unless you are overclocking/overvolting. If it is, then something else is really nuts.

      Idle power consumption of that chip should be around 10-15 watts, not 90-100. That's just crazy.

      Unless you're measuring full-system power consumption... in which case you have other things to think about than just the CPU... the chipset and graphics card, for example. An Intel system will typically have an Intel GPU, which is super weak and probably doesn't eat much power... so on a wall-socket power measurement, an Intel system, even with a CPU that eats more power, might still have a lower "full system" power consumption.

      Note: according to this: http://www.behardware.com/articles/7...0-and-630.html --- that Athlon 630 draws 12.6 W at idle and 80.4 W flat out.


      The fact that there is more to the story than the CPU.


      Again, there is more to the story than just the CPU!!!


      Error margin, to ensure maximum stability for everyone, including chips that are a little "out"... it improves yields and saves money (for them).
      Hi, thanks for the reply! Of course it's the whole-system drain at idle and the whole-system drain under CPU load. The thing is, the Intel system drains way less, especially at idle, but also under load. If you take a look at the link I provided in the first post, you will see that the Core i5-750 drains less than the Athlon II X4 at full load and way less at idle. In fact, the Core i3-530 and Core i7-870 drain the same at idle, while the Core i7-9xx drains more due to the additional memory channel (if my reading is correct).

      No way can a chip be overvolted that much for stability reasons. Mind you, the Athlon II X4 concept was at first a normal Phenom II with the cache disabled, but later they introduced new, smaller cores, not just the old one with cache disabled. The newer cores are physically Phenom II but with the L3 cache physically absent. I think they either keep this high drain on purpose, or they just forgot and don't care.

      I think the reason for the lower Core i drain lies in the ability to shut off individual cores completely instead of driving them at lower settings.

      It would be nice if another feature of Bulldozer were a revamped idle and load power-management scheme.

      It's just that burning electricity is not as much fun as burning rubber.



      • #4
        One part of it is die selection: the desktop users always get the worst/bad dies...

        The notebook and server customers get the best dies.

        For example, you can buy an 8-core Opteron with an 80-watt TDP,

        but a desktop dual-core burns more than 80 watts...

        And the Opteron is 3-4 times faster in 7zip.

        Only the chipsets are sometimes better on the desktop side.

        If you really want perfect speed per watt, buy an AMD Fusion 4-core+APU in 2011...

        No system will beat that in power consumption.



        • #5
          "1) Why is AMD K10 draining so much more in performance per watt than Core or even Core2Duo? Whats the reason behind so much difference?"

          Intel ships CPUs with no pre-testing selection, meaning you can get a good one if you are lucky.

          AMD pre-tests every die; the good ones go into Opteron servers and notebooks, and the bad ones go into desktop systems.


          "2) Will there be any change with Bulldozer?"

          Not really... if you want a system with more power per watt, buy an Opteron system or a notebook.

          The desktop customers just don't care.

          But you can also build a power-saving AMD system: with the money you save buying an AMD CPU you can buy a better ATX power supply for your PC, or an SSD; that saves more than the CPU...

          The worst ATX power supplies are 80-85% efficient; the best are 93%.

          A 3.5" HDD consumes 15-20 watts; an SSD only 0.5 watts.

          A good TFT uses 45 watts and a bad one 100 watts...

          You really can get a good system without burning money on Intel CPUs.

          And another way is to just buy a 6-core and downclock it to 2 GHz.



          • #6
            Originally posted by crazycheese View Post
            0.8 GHz with Vcore reduced too.

            My main questions are:

            1) Why does the AMD K10 drain so much more, in performance per watt, than Core or even Core 2 Duo? What's the reason behind such a big difference?
            Looking at the actual figures, it looks like you have something other than the CPU consuming most of the power in the system, or a PSU that is horrendously inefficient. Your at-the-wall power draw before you undervolted was 145 W at full load and 100 W at idle. A modern CPU at idle takes somewhere between 5 and 20 watts according to people who've hooked ammeters to the CPU power cables (e.g. Lost Circuits) and tested them. The fact that the difference between idle and load power draw was only 30-45 watts says there's something that consumes a lot of power irrespective of the CPU (or its VRM circuitry).

            The biggest offenders are GPUs that fail to clock down fully when they idle, particularly if you have two monitors attached to the GPU. I know my GTS250 clocks down to 300 core/100 memory at idle with one monitor attached but always runs at the full 738/1100 with both of my monitors attached. Other things that can suck power are a bunch of HDDs, some peripheral cards like higher-end SATA/SAS controllers, fans, lights, and inefficient chipsets.

            Also, a power supply that is of poor quality or is old will be far less efficient than newer models. Modern PSUs are 80-90+% efficient, while old ones tend to be somewhere between 55% and 70% efficient. And if you have an enormous PSU like a 1000 W+ unit and you are only drawing 100 W or so from it, efficiency can also be poor even if the PSU is pretty efficient at a reasonable load for its size.
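
            As a rough back-of-the-envelope version of that reasoning, a Python sketch (the PSU efficiency is an assumption; plug in your unit's real figure if you know it):
            ------
            # Estimate the CPU-attributable share of wall-socket readings.
            idle_wall, load_wall = 100.0, 145.0  # watts at the wall, from the post above
            psu_eff = 0.80                       # assumed efficiency of an older 400 W PSU

            delta_dc = (load_wall - idle_wall) * psu_eff  # DC-side swing, mostly CPU + VRM
            idle_dc = idle_wall * psu_eff                 # DC-side idle floor

            print(f"CPU+VRM load swing: ~{delta_dc:.0f} W")                        # ~36 W
            print(f"Idle floor (GPU, board, disks, fans, ...): ~{idle_dc:.0f} W")  # ~80 W
            # If the CPU itself idles at 10-15 W, roughly 65-70 W of that idle
            # floor is being burned by something other than the CPU.
            ------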

            2) Will there be any change with Bulldozer?
            I bet power management will be even better, but your system seems to have most of its power drawn by something other than the CPU, so I doubt changing to a Bulldozer-based CPU will help you much.

            3) Why is the Athlon II X4 spec'ed at 1.40 V when it runs just fine at 1.25 V (or even 1.20 V, if you do an internet search)?
            It is specced that way so that AMD can make money selling Athlon II X4s for roughly a hundred bucks. The percentage of dies that will reliably run at a given speed decreases as you lower the voltage, so AMD set the voltage level fairly high to ensure very high yields. Higher yields mean a lower price at which AMD can sell the chips and still make the money they need to. They could certainly lower the Vcore on most of the chips and be fine, but they'd end up with more chips that they'd have to turn into X3s or discard than they currently do with the 1.40 V Vcore. Your $99 Athlon II X4 640 would end up costing more than $99, perhaps significantly so.
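
            A toy model of that yield argument (every number here is invented for illustration; none are real AMD figures):
            ------
            # Hypothetical wafer economics.
            wafer_cost = 5000.0      # assumed cost per processed wafer
            dies_per_wafer = 400     # assumed candidate dies per wafer

            # Assumed: a lower voltage spec disqualifies more dies.
            for vcore, yield_frac in [(1.40, 0.95), (1.30, 0.85), (1.25, 0.70)]:
                cost_per_good_die = wafer_cost / (dies_per_wafer * yield_frac)
                print(f"Vcore spec {vcore:.2f} V: yield {yield_frac:.0%}, "
                      f"cost per sellable die {cost_per_good_die:.2f}")
            # The looser the voltage spec, the more dies qualify and the cheaper
            # each sellable chip is -- at the cost of higher power draw for users.
            ------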



            • #7
              Originally posted by droidhacker View Post
              Your numbers don't make sense. That is a 125-watt chip. It can't eat 145 W unless you are overclocking/overvolting. If it is, then something else is really nuts.

              Idle power consumption of that chip should be around 10-15 watts, not 90-100. That's just crazy.
              The Athlon II series has TDPs of 45, 65, or 95 watts. Only some of the Phenoms carry a 125-watt TDP rating.



              • #8
                Originally posted by crazycheese View Post
                Hi, thanks for the reply! Of course it's the whole-system drain at idle and the whole-system drain under CPU load. The thing is, the Intel system drains way less, especially at idle, but also under load. If you take a look at the link I provided in the first post, you will see that the Core i5-750 drains less than the Athlon II X4 at full load and way less at idle. In fact, the Core i3-530 and Core i7-870 drain the same at idle, while the Core i7-9xx drains more due to the additional memory channel (if my reading is correct).
                You also have to look at the rest of the system configuration if you are measuring power draw at the outlet. The chipset on the LGA1156 models such as the i3 and i7-870 is a single chip that is basically the southbridge and doesn't do much heavy I/O, so it draws little power. The chipsets for the i7-9xx series and the AMD CPUs have two chips, one of which (the northbridge) does a lot of heavy I/O and burns a fair bit of power. Also, the i7-9xx series must have a discrete GPU installed, and that will add quite a bit to the at-the-wall power draw, whereas the i3 probably didn't have a discrete GPU installed since it is an IGP chip.

                No way can a chip be overvolted that much for stability reasons. Mind you, the Athlon II X4 concept was at first a normal Phenom II with the cache disabled, but later they introduced new, smaller cores, not just the old one with cache disabled. The newer cores are physically Phenom II but with the L3 cache physically absent. I think they either keep this high drain on purpose, or they just forgot and don't care.
                The Athlon II X4 was intended to mainly use the L3-less "Propus" die with a handful of units being made from L3-containing Phenom II X4 "Deneb" dies that have a defective L3 but four working cores. I think AMD specs for a high voltage on their chips for yield reasons (see my posts above.)

                I think the reason for the lower Core i drain lies in the ability to shut off individual cores completely instead of driving them at lower settings.
                That may be part of it, but Intel also fabricates the i3s and i5-5xx/6xx units on the 32 nm process as compared to the 45 nm process of the i7s and AMD Athlon II/Phenom IIs. They also have a lot higher average selling price for their CPUs and can afford to bin a bit tighter for voltages than AMD can.

                It would be nice if another feature of Bulldozer were a revamped idle and load power-management scheme.
                I think there might be some of that in Bulldozer from what I've heard.



                • #9
                  Originally posted by Qaridarium View Post
                  One part of it is die selection: the desktop users always get the worst/bad dies...

                  The notebook and server customers get the best dies.

                  For example, you can buy an 8-core Opteron with an 80-watt TDP,

                  but a desktop dual-core burns more than 80 watts...

                  And the Opteron is 3-4 times faster in 7zip.
                  The 80-watt 8-core Opterons also cost about $500, compared to about $100 for the 95-watt Phenom II X2. The rest of the desktop dual-cores consume 45 or 65 watts.

                  There are also quite a few benchmarks that those Opterons will be significantly slower at than the Phenom IIs and Athlon IIs. Those 80-watt Opterons run at only 1.8 and 2.0 GHz but the desktop dual-cores go well over 3 GHz, so anything not very well threaded will be a LOT faster on the desktop chips.

                  Only the chipsets are sometimes better on the desktop side.
                  They're pretty similar to tell the truth. The SR5690 northbridge used in the higher-end server boards is nearly identical to the desktop 890FX. The desktops get a little newer southbridge with the SB800 and its 6 Gbps SATA controller, while the servers use the SB7x0-based SP5100. The only real glaring difference is that none of the server chipsets have an IGP built into them like many of the desktop and most of the mobile chipsets do. Almost all servers have onboard graphics, but they're very low-power 2D-only units that typically hang off the PCI bus and are really designed for outputting a GUI for OS installation and management, not for workstation use.



                  • #10
                    Originally posted by MU_Engineer View Post
                    The 80-watt 8-core Opterons also cost about $500, compared to about $100 for the 95-watt Phenom II X2. The rest of the desktop dual-cores consume 45 or 65 watts.

                    There are also quite a few benchmarks that those Opterons will be significantly slower at than the Phenom IIs and Athlon IIs. Those 80-watt Opterons run at only 1.8 and 2.0 GHz but the desktop dual-cores go well over 3 GHz, so anything not very well threaded will be a LOT faster on the desktop chips.
                    I am just trying to explain how AMD sells hardware.

                    They select the best cores for the Opteron CPUs, and the cores too bad for an Opteron are sold for desktops very cheaply.
                    In Europe you can get an 8-core Opteron for 250, meaning not 500 dollars.
                    300-400 dollars maybe!

                    Your thinking about the GHz is just wrong.

                    The 8-core Opteron has nearly double the L3 cache per core of the desktop 6-core,
                    and the 8-core Opteron has quad-channel RAM per socket while the desktop one has only 2 channels.
                    That means on modern, well-optimized source code the Opteron beats the desktop one.



                    Originally posted by MU_Engineer View Post
                    They're pretty similar to tell the truth. The SR5690 northbridge used in the higher-end server boards is nearly identical to the desktop 890FX. The desktops get a little newer southbridge with the SB800 and its 6 Gbps SATA controller, while the servers use the SB7x0-based SP5100. The only real glaring difference is that none of the server chipsets have an IGP built into them like many of the desktop and most of the mobile chipsets do. Almost all servers have onboard graphics, but they're very low-power 2D-only units that typically hang off the PCI bus and are really designed for outputting a GUI for OS installation and management, not for workstation use.
                    Similar? I call you a liar on that point.

                    That's because my last Socket F Opteron with an nForce 3600 Pro chipset was worse than the desktop chipsets.

                    I sold it, and now I have a desktop board and fewer bugs...

                    Now I can have boot partitions beyond the 128 GB limitation,

                    and the NVIDIA MCP55/3600 chipset was really bad with the Catalyst driver in the past. Really bad.

                    And you are wrong if you think the AMD chipsets on the Opteron side are better.

                    No, because the Opteron chipsets have been fixed for 5 years now while the desktop chipsets roll out every year. This year the Opteron chipsets are nearly the same; next year the desktop chipsets are better again...

                    Features like PCIe 3.0 or USB 3.0 or SATA 3:

                    you cannot get an Opteron board with USB 3.0, PCIe 3.0, and SATA 3.

                    Right now the Opteron boards are PCIe 2.0, USB 2.0, and SATA 2.0.

                    Meaning: only the CPUs are better on Opteron systems; the chipsets are NOT!



                    • #11
                      Originally posted by Qaridarium View Post
                      I am just trying to explain how AMD sells hardware.

                      They select the best cores for the Opteron CPUs, and the cores too bad for an Opteron are sold for desktops very cheaply.
                      In Europe you can get an 8-core Opteron for 250, meaning not 500 dollars.
                      300-400 dollars maybe!
                      We can get the 8-core Opteron 6128 here for about $270. However, the TDP on the 6128 and all standard-TDP G34 chips is 115 watts. The 80-watt figure is "Average CPU Power," which is supposed to be the highest power consumption "on average workloads." It ended up being roughly similar to Intel's TDP, both in numerical value and in the sense that the CPUs can certainly exceed it under heavy loads. The tests I've seen have put the actual power consumption of the standard-wattage G34 CPUs at somewhere between the ACP and TDP, so the 115 W TDP chips consume about 100 W at full load. I'd be tempted to agree, since my 6128s run slightly warmer at load with similar heatsinks than my file server's old 92-watt "Gallatin" Xeons.

                      There are 8-core Opterons with 85-watt TDPs, but they are the HE models and cost $455 and $523 for the 6124 HE and 6128 HE, respectively.

                      Your thinking about the GHz is just wrong.

                      The 8-core Opteron has nearly double the L3 cache per core of the desktop 6-core,
                      and the 8-core Opteron has quad-channel RAM per socket while the desktop one has only 2 channels.
                      That means on modern, well-optimized source code the Opteron beats the desktop one.
                      The G34 Opterons are multi-chip modules consisting of two six-core dies with either four or six cores per die active. Each die is similar to a Phenom II X6 die with 6 MB of L3 cache per die (although 1 MB is claimed by HT Assist and not visible to the OS) and a two-channel DDR3-1333 controller. The two dies in the CPU communicate over an HT link using NUMA, just like a dual Socket F Opteron system. The only way to access all of the chip's resources at once is to have enough threads running on both dies in the package. The overhead of NUMA means the OS will only want to move threads to the other die if there are more threads than cores on the current die. Thus the Opterons really will only run multi-threaded software faster than the Phenom IIs, since die-for-die and clock-for-clock they are similar in performance, except the G34 Opterons are clocked a whole lot slower. Trust me on this: a friend has a Phenom II X4 920 system that I use with some regularity and I have a dual Opteron 6128 system. The 920 is faster than my 6128s in anything using about 5 threads or fewer.
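
                      If you want to see the two dies from the OS's point of view, here is a quick Python sketch reading Linux's standard NUMA sysfs paths; on a G34 Opteron each die shows up as its own node:
                      ------
                      # List NUMA nodes and the CPUs that belong to each one.
                      import glob

                      nodes = sorted(glob.glob("/sys/devices/system/node/node[0-9]*"),
                                     key=lambda p: int(p.rsplit("node", 1)[1]))
                      for node in nodes:
                          with open(node + "/cpulist") as f:
                              # Threads pinned within one node stay on one die.
                              print(f"{node.rsplit('/', 1)[1]}: CPUs {f.read().strip()}")
                      ------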

                      Similar? I call you a liar on that point.

                      That's because my last Socket F Opteron with an nForce 3600 Pro chipset was worse than the desktop chipsets.

                      I sold it, and now I have a desktop board and fewer bugs...
                      I wasn't talking about NVIDIA chipsets that were several generations old, I was talking about the current AMD server and desktop chipsets that are being sold NOW. The 890FX and SR5690 are all reported as RD890 units by lspci and they have nearly identical specs with regard to PCIe lanes, process technology, and such. The only features that may be different are that the 890FX explicitly supports CrossFire while the SR5690 does not have explicit support for it, although many have gotten it to work. Also, the SR5690 has an IOMMU while most if not all 890FX boards don't have that function or don't have it exposed by the BIOS.

                      And you are wrong if you think the AMD chipsets on the Opteron side are better.

                      No, because the Opteron chipsets have been fixed for 5 years now while the desktop chipsets roll out every year. This year the Opteron chipsets are nearly the same; next year the desktop chipsets are better again...
                      The current Opteron chipsets are pretty much of the same generation as the current desktop ones. They both use derivatives of the 800-series northbridges and SB7x0-derived southbridges. Oddly enough, the server line was the very first to use the new 800-series northbridge silicon. Sure, some desktops have the SB800 southbridge, but there's not much on it over the SB7x0-based SP5100 that will interest server buyers.

                      AMD will also be revamping the Opteron chipset line in 2012 when they release the "Sepang" and "Terramar" Opterons on new sockets with more I/O being controlled on the CPU die rather than by the chipset.

                      Features like PCIe 3.0 or USB 3.0 or SATA 3:

                      you cannot get an Opteron board with USB 3.0, PCIe 3.0, and SATA 3.
                      The PCI Express 3.0 standard was only finalized earlier this month, so nobody has chipsets that provide that functionality yet.

                      USB 3.0 is not very important to server operators as about the only thing it is used for right now over USB 2.0 is to attach external hard drives. You're typically not going to be doing that to a server. Most servers have only two USB ports on the rear I/O panel compared to 6-10 ports on a desktop, so that should give you an idea as to the importance of any USB ports on servers. Also, if you really needed USB 3.0, you can always add in a USB 3.0 card.

                      SATA 3.0 from the southbridge also doesn't matter a whole lot, since most server operators that need a lot of disk I/O throughput will be using add-in PCI Express controller cards. Those are frequently 6 Gbps SAS units today, which are compatible with 6 Gbps SATA. Quite a few boards have the SAS controller onboard, so they really do have the onboard capability to talk to 6 Gbps SATA HDDs. It doesn't matter whether the controller is in the southbridge or in another IC connected over PCIe; the boards still have that function.



                      • #12
                        Thanks for your posts, guys!

                        I'm also aware of most of the (many) suggestions and tips given by MU_Engineer; unfortunately, they don't apply here. I'm not using a PSU voltmeter to measure. I was using a standard multifunction euro-socket power meter, similar to this one: http://www.ebreaker.de/images/ab-energiem-01.jpg

                        It produces fairly accurate results when connected over a longer period. Only the case was measured; the 40 W 24" Acer monitor was on a separate line.

                        The problem is that AMD CPU power hunger is something that is already well known.

                        The first Core 2 config and the second Athlon II config were used inside the same machine; HDDs, optical drive, monitor, and PSU unchanged.
                        - The first was a dual-core E5300, ASRock P43ME, 2x2 GB DDR2-800, GF 9800GT Green (rated at max 75 W).
                        - The second: Athlon II X4 630, Gigabyte GA-MA785GMT-UD2H (latest BIOS), 2x2 GB DDR3-1600, Radeon HD 4770 (rated at max 85 W).

                        The PSU is a 2.5-year-old BeQuiet 400 W; it should be very comparable to an Enermax in terms of efficiency.

                        The Athlon II X4 has nothing more in common with the Phenom II: the L3 cache is physically absent, cut out at the design stage, not after production. The first prototypes were, true, Phenom IIs with disabled cache, but the one I own is not; it's a Propus core.
                        -----------
                        processor : 0
                        vendor_id : AuthenticAMD
                        cpu family : 16
                        model : 5
                        model name : AMD Athlon(tm) II X4 630 Processor
                        stepping : 2
                        cpu MHz : 800.000
                        cache size : 512 KB
                        physical id : 0
                        siblings : 4
                        core id : 0
                        cpu cores : 4
                        apicid : 0
                        initial apicid : 0
                        fpu : yes
                        fpu_exception : yes
                        cpuid level : 5
                        wp : yes
                        flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm 3dnowext 3dnow constant_tsc rep_good nonstop_tsc extd_apicid pni monitor cx16 popcnt lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt npt lbrv svm_lock nrip_save
                        bogomips : 5625.32
                        TLB size : 1024 4K pages
                        clflush size : 64
                        cache_alignment : 64
                        address sizes : 48 bits physical, 48 bits virtual
                        power management: ts ttp tm stc 100mhzsteps hwpstate
                        -----------

                        I think it is highly improbable that the CPU gets overvolted for stability. Why would it be, when the Phenom II uses exactly the same Vcc? Six megabytes of cache are gone; why not drop the voltage...

                        This is pure craziness if AMD thinks desktops are unimportant for power efficiency! Look, for example, at my cpufreq-info output for one of the cores:
                        ------
                        driver: powernow-k8
                        CPUs which run at the same hardware frequency: 3
                        CPUs which need to have their frequency coordinated by software: 3
                        maximum transition latency: 8.0 us.
                        hardware limits: 800 MHz - 2.80 GHz
                        available frequency steps: 2.80 GHz, 2.10 GHz, 1.60 GHz, 800 MHz
                        available cpufreq governors: ondemand, performance
                        current policy: frequency should be within 800 MHz and 2.80 GHz.
                        The governor "ondemand" may decide which speed to use
                        within this range.
                        current CPU frequency is 800 MHz.
                        cpufreq stats: 2.80 GHz:2.12%, 2.10 GHz:0.02%, 1.60 GHz:0.07%, 800 MHz:97.79% (9570)
                        -------
                        As you can see, unlike on a server, 97% of the time is spent on things that could be done in idle mode anyway. I need the power of all four cores on demand (encoding and compiling).
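
                        Just to put a number on it, a quick Python sketch turning those time-in-state percentages into an effective average clock:
                        ------
                        # Effective average frequency from the cpufreq stats above.
                        stats = {2.80: 0.0212, 2.10: 0.0002, 1.60: 0.0007, 0.80: 0.9779}  # GHz -> share

                        avg_ghz = sum(ghz * share for ghz, share in stats.items())
                        print(f"Effective average clock: {avg_ghz:.2f} GHz")  # ~0.84 GHz
                        # Despite being a 2.8 GHz quad-core, this desktop spends almost all
                        # of its time at 800 MHz, so idle power matters far more than load power.
                        ------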

                        I highly doubt that energy efficiency is the topmost priority in server systems.

                        Also, I have heard too that the AMD Opteron has 80 W not due to "better cores" but due to adding more cores while at the same time dropping the total clock speed of the CPU, to maintain the TDP. And the additional cache is included in the formula as well.

                        This, combined with the previous statement, would make an Opteron on my desktop rather inefficient and underused. Well, I don't run a server at home (yet); I only need a boost of power maybe a few times per day.

                        Qaridarium, you are living in Germany, no? Who is your electricity provider? It is normal there that an extra 60 W used 12 hours per day, over a whole year, adds about €53 in extra cost!!
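
                        The arithmetic behind that figure, assuming a household rate of about €0.20/kWh:
                        ------
                        # Yearly cost of an extra 60 W drawn 12 h/day, at an assumed 0.20 EUR/kWh.
                        extra_watts = 60
                        hours_per_day = 12
                        rate_eur_per_kwh = 0.20  # assumed German household rate

                        kwh_per_year = extra_watts * hours_per_day * 365 / 1000  # 262.8 kWh
                        print(f"{kwh_per_year:.1f} kWh/year -> "
                              f"{kwh_per_year * rate_eur_per_kwh:.2f} EUR/year")  # ~52.56 EUR
                        ------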

                        Also, please take a look here: http://techreport.com/articles.x/18448/4

                        You will see how nicely even the 8-threaded i7-870 behaves. The CPU sleeps when not needed, uses all its power when required, then goes back to idle.

                        The nice new things that I know now (in large part thanks to your help) are:
                        -) Intel sells unlock cards, for an extra price, for low-end Intel CPUs to unlock their original performance. Pretty dirty: a CPU that is capable of running faster at zero cost is put into this scheme (implementing which also ADDS extra cost), and the result is sold at a lower price. Pretty dirty.

                        -) Intel sells SAS controllers on server boards that are also locked! Present, working, drawing power, but LOCKED unless you purchase the unlock code ($600 or so).

                        -) AMD sorts out the worse cores for the desktop, whereas with Intel you are running a server-quality (in AMD terms) CPU in a desktop system.

                        -) AMD uses two-chip logic, similar to the Core i7-9xx. This is very probably adding to idle usage.


                        So from the info you have given me, guys, the high power drain seems to be a combination of:
                        -) AMD not caring about tighter, more accurate voltages (idle and load +)
                        -) AMD giving the worse cores to the desktop (idle and load +)
                        -) The aging 45 nm process on the CPU and bridge chips (idle and load +)
                        -) The PCB using two chips, instead of one as on the 11xx boards (idle +)
                        That seems to pretty much cover it.



                        • #13
                          It's simply a matter of trade-offs in CPU design: performance vs. power consumption in a given fab process. Intel is one step ahead here.

                          AMD releases low-voltage processors that have much lower power consumption, so you might want to look into that. If you want to go even lower, you'll need to move into mobile territory (AMD's new Zacate platform is awesome: dual-core CPU + GPU in < 18 W max power).

                          Then again, neither Intel nor AMD can hold a candle to ARM in power consumption.



                          • #14
                            Originally posted by crazycheese View Post

                            The first Core 2 config and the second Athlon II config were used inside the same machine; HDDs, optical drive, monitor, and PSU unchanged.
                            - The first was a dual-core E5300, ASRock P43ME, 2x2 GB DDR2-800, GF 9800GT Green (rated at max 75 W).
                            - The second: Athlon II X4 630, Gigabyte GA-MA785GMT-UD2H (latest BIOS), 2x2 GB DDR3-1600, Radeon HD 4770 (rated at max 85 W).

                            The PSU is a 2.5-year-old BeQuiet 400 W; it should be very comparable to an Enermax in terms of efficiency.
                            If I were you, I'd put the 9800GT in the Athlon II system or the HD 4770 in the Intel system and see how that affects the readings. That's the largest variable that is unrelated to the CPU and can be directly isolated, so I would definitely do that if you still have the parts.

                            I think it is highly improbable that the CPU gets overvolted for stability. Why would it be, when the Phenom II uses exactly the same Vcc? Six megabytes of cache are gone; why not drop the voltage...
                            Cache doesn't use nearly as much power as logic, especially when the L3 cache runs at a completely different (and lower) speed and voltage. Phenom II L3 caches use somewhere right around 1.00 V and run at 1.80-2.00 GHz, compared to 1.35-1.40 V and 2.60-3.50 GHz for the cores. Your Athlon II X4 has virtually the same amount of logic as a Phenom II X4 and it is made on the same process and stepping, so I would expect the core voltage to be the same as a Phenom II's. I do agree AMD is really over-speccing the voltage, but it's very frequently and relatively easily correctable by the user, as you have said.

                            This is pure craziness if AMD thinks desktops are unimportant for power efficiency!

                            I highly doubt that energy efficiency is the topmost priority in server systems.
                            It actually is a top priority. Just look at what's on the front of an Opteron box:

                            [image: front of an Opteron retail box]

                            If energy efficiency weren't a priority, why would they put "Outstanding performance per watt" on the front of their box instead of "Outstanding performance" or "Outstanding performance for the price"?

                            Also, I have heard too that the AMD Opteron has 80 W not due to "better cores" but due to adding more cores while at the same time dropping the total clock speed of the CPU, to maintain the TDP. And the additional cache is included in the formula as well.
                            The biggest reason the Opterons can have that many cores and still have the same TDP as desktop chips is that their clock speed is so much slower. The lower clock speed in itself leads to lower thermal dissipation. But it really comes from the fact that you don't need as much voltage to drive a chip at ~2 GHz as you do at ~3 GHz, and thermal dissipation increases linearly with clock speed but with the square of voltage. So if you can drop the voltage to 90% of what you had before, your chip will consume 81% as much power as before, everything else being equal.
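
                            A minimal sketch of that relationship (dynamic power only, P ~ f * V^2; leakage ignored):
                            ------
                            # Relative dynamic power for a frequency ratio and a voltage ratio.
                            def rel_power(f_ratio, v_ratio):
                                return f_ratio * v_ratio ** 2

                            # Same clock, 90% of the voltage -> 81% of the power.
                            print(f"{rel_power(1.0, 0.90):.0%}")            # 81%

                            # crazycheese's undervolt: 1.40 V -> 1.25 V at the same clock.
                            print(f"{rel_power(1.0, 1.25 / 1.40):.0%}")     # ~80%

                            # Opteron-style tradeoff: 2 GHz instead of 3 GHz at 90% voltage.
                            print(f"{rel_power(2 / 3, 0.90):.0%}")          # ~54%
                            ------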

                            This, combined with the previous statement, would make an Opteron on my desktop rather inefficient and underused. Well, I don't run a server at home (yet); I only need a boost of power maybe a few times per day.
                            Opterons idle at 800 MHz and the standard-voltage ones use right about 1.000 V at idle. The HE ones use less voltage and the EE ones less yet. However, the e-series Athlon IIs with a 45-watt TDP also have a lower idle voltage than your standard 95-watt unit.

                            The nice new things that I know now (in large part thanks to your help) are:
                            -) Intel sells unlock cards, for an extra price, for low-end Intel CPUs to unlock their original performance. Pretty dirty: a CPU that is capable of running faster at zero cost is put into this scheme (implementing which also ADDS extra cost), and the result is sold at a lower price. Pretty dirty.

                            -) Intel sells SAS controllers on server boards that are also locked! Present, working, drawing power, but LOCKED unless you purchase the unlock code ($600 or so).
                            Intel locks a lot of their stuff; this is nothing new. No Core i-series supports ECC RAM, the i5-7xx series has HyperThreading locked off, some of the lower-end i-series chips have no Turbo Boost or virtualization, and many of the Xeons have cache, cores, HyperThreading, and Turbo Boost locked off and the RAM and QPI speeds limited.

                            -) AMD sorts out the worse cores for the desktop, whereas with Intel you are running a server-quality (in AMD terms) CPU in a desktop system.
                            Yes, if only because Intel's desktop chips tend to be less-locked than their server counterparts and AMD doesn't lock anything besides cores on their Opterons.

                            -) AMD uses two-chip logic, similar to the Core i7-9xx. This is very probably adding to idle usage.
                            It also leads to greater I/O capacity and performance. The lower-capability AMD northbridges use less power than the higher-capability ones, so go for something very basic like the 870 northbridge if you want a minimal power draw.



                            • #15
                              Originally posted by MU_Engineer View Post
                              We can get the 8-core Opteron 6128 here for about $270. However, the TDP on the 6128 and all standard-TDP G34 chips is 115 watts. The 80-watt figure is "Average CPU Power," which is supposed to be the highest power consumption "on average workloads." It ended up being roughly similar to Intel's TDP, both in numerical value and in the sense that the CPUs can certainly exceed it under heavy loads. The tests I've seen have put the actual power consumption of the standard-wattage G34 CPUs at somewhere between the ACP and TDP, so the 115 W TDP chips consume about 100 W at full load. I'd be tempted to agree, since my 6128s run slightly warmer at load with similar heatsinks than my file server's old 92-watt "Gallatin" Xeons.
                              There are 8-core Opterons with 85-watt TDPs, but they are the HE models and cost $455 and $523 for the 6124 HE and 6128 HE, respectively.
                              Well, yes... but my point stands: any Opteron beats a desktop dual-core CPU in performance per watt.




                              Originally posted by MU_Engineer View Post
                              The G34 Opterons are multi-chip modules consisting of two six-core dies with either four or six cores per die active. Each die is similar to a Phenom II X6 die with 6 MB of L3 cache per die (although 1 MB is claimed by HT Assist and not visible to the OS)
                              You can turn off HT Assist in the BIOS.

                              Not all apps run faster with HT Assist; in my view, HT Assist is a benchmark feature for synthetic benchmarks.





                              Originally posted by MU_Engineer View Post
                              and a two-channel DDR3-1333 controller. The two dies in the CPU communicate over an HT link using NUMA, just like a dual Socket F Opteron system. The only way to access all of the chip's resources at once is to have enough threads running on both dies in the package. The overhead of NUMA means the OS will only want to move threads to the other die if there are more threads than cores on the current die. Thus the Opterons really will only run multi-threaded software faster than the Phenom IIs, since die-for-die and clock-for-clock they are similar in performance, except the G34 Opterons are clocked a whole lot slower. Trust me on this: a friend has a Phenom II X4 920 system that I use with some regularity and I have a dual Opteron 6128 system. The 920 is faster than my 6128s in anything using about 5 threads or fewer.
                              Not everything is about raw speed; the Opteron is about latency. In a game like ARMA 2, an Opteron system has much better latency.

                              Meaning: even if the system runs slower overall, because the first thread is held back by the 2 GHz clock, the system reacts faster than the Phenom II system that gets more FPS...

                              Between more FPS and reacting faster, I prefer reacting faster.

                              That's because of the parallel RAM latency: 4 channels have less parallel latency than 2 channels.

                              Originally posted by MU_Engineer View Post
                              I wasn't talking about NVIDIA chipsets that were several generations old, I was talking about the current AMD server and desktop chipsets that are being sold NOW. The 890FX and SR5690 are all reported as RD890 units by lspci and they have nearly identical specs with regard to PCIe lanes, process technology, and such. The only features that may be different are that the 890FX explicitly supports CrossFire while the SR5690 does not have explicit support for it, although many have gotten it to work. Also, the SR5690 has an IOMMU while most if not all 890FX boards don't have that function or don't have it exposed by the BIOS.
                              Right now. But wait 6 months or so, and the desktop boards will have better chipsets again...




                              Originally posted by MU_Engineer View Post
                              The current Opteron chipsets are pretty much of the same generation as the current desktop ones. They both use derivatives of the 800-series northbridges and SB7x0-derived southbridges. Oddly enough, the server line was the very first to use the new 800-series northbridge silicon. Sure, some desktops have the SB800 southbridge, but there's not much on it over the SB7x0-based SP5100 that will interest server buyers.
                              In clear words: right now the desktop is better,

                              maybe because of the SATA 3.



                              Originally posted by MU_Engineer View Post
                              AMD will also be revamping the Opteron chipset line in 2012 when they release the "Sepang" and "Terramar" Opterons on new sockets with more I/O being controlled on the CPU die rather than by the chipset.
                              Yes, 2012... meaning in 2011 the desktop chipsets beat the server chipsets again...




                              Originally posted by MU_Engineer View Post
                              The PCI Express 3.0 standard was only finalized earlier this month, so nobody has chipsets that provide that functionality yet.
                              "Yet" is a very dangerous word in the computer world; the "yet" can be over any minute...


                              Originally posted by MU_Engineer View Post
                              USB 3.0 is not very important to server operators as about the only thing it is used for right now over USB 2.0 is to attach external hard drives. You're typically not going to be doing that to a server. Most servers have only two USB ports on the rear I/O panel compared to 6-10 ports on a desktop, so that should give you an idea as to the importance of any USB ports on servers. Also, if you really needed USB 3.0, you can always add in a USB 3.0 card.
                              Opterons are not only used in servers; think about workstations.



                              Originally posted by MU_Engineer View Post
                              SATA 3.0 from the southbridge also doesn't matter a whole lot, since most server operators that need a lot of disk I/O throughput will be using add-in PCI Express controller cards. Those are frequently 6 Gbps SAS units today, which are compatible with 6 Gbps SATA. Quite a few boards have the SAS controller onboard, so they really do have the onboard capability to talk to 6 Gbps SATA HDDs. It doesn't matter whether the controller is in the southbridge or in another IC connected over PCIe; the boards still have that function.
                              It does matter: I know of tests where the latency through the southbridge is better than through a PCIe card.

                              So you buy an SSD for faster latency and then you burn the latency on the PCIe bus, LOL... FAIL.

