
AMD processors/-based systems power hunger


  • crazycheese
    started a topic AMD processors/-based systems power hunger


    I'm a big AMD supporter, in large part because of their open-source policy. However, no matter which processor comparison test I look at, AMD systems always draw a lot more power.

    For desktops it's about 15-20 W more at idle, and about 20-40 W more under load.


    For example, let us take the Athlon II X4 630 and the Intel Core i5-750. The almost cache-less 630 consumes considerably more juice than the 750 while also performing worse. In my country electricity is not cheap, and over a 2-3 year window that makes the 750 cheaper in total cost than the Athlon II X4.
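    As a rough illustration of that total-cost argument (the wattage gap, daily hours, and electricity price below are assumed numbers for the sketch, not figures from any review):

```python
# Hypothetical total-cost-of-ownership sketch: extra electricity cost of a
# system drawing ~30 W more on average, used 8 h/day for 3 years.
# All three inputs are assumptions for illustration only.
extra_watts = 30            # assumed average extra draw (between the idle and load deltas)
hours = 8 * 365 * 3         # 8 hours a day for three years
price_per_kwh = 0.25        # assumed electricity price in EUR
extra_cost = extra_watts / 1000 * hours * price_per_kwh
print(f"extra electricity cost over 3 years: {extra_cost:.2f} EUR")
```

    At rates like these, the power gap alone can eat up most of the price difference between the two chips.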

    I found a way to undervolt my Athlon II X4 630 from its ridiculously high 1.4 V Vcore (otherwise only found on full-blown Phenom IIs) to 1.25 V, leading to a consumption drop of around 25 W under load (120 W instead of 145 W) and 10 W at idle (100 W -> 90 W), with zero impact on stability. The logic of my mainboard lets me set the reduction as a percentage rather than an absolute value, so it scales down nicely when the CPU drops into Cool'n'Quiet mode.
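    The size of that drop is consistent with the usual rule of thumb that dynamic CPU power scales roughly with the square of the core voltage at a fixed frequency. A quick sketch, with the CPU's share of the system's load draw as an assumed number:

```python
# Dynamic power ~ C * V^2 * f; with f fixed, P_new/P_old ~ (V_new/V_old)^2.
v_old, v_new = 1.40, 1.25
cpu_power_old = 95.0        # assumed CPU share of the 145 W load draw (roughly the TDP)
scale = (v_new / v_old) ** 2
drop = cpu_power_old * (1 - scale)
print(f"scaling factor: {scale:.3f}")   # ~0.797
print(f"estimated drop: {drop:.1f} W")  # ~19 W, in the ballpark of the ~25 W observed
```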

    Prior to my switch from an Intel E5300/GeForce 9800 GT to a full AMD system, I had the opportunity to play with Intel's SpeedStep, which basically reduced the CPU multiplier to 6. Both cores still ran at 1.2 GHz, whereas on the Athlon II X4, thanks to Cool'n'Quiet (and the ondemand governor), all four cores run at just 0.8 GHz with the Vcore reduced as well.
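    On Linux you can watch Cool'n'Quiet (or SpeedStep) at work through the standard cpufreq sysfs interface. A small sketch; the paths are the stock kernel interface, but which files exist depends on the driver:

```python
# Read the current cpufreq governor and frequencies for one CPU.
from pathlib import Path

def read_cpufreq(cpu=0):
    base = Path(f"/sys/devices/system/cpu/cpu{cpu}/cpufreq")
    values = {}
    for name in ("scaling_governor", "scaling_cur_freq", "scaling_min_freq"):
        f = base / name
        if f.exists():                  # the driver may not expose every file
            values[name] = f.read_text().strip()
    return values

print(read_cpufreq())  # empty dict if no cpufreq driver is loaded
```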

    My main questions are:

    1) Why does AMD's K10 fare so much worse in performance per watt than Core, or even Core 2 Duo? What's the reason behind such a large difference?
    2) Will there be any change with Bulldozer?
    3) Why is the Athlon II X4 spec'ed at 1.4 V when it runs just fine at 1.25 V (or even 1.20 V, if you do an internet search)?

    Please, no Intel fanboyism. Thanks.

  • Qaridarium
    replied
    Originally posted by MU_Engineer View Post
    You would still need to check clearances carefully no matter what board you mount that Scythe Mugen on. It's simply an enormous heatsink that the only real reason to get it would be to passively cool the CPUs. There are certainly other heatsinks out there that aren't quite so huge that would work on an Opteron board but are still pretty quiet. You could also look at water cooling as that is quiet, water blocks are small and have few clearance issues, and there are blocks specifically designed to bolt to Socket F/C32 and G34 out there, so you don't need to use the clamp-on heatsink retention brackets.

    Yes, right.

    In general a water-cooling system is NOT passive; the water pump can fail, and a water pump makes noise.

    A Mugen never lets you down, and water isn't a good idea around electronics...





    You can also look on eBay for retention brackets if your board does not come with one. They cost $4-10 and I'll bet that some of the sellers even ship to Germany.


    Originally posted by MU_Engineer View Post
    Depends on your definition of "loud." If you demand pretty much total and complete silence from your machine (basically an SPL < 20 dB) then yes, they're all loud. All of them will also be louder than your enormous heatsink as well.
    Oh well, nice, you get the point: I just want high speed with 0 dB of noise.




    Originally posted by MU_Engineer View Post
    But most people I've seen with Socket F boards (which would use the same heatsinks as C32) have made some pretty quiet machines out of 92 mm or carefully-selected 120 mm desktop heatsinks. Machines using 2U/3U server heatsinks with PWM-controlled fans 70 mm or larger with their speed controlled by the BIOS are very similar to your typical corporate office PC in noise level.
    Right... my overclocked E6600 @ 3.6 GHz has this one right now: http://geizhals.at/a464251.html


    with two 120 mm fans

    and I have a Mugen ready for a Bulldozer system in 2011





    Originally posted by MU_Engineer View Post
    You are just trying to use a heatsink that is very far beyond any size and weight specifications of heatsinks designed for that socket. You shouldn't be surprised that you would have trouble getting it to fit. You probably will have trouble mounting that heatsink on 90+% of desktop boards as well.
    Uuhhh, I'm so sorry, I just always do the wrong thing.




    Originally posted by MU_Engineer View Post
    ASUS says it is a 12" by 10" ATX board on their website. They also do not have the product listed on their German website.
    OK, OK... but if you go E-ATX you can buy a dual-socket G34 board

    and you can put 64 GB of desktop non-ECC RAM into it





    Originally posted by MU_Engineer View Post
    The KCMA-D8 is about $290 over here compared to about $250 for the H8SGL-F.
    You can't really just divide the price of a 6-core chip by 2/3 to get a price of a quad-core chip. The closest C32 equivalent to the 6128 would be two Opteron 4122s, which are 2.2 GHz quad-cores. Two of them cost $200, compared to $270 for the 6128. Two 4122s + the KCMA-D8 will run you $490, while a 6128 and an H8SGL will run you $520, so the C32 solution is a little less expensive and a little faster. Yes, it will likely be a wash after you buy heatsinks, but remember that the only heatsinks that will fit on G34 boards are server heatsinks or Koolance's $85 CPU-360 water block. That's it. You can at least use some more reasonably-sized desktop heatsinks on C32 boards that will be quieter than the server heatsinks for G34.
    I don't get the point... the C32 is not cheaper, you can get much more RAM with G34 on a dual-socket mainboard, and at the high end a G34 system is more powerful.

    The only problem is the air cooler; yes, maybe C32 is better for mounting a cooler on.




    Originally posted by MU_Engineer View Post
    I have a similar history and games are the buggiest programs with the highest propensity to lock up Linux systems in my opinion. If they're Windows games being run with WINE, it's even worse. Fortunately most locked-up games or X sessions can be killed with the magic SysRq keys, which dumps you into a text terminal to restart X without rebooting. But they're still pretty awful and apparently you play a lot of Windows games, so I imagine you see pretty frequent glitches and bugs.
    I never count a crash if a game was running.

    Leave a comment:


  • crazycheese
    replied
    Guys, thank you for input! Especially MU_Engineer!
    The true reason behind the Athlon II X4's inefficient Vcore setting is simple: they are selling defective Phenom IIs (with disabled or defective cache, defective cores, etc.) as Athlon IIs. A Phenom II is unable to work below a 1.4 V Vcore, and just as with the X3 vs. X4, it seems AMD is unable to completely cut power to the defective cache or chip portions: that silicon still runs and consumes electricity, but is never internally addressed. By contrast, true Athlon II Propus cores need as little as 1.15 V Vcc to run stably at 2.8 GHz. That was one of the reasons I switched not only my graphics card but the whole system to AMD: the chip may be cheap, but it should definitely not waste watts on deactivated cache or the like. As an average Joe I don't own a nuclear power plant in the backyard, nor am I interested in investing in rather inefficient, high-cost solar panels.

    Core, on the other hand, although apparently also heavily cut down at the factory, neither uses the cut-down parts nor needs to supply them with power.

    I hope AMD will improve in the next generation. Thanks for the input!

    Leave a comment:


  • Qaridarium
    replied
    Originally posted by devius View Post
    Where did you get those 20W figures from? No 7200rpm desktop hard drive from the last 5 years uses that much power. Typical figures are in the max 8W-10W for 3,5" 7200rpm HDDs. In the world of 2,5" laptop drives the power consumption is already very very close to SSDs. Take a look at the seagate momentus 5400.6 drives: 0,8W idle and 2,85W write power. Something like a Corsair Force SSD has 0,5W idle and 2W operating power.
    I just pulled some numbers out of the air.

    But yes, you are right, an SSD saves power.

    The biggest gain is the time to get from idle to active: HDDs are always slow at that.

    And if you don't compare an SSD against super-slow notebook HDDs, then you need to compare it against a 15,000 RPM SAS HDD, because the speed matters;

    then the SSD wins by a much larger margin...

    Leave a comment:


  • MU_Engineer
    replied
    Originally posted by Qaridarium View Post
    PCIe 1.0, useless SLI, BIOS bugs, a chipset that runs too hot, and the multi-socket incompatibility with CPU coolers, because the first CPU blocks the second one's closing mechanism.

    Which means next time I'll buy a single socket, and nothing will block my super big fat 1 kg Mugen cooler.

    It's funny, because one of my Opteron boards in the past did not come with the "appropriate mounting brackets", and that is not funny if you want a big-block silent cooler in your system; all server coolers are very loud 5000 RPM coolers... and no one in Germany sells "appropriate mounting brackets".
    You would still need to check clearances carefully no matter what board you mount that Scythe Mugen on. It's simply an enormous heatsink that the only real reason to get it would be to passively cool the CPUs. There are certainly other heatsinks out there that aren't quite so huge that would work on an Opteron board but are still pretty quiet. You could also look at water cooling as that is quiet, water blocks are small and have few clearance issues, and there are blocks specifically designed to bolt to Socket F/C32 and G34 out there, so you don't need to use the clamp-on heatsink retention brackets.

    You can also look on eBay for retention brackets if your board does not come with one. They cost $4-10 and I'll bet that some of the sellers even ship to Germany.

    And hey, all server heatsinks are just BAD BAD BAD BAD! And loud!

    Bad and loud just because they save space, and in my case I have space to spare...
    Depends on your definition of "loud." If you demand pretty much total and complete silence from your machine (basically an SPL < 20 dB) then yes, they're all loud. All of them will also be louder than your enormous heatsink as well. But most people I've seen with Socket F boards (which would use the same heatsinks as C32) have made some pretty quiet machines out of 92 mm or carefully-selected 120 mm desktop heatsinks. Machines using 2U/3U server heatsinks with PWM-controlled fans 70 mm or larger with their speed controlled by the BIOS are very similar to your typical corporate office PC in noise level.

    My Mugen cooler is 160 mm tall, and heatsinks of that class can cool an Opteron passively.

    So really, the server heatsinks are so messed up.
    You are just trying to use a heatsink that is very far beyond any size and weight specifications of heatsinks designed for that socket. You shouldn't be surprised that you would have trouble getting it to fit. You probably will have trouble mounting that heatsink on 90+% of desktop boards as well.

    Wrong??? The German shop sites say E-ATX and not ATX. And hey, just calculate an example:
    ASUS says it is a 12" by 10" ATX board on their website. They also do not have the product listed on their German website.

    In Germany the cheapest price for that board is 270 €.

    The cheapest Supermicro H8SGL-F is 227 €.

    That means you save 43 € if you don't buy a C32 dual-socket board!
    The KCMA-D8 is about $290 over here compared to about $250 for the H8SGL-F.

    An AMD Opteron 4170 at 2.1 GHz costs 170 €, and two of them 340 €.

    Yes, that's 12 cores; scaled down to 8 cores that would be about 227 €.

    An AMD Opteron 6128 costs 260 €, which means you save 33 €.

    And you save one CPU cooler; a good one costs 50 €.

    So in reality you save 140 € if you don't buy a C32 system.
    You can't really just divide the price of a 6-core chip by 2/3 to get a price of a quad-core chip. The closest C32 equivalent to the 6128 would be two Opteron 4122s, which are 2.2 GHz quad-cores. Two of them cost $200, compared to $270 for the 6128. Two 4122s + the KCMA-D8 will run you $490, while a 6128 and an H8SGL will run you $520, so the C32 solution is a little less expensive and a little faster. Yes, it will likely be a wash after you buy heatsinks, but remember that the only heatsinks that will fit on G34 boards are server heatsinks or Koolance's $85 CPU-360 water block. That's it. You can at least use some more reasonably-sized desktop heatsinks on C32 boards that will be quieter than the server heatsinks for G34.
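    The arithmetic in the comparison above, using the US prices quoted in this post:

```python
# C32 route: two Opteron 4122s (~$100 each) plus the KCMA-D8 board (~$290).
# G34 route: one Opteron 6128 (~$270) plus the H8SGL-F board (~$250).
c32_total = 2 * 100 + 290
g34_total = 270 + 250
print(c32_total, g34_total, g34_total - c32_total)  # 490 520 30
```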


    Right, but that's not logical, because the 12-core has the same cores in it...

    Maybe those 6-cores are just better-selected dies.
    The EE parts do use the "cream of the crop" of the dies, according to an AMD rep that frequents a lot of forums.

    I have run Linux for over 6 years now: 3 years with NVIDIA and 3 years with ATI cards.

    But yes, my memory cannot recall all my crashes in detail...
    I have a similar history and games are the buggiest programs with the highest propensity to lock up Linux systems in my opinion. If they're Windows games being run with WINE, it's even worse. Fortunately most locked-up games or X sessions can be killed with the magic SysRq keys, which dumps you into a text terminal to restart X without rebooting. But they're still pretty awful and apparently you play a lot of Windows games, so I imagine you see pretty frequent glitches and bugs.

    Leave a comment:


  • devius
    replied
    Originally posted by Qaridarium View Post
    Maybe you compared the best HDD against the cheapest/worst SSD?
    Where did you get those 20W figures from? No 7200rpm desktop hard drive from the last 5 years uses that much power. Typical figures are in the max 8W-10W for 3,5" 7200rpm HDDs. In the world of 2,5" laptop drives the power consumption is already very very close to SSDs. Take a look at the seagate momentus 5400.6 drives: 0,8W idle and 2,85W write power. Something like a Corsair Force SSD has 0,5W idle and 2W operating power.
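    Putting those drive figures into yearly energy terms (always-on, idle-dominated use; the 24/7 duty cycle is an assumption for the sketch):

```python
# kWh per year at near-idle draw, 24/7, using the wattages cited above.
drives = {
    '3.5in 7200 rpm HDD (idle)': 8.0,
    'Momentus 5400.6 (idle)': 0.8,
    'Corsair Force SSD (idle)': 0.5,
}
hours_per_year = 24 * 365
for name, watts in drives.items():
    kwh = watts * hours_per_year / 1000
    print(f"{name}: {kwh:.2f} kWh/year")
```

    Even the worst case is tens of kWh per year, which puts the drive's share of a desktop power bill in perspective.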

    Leave a comment:


  • Qaridarium
    replied
    Originally posted by MU_Engineer View Post
    And I said that things like 3D rendering (which would include Blender) are more workstation applications than they are desktop applications.
    Really? So I just care about the wrong stuff?

    Originally posted by MU_Engineer View Post
    The games I do rarely play I play in single player vs. the computer mode, which don't require all that much from a system if you aren't running the absolute newest games or demand to run everything on super-high-ultimate settings. The last online multiplayer game I played was the original Counter-Strike, which ran fine on 1 GHz PIIIs.
    CTI/Warfare was originally a multiplayer map, but over time the AI has become good.
    That means you can play CTI in OFP, and Warfare in ArmA 2, single player vs. the AI or co-op multiplayer vs. the AI.
    You don't get the point, because ArmA 2 uses a database-backed self-learning AI for the single player,
    which means you can play a single-player game with over 1,500 units.

    But for ArmA 2 you need a strong CPU in CTI/Warfare mode even on the lowest settings, because the settings only scale the graphics down, not the AI.


    Originally posted by MU_Engineer View Post
    And what are the weaknesses other than the old NVIDIA chipset?
    PCIe 1.0, useless SLI, BIOS bugs, a chipset that runs too hot, and the multi-socket incompatibility with CPU coolers, because the first CPU blocks the second one's closing mechanism.

    Which means next time I'll buy a single socket, and nothing will block my super big fat 1 kg Mugen cooler.


    Originally posted by MU_Engineer View Post
    No need to shell out a bunch of money for C32 heatsinks. You can most likely reuse your Socket F heatsinks on a C32 board, as long as the heatsinks are 3.5" pitch. If they are 4.1" pitch, you can use them on a Socket G34 board. You can also use regular AM2/AM3 desktop heatsinks with C32 systems. Many C32 motherboards include the appropriate mounting brackets, else find a Socket 754 or 939 mounting bracket on eBay or from a dead board somewhere. (AM2 or AM3 won't work as they have four bolt holes, 754/939 and Socket F/C32 have two bolt holes.
    It's funny, because one of my Opteron boards in the past did not come with the "appropriate mounting brackets", and that is not funny if you want a big-block silent cooler in your system; all server coolers are very loud 5000 RPM coolers... and no one in Germany sells "appropriate mounting brackets".

    One of the mainboards, a Tyan, died, and I sold the other one (an ASUS) later on.

    And hey, all server heatsinks are just BAD BAD BAD BAD! And loud!

    Bad and loud just because they save space, and in my case I have space to spare...

    My Mugen cooler is 160 mm tall, and heatsinks of that class can cool an Opteron passively.

    So really, the server heatsinks are so messed up.


    Originally posted by MU_Engineer View Post
    ASUS's KCMA-D8 dual C32 board is also standard ATX.
    Wrong??? The German shop sites say E-ATX and not ATX. And hey, just calculate an example:

    In Germany the cheapest price for that board is 270 €.

    The cheapest Supermicro H8SGL-F is 227 €.

    That means you save 43 € if you don't buy a C32 dual-socket board!

    An AMD Opteron 4170 at 2.1 GHz costs 170 €, and two of them 340 €.

    Yes, that's 12 cores; scaled down to 8 cores that would be about 227 €.

    An AMD Opteron 6128 costs 260 €, which means you save 33 €.

    And you save one CPU cooler; a good one costs 50 €.

    So in reality you save 140 € if you don't buy a C32 system.


    Originally posted by MU_Engineer View Post
    No, the Opteron 4164 EE should have the most multithreaded performance per watt of the Opteron lineup. It's a 6-core unit at 1.80 GHz with a 35-watt TDP, while the 6164 HE is a 1.70 GHz 12-core with an 85-watt TDP. Two 4164 EEs would have a combined TDP of 70 watts and run 12 cores at 1.80 GHz.
    Right, but that's not logical, because the 12-core has the same cores in it...

    Maybe those 6-cores are just better-selected dies.


    Originally posted by MU_Engineer View Post
    What are you running for an operating system and what kinds of crashes are you talking about? If you're running Windows, that's probably why you are getting crashes and needing to restart all of the time.
    I have run Linux for over 6 years now: 3 years with NVIDIA and 3 years with ATI cards.

    But yes, my memory cannot recall all my crashes in detail...

    Originally posted by MU_Engineer View Post
    What do you mean by "check the RAM," run Memtest86+ after a reboot? ECC memory is used mostly to detect and correct soft errors that result from bit flipping during RAM operation due to background radiation and such. Cutting the power to the memory during a hard reboot would "fix" the flipped bit and you will see nothing in Memtest86+. The only thing you'll see in Memtest86+ are generally hard errors due to flaky/failing RAM or motherboard. ECC will certainly pick that up too, but you're really looking at two different things there.
    Right... but it's my personal feeling that some desktops with non-ECC RAM are just more stable than my ECC PC.


    Originally posted by MU_Engineer View Post
    The system stability depends on a lot of things besides RAM. Software and drivers are an obvious culprit, as is the power supply and the noisiness of the power coming from the outlet. You could be running your ECC RAM in Chipkill mode with an 8-hour DRAM scrub, but if you're running Windows Me and powering that system from a $20 cheap Chinese PSU optimistically rated at 300 watts, you're going to be horribly unstable. That's obviously an exaggeration, but you get my point.
    I got your point, but you didn't get mine. My point is that the OS/drivers and the PSU are more important than the RAM.

    "More important" means I don't have money to waste on the less important stuff.

    Leave a comment:


  • Qaridarium
    replied
    Originally posted by deanjo View Post
    Most of the time I read German news, so yes, the German news about that is a little slower: http://www.computerbase.de/news/hard...station-markt/

    Workstation means OpenGL, and OpenGL mostly cannot use more than one thread for feeding graphics commands to the GPU.
    That means a faster single-threaded CPU wins, which means Intel wins.
    DX11, for example, fixes that: on DX11 you can use more than one thread for submitting graphics to the GPU...

    I don't know the status of OpenGL 4 and multithreaded graphics submission...

    AMD just loses on bad/old software.

    It's just not the time yet for 24/32 cores in a dual-socket workstation system...

    Leave a comment:


  • MU_Engineer
    replied
    Originally posted by Qaridarium View Post
    I don't care about bad software,

    but I do care about good software, and last time I checked CPU core usage in a raytracing engine, the count went up to 64 threads... in Blender.
    And I said that things like 3D rendering (which would include Blender) are more workstation applications than they are desktop applications.


    I think the real clue is "I don't have the game".
    You were never a fan of OFP-CTI or ArmA 2 Warfare.
    You have never touched a war game with over 1,600 AIs, 128 human players, and 10,000 m view distance on a 225 km² map with the most highly skilled AI in the world.
    "but I am pretty sure it does not need 12 cores or 64 GB of RAM to run."
    Need? Well, it does not need them, but if you want to do what I want to do in the game, you really want that hardware, because you don't want to die in the game.

    In the end, ArmA 2 supports 12 cores and 64 GB of RAM. I don't care about the minimum or optimum hardware spec for the single-player missions.

    I only care about the maximum on the CTI/Warfare multiplayer map with 128 players and 1,408 AIs with all settings on max.
    The games I do rarely play I play in single player vs. the computer mode, which don't require all that much from a system if you aren't running the absolute newest games or demand to run everything on super-high-ultimate settings. The last online multiplayer game I played was the original Counter-Strike, which ran fine on 1 GHz PIIIs.

    C32 is the next step after Socket F (1207), my last Opteron system,

    so I really know its weaknesses, and I don't want to have that again.
    And what are the weaknesses other than the old NVIDIA chipset?

    G34 is much better because you save money on the cooling solution, 50 € per socket, compared with a C32 system.
    No need to shell out a bunch of money for C32 heatsinks. You can most likely reuse your Socket F heatsinks on a C32 board, as long as the heatsinks are 3.5" pitch. If they are 4.1" pitch, you can use them on a Socket G34 board. You can also use regular AM2/AM3 desktop heatsinks with C32 systems. Many C32 motherboards include the appropriate mounting brackets, else find a Socket 754 or 939 mounting bracket on eBay or from a dead board somewhere. (AM2 or AM3 won't work as they have four bolt holes, 754/939 and Socket F/C32 have two bolt holes.

    And a single-socket G34 board is ATX, not E-ATX, so you can build smaller systems.
    ASUS's KCMA-D8 dual C32 board is also standard ATX.

    And the G34 Opterons have more speed per watt.
    No, the Opteron 4164 EE should have the most multithreaded performance per watt of the Opteron lineup. It's a 6-core unit at 1.80 GHz with a 35-watt TDP, while the 6164 HE is a 1.70 GHz 12-core with an 85-watt TDP. Two 4164 EEs would have a combined TDP of 70 watts and run 12 cores at 1.80 GHz.
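    The perf-per-watt claim can be sanity-checked from the numbers in this post. GHz·cores per TDP watt is a crude proxy, not a benchmark, but it shows the direction:

```python
# Crude throughput-per-watt proxy: cores * clock / TDP, from the post's figures.
two_4164ee = {"cores": 12, "ghz": 1.8, "tdp_w": 70}   # two 6-core 35 W parts
one_6164he = {"cores": 12, "ghz": 1.7, "tdp_w": 85}   # one 12-core 85 W part
for name, c in (("2x Opteron 4164 EE", two_4164ee), ("Opteron 6164 HE", one_6164he)):
    proxy = c["cores"] * c["ghz"] / c["tdp_w"]
    print(f"{name}: {proxy:.3f} GHz-cores/W")
```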

    I got zero benefit out of my last ECC system;

    the system still crashed and needed restarts.
    What are you running for an operating system and what kinds of crashes are you talking about? If you're running Windows, that's probably why you are getting crashes and needing to restart all of the time.

    And a non-ECC system works well if you check the RAM from time to time.
    What do you mean by "check the RAM," run Memtest86+ after a reboot? ECC memory is used mostly to detect and correct soft errors that result from bit flipping during RAM operation due to background radiation and such. Cutting the power to the memory during a hard reboot would "fix" the flipped bit and you will see nothing in Memtest86+. The only thing you'll see in Memtest86+ are generally hard errors due to flaky/failing RAM or motherboard. ECC will certainly pick that up too, but you're really looking at two different things there.
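    For what it's worth, on Linux you don't have to reboot into Memtest86+ to see ECC activity: the EDAC subsystem exposes corrected/uncorrected error counters in sysfs. A sketch; it needs an EDAC driver loaded for your memory controller and returns an empty dict otherwise:

```python
# Read ECC error counters from the Linux EDAC sysfs interface.
from pathlib import Path

def edac_error_counts():
    counts = {}
    base = Path("/sys/devices/system/edac/mc")
    if base.is_dir():
        for mc in sorted(base.glob("mc*")):
            for kind in ("ce_count", "ue_count"):  # correctable / uncorrectable
                f = mc / kind
                if f.exists():
                    counts[f"{mc.name}/{kind}"] = int(f.read_text())
    return counts

print(edac_error_counts())  # {} when no EDAC driver is loaded
```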

    I call you a liar on that point, because my last Opteron crashed even with ECC RAM.
    The system stability depends on a lot of things besides RAM. Software and drivers are an obvious culprit, as is the power supply and the noisiness of the power coming from the outlet. You could be running your ECC RAM in Chipkill mode with an 8-hour DRAM scrub, but if you're running Windows Me and powering that system from a $20 cheap Chinese PSU optimistically rated at 300 watts, you're going to be horribly unstable. That's obviously an exaggeration, but you get my point.

    Leave a comment:


  • Qaridarium
    replied
    Originally posted by deanjo View Post
    Thanks for the link, I really didn't know about that.

    Leave a comment:
