AMD processors / AMD-based systems: power hunger


  • #16
    Originally posted by crazycheese View Post
    aridarium, you are living in Germany, no? What is your Stromanbieter? It is normal that extra 60W used 12 hour per day, done for the year, will make €53 extra cost!!
    Yes, in Germany electricity is expensive.

    But your idea that AMD is burning my money is wrong. Right now I have an overclocked E6600 @ 3.6 GHz that draws 160 W at idle...

    My last Opteron system, with a quad-core, only drew 140 W at idle.

    Last week we changed our electricity provider.

    But we also have a 10 kW solar installation and a 5 kW gas-fired combined heat and power plant at home,

    so I don't pay for electricity at all.
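The €53 figure from the quoted post can be sanity-checked with a quick calculation. The electricity price used below (about €0.20/kWh) is an assumption for illustration, not a figure from the thread:

```python
# Annual cost of an extra 60 W drawn 12 hours per day.
# The 0.20 EUR/kWh rate is an assumed German price, not from the thread.
extra_watts = 60
hours_per_day = 12
price_per_kwh = 0.20  # EUR, assumed

kwh_per_year = extra_watts * hours_per_day * 365 / 1000  # 262.8 kWh
cost_per_year = kwh_per_year * price_per_kwh             # ~52.56 EUR

print(f"{kwh_per_year:.1f} kWh/year -> {cost_per_year:.2f} EUR/year")
```

At that assumed rate the numbers land almost exactly on the quoted €53.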



    • #17
      Originally posted by Qaridarium View Post
      Well, yes... but my point stands: any Opteron beats a desktop dual-core CPU in performance per watt.
      It all depends on what exact chips you're comparing and what you are testing for a program.

      You can turn off HT Assist in the BIOS.

      Not all apps run faster with HT Assist; in my view, HT Assist is a feature for synthetic benchmarks.
      You can't turn off HT Assist on all platforms, nor would you want to. My dual G34 board doesn't have the option to turn off HT Assist, and from what I've read, you would want to leave HT Assist turned on for all platforms with more than two dies. A dual G34 has four, which is probably why the board has it on all of the time.

      Not everything is about raw speed; the Opteron is about latency. In a game like ArmA 2, an Opteron system has much better latency.

      Meaning: even if the system runs slower because the first thread is held back by the 2 GHz clock, overall the system reacts faster than a Phenom II system with more FPS...
      Not when you have a Phenom II X4 or X6 on a game with only two or three heavy threads like most common games have today. You have unused cores in all cases, except the Phenom II ones are clocked considerably higher and thus give better performance.

      Between more FPS and faster reaction, I prefer the faster reaction.

      That's because of parallel RAM latency: four channels have less latency under parallel load than two channels.
      No. G34 Opterons are MCM units with two dies. Each die has only two RAM channels, so the "four-channel" interface is really two channels + two channels accessed off-die over NUMA. Going through the HT link to the other die, retrieving the data from RAM, and then coming back to the original die takes longer than a local RAM access. The advantage to the extra memory channels would be if you simply need a lot of bandwidth, but most tests for desktop apps I've seen show little improvement in performance with higher RAM speed once you're using DDR3-1066. Most Phenom II systems have at least DDR3-1333, so you're not hurting for RAM bandwidth.

      Just wait six months or so, and desktop boards will have better chipsets again...
      You mean the 900-series chipsets that are supposed to have nearly the exact same specs as the existing 800-series ones? They don't have USB 3.0 onboard, nor PCI Express 3.0, nor do they add any more PCIe lanes or anything like that.

      In plain words: right now the desktop is better,

      maybe because of SATA 3.
      Says somebody who's never used much for a disk subsystem on a southbridge. Southbridge-based SATA controllers are not all that great at handling a lot of disk I/O compared to discrete PCIe disk controllers. This is especially true on NVIDIA's SATA controllers such as the one in your NF3600 unit. Besides, server boards with onboard SAS/SATA 6 Gbps controllers are widespread and have good PCIe-linked controller ICs that greatly outperform AMD's SB800 SATA controller.

      Yes, 2012... meaning in 2011 the desktop chipsets beat the server chipsets again...
      They only "beat" them in largely meaningless ways.

      "Yet" is a very dangerous word in the computer world; the "yet" can be over any minute.
      No. There is a pretty well-known period of time required for product development. If somebody started on a PCIe 3.0 chipset today, industry people could pretty well predict when it would get to market, and the rumor mill will pick it up well before we see shipping parts. So it's not "can be over any minute," it's "will be over sometime at least several months from now."

      Opterons are not only used in servers; think about workstations.
      AMD has publicly said it will not concentrate on the workstation market any more; workstation users can use server parts. Quite a few server parts generally don't require too much tweaking to become good workstations. They usually come with a lot of PCIe I/O and slots, so you can very easily add in a $30 USB 3.0 card if it is THAT important to you. Ditto with a 6 Gbps SATA controller. Using desktop parts as a workstation is much harder as there are very few desktop boards that support something as simple as ECC RAM, and every last new one of those is an AMD board and only one vendor sells them (ASUS.) Desktop boards either tend to be cheap or loaded up with lamer-gamer gimmicks like a remote control to adjust your overclock rather than having reliability features needed in a workstation.

      It does matter; I know of tests showing that latency through the southbridge is better than through a PCIe card.
      Are you certain about that? AMD's southbridges connect to the northbridge over a PCI Express x4 link. That's all the A-Link Express is, a PCI Express link. I highly doubt that the latency to the southbridge over those PCIe lanes is much if any different than the latency to an expansion card over other PCIe lanes.

      So you buy an SSD for lower latency, and then you throw that latency away on the PCIe bus. LOL... fail.
      So oh my god, you may have to wait a few nanoseconds longer for data! I sure hope you don't have any magnetic disks in your system, else you're waiting for millions of painful nanoseconds to get data off of those! Ditto with getting anything off the Internet. You have to wait tens to hundreds of thousands of nanoseconds just to get a reply from a Web server. Maybe you should get a Killer NIC to help you out with that, huh?
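The reply above is about orders of magnitude, which a rough side-by-side makes concrete. All figures below are ballpark assumptions for illustration, not measurements from the thread:

```python
# Rough orders of magnitude for the latencies discussed above.
# Every figure here is an illustrative assumption, not a measurement.
latencies_ns = {
    "extra PCIe hop":      500,          # a few hundred nanoseconds
    "SSD random read":     100_000,      # ~0.1 ms
    "HDD seek + rotation": 10_000_000,   # ~10 ms
    "web server RTT":      50_000_000,   # ~50 ms
}

base = latencies_ns["extra PCIe hop"]
for name, ns in latencies_ns.items():
    print(f"{name:20s} {ns:>12,} ns  ({ns / base:,.0f}x the PCIe hop)")
```

With these assumed numbers, a mechanical seek costs tens of thousands of times more than the extra bus hop being complained about.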



      • #18
        Originally posted by MU_Engineer View Post
        It all depends on what exact chips you're comparing and what you are testing for a program.
        7-Zip, x264 encoding, ArmA 2, Supreme Commander, for example.

        ArmA 2 needs 12 cores and 64 GB of RAM, and you cannot put 64 GB of RAM into a Phenom II system.





        Originally posted by MU_Engineer View Post
        You can't turn off HT Assist on all platforms, nor would you want to. My dual G34 board doesn't have the option to turn off HT Assist, and from what I've read, you would want to leave HT Assist turned on for all platforms with more than two dies. A dual G34 has four, which is probably why the board has it on all of the time.
        Well, if I buy a Socket G34 board, it will be a Supermicro single-socket board,

        and then I can turn off that feature without any harm.




        Originally posted by MU_Engineer View Post
        Not when you have a Phenom II X4 or X6 on a game with only two or three heavy threads like most common games have today. You have unused cores in all cases, except the Phenom II ones are clocked considerably higher and thus give better performance.
        The Opteron can win if an app uses at least four cores, because of the higher RAM bandwidth across four threads.






        Originally posted by MU_Engineer View Post
        No. G34 Opterons are MCM units with two dies. Each die has only two RAM channels, so the "four-channel" interface is really two channels + two channels accessed off-die over NUMA. Going through the HT link to the other die, retrieving the data from RAM, and then coming back to the original die takes longer than a local RAM access. The advantage to the extra memory channels would be if you simply need a lot of bandwidth, but most tests for desktop apps I've seen show little improvement in performance with higher RAM speed once you're using DDR3-1066. Most Phenom II systems have at least DDR3-1333, so you're not hurting for RAM bandwidth.
        With a Phenom II you can use at most DDR3-1600 DIMMs.

        1600 × 2 vs. 1333 × 4 means the Opteron is faster.

        And RAM latency goes down with parallel tasks: if four tasks issue RAM requests at once, the Phenom II handles them serially, while the Opteron can handle four RAM requests at the same time on different tasks...





        Originally posted by MU_Engineer View Post
        You mean the 900-series chipsets that are supposed to have nearly the exact same specs as the existing 800-series ones? They don't have USB 3.0 onboard, nor PCI Express 3.0, nor do they add any more PCIe lanes or anything like that.
        I think the 900 series consumes less power,

        meaning the desktop chipset wins again...






        Originally posted by MU_Engineer View Post
        Says somebody who's never used much for a disk subsystem on a southbridge. Southbridge-based SATA controllers are not all that great at handling a lot of disk I/O compared to discrete PCIe disk controllers. This is especially true on NVIDIA's SATA controllers such as the one in your NF3600 unit. Besides, server boards with onboard SAS/SATA 6 Gbps controllers are widespread and have good PCIe-linked controller ICs that greatly outperform AMD's SB800 SATA controller.
        They only "beat" them in largely meaningless ways.
        No. There is a pretty well-known period of time required for product development. If somebody started on a PCIe 3.0 chipset today, industry people could pretty well predict when it would get to market, and the rumor mill will pick it up well before we see shipping parts. So it's not "can be over any minute," it's "will be over sometime at least several months from now."
        AMD has publicly said it will not concentrate on the workstation market any more; workstation users can use server parts. Quite a few server parts generally don't require too much tweaking to become good workstations. They usually come with a lot of PCIe I/O and slots, so you can very easily add in a $30 USB 3.0 card if it is THAT important to you. Ditto with a 6 Gbps SATA controller. Using desktop parts as a workstation is much harder as there are very few desktop boards that support something as simple as ECC RAM, and every last new one of those is an AMD board and only one vendor sells them (ASUS.) Desktop boards either tend to be cheap or loaded up with lamer-gamer gimmicks like a remote control to adjust your overclock rather than having reliability features needed in a workstation.
        Are you certain about that? AMD's southbridges connect to the northbridge over a PCI Express x4 link. That's all the A-Link Express is, a PCI Express link. I highly doubt that the latency to the southbridge over those PCIe lanes is much if any different than the latency to an expansion card over other PCIe lanes.
        So oh my god, you may have to wait a few nanoseconds longer for data! I sure hope you don't have any magnetic disks in your system, else you're waiting for millions of painful nanoseconds to get data off of those! Ditto with getting anything off the Internet. You have to wait tens to hundreds of thousands of nanoseconds just to get a reply from a Web server. Maybe you should get a Killer NIC to help you out with that, huh?
        I never run any servers; I'm only interested in Opterons for their great workstation and gaming uses.

        Meaning I don't care about your SAS HDD stuff over a PCIe lane.

        In my view, the Opteron 6000 series beats the desktop one on the RAM side: you can have 32/64 GB of RAM with normal desktop DDR3 DIMMs.

        Meaning you can play a streaming game like ArmA 2 from a RAM drive without reload stuttering.



        • #19
          Originally posted by Qaridarium View Post
          A 3.5" HDD consumes 15-20 watts, an SSD only 0.5 watts.
          No it doesn't, with the possible exception of initial spin-up when booting. My 40GB SSD is actually rated as higher power consumption when writing than my 2TB HDD... of course because it doesn't rotate and have long seek times it spends very little time writing and most of the time idle.



          • #20
            Originally posted by Qaridarium View Post
            7-Zip, x264 encoding, ArmA 2, Supreme Commander, for example.
            x264 encoding certainly does use a lot of cores, and so can some file compression programs. Other than those, there aren't too many that you'd run on a desktop that are highly multithreaded. There are a ton of workstation applications like CFD, molecular modeling, 3D rendering, and code compilation that are thread-heavy, but they're not desktop applications. The only game I know of that uses a boatload of CPU cores is Microsoft Flight Simulator. There may be more, but most use one to three "heavy" threads and that's about it.

            ArmA 2 needs 12 cores and 64 GB of RAM, and you cannot put 64 GB of RAM into a Phenom II system.
            No. I don't have the game, but I am pretty sure it does not need 12 cores or 64 GB of RAM to run. First of all, the only people that could even run it would be running at a bare minimum an Opteron 6168 with all eight RAM slots filled. That's a $750 chip and 8 GB DIMMs cost $220 a pop. Requiring the user to spend over $3000 on hardware just to play the game is a recipe for nobody buying the game. Secondly, the recommended hardware from the publisher says an A64 4400+ or faster with 2 GB of RAM. That's a far cry from 12 cores and 64 GB.

            Well, if I buy a Socket G34 board, it will be a Supermicro single-socket board,

            and then I can turn off that feature without any harm.
            Why would you do that? The big advantage of Socket G34 systems are their ability to be run in multiprocessor systems and secondly to provide four dies' worth of cores on an EATX/SSI EEB -sized system. A single G34 will be slower than an equivalently-priced dual C32 setup and have no more RAM bandwidth or memory capacity. You can even get dual C32 systems in a standard ATX format (ASUS KCMA-D8), so there's really no reason to go single G34 over dual C32s.

            The Opteron can win if an app uses at least four cores, because of the higher RAM bandwidth across four threads.
            Like I said above, very few if any desktop applications besides RAM benchmarks are bottlenecked by two channels of DDR3-1333 on a Phenom II X4. A Phenom II X4 is going to be quite a bit faster running four threads than a G34 Opteron running at 50-80% the clockspeed of the Phenom II. Anyway, your precious latency is much higher in going off-die to a remote NUMA node for memory access than in having local memory access. Thus the scheduler will keep all of the threads on one of the G34's dies until it has more than 4 or 6 threads, and then it will start scheduling some on the other die.

            With a Phenom II you can use at most DDR3-1600 DIMMs.

            1600 × 2 vs. 1333 × 4 means the Opteron is faster.
            Yes, Opterons have higher platform bandwidth. But as I keep saying, DESKTOP APPLICATIONS ARE GENERALLY NOT RAM-BANDWIDTH-LIMITED.
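The raw arithmetic behind "1600 × 2 vs. 1333 × 4" works out as follows. DDR3 moves 8 bytes per channel per transfer; the figures below are theoretical peaks, not measured throughput:

```python
# Peak theoretical bandwidth: DDR3 moves 8 bytes per channel per transfer.
def peak_gbs(mt_per_s, channels):
    """Peak bandwidth in GB/s for a given transfer rate and channel count."""
    return mt_per_s * 8 * channels / 1000

phenom  = peak_gbs(1600, 2)  # dual-channel DDR3-1600 on a Phenom II
opteron = peak_gbs(1333, 4)  # "quad-channel" G34: 2 channels per die x 2 dies

print(f"Phenom II: {phenom:.1f} GB/s, G34 Opteron: {opteron:.1f} GB/s")
# Note: the Opteron figure is an aggregate across two NUMA nodes,
# not a single flat pool -- which is the point made in the reply above.
```

So the Opteron platform does win on aggregate peak bandwidth, while the reply's argument is that desktop applications rarely use even the Phenom II's share.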

            And RAM latency goes down with parallel tasks: if four tasks issue RAM requests at once, the Phenom II handles them serially, while the Opteron can handle four RAM requests at the same time on different tasks...
            No, RAM latency would probably go up in that case. In a Phenom II system, the processes simply look in the caches to see if data is there and then get queued up to retrieve data from RAM if the data is not in cache. A multiprocessor NUMA system involves snooping of both local and remote caches (although HT Assist helps with this) as well as potentially having to retrieve data from remote dies over HT, all of which adds latency. Thus it is no surprise that any NUMA scheduler worth a crap is trying hard to keep data in RAM local to the die the thread is running on to minimize latency, which means you're only getting that die's dual-channel IMC's bandwidth for the most part.
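The local-vs-remote point in the reply above can be sketched with a toy averaging model. The nanosecond figures are assumptions chosen only to illustrate the shape of the effect, not real measurements of any Opteron:

```python
# Toy model of average memory latency on a two-die (NUMA) package.
# Both nanosecond figures are illustrative assumptions.
LOCAL_NS  = 60   # access to the die's own RAM channels
REMOTE_NS = 110  # access via the HT link to the other die's RAM

def avg_latency(remote_fraction):
    """Average latency for a given share of remote (off-die) accesses."""
    return (1 - remote_fraction) * LOCAL_NS + remote_fraction * REMOTE_NS

for frac in (0.0, 0.25, 0.5):
    print(f"{frac:.0%} remote accesses -> {avg_latency(frac):.1f} ns average")
```

Any nonzero share of off-die accesses raises the average above the purely local case, which is why a NUMA-aware scheduler tries to keep threads next to their data.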


            I think the 900 series consumes less power,

            meaning the desktop chipset wins again...
            I heard it's similar if not identical, but I can't find any authoritative source that says one way or another.

            I never run any servers; I'm only interested in Opterons for their great workstation and gaming uses.

            Meaning I don't care about your SAS HDD stuff over a PCIe lane.
            Apparently you do, since you were whining about there being no 6 Gbps SATA support in the SP5100 server southbridge. That 6 Gbps SAS controller also does 6 Gbps SATA as well (SATA 3.0), and will handle more HDD aggregate bandwidth than any southbridge controller.

            In my view, the Opteron 6000 series beats the desktop one on the RAM side: you can have 32/64 GB of RAM with normal desktop DDR3 DIMMs.
            Yes, but only an idiot would run that much non-ECC RAM in a system that supports ECC. I guess not having ECC in the RAM would make a desktop user feel right at home, since you can't overclock current Opteron gear and the boards are made to be more reliable than standard desktop gear. Something has to take the place of flaky overclocked CPUs and cheap components causing errors to require frequent reboots, so I guess RAM errors are as good of a reason as any.

            Meaning you can play a streaming game like ArmA 2 from a RAM drive without reload stuttering.
            Yes, until the game crashes on you because you have a ton of non-ECC RAM in the system and a bit got flipped somewhere, corrupting the game data in that RAM.

            Also, game level load times are usually only a handful of seconds. Are you as impatient as this kid when it comes to load times?



            • #21
              I also noticed that you are using DDR3-1600 in the Athlon II system, which should add a few more watts compared to the DDR2 in the Pentium. Other than that, I think you already have all the reasons why the Athlon consumes more power.
              BTW, I got my X4 630 stable at 1.23 V. It was stable at 1.22 V, but a few weeks ago I started getting sporadic error messages during boot saying that the "overvoltage" had failed, but that's probably due to my crappy PSU.



              • #22
                Originally posted by movieman View Post
                No it doesn't, with the possible exception of initial spin-up when booting. My 40GB SSD is actually rated as higher power consumption when writing than my 2TB HDD... of course because it doesn't rotate and have long seek times it spends very little time writing and most of the time idle.
                Maybe you're comparing the best HDD against the cheapest/worst SSD?

                If speed counts, you need a 7200 or 10,000 rpm hard drive to compare within the same speed class.

                Then the SSD beats the HDD in power consumption, be sure.
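The underlying point in this exchange is that energy per job depends on active time as much as active power. A quick sketch, with all watt and time figures being illustrative assumptions rather than specs of any real drive:

```python
# Energy per job = active power x active time + idle power x idle time.
# All watt and time figures below are illustrative assumptions.
def job_energy_j(active_w, active_s, idle_w, idle_s):
    """Total energy in joules over one active-plus-idle window."""
    return active_w * active_s + idle_w * idle_s

# Same 10-minute window: the SSD finishes a read-heavy job much sooner
# and then idles at very low power; the HDD works longer and idles higher.
ssd = job_energy_j(active_w=3.0, active_s=30,  idle_w=0.5, idle_s=570)
hdd = job_energy_j(active_w=8.0, active_s=120, idle_w=5.0, idle_s=480)

print(f"SSD: {ssd:.0f} J, HDD: {hdd:.0f} J over the same 10 minutes")
```

So an SSD whose rated write power exceeds an HDD's can still use far less energy per task, which is consistent with both sides of the argument above.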



                • #23
                  Originally posted by MU_Engineer View Post
                  x264 encoding certainly does use a lot of cores, and so can some file compression programs. Other than those, there aren't too many that you'd run on a desktop that are highly multithreaded. There are a ton of workstation applications like CFD, molecular modeling, 3D rendering, and code compilation that are thread-heavy, but they're not desktop applications. The only game I know of that uses a boatload of CPU cores is Microsoft Flight Simulator. There may be more, but most use one to three "heavy" threads and that's about it.
                  I don't care about bad software,

                  but I do care about good software, and the last time I checked the CPU core count in use by a raytracing engine, it went up to 64 threads... in Blender.




                  Originally posted by MU_Engineer View Post
                  No. I don't have the game, but I am pretty sure it does not need 12 cores or 64 GB of RAM to run. First of all, the only people that could even run it would be running at a bare minimum an Opteron 6168 with all eight RAM slots filled. That's a $750 chip and 8 GB DIMMs cost $220 a pop. Requiring the user to spend over $3000 on hardware just to play the game is a recipe for nobody buying the game. Secondly, the recommended hardware from the publisher says an A64 4400+ or faster with 2 GB of RAM. That's a far cry from 12 cores and 64 GB.
                  I think the real clue is "I don't have the game."
                  And you have never been a fan of OFP CTI or ArmA 2 Warfare.
                  You have never touched a war game with over 1600 AI units, 128 human players, and 10,000 m view distance on a 225 km² map with the most highly skilled AI in the world.
                  "but I am pretty sure it does not need 12 cores or 64 GB of RAM to run."
                  Need? Well, it doesn't need them, but if you want to do what I want to do in the game, you really want that hardware, because you don't want to die in the game.

                  In the end, ArmA 2 supports 12 cores and 64 GB of RAM; I don't care about the minimum hardware requirements, or the optimal hardware for the single-player missions.

                  I only care about the maximum on the CTI/Warfare multiplayer map with 128 players and 1408 AI units with all settings on max.






                  Originally posted by MU_Engineer View Post
                  Why would you do that? The big advantage of Socket G34 systems are their ability to be run in multiprocessor systems and secondly to provide four dies' worth of cores on an EATX/SSI EEB -sized system. A single G34 will be slower than an equivalently-priced dual C32 setup and have no more RAM bandwidth or memory capacity. You can even get dual C32 systems in a standard ATX format (ASUS KCMA-D8), so there's really no reason to go single G34 over dual C32s.
                  C32 is the next step from Socket F (1207), my last Opteron system,

                  so I really know its weaknesses, and I don't want them again.

                  The G34 is much better because you save money on the cooling solution, about €50 per socket compared to a C32 system.

                  And a single-socket G34 board is ATX, not EATX, so you can build smaller systems.

                  And the G34 Opterons have more speed per watt.



                  Originally posted by MU_Engineer View Post
                  Yes, but only an idiot would run that much non-ECC RAM in a system that supports ECC. I guess not having ECC in the RAM would make a desktop user feel right at home, since you can't overclock current Opteron gear and the boards are made to be more reliable than standard desktop gear. Something has to take the place of flaky overclocked CPUs and cheap components causing errors to require frequent reboots, so I guess RAM errors are as good of a reason as any.
                  I got zero benefit out of my last ECC system;

                  that system also crashed and needed restarts.

                  And a non-ECC system works well if you check the RAM from time to time.

                  My next system will not have ECC again, be sure.



                  Originally posted by MU_Engineer View Post
                  Yes, until the game crashes on you because you have a ton of non-ECC RAM in the system and a bit got flipped somewhere, corrupting the game data in that RAM.
                  I call you a liar on that point, because my last Opteron crashed even with ECC RAM.



                  • #24
                    Originally posted by Qaridarium View Post
                    opterons are not only used by server think about workstations
                    *cough* http://jonpeddie.com/blogs/comments/...m-workstation/ *cough*



                    • #25
                      Originally posted by deanjo View Post
                      Thanks for the link; I really didn't know that.



                      • #26
                        Originally posted by Qaridarium View Post
                        I don't care about bad software,

                        but I do care about good software, and the last time I checked the CPU core count in use by a raytracing engine, it went up to 64 threads... in Blender.
                        And I said that things like 3D rendering (which would include Blender) are more workstation applications than they are desktop applications.


                        I think the real clue is "I don't have the game."
                        And you have never been a fan of OFP CTI or ArmA 2 Warfare.
                        You have never touched a war game with over 1600 AI units, 128 human players, and 10,000 m view distance on a 225 km² map with the most highly skilled AI in the world.
                        "but I am pretty sure it does not need 12 cores or 64 GB of RAM to run."
                        Need? Well, it doesn't need them, but if you want to do what I want to do in the game, you really want that hardware, because you don't want to die in the game.

                        In the end, ArmA 2 supports 12 cores and 64 GB of RAM; I don't care about the minimum hardware requirements, or the optimal hardware for the single-player missions.

                        I only care about the maximum on the CTI/Warfare multiplayer map with 128 players and 1408 AI units with all settings on max.
                        The games I do rarely play I play in single player vs. the computer mode, which don't require all that much from a system if you aren't running the absolute newest games or demand to run everything on super-high-ultimate settings. The last online multiplayer game I played was the original Counter-Strike, which ran fine on 1 GHz PIIIs.

                        C32 is the next step from Socket F (1207), my last Opteron system,

                        so I really know its weaknesses, and I don't want them again.
                        And what are the weaknesses other than the old NVIDIA chipset?

                        The G34 is much better because you save money on the cooling solution, about €50 per socket compared to a C32 system.
                        No need to shell out a bunch of money for C32 heatsinks. You can most likely reuse your Socket F heatsinks on a C32 board, as long as the heatsinks are 3.5" pitch. If they are 4.1" pitch, you can use them on a Socket G34 board. You can also use regular AM2/AM3 desktop heatsinks with C32 systems. Many C32 motherboards include the appropriate mounting brackets; otherwise, find a Socket 754 or 939 mounting bracket on eBay or from a dead board somewhere. (AM2 or AM3 won't work as they have four bolt holes; 754/939 and Socket F/C32 have two bolt holes.)

                        And a single-socket G34 board is ATX, not EATX, so you can build smaller systems.
                        ASUS's KCMA-D8 dual C32 board is also standard ATX.

                        And the G34 Opterons have more speed per watt.
                        No, the Opteron 4164 EE should have the most multithreaded performance per watt of the Opteron lineup. It's a 6-core unit at 1.80 GHz with a 35-watt TDP, while the 6164 HE is a 1.70 GHz 12-core with an 85-watt TDP. Two 4164 EEs would have a combined TDP of 70 watts and run 12 cores at 1.80 GHz.
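The comparison in the reply above can be made explicit with a crude cores × clock / TDP figure of merit. This proxy ignores IPC differences and platform power, so it is only an illustration of the arithmetic:

```python
# Crude multithreaded performance-per-watt proxy: cores x GHz / TDP watts.
# Ignores IPC and platform power; illustration of the comparison only.
def perf_per_watt(cores, ghz, tdp_w):
    return cores * ghz / tdp_w

two_4164ee = perf_per_watt(cores=12, ghz=1.8, tdp_w=70)  # 2 x Opteron 4164 EE
one_6164he = perf_per_watt(cores=12, ghz=1.7, tdp_w=85)  # 1 x Opteron 6164 HE

print(f"2x 4164 EE: {two_4164ee:.3f} vs 1x 6164 HE: {one_6164he:.3f} GHz-cores/W")
```

By this metric the pair of 4164 EEs comes out ahead, matching the claim that G34 parts are not automatically the efficiency winners of the lineup.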

                        I got zero benefit out of my last ECC system;

                        that system also crashed and needed restarts.
                        What are you running for an operating system and what kinds of crashes are you talking about? If you're running Windows, that's probably why you are getting crashes and needing to restart all of the time.

                        And a non-ECC system works well if you check the RAM from time to time.
                        What do you mean by "check the RAM," run Memtest86+ after a reboot? ECC memory is used mostly to detect and correct soft errors that result from bit flipping during RAM operation due to background radiation and such. Cutting the power to the memory during a hard reboot would "fix" the flipped bit and you will see nothing in Memtest86+. The only thing you'll see in Memtest86+ are generally hard errors due to flaky/failing RAM or motherboard. ECC will certainly pick that up too, but you're really looking at two different things there.
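The distinction drawn above between soft errors and hard errors can be illustrated with a minimal sketch. Real ECC DIMMs use SECDED codes that also correct single-bit errors; the single parity bit below can only detect one, but it shows why a flipped bit is visible at read time yet leaves no trace after a power cycle:

```python
# Simplified illustration of soft-error detection: one parity bit
# detects (but cannot correct) a single flipped bit in a 64-bit word.
# Real ECC RAM uses SECDED codes that also correct single-bit errors.
def parity(word: int) -> int:
    """Even parity of a word: 0 if the popcount is even, 1 if odd."""
    return bin(word).count("1") % 2

stored = 0xDEADBEEFCAFEF00D
stored_parity = parity(stored)           # computed when the word is written

flipped = stored ^ (1 << 17)             # a soft error flips bit 17 in RAM

assert parity(flipped) != stored_parity  # mismatch detected on read
print("single-bit flip detected by parity check")
```

After a hard reboot the corrupted word is simply gone, so a later Memtest86+ pass finds nothing, exactly as the reply explains.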

                        I call you a liar on that point, because my last Opteron crashed even with ECC RAM.
                        The system stability depends on a lot of things besides RAM. Software and drivers are an obvious culprit, as is the power supply and the noisiness of the power coming from the outlet. You could be running your ECC RAM in Chipkill mode with an 8-hour DRAM scrub, but if you're running Windows Me and powering that system from a $20 cheap Chinese PSU optimistically rated at 300 watts, you're going to be horribly unstable. That's obviously an exaggeration, but you get my point.



                        • #27
                          Originally posted by deanjo View Post
                          Most of the time I read German news, so yes, the German news about that is a little bit slower: http://www.computerbase.de/news/hard...station-markt/

                          Workstation means OpenGL, and OpenGL mostly cannot use more than one thread for feeding graphics to the GPU,
                          meaning a faster single-threaded CPU wins, meaning Intel wins.
                          DX11, for example, fixes that: with DX11 you can use more than one thread for feeding graphics to the GPU.

                          I don't know the status of OpenGL 4 and multithreaded graphics submission...

                          AMD just loses on bad/old software.

                          It's just not the time yet for 24/32 cores in a dual-socket workstation system...



                          • #28
                            Originally posted by MU_Engineer View Post
                            And I said that things like 3D rendering (which would include Blender) are more workstation applications than they are desktop applications.
                            Really? So I just care about the wrong stuff?

                            Originally posted by MU_Engineer View Post
                            The games I do rarely play I play in single player vs. the computer mode, which don't require all that much from a system if you aren't running the absolute newest games or demand to run everything on super-high-ultimate settings. The last online multiplayer game I played was the original Counter-Strike, which ran fine on 1 GHz PIIIs.
                            CTI/Warfare was originally a multiplayer map, but over time the AI got good.
                            Meaning you can play CTI in OFP, and Warfare in ArmA 2, single-player against the AI, or co-op multiplayer against the AI.
                            You don't get the point, because ArmA 2 uses database-driven self-learning AI for the single player,
                            meaning you can play a single-player game with over 1500 units.

                            But for ArmA 2 you need a strong CPU in CTI/Warfare mode even if you play on the lowest settings, because the settings only turn the graphics down, not the AI.


                            Originally posted by MU_Engineer View Post
                            And what are the weaknesses other than the old NVIDIA chipset?
                            PCIe 1.0, useless SLI, BIOS bugs, a chipset that ran too hot, and the multi-socket incompatibility with CPU coolers, because the first CPU blocks the second one's cooler mounting mechanism.

                            Meaning next time I'll buy a single socket, and nothing blocks my super big fat 1 kg Mugen cooler.


                            Originally posted by MU_Engineer View Post
                            No need to shell out a bunch of money for C32 heatsinks. You can most likely reuse your Socket F heatsinks on a C32 board, as long as the heatsinks are 3.5" pitch. If they are 4.1" pitch, you can use them on a Socket G34 board. You can also use regular AM2/AM3 desktop heatsinks with C32 systems. Many C32 motherboards include the appropriate mounting brackets; otherwise, find a Socket 754 or 939 mounting bracket on eBay or from a dead board somewhere. (AM2 or AM3 brackets won't work, as they have four bolt holes; 754/939 and Socket F/C32 have two bolt holes.)
                            It's funny, because one of my Opteron boards in the past did not come with the "appropriate mounting brackets", and that is not funny if you want a big, silent tower cooler in your system; all server coolers are very loud 5000 RPM coolers, and no one in Germany sells "appropriate mounting brackets".

                            One of the mainboards, a Tyan, died, and I sold the other one (an ASUS) later.

                            And hey, all server heatsinks are just BAD BAD BAD BAD! And loud!

                            Bad and loud just because they save space, and in my case I have space to spare...

                            My Mugen cooler is 160 mm high, and that kind of heatsink can cool an Opteron passively.

                            So really, the server heatsinks are just fucked up.


                            Originally posted by MU_Engineer View Post
                            ASUS's KCMA-D8 dual C32 board is also standard ATX.
                            Wrong??? The German shop sites say EATX, not ATX. And hey, just calculate an example:

                            In Germany the cheapest price for that board is 270€.

                            The cheapest Supermicro H8SGL-F is 227€.

                            That means you save 43€ if you don't buy a C32 dual-socket board!

                            And an AMD Opteron 4170 at 2.1 GHz costs 170€, so two of them cost 340€.

                            Yes, that's 12 cores; scaled down to 8 cores, that works out to 227€.

                            An AMD Opteron 6128 costs 260€, which means you win 33€.

                            And you save one CPU cooler; a good one costs 50€.

                            That means in reality you save 140€ if you don't buy a C32 system.


                            Originally posted by MU_Engineer View Post
                            No, the Opteron 4164 EE should have the most multithreaded performance per watt of the Opteron lineup. It's a 6-core unit at 1.80 GHz with a 35-watt TDP, while the 6164 HE is a 1.70 GHz 12-core with an 85-watt TDP. Two 4164 EEs would have a combined TDP of 70 watts and run 12 cores at 1.80 GHz.
                            Right, but that's not logical, because the 12-core has the same cores in it...

                            Maybe those 6-cores are just better-binned dies.
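
                            The TDP figures in the quote can be turned into a crude comparison. The sketch below uses cores × clock per watt of TDP as a rough proxy; the metric itself is my own assumption (TDP is a thermal design limit, not measured draw), while the core counts, clocks, and TDPs are the ones quoted above.

```python
# Crude performance-per-watt proxy: cores * GHz / TDP.
# TDP is a design limit, not measured power, so treat the
# results as a rough ordering only, not a benchmark.

def perf_per_watt(cores: int, ghz: float, tdp_w: float) -> float:
    return cores * ghz / tdp_w

# Two Opteron 4164 EE: 2 x (6 cores @ 1.8 GHz, 35 W TDP each)
dual_4164ee = perf_per_watt(cores=12, ghz=1.8, tdp_w=2 * 35)

# One Opteron 6164 HE: 12 cores @ 1.7 GHz, 85 W TDP
one_6164he = perf_per_watt(cores=12, ghz=1.7, tdp_w=85)

print(f"2x 4164 EE: {dual_4164ee:.3f} core*GHz/W")  # ~0.309
print(f"1x 6164 HE: {one_6164he:.3f} core*GHz/W")   # ~0.240
```

                            By this crude measure the pair of 4164 EEs does come out ahead, consistent with the quoted claim, whether or not the 6-core parts are better-binned dies.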


                            Originally posted by MU_Engineer View Post
                            What are you running for an operating system and what kinds of crashes are you talking about? If you're running Windows, that's probably why you are getting crashes and needing to restart all of the time.
                            I have run Linux for over 6 years now: 3 years with NVIDIA and 3 years with ATI cards.

                            But you're right, I can't recall all my crashes in detail...

                            Originally posted by MU_Engineer View Post
                            What do you mean by "check the RAM," run Memtest86+ after a reboot? ECC memory is used mostly to detect and correct soft errors that result from bit flipping during RAM operation due to background radiation and such. Cutting the power to the memory during a hard reboot would "fix" the flipped bit and you will see nothing in Memtest86+. The only thing you'll see in Memtest86+ are generally hard errors due to flaky/failing RAM or motherboard. ECC will certainly pick that up too, but you're really looking at two different things there.
                            Right... but my personal feeling is that some desktops with non-ECC RAM are just more stable than my ECC PC.


                            Originally posted by MU_Engineer View Post
                            The system stability depends on a lot of things besides RAM. Software and drivers are an obvious culprit, as is the power supply and the noisiness of the power coming from the outlet. You could be running your ECC RAM in Chipkill mode with an 8-hour DRAM scrub, but if you're running Windows Me and powering that system from a $20 cheap Chinese PSU optimistically rated at 300 watts, you're going to be horribly unstable. That's obviously an exaggeration, but you get my point.
                            I got your point, but you didn't get mine: my point is that the OS/drivers and the PSU are more important than the RAM.

                            "More important" means I don't have the money to waste on less important stuff.

                            Comment


                            • #29
                              Originally posted by Qaridarium View Post
                              Maybe you are comparing the best HDD against the cheapest/worst SSD?
                              Where did you get those 20 W figures from? No 7200 rpm desktop hard drive from the last 5 years uses that much power. Typical figures are at most 8-10 W for 3.5" 7200 rpm HDDs. In the world of 2.5" laptop drives, the power consumption is already very close to that of SSDs. Take a look at the Seagate Momentus 5400.6 drives: 0.8 W idle and 2.85 W write power. Something like a Corsair Force SSD has 0.5 W idle and 2 W operating power.
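
                              Those wattages translate directly into the yearly electricity costs discussed earlier in the thread. A minimal sketch; the €/kWh rate and the 12 h/day duty cycle are my assumptions (borrowed from the earlier 60 W example), not figures from this post.

```python
# Yearly electricity cost of a drive's power draw.
# EUR_PER_KWH and HOURS_PER_YEAR are assumptions, not from the post.
EUR_PER_KWH = 0.20
HOURS_PER_YEAR = 12 * 365  # 12 hours per day, as in the 60 W example

def annual_cost_eur(watts: float) -> float:
    """Cost of drawing `watts` for HOURS_PER_YEAR at EUR_PER_KWH."""
    return watts * HOURS_PER_YEAR / 1000 * EUR_PER_KWH

hdd_cost = annual_cost_eur(9.0)        # typical 3.5" 7200 rpm HDD (8-10 W)
momentus_cost = annual_cost_eur(2.85)  # Momentus 5400.6 write power
ssd_cost = annual_cost_eur(2.0)        # Corsair Force operating power

print(f'3.5" HDD: {hdd_cost:.2f} EUR/year')     # ~7.88
print(f"Momentus: {momentus_cost:.2f} EUR/year")  # ~2.50
print(f"SSD:      {ssd_cost:.2f} EUR/year")       # ~1.75
```

                              Even with the pessimistic 9 W figure, the gap between an HDD and an SSD is only a few euros per year, which supports the point that modern drives are nowhere near the claimed 20 W.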

                              Comment


                              • #30
                                Originally posted by Qaridarium View Post
                                PCIe 1.0, useless SLI, BIOS bugs, a chipset that runs too hot, and the multi-socket incompatibility with CPU coolers, because the first CPU blocks the second one's closing mechanism.

                                That means next time I'll buy a single-socket board, and nothing will block my super big fat 1 kg Mugen cooler.

                                It's funny, because one of my Opteron boards in the past did not come with the "appropriate mounting brackets", and that is not funny if you want a big, silent tower cooler in your system; all server coolers are very loud 5000 RPM coolers, and no one in Germany sells "appropriate mounting brackets".
                                You would still need to check clearances carefully no matter what board you mount that Scythe Mugen on. It's simply an enormous heatsink; the only real reason to get it is to passively cool the CPUs. There are certainly other heatsinks out there that aren't quite so huge, would work on an Opteron board, and are still pretty quiet. You could also look at water cooling: it is quiet, water blocks are small and have few clearance issues, and there are blocks specifically designed to bolt to Socket F/C32 and G34, so you don't need to use the clamp-on heatsink retention brackets.

                                You can also look on eBay for retention brackets if your board does not come with one. They cost $4-10 and I'll bet that some of the sellers even ship to Germany.

                                And hey, all server heatsinks are just BAD BAD BAD BAD! And loud!

                                Bad and loud just because they save space, and in my case I have space to spare...
                                Depends on your definition of "loud." If you demand pretty much total and complete silence from your machine (basically an SPL < 20 dB), then yes, they're all loud. All of them will also be louder than your enormous heatsink. But most people I've seen with Socket F boards (which use the same heatsinks as C32) have made some pretty quiet machines out of 92 mm or carefully selected 120 mm desktop heatsinks. Machines using 2U/3U server heatsinks with BIOS-controlled PWM fans 70 mm or larger are very similar in noise level to a typical corporate office PC.

                                My Mugen cooler is 160 mm high, and that kind of heatsink can cool an Opteron passively.

                                So really, the server heatsinks are just fucked up.
                                You are just trying to use a heatsink that is very far beyond any size and weight specifications of heatsinks designed for that socket. You shouldn't be surprised that you would have trouble getting it to fit. You probably will have trouble mounting that heatsink on 90+% of desktop boards as well.

                                Wrong??? The German shop sites say EATX, not ATX. And hey, just calculate an example:
                                ASUS says it is a 12" by 10" ATX board on their website. They also do not have the product listed on their German website.

                                In Germany the cheapest price for that board is 270€.

                                The cheapest Supermicro H8SGL-F is 227€.

                                That means you save 43€ if you don't buy a C32 dual-socket board!
                                The KCMA-D8 is about $290 over here compared to about $250 for the H8SGL-F.

                                And an AMD Opteron 4170 at 2.1 GHz costs 170€, so two of them cost 340€.

                                Yes, that's 12 cores; scaled down to 8 cores, that works out to 227€.

                                An AMD Opteron 6128 costs 260€, which means you win 33€.

                                And you save one CPU cooler; a good one costs 50€.

                                That means in reality you save 140€ if you don't buy a C32 system.
                                You can't really just divide the price of a 6-core chip by 2/3 to get a price of a quad-core chip. The closest C32 equivalent to the 6128 would be two Opteron 4122s, which are 2.2 GHz quad-cores. Two of them cost $200, compared to $270 for the 6128. Two 4122s + the KCMA-D8 will run you $490, while a 6128 and an H8SGL will run you $520, so the C32 solution is a little less expensive and a little faster. Yes, it will likely be a wash after you buy heatsinks, but remember that the only heatsinks that will fit on G34 boards are server heatsinks or Koolance's $85 CPU-360 water block. That's it. You can at least use some more reasonably-sized desktop heatsinks on C32 boards that will be quieter than the server heatsinks for G34.
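
                                The totals in that comparison can be checked with a few lines; the dollar prices are the ones quoted in the paragraph above (two 4122s at $200 total, the 6128 at $270, boards at $290 and $250).

```python
# Platform cost comparison using the US prices quoted above.
c32_build = {
    "ASUS KCMA-D8 board": 290,
    "2x Opteron 4122": 2 * 100,  # "two of them cost $200"
}
g34_build = {
    "Supermicro H8SGL board": 250,
    "1x Opteron 6128": 270,
}

c32_total = sum(c32_build.values())
g34_total = sum(g34_build.values())

print(f"dual C32 build:   ${c32_total}")             # $490
print(f"single G34 build: ${g34_total}")             # $520
print(f"C32 saves:        ${g34_total - c32_total}")  # $30
```

                                The $30 difference is small enough that, as noted above, heatsink costs can easily swing the total either way.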


                                Right, but that's not logical, because the 12-core has the same cores in it...

                                Maybe those 6-cores are just better-binned dies.
                                The EE parts do use the "cream of the crop" of the dies, according to an AMD rep that frequents a lot of forums.

                                I have run Linux for over 6 years now: 3 years with NVIDIA and 3 years with ATI cards.

                                But you're right, I can't recall all my crashes in detail...
                                I have a similar history and games are the buggiest programs with the highest propensity to lock up Linux systems in my opinion. If they're Windows games being run with WINE, it's even worse. Fortunately most locked-up games or X sessions can be killed with the magic SysRq keys, which dumps you into a text terminal to restart X without rebooting. But they're still pretty awful and apparently you play a lot of Windows games, so I imagine you see pretty frequent glitches and bugs.

                                Comment
