AMD FX-8350 "Vishera" Linux Benchmarks


  • #71
    Originally posted by SavageX View Post
    No, this is measured on the 12 Volt CPU supply line. It includes only the CPU and voltage regulators, so not *everything* is CPU power consumption (the voltage regulators do not operate at 100% efficiency), but certainly most of it is.
    A quick bit of math:
    * CPU is really 125W
    * 12V line is 168W
    If those are true, then the VRM efficiency is 74.4%. That seems a bit low, but not so crazy that I'd dismiss it outright. Considering the temperatures the VRMs on my AM3 board run at, I wouldn't be surprised if they need to get rid of ~40 watts.

    Heise.de does not state the actual voltage on the 12V line (it's not exactly 12V), nor do they specify the quality of the measuring device or how the measurement was made. Is it +/-3%? +/-5%? +/-0.001%? What sort of voltage drop is there across the ammeter? Were they accidentally over-volting the CPU? What is the 12V ripple (get out your oscilloscope to measure that), and was it used to calculate the RMS voltage? Anyway, there is enough possible error here that the real TDP could be 125W or so, and we have no way of knowing for sure.
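The arithmetic above is quick to check. A minimal sketch of the same numbers, with the +/-3% meter tolerance used purely as an illustrative assumption:

```python
# Back-of-the-envelope check of the VRM efficiency estimate above.
cpu_w = 125.0    # AMD's claimed TDP
line_w = 168.0   # power measured on the 12V CPU supply line (heise figure)

vrm_efficiency = cpu_w / line_w   # fraction of input power delivered to the CPU
vrm_loss_w = line_w - cpu_w       # heat the voltage regulators must shed

print(f"VRM efficiency: {vrm_efficiency:.1%}")  # -> 74.4%
print(f"VRM loss: {vrm_loss_w:.0f} W")          # -> 43 W

# Illustrative assumption: a +/-3% meter tolerance alone moves the reading by ~5 W.
tolerance = 0.03
low, high = line_w * (1 - tolerance), line_w * (1 + tolerance)
print(f"168 W reading could really be {low:.1f}-{high:.1f} W")
```

Even before ripple or ammeter drop enter the picture, the tolerance band alone spans roughly ten watts, which is exactly the measurement-uncertainty point being made.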

    Comment


    • #72
      In defense of heise.de

      I think the other poster who mentioned the heise.de article did not provide all the information included in the original article.
      (in German: http://www.heise.de/ct/meldung/AMD-A...i-1734298.html )

      - All tests were done using Cinebench.
      - They wrote that the CPU used up to 168 watts, while talking -indeed- about TDP
      (but also said that the voltage (buck) converters draw some power).
      - They said that "125 Watt TDP ist pures Wunschdenken", meaning that a 125 watt
      TDP is plain wishful thinking on AMD's side.

      So, at least for me, heise says: "using the 12V CPU supply connector as the sample point on our mainboard, this particular AMD CPU used well over the 125 W TDP which AMD claims it has". They did not mention, however, what their test rig exactly is. I would very much like to know what the integration time for the power measurements was.

      (And by the way: assuming the above is correct, I agree with the OP that AMD's CPUs suck, power-wise. :-D )

      Mind you that TDP is, per (Intel) definition, a MAXIMUM figure, as otherwise you wouldn't be able to properly design and dimension a cooling solution for a given processor. Note: power drawn might be higher than TDP for a (very) short time, as long as the average power stays at or below the TDP.
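That last point, that momentary draw can exceed TDP while the average stays within it, can be shown with a made-up power trace (the numbers below are purely illustrative, not measurements of any real CPU):

```python
# Illustrative only: instantaneous power briefly exceeds TDP, average does not.
tdp_w = 125.0

# Made-up trace: eight time slices near 120 W, then a short 140 W burst.
trace_w = [120.0] * 8 + [140.0] * 2

average_w = sum(trace_w) / len(trace_w)
peak_w = max(trace_w)

print(f"peak: {peak_w:.0f} W")       # -> 140 W, above the 125 W TDP
print(f"average: {average_w:.0f} W") # -> 124 W, still within the TDP
```

A cooler sized for the TDP rides out the burst because of thermal inertia; only the sustained average has to fit under the limit.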

      Greetings from Germany, and please forgive any bad English you encounter.
      Cheers
      Last edited by multics; 23 October 2012, 10:02 PM.

      Comment


      • #73
        Originally posted by necro-lover View Post
        Now calculate again with the German price: €0.25 per kWh.

        11.52 kWh per month = 138.24 kWh per year

        138.24 * 0.25 = €34.56 per year

        Now your price difference: $150 is ~ €120

        120 / 34.56 = 3.47 years

        In fact, if you use your super new PC for 3.5 years, the Intel one is cheaper.
        Again, you are probably saving power by going with the AMD chip because of its lower idle power usage. Very few people run their machines at 100% and then shut them off immediately when finished; most people are running at idle 99% of the time.

        The reason a high TDP is bad news is that it indicates AMD is clocking the chip as fast as they possibly can, meaning there may not be a lot of extra headroom easily available in this design. But as far as this chip itself goes, that shouldn't be a problem.
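Both halves of this exchange fit in one sketch. The €120 price difference and €0.25/kWh rate come from the quoted post; the 96 W load delta, 4 h/day at load, and 5 W idle advantage for the AMD chip are illustrative assumptions, not measured figures:

```python
# Break-even arithmetic from the quoted post, plus an idle-power twist.
price_diff_eur = 120.0   # ~ $150 price difference, per the quote
rate_eur_kwh = 0.25      # German electricity price, per the quote

# Quoted scenario: roughly 96 W extra at load for 4 h/day (~11.5 kWh/month).
extra_load_w, load_h_per_day = 96.0, 4.0
kwh_year_load = extra_load_w * load_h_per_day / 1000 * 365
years_load_only = price_diff_eur / (kwh_year_load * rate_eur_kwh)
print(f"load-only break-even: {years_load_only:.2f} years")

# Idle twist: if the AMD chip draws 5 W *less* for the other 20 idle hours,
# the net energy gap shrinks and the break-even point moves further out.
idle_delta_w, idle_h_per_day = 5.0, 20.0
net_wh_per_day = extra_load_w * load_h_per_day - idle_delta_w * idle_h_per_day
years_net = price_diff_eur / (net_wh_per_day / 1000 * 365 * rate_eur_kwh)
print(f"net break-even: {years_net:.2f} years")
```

The direction of the correction is the point: once idle hours dominate, the energy-cost argument against the higher-TDP chip weakens considerably.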

        Comment


        • #74
          Originally posted by cynyr View Post
          A quick bit of math:
          * CPU is really 125W
          * 12V line is 168W
          If those are true, then the VRM efficiency is 74.4%. That seems a bit low, but not so crazy that I'd dismiss it outright. Considering the temperatures the VRMs on my AM3 board run at, I wouldn't be surprised if they need to get rid of ~40 watts.

          Heise.de does not state the actual voltage on the 12V line (it's not exactly 12V), nor do they specify the quality of the measuring device or how the measurement was made. Is it +/-3%? +/-5%? +/-0.001%? What sort of voltage drop is there across the ammeter? Were they accidentally over-volting the CPU? What is the 12V ripple (get out your oscilloscope to measure that), and was it used to calculate the RMS voltage? Anyway, there is enough possible error here that the real TDP could be 125W or so, and we have no way of knowing for sure.
          - I asked somebody from heise, in the heise forum, whether they are sure that only the CPU is fed from the 12V CPU power supply. He said yes. He even said that sometimes small parts of the CPU (e.g. DRAM drivers) get power from the /other/ power supply connector, which means the real CPU power might be even higher than this.

          - The VRMs (in this case buck converters) certainly do much(!) better than 74.4%.

          - "Heise.de does not state the actual voltage on the 12V line" => they use a ZES ZIMMER LMG95 single-phase precision power meter for the measurement, which means they do a proper "multiply volts by amps at any given instant to get watts".

          And here is how they do it (translated from German): "The electrical power consumption of a PC has become a decisive parameter in recent years. With our self-built ATX measurement rig, we scrutinize the power consumption of all of a computer's supply circuits."
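Why "volts times amps at any given instant" matters can be shown numerically. The samples below are made up (a 12 V rail with some ripple and a load swinging in phase); only the method, averaging v*i per sample rather than multiplying separately averaged V and I, reflects what a sampling power meter does:

```python
import math

# Made-up samples: 12 V rail with +/-0.2 V ripple, current varying in phase.
n = 1000
volts = [12.0 + 0.2 * math.sin(2 * math.pi * k / n) for k in range(n)]
amps = [14.0 + 2.0 * math.sin(2 * math.pi * k / n) for k in range(n)]

# Proper method: multiply volts by amps at each instant, then average.
true_power = sum(v * i for v, i in zip(volts, amps)) / n

# Naive method: average volts and amps separately, then multiply.
naive_power = (sum(volts) / n) * (sum(amps) / n)

print(f"true average power: {true_power:.2f} W")  # -> 168.20 W
print(f"mean(V) * mean(I):  {naive_power:.2f} W")  # -> 168.00 W
```

When voltage and current vary together, the separately-averaged product understates the true power; the size of the error depends on the ripple, which is why a precision meter samples and multiplies continuously instead.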


          Regards
          Multics.
          Last edited by multics; 24 October 2012, 04:14 AM.

          Comment


          • #75
            These are measurements from Anandtech:
            Intel gets the same job done in about the same time, but the whole system consumes about half the power. Idle power consumption is also lower.

            Comment


            • #76
              A more modern way to do the testing

              Since, with specific workloads, there's a huge difference between the two in some tests (as shown in an older article), I'd like to see all these tests, on both Intel and AMD CPUs, also done with the code compiled using -Ofast instead of -O3. I also wouldn't mind seeing -O2: for workloads that don't involve ray tracing, computational fluid dynamics, or other applications with very large data sets whose memory access patterns cause a lot of cache misses, -O2 seems to be faster than -O3. Even the PostgreSQL test in the mentioned article shows -O2 beating -O3, which would be significant for servers, as would some GraphicsMagick operations that use adjacent memory locations and thus incur fewer cache misses.

              Comment


              • #77
                I was wondering why Anand's results were not brought up yet. Michael's tests are heavily multithreaded, but Anand's seem more balanced/representative, imho. Plus, the power consumption graphs are pretty clear-cut.

                Comment


                • #78
                  Cores

                  I develop CAE post-processing software for a living. There are days when I spend a LOT of time waiting for code to recompile after syncing my local source tree with our source code repository. At work I have an i7-990X CPU, which is insanely expensive. However, it's SO worth it to run "make -j10" to do parallel builds and still be able to get other background tasks done while it grinds away.

                  We're also working very hard to leverage those multiple cores in our products. It's not easy!

                  Anyway, I'd love to be able to do something similar when I work from home. There's NO WAY I can afford a high-end Intel CPU on my personal budget. I'd be VERY interested to see how the 8250 or 8350 performs running parallel builds. I have a Phenom II 1090 now and am wondering if I'd get a significant performance boost from 8 cores. I don't overclock much, because I don't want to risk an unstable overclock causing erratic behavior in our code.
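Whether 8 cores help a build depends mostly on how much of the build actually parallelizes. A rough Amdahl's-law sketch (the 90% parallel fraction is an illustrative assumption, not a measurement of any real build):

```python
# Amdahl's law: overall speedup when only part of the work parallelizes.
def speedup(parallel_fraction, cores):
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# With 90% of the build time parallelizable, extra cores flatten out quickly.
for cores in (1, 4, 6, 8):
    print(f"{cores} cores: {speedup(0.9, cores):.2f}x")
```

At a 90% parallel fraction, going from 6 to 8 cores buys only about 18% more speedup, so the serial parts of the build (linking, configure steps) matter as much as the core count.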

                  Comment


                  • #79
                    Originally posted by jjmcwill2003 View Post
                    I develop CAE post-processing software for a living. There are days when I spend a LOT of time waiting for code to recompile after syncing my local source tree with our source code repository. At work I have an i7-990X CPU, which is insanely expensive. However, it's SO worth it to run "make -j10" to do parallel builds and still be able to get other background tasks done while it grinds away.

                    We're also working very hard to leverage those multiple cores in our products. It's not easy!

                    Anyway, I'd love to be able to do something similar when I work from home. There's NO WAY I can afford a high-end Intel CPU on my personal budget. I'd be VERY interested to see how the 8250 or 8350 performs running parallel builds. I have a Phenom II 1090 now and am wondering if I'd get a significant performance boost from 8 cores. I don't overclock much, because I don't want to risk an unstable overclock causing erratic behavior in our code.
                    You're exaggerating. You can have the latest from Intel (i7-3770) for less than $300. Of course, if you're happy with an 8350, that will be cheaper.

                    Comment


                    • #80
                      So how would an i7-3770 compare to the AMD 8350 on something like "make -j8"?

                      I was referring to what I paid for the i7-990X when I had my work PC built. Is the 3770 considered a high-end Intel CPU? I thought only the 3930K and 3960X fit that category.

                      Comment
