
AMD FX-8350 "Vishera" Linux Benchmarks


  • #91
    Originally posted by crazycheese View Post
I have corrected Anandtech's fake graph.
    Could you fix it so it shows the end time for the Intel 95W part? It only has the end for the 77W part.

    Also, does the x264 HD 5.0.1 benchmark use Microsoft's compiler? Cinebench uses ICC or OpenMP, and as the earlier LLVM/Open64/GCC tests show, compiler matters. (Note: I was looking for an ICC benchmark on Phoronix but couldn't find it.)

Extremetech's prime95 benchmark is very misleading. The Core i5 can't work as hard because it's limited to half as many workers as the FX-8350. I'm unfamiliar with ET's site; is there an 8-thread comparison? This is important because, with a transistor increase of only about 200M, it would help indicate the efficiency of the Ivy Bridge architecture.

Legit Reviews' prime95 page has a chart that lists Battlefield 3, CPU load, and idle, while the text above it says it's supposed to be 3DMark 11, which is still a black box as far as optimization is concerned.

    @Phoronix
I like the review. I can now see why early speculation said Bulldozer was going to catch up to SB -- and why AMD resigned from BAPCo. I imagine if AMD went under, you'd be the last benchmark site standing; no one else is capable of comparing as many benchmarks on POWER and ARM.



    • #92
      Originally posted by crazycheese View Post
The 8350 Vishera finished at 900 seconds with an average draw of 200 W, while the rival 3570K finished at 1100 seconds and consumed 120 W.

200 W x 900 s / (60*60) = 50 Wh
120 W x 1100 s / (60*60) = 36.7 Wh

That's 36.7/50 = 73.4%: Intel uses about 26% less energy for the same job. Not 100% or 50% less, but a mere 26%.

Minimum price in Europe:
3570K = 198.48
8350 = 179.16

The 8350 supports ECC RAM and an IOMMU (AMD's equivalent of VT-d), and it is unlocked. It also scales very well under load when downclocked.

I think it's a VERY competitive CPU compared to Intel. My plan for AMD: cut management pay and hire more *good* engineers. They only need to improve power management under LOAD, and perhaps add a 16-minicore version of this technology for enthusiasts (a few cores running at full speed, with unimportant low-priority tasks offloaded to slower cores). They need engineers working on the Linux kernel (and, off topic, the other OS's kernel) to implement this, and then they are competitive again.
You're actually comparing Vishera to the Sandy Bridge 2500K; Ivy Bridge never reaches 110 W. A difference of 25% in power usage is not "mere", considering the CPU is one of the most power-hungry components in a PC.
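For what it's worth, here is a quick Python sanity check of the arithmetic above (a minimal sketch; the 200 W / 120 W averages and the 900 s / 1100 s runtimes are the figures claimed in the quote, not independent measurements):

```python
# Energy used for the same job, from the figures quoted above.
def energy_wh(avg_power_w, runtime_s):
    return avg_power_w * runtime_s / 3600.0  # W * s -> Wh

fx8350 = energy_wh(200, 900)     # 50.0 Wh
i5_3570k = energy_wh(120, 1100)  # ~36.7 Wh

print(f"FX-8350: {fx8350:.1f} Wh, i5-3570K: {i5_3570k:.1f} Wh")
print(f"Intel uses {i5_3570k / fx8350:.1%} of the energy "
      f"({1 - i5_3570k / fx8350:.1%} less); "
      f"the FX uses {fx8350 / i5_3570k - 1:.1%} more.")
```

Whether the gap reads as "~26% less" or "~36% more" just depends on which chip you take as the baseline; both follow from the same two numbers.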



      • #93
        AMD fool us even more : "resonant Clock-Mesh" is not in the FX8350 also no 32-Byte-Paket-Front-End only a 16byte-paket-Front-End

        source: http://www.planet3dnow.de/vbulletin/...408737&garpg=3

        Because of this the FX8350 burn so much power because only "Trinity" get all core features of "Piledriver"

        Yes AMD fool another one.



        • #94
          Proper English please!

          Originally posted by necro-lover View Post
          AMD fool us even more : "resonant Clock-Mesh" is not in the FX8350 also no 32-Byte-Paket-Front-End only a 16byte-paket-Front-End

          source: http://www.planet3dnow.de/vbulletin/...408737&garpg=3

          Because of this the FX8350 burn so much power because only "Trinity" get all core features of "Piledriver"

          Yes AMD fool another one.
          Hey mate, please write in proper English. You're really hard to understand.

          Regards and greetings from Germany.
          Multics.



          • #95
            Memory configuration?

The review only says that 8 GiB of memory were used for all the chips, but the timings are not reported.

Should I assume that the review ran the i7-3770K at its stock memory speed (1600 MHz), while the A10 and FX-8350 were run with underclocked RAM (1600 MHz) instead of their stock speed (1866 MHz)? If so, one would add a bit more to the AMD chips' scores.

Also, the memory brand and profiles are not reported. I assume that both the Intel and AMD chips used an Intel-optimized memory kit (XMP-enabled) rather than an AMP-enabled kit. It would be interesting to see how the AMD chips perform with an AMD performance memory kit.

Without this info I cannot fully evaluate or reproduce the review.

            In any case the review is very good and helpful for me. Thanks!



            • #96
              Originally posted by necro-lover View Post
heise.de: AMD's FX-8350 125 W TDP is a pure fake number; 168 watts measured

              http://www.heise.de/newsticker/meldu...i-1734298.html

AMD is just trying to fool us.
The values claimed on that website cannot be evaluated, because they do not provide any relevant information.

What did they measure, and how? Did they measure current and then calculate power by assuming a constant 12 V?

What PSU did they use? Some PSUs use one 12 V rail to power both the CPU and the GPU.

What form factor did they use? I have seen comparisons where the AMD chip was run on a micro-ATX mobo while the Intel used mini-ITX (about 20 W of the extra on the AMD side was due to the different form factor).

What motherboard did they use? The same FX chip can consume up to 20 W more when switching from an Asus to an MSI micro-ATX AM3+ motherboard.

And so on. You cannot compare AMD and Intel power consumption without those details.
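To illustrate why those details matter, here is a minimal Python sketch; every number in it is hypothetical, chosen only to show how the measurement method moves the final figure:

```python
# How a published "CPU power" figure depends on what was actually measured.
measured_current_a = 14.0   # hypothetical clamp-meter reading on the EPS12V cable
assumed_voltage_v  = 12.0   # common assumption when only current is logged
actual_voltage_v   = 12.3   # rails are rarely exactly 12.0 V under load
vrm_efficiency     = 0.85   # hypothetical motherboard VRM efficiency

reported_w  = measured_current_a * assumed_voltage_v   # what often gets published
vrm_input_w = measured_current_a * actual_voltage_v    # real power entering the VRM
cpu_pkg_w   = vrm_input_w * vrm_efficiency             # what the CPU package itself draws

print(f"reported: {reported_w:.0f} W, VRM input: {vrm_input_w:.0f} W, "
      f"CPU package: {cpu_pkg_w:.0f} W")
```

With these made-up inputs the very same measurement could be quoted as roughly 168 W, 172 W or 146 W depending on the method, which is exactly why the questions above need answers before a measured figure can be compared to a TDP rating.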



              • #97
                Originally posted by crazycheese View Post
Dafuq, Anandtech still manipulates graphs by picking a base POWER value of 50 instead of 0?!

Anyone doing this is *ALREADY* biased.


From the graphs, Intel takes 1/3 longer to do the job; Vishera finishes first.
Also, from many other tests, Vishera's idle power is on par with SB: 60 W vs 70 W.
And it costs less.
And it has many more features.
And it overclocks.
And it fits the old socket.
And it's better for multithreading.

It's a very attractive CPU: eats more power, yet costs less and offers more.


It's just a matter of buying the CPU, installing PTS and performing a timed kernel compile.
If misleading graphs were the only bias that Anandtech shows against AMD... people would not call them biased.
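On the PTS remark in the quote above: reproducing the timed kernel compile yourself is close to a one-liner with the Phoronix Test Suite (a minimal sketch; it assumes phoronix-test-suite is installed and on PATH, and that pts/build-linux-kernel is the timed kernel compilation profile you want):

```python
# Install (if needed) and run the timed Linux kernel compilation benchmark.
# Note: "benchmark" may prompt interactively for test options and a result name.
import subprocess

subprocess.run(["phoronix-test-suite", "benchmark", "pts/build-linux-kernel"],
               check=True)
```

Running it on both chips with the same compiler and kernel tree is the cheapest way to settle the "who finishes first" argument for your own workload.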



                • #98
                  Originally posted by juanrga View Post
The review only says that 8 GiB of memory were used for all the chips, but the timings are not reported.

Should I assume that the review ran the i7-3770K at its stock memory speed (1600 MHz), while the A10 and FX-8350 were run with underclocked RAM (1600 MHz) instead of their stock speed (1866 MHz)? If so, one would add a bit more to the AMD chips' scores.
Memory timings matter very little for most benchmarks, and not at all in practical use. The reason is that no program writes to RAM and then reads it back within the next few CPU instructions; and even if one did, the data would still be in the CPU cache, so it would not matter anyway.

People keep complaining that these kinds of tests are run without DDR3-1866, but that normally does not matter. In general the improvement will be less than 2%, sometimes none at all. You could actually run the memory at 1333 MHz and it would still not hurt much. The exception is the APUs, which are a bit more sensitive to memory bandwidth.

                  Originally posted by juanrga View Post
Also, the memory brand and profiles are not reported. I assume that both the Intel and AMD chips used an Intel-optimized memory kit (XMP-enabled) rather than an AMP-enabled kit. It would be interesting to see how the AMD chips perform with an AMD performance memory kit.
Do you actually know what XMP is? XMP does not affect performance by itself; it is just a set of recommended settings stored in the SPD EEPROM on the memory module. The user still has to select it in the BIOS menu, and there is nothing preventing the user from running the same settings on an AMD board. An "AMD performance kit" is just marketing bullshit; any module following the specs will do. And for your information, many SB/IB boards actually default to running DDR3-1333 even though the CPU and memory support more, so this should be a disadvantage for Intel!
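To put those speed grades in perspective, here is a back-of-the-envelope Python sketch of theoretical peak bandwidth (the dual-channel assumption and the 8-bytes-per-transfer bus width are mine; real-world throughput is lower):

```python
# Theoretical peak bandwidth for the DDR3 speed grades discussed above.
def ddr3_peak_gb_s(transfer_rate_mt_s, channels=2, bytes_per_transfer=8):
    return transfer_rate_mt_s * 1e6 * bytes_per_transfer * channels / 1e9

for speed in (1333, 1600, 1866, 2133):
    print(f"DDR3-{speed}: {ddr3_peak_gb_s(speed):.1f} GB/s (dual channel)")
```

DDR3-1866 offers roughly 17% more peak bandwidth than DDR3-1600, yet, as argued above, most CPU-bound benchmarks gain under 2% from it; the APUs are the exception because their integrated graphics actually are bandwidth-bound.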



                  • #99
It depends on the board. When you use OEM boards, as found in retail PCs/laptops, the RAM is most likely running at 1333 MHz on Intel systems. If you buy overclocking boards, those of course support XMP profiles, or at least manual overrides for timings, speed and voltage. Basically you can prove many things with benchmarks: if you use a test that is highly RAM-sensitive, or GPUs running on shared memory, then you can see a difference. I am sure many would not even correctly identify whether dual- or single-channel RAM is used with onboard GPUs - dual channel usually gives a nice boost compared to a single stick - but you need to run benchmarks.



Even my high-end workstation P9X79 WS board defaults to 1333 MHz; I had to adjust the speed and timings manually. And if my computer reboots for whatever reason (e.g. a power outage), it reports "overclock failure" and defaults back to 1333 MHz.



                      • Originally posted by efikkan View Post
Memory timings matter very little for most benchmarks, and not at all in practical use. The reason is that no program writes to RAM and then reads it back within the next few CPU instructions; and even if one did, the data would still be in the CPU cache, so it would not matter anyway.

People keep complaining that these kinds of tests are run without DDR3-1866, but that normally does not matter. In general the improvement will be less than 2%, sometimes none at all. You could actually run the memory at 1333 MHz and it would still not hurt much. The exception is the APUs, which are a bit more sensitive to memory bandwidth.

Do you actually know what XMP is? XMP does not affect performance by itself; it is just a set of recommended settings stored in the SPD EEPROM on the memory module. The user still has to select it in the BIOS menu, and there is nothing preventing the user from running the same settings on an AMD board. An "AMD performance kit" is just marketing bullshit; any module following the specs will do. And for your information, many SB/IB boards actually default to running DDR3-1333 even though the CPU and memory support more, so this should be a disadvantage for Intel!
I know that memory speed can affect the AMD FX-8150. For instance, in gaming under Windows you would lose up to 8% performance by running the memory at 1333 MHz instead of at stock speed.

And the performance loss would be larger in more memory-intensive tasks.

Moreover, how many users will overclock the FX chip to 4.8 GHz but underclock the RAM? Memory at 2133 MHz seems a more appropriate setting, and then the differences would be a bit larger.

As you say, the APUs are much more sensitive to memory bandwidth. I have read reports where the performance gain from faster memory is up to 20% under Windows. I know AMD chips usually run faster under Linux. That is why I asked what memory is being used in the Phoronix tests; this is important info (at least for me) which is missing.

As far as I know, AMD mobos support XMP via emulation. Enthusiast users tell me that the best results are obtained with AMD-optimized RAM. I do not know more about this issue, which is why I asked for a test with an AMP profile.



For the APU tests, Michael did test RAM scaling.



                          • Originally posted by curaga View Post
For the APU tests, Michael did test RAM scaling.
                            Yes, and he wrote:

                            It's just not with the graphics performance though where the AMD A10-5800K APU performance really desires fast memory, but for memory-intensive applications there is also a big impact when moving to DDR3-2133MHz speeds.
I would like to know what memory timings were used in the Vishera benchmarks for the FX, the APUs, and the Intel chips. My impression is that the Intel chips got an extra advantage because the AMD chips ran with underclocked RAM.



You are wrong, because it is not just a short period of time; you can force this power draw all the time.



DDR3-2133 is not really cheap; the money you spend on it could instead go into an NVIDIA graphics card. Without onboard VGA in use, faster RAM beyond DDR3-1333/1600 is hardly noticeable. OEM boards often run at the DDR3-1333 setting all the time (you cannot select the RAM speed on those boards). Lately I got 4 GB of DDR3-2133; it was 1-2 fps faster with TF2/Mesa 9.1 and the Intel HD 4000. I would not say that a jump from 51 fps to 53 fps was a needed upgrade (usually I use 8 GB of DDR3-1600 with that board) - so after some benchmarks I switched back to the slower but larger RAM.

