
Thread: AMD FX-8350 "Vishera" Linux Benchmarks

  1. #91
    Join Date
    Oct 2009
    Posts
    13

    Default

    Quote Originally Posted by crazycheese View Post
    I have corrected Anandtech fake graph.
    Could you fix it so it shows the end time for the Intel 95W part? It only has the end for the 77W part.

    Also, does the x264 HD 5.0.1 benchmark use Microsoft's compiler? Cinebench uses ICC or OpenMP, and as the earlier LLVM/Open64/GCC tests show, compiler matters. (Note: I was looking for an ICC benchmark on Phoronix but couldn't find it.)

Extremetech's Prime95 benchmark is very misleading: the Core i5 can't work as hard because it's limited to half as many workers as the FX-8350. I'm unfamiliar with ET's site; is there an 8-thread comparison? This matters because, at a transistor increase of 200M, it would help indicate the efficiency of the Ivy Bridge architecture.

    Legitreview's prime95 page with a chart lists Battlefield 3, CPU load, and idle. The text above says it's supposed to be 3dMark11, which is still a black box as far as optimization is concerned.

    @Phoronix
I like the review. I can now see why early speculation said Bulldozer was going to catch up to SB, and why AMD resigned from BAPCo. I imagine that if AMD went under, you'd be the last benchmark site standing; no one else is capable of comparing as many benchmarks on POWER and ARM.

  2. #92
    Join Date
    Dec 2009
    Posts
    492

    Default

    Quote Originally Posted by crazycheese View Post
The 8350 Vishera finished at 900 seconds drawing an average of 200 W, where the rival 3570K finished at 1100 seconds and drew 120 W.

200 W x 900 s / 3600 = 50 Wh
120 W x 1100 s / 3600 = 36.7 Wh

That's 73.4% of AMD's energy use; Intel is just ~27% more efficient. Not 100% or 50%, but a mere 27%.

Minimum price in Europe:
3570K = €198.48
8350 = €179.16

The 8350 supports ECC RAM and an IOMMU (AMD's VT-d equivalent), and it is unlocked. It also scales very well under load when downclocked.

I think it's a VERY competitive CPU compared to Intel. My plan for AMD: cut management pay and hire more *good* engineers. They only need to improve power management at LOAD, and perhaps add a 16-minicore version of this technology for enthusiasts (several cores running at full speed, with unimportant low-priority tasks offloaded to slower cores). They need engineers on the Linux kernel (and the off-topic OS) to implement this, and they're back in the game.
You're actually comparing Vishera to the Sandy Bridge 2500K; Ivy Bridge never reaches 110 W. A 25% difference in power usage is not "mere", considering the CPU is one of the most power-hungry components in a PC.
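The quoted watt-hour arithmetic is easy to check in a few lines (a sketch only; the 200 W / 900 s and 120 W / 1100 s figures come from the quoted post, not from independent measurements):

```python
# Energy-to-completion comparison using the figures from the quoted post.
# Assumes constant average power draw over each benchmark run.

def energy_wh(avg_power_w: float, runtime_s: float) -> float:
    """Energy consumed in watt-hours: P * t / 3600."""
    return avg_power_w * runtime_s / 3600.0

fx8350 = energy_wh(200, 900)   # FX-8350: 50.0 Wh
i5 = energy_wh(120, 1100)      # 3570K:  ~36.7 Wh

ratio = i5 / fx8350            # ~0.733
print(f"FX-8350: {fx8350:.1f} Wh, i5: {i5:.1f} Wh")
print(f"Intel uses {ratio:.1%} of the energy, i.e. {1 - ratio:.1%} less")
```

This confirms the post's point: the efficiency gap to completion is roughly 27%, not the factor-of-two a raw wattage comparison might suggest.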

  3. #93
    Join Date
    Aug 2012
    Posts
    292

    Default

AMD fools us even more: the resonant clock mesh is not in the FX-8350, and there is no 32-byte fetch front end, only a 16-byte fetch front end.

source: http://www.planet3dnow.de/vbulletin/...408737&garpg=3

Because of this the FX-8350 burns so much power; only "Trinity" gets all the core features of "Piledriver".

Yes, AMD fools us again.

  4. #94
    Join Date
    Jul 2007
    Posts
    9

    Default Proper English please!

    Quote Originally Posted by necro-lover View Post
    AMD fool us even more : "resonant Clock-Mesh" is not in the FX8350 also no 32-Byte-Paket-Front-End only a 16byte-paket-Front-End

    source: http://www.planet3dnow.de/vbulletin/...408737&garpg=3

    Because of this the FX8350 burn so much power because only "Trinity" get all core features of "Piledriver"

    Yes AMD fool another one.
    Hey mate, please write in proper English. You're really hard to understand.

    Regards and greetings from Germany.
    Multics.

  5. #95
    Join Date
    Mar 2013
    Posts
    63

    Default Memory configuration?

The review only says that 8 GiB of memory were used for all the chips; timings are not reported.

Should I assume the review ran the i7-3770K at its stock memory speed (1600) but the A10 and FX-8350 with underclocked RAM (1600) instead of their stock speed (1866)? If so, one would add a bit of extra score for the AMD chips.

The memory brand and profiles are also not reported. I assume both the Intel and AMD chips used an Intel-optimized (XMP-enabled) memory kit rather than an AMP-enabled one. It would be interesting to see how the AMD chips perform with an AMD performance memory kit.

Without this info I cannot completely evaluate/reproduce the review.

    In any case the review is very good and helpful for me. Thanks!

  6. #96
    Join Date
    Mar 2013
    Posts
    63

    Default

    Quote Originally Posted by necro-lover View Post
heise.de: AMD's FX-8350 125 W TDP is a pure fake number; 168 W measured

http://www.heise.de/newsticker/meldu...i-1734298.html

AMD is just trying to fool us.
The values claimed on that website cannot be evaluated because they do not provide any relevant information.

What did they measure, and how? Did they measure current and then calculate power assuming a constant 12 V?

What PSU did they use? Some PSUs use one 12 V rail to power both CPU and GPU.

What form factor did they use? I have seen comparisons where the AMD chip ran on a micro-ATX board while the Intel chip used mini-ITX (about 20 W of the AMD side's extra draw was due to the form-factor difference).

What motherboard did they use? The same FX chip can consume up to 20 W more when switching from an Asus to an MSI micro-ATX AM3+ motherboard.

And so on. You cannot compare AMD and Intel power consumption without those details.
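To make the "constant 12 V" objection concrete, here is a hypothetical sketch (all voltage and current values are invented for illustration, not measurements) of how assuming a nominal rail voltage skews a clamp-meter power figure:

```python
# Power from a current-clamp reading: P = V * I.
# If the tester assumes a nominal 12.0 V but the rail actually sags,
# the reported wattage is inflated by the voltage error.

def power_w(voltage_v: float, current_a: float) -> float:
    """Instantaneous power in watts."""
    return voltage_v * current_a

measured_current = 13.0                      # amps on the CPU EPS cable (hypothetical)
nominal = power_w(12.0, measured_current)    # 156.0 W if 12.0 V is assumed
actual = power_w(11.5, measured_current)     # 149.5 W if the rail sags to 11.5 V

error_pct = (nominal - actual) / actual * 100
print(f"reported: {nominal} W, actual: {actual} W, error: {error_pct:.1f}%")
```

A few percent of systematic error from the voltage assumption alone, before motherboard and PSU differences are even considered, is exactly why the methodology needs to be disclosed.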

  7. #97
    Join Date
    Mar 2013
    Posts
    63

    Default

    Quote Originally Posted by crazycheese View Post
Dafuq, Anandtech still manipulates graphs by picking a base POWER value of 50 instead of 0??!

Anyone doing this is *ALREADY* biased.

From the graphs, Intel takes 1/3 longer to do the job; Vishera finishes first.
Also, from many other tests, Vishera's idle is on par with SB: 60 W vs 70 W.
And it costs less.
And it has many more features.
And it overclocks.
And it fits the old socket.
And it's better for multithreading.

It's a very attractive CPU. Eats more, yet costs less and offers more.

It's just a matter of buying the CPU, installing PTS, and doing a timed kernel compile.
If misleading graphs were the only bias Anandtech showed against AMD, people would not call them biased.


  8. #98
    Join Date
    Jun 2010
    Posts
    150

    Default

    Quote Originally Posted by juanrga View Post
The review only says that 8 GiB of memory were used for all the chips; timings are not reported.

Should I assume the review ran the i7-3770K at its stock memory speed (1600) but the A10 and FX-8350 with underclocked RAM (1600) instead of their stock speed (1866)? If so, one would add a bit of extra score for the AMD chips.
Memory timings matter very little for most benchmarks, and essentially nothing for practical use. The reason is that no program writes to RAM and then reads it back within the next few CPU instructions; and even if one did, the data would still be in the CPU cache, so it would not matter anyway.

People keep complaining that these kinds of tests are run without DDR3-1866, but that normally does not matter. In general the improvement will be less than 2%, sometimes nothing at all. You could actually run the memory at 1333 MHz and it still would not hurt much. The exception is the APUs, which are a bit more sensitive to memory bandwidth.
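For context, the theoretical peak bandwidth of the DDR3 grades under discussion works out as follows (a sketch assuming a 64-bit channel in dual-channel operation; real sustained bandwidth is lower):

```python
# Theoretical peak DDR3 bandwidth: transfers/s * 8 bytes per 64-bit
# transfer * number of channels.

def ddr3_peak_gb_s(transfer_rate_mt_s: int, channels: int = 2) -> float:
    """Peak bandwidth in GB/s for a given DDR3 transfer rate."""
    return transfer_rate_mt_s * 8 * channels / 1000.0

for rate in (1333, 1600, 1866):
    print(f"DDR3-{rate}: {ddr3_peak_gb_s(rate):.1f} GB/s")
```

So DDR3-1866 offers roughly 40% more peak bandwidth than DDR3-1333, yet, as noted above, most CPU benchmarks move less than 2% because the working set largely hits cache; only bandwidth-hungry consumers like an APU's integrated GPU see the difference.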

    Quote Originally Posted by juanrga View Post
The memory brand and profiles are also not reported. I assume both the Intel and AMD chips used an Intel-optimized (XMP-enabled) memory kit rather than an AMP-enabled one. It would be interesting to see how the AMD chips perform with an AMD performance memory kit.
Do you actually know what XMP is? XMP would not affect performance at all; it is just a set of recommended settings stored in the SPD EEPROM on the module. The user still has to select it in the BIOS menu, and nothing prevents the user from running the same settings on an AMD board. An "AMD performance kit" is just marketing bullshit; any module following the spec will do. And for your information, many SB/IB boards actually default to DDR3-1333 even when the CPU and memory support more, so this should be a disadvantage for Intel!

  9. #99
    Join Date
    Aug 2007
    Posts
    6,614

    Default

It depends on the board. When you use OEM boards, as found in retail PCs/laptops, RAM is most likely running at 1333 on Intel systems. If you buy overclocking boards, those of course support XMP profiles, or at least manual overrides for timings, speed, and voltage. Basically you can prove many things with benchmarks: if you use a test that is highly RAM-sensitive, or a GPU running on shared memory, you can see a difference. I am sure many would not even correctly identify whether dual- or single-channel RAM is in use with an onboard GPU (dual channel usually gives a nice boost compared to a single stick), but you need to run benchmarks to see it.

  10. #100
    Join Date
    Jun 2010
    Posts
    150

    Default

Even my high-end P9X79 WS workstation board defaults to 1333 MHz; I had to adjust speed and timings manually. And if my computer reboots for whatever reason (e.g. a power outage), it reports an "overclock failure" and falls back to 1333 MHz.
