AMD FX-8350 "Vishera" Linux Benchmarks


  • #46
    Originally posted by JS987 View Post
AMD isn't a bad deal in the case of 2 hours of full load per day and a discrete graphics card:
    3770K - 318 Euro
    FX-8350 - 190 Euro
    1 kWh - 0.2 Euro
    According to XBitLabs
    3770K - 132 Watt at full load
    FX-8350 - 213 Watt at full load
0.08 kW * 2 h * 365 days * 0.2 Euro/kWh = 11.68 Euro per year more in the case of AMD
128 / 11.68 ≈ 11 years - time to equal costs
You approximate the parameter values, yet give an exact result? For example, you won't find energy under 0.25 Euro/kWh now, and it will be over 0.3 very soon (due to the ecology tax).
The 80 Watt difference also disagrees with my calculation of 113 Watt. A third cut here, a third cut there, and you magically get 50% off.
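The break-even arithmetic both posts argue about is easy to parameterise. A quick sketch, using the figures from the quoted post (128 Euro price gap, roughly 0.08 kW load-power gap, 2 h full load per day); swap in your own tariff and wattage delta:

```python
def break_even_years(price_delta_eur, power_delta_kw, hours_per_day, eur_per_kwh):
    """Years until the cheaper chip's extra power bill eats the price difference."""
    extra_cost_per_year = power_delta_kw * hours_per_day * 365 * eur_per_kwh
    return price_delta_eur / extra_cost_per_year

# Figures from the quoted post: 318 - 190 = 128 Euro, ~0.08 kW, 2 h/day, 0.20 Euro/kWh.
print(round(break_even_years(128, 0.08, 2, 0.20), 1))   # ~11 years

# With the disputed figures (0.113 kW gap, 0.25 Euro/kWh) the break-even
# point moves much closer:
print(round(break_even_years(128, 0.113, 2, 0.25), 1))  # roughly 6 years
```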

It's very good that they at least reduced idle power to a great degree.

By the way, we are missing one more important point - not every CPU cuts power the moment it finishes a task. It could be that an Intel or AMD CPU is actually consuming energy *much* above "idle" for a certain period of time even after it has finished the task. You need a consumption graph to analyse that...
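The consumption-graph point can be made concrete: what matters per task is energy (power integrated over time), including any tail where the chip has finished but has not yet dropped back to idle power. A toy sketch with entirely invented traces:

```python
def energy_joules(trace):
    """Integrate a list of (seconds, watts) segments into joules."""
    return sum(seconds * watts for seconds, watts in trace)

# Hypothetical chips finishing the same task (all numbers made up):
# chip A finishes fast but lingers above idle afterwards.
chip_a = [(10, 130), (5, 90), (45, 40)]   # work, tail, idle
# chip B is slower but falls straight back to idle.
chip_b = [(15, 110), (45, 35)]

print(energy_joules(chip_a), energy_joules(chip_b))  # 3550 3225
```

With these made-up numbers the "slower" chip actually uses less energy over the same wall-clock minute, which is exactly why a single max-load wattage figure can mislead.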

If I recall correctly, there was a good review of power-per-task at Hexus some time ago, comparing the now-old Intel Core i660 and Athlon II X4/Phenom II X4.

    Comment


    • #47
      Originally posted by crazycheese View Post
      You approximate the parameter values, yet give exact result? For example, you won't find energy under 0.25 now and under 0.3 very soon (due to ecology tax).
I checked the price list of my energy supplier:
      minimal rate - 0.10637 Euro
      maximal rate - 0.21161 Euro
      It is possible that prices are higher in other countries.

      Comment


      • #48
        Originally posted by necro-lover View Post
No, it's even more: you can have an unlocked CPU + virtualization + ECC non-reg RAM.
These (the parent and grandparent posts) are some of the more significant posts I've read. Intel is capable of excellent technical work, as it has proven many times and is currently proving with the entire Banias-through-Ivy Bridge line. But it also has a strong marketing division that frequently seems to work at odds with the directions that the chip-head users here might like. It is important to realize that, from a corporate decision-making point of view, at Intel marketing appears to trump technical design.

Hence, not that long ago we got the "Oooh, Fast!" NetBurst designs and the "clone-proof" IA64 designs. Because the Core-X line has been hammering AMD to the edge of existence, we're now seeing "revenue maximizing" stunts like disabling on-chip features unless you've paid extra. Kick AMD all you like, but if they're gone, Intel has proven multiple times that, absent meaningful competition, it wanders way off target, and we the customers lose. I don't know if ARM will provide proper competition for Intel in the future, at least partly because Microsoft has managed to lock down ARM-based hardware to be Windows-only, preventing it from growing into a true general-purpose platform.

        Comment


        • #49
          Originally posted by bug77 View Post
The problem is, in the best-case scenario they'll regain performance parity with Ivy Bridge. Intel will be releasing Haswell anyway.
Well, they're approaching parity with Ivy Bridge right now, with current tech. Haswell is bound to bring great improvements, but so is Steamroller.

It's hard to come up with competitive products when profit remains elusive. Not to mention the engineers AMD plans to lay off - managers by themselves do not build much.
It's pretty clear AMD can't beat Intel at its own game. What they need is another revolutionary idea, like when they chose to raise IPC rather than MHz, or when they cleverly added 64-bit capability to 32-bit CPUs. But such ideas are the stuff of legends...
The whole Bulldozer architecture was actually an attempt to do things differently. It looked like a big fail at first, but apparently, with the kinks of the first generation fixed, it might be a promising architecture after all.

          We'll see.

          Comment


          • #50
            losers

            I can't believe some of the dorks that haunt this forum with nothing better to do than trash the particular brand name that they don't worship. Is your life so meaningless and pathetic (you don't have to answer we know already) that this is your contribution to humanity? No wonder you're alone and your keyboard is sticky.

            Comment


            • #51
              I'll echo pingufunkybeat. Beating Intel's flagship even at something, at close to half price, is excellent.

Does anyone know if Piledriver Opterons are already available?

              Originally posted by phred14 View Post
              Because the Core-X line has been hammering AMD to the edge of existence, we're now seeing "revenue maximizing" stunts like disabling on-chip features unless you've paid extra.
It's not new; it's been going on at least since the Pentium II days. Back then it was done with something slightly more justifiable (L2 cache) than a blown e-fuse, but it was still way overpriced.

              Comment


              • #52
Am I the only one who noticed that virtually all of the tests where Intel had any significant lead were tests where the software optimization would necessarily favor one CPU over the other?

                Comment


                • #53
                  Originally posted by bug77 View Post
                  The problem is, you don't need 8 cores unless you're doing a lot of 3D rendering or movie encoding. And nobody does this too often at home. I'd settle for an upper-range quad core and even that will be overkill for browsing and many games. Single core performance is still pretty important.
I had to register. I have to laugh out loud at this assertion.

As an avid LLVM/Clang user and former NeXT/Apple engineer, it never ceases to amaze me how many Intel fans know nothing of what Intel is working on regarding SMP/OpenCL, and how important multi-core designs are to its future and to the future of everything from operating systems to application design, at all levels of computing.

My daily grind has me wanting more and more cores, and more and more GPGPU cores/streams, while working on multiple projects in parallel.

I relish the notion of running background processes for finite element analysis and x264, writing in LaTeX/XeTeX/ePub on OS X and Linux, and working in the likes of Inkscape, GIMP, HandBrake, Blender, or Maya.

                  All from home.

The idea that home versus an office makes little difference in a globally interconnected world is new. Many of Apple's engineering teams are working remotely, all from HOME. Same with Intel, AMD, and so on.

Sure, people still fly or drive in for the necessary group hugs instead of remote conferencing, and to catch up on major meetings, but unless you are absolutely in the top core of the daily grind, much of the work is flexible and done from HOME.

Sorry, but Vishera is a WIN/WIN on price/performance, and even on power relative to one's own HOME POWER CONSUMPTION.

My home theater or Bosch reheat agent sucks more power. Don't whine to me about splitting hairs over kWh consumption when people spend more on smoking or drinking in a week than they spend in a year on the power differential between Intel and AMD, wherever you live and whatever VAT applies.

More and more apps are moving to a multi-threaded-aware, multi-core world, and testing with single-threaded apps is truly pathetic.

                  AMD has a winner.

                  Comment


                  • #54
@Marc, yeah, pretty much my feelings. I want to add that AMD may be under par, but I like their products and buy them regardless of whether a $500 Intel chip can destroy them... However, to what you're saying: GPU computing (OpenCL) is coming into its own, and the APU (SoC, if it materializes) line may start bringing parity sooner than we would all think.

                    Comment


                    • #55
                      Originally posted by droidhacker View Post
                      Am I the only one who noticed that virtually all of the tests where the intel had any significant lead, were tests where the software optimization would necessarily exclude one cpu to the advantage of the other?
But he recompiled all the benchmarks for all processors using -march=native. So it's the best the processor can do, given processor-specific optimisations.

                      Comment


                      • #56
                        Originally posted by pingufunkybeat View Post
But he recompiled all the benchmarks for all processors using -march=native. So it's the best the processor can do, given processor-specific optimisations.
                        Yes, but the compiler's automated tuning is nothing compared to what somebody can do in assembly language.

It's common for these kinds of workloads to have a critical section written in assembly language that targets a specific CPU capability (e.g. AVX2), because the compiler does not know how to do a good job of compiling it in all situations - there is just too much analysis to do.

If these apps do have such assembly language in them, then of course they're going to show a huge performance boost on Intel chips and nothing on the AMD side, or vice versa - simply because AMD runs AVX3, which is not backwards compatible with the AVX2 that the current Intel chips run.
Intel's next-generation chips will be running AVX3. If the devs manually write their critical section in AVX2 and let the compiler handle the AVX3 for AMD chips (or vice versa), guess who's going to win the benchmarks?

                        You'll find platform favoritism (AMD vs. Intel) in software development just as you would find it here on these forums.

                        Since the CPU feature sets are different, you should expect performance to be all over the place in comparisons of AMD vs. Intel chips. Choosing the best CPU will be down to what workloads you're doing and how they're optimized.
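In practice the per-ISA problem is often handled with runtime dispatch: the program picks the fastest kernel the CPU actually reports, instead of hard-wiring one extension. A minimal sketch of that idea (the kernel names are hypothetical; on Linux the flag set could be read from /proc/cpuinfo):

```python
def choose_kernel(cpu_flags):
    """Pick the best available code path from a set of CPU feature-flag strings."""
    # Check the newest/widest extensions first, then fall back to scalar code.
    for flag, kernel in (("avx2", "avx2_kernel"),
                         ("xop", "xop_kernel"),    # AMD-only extension
                         ("avx", "avx_kernel"),
                         ("sse2", "sse2_kernel")):
        if flag in cpu_flags:
            return kernel
    return "scalar_kernel"

print(choose_kernel({"sse2", "avx", "xop"}))  # an FX-class chip -> "xop_kernel"
print(choose_kernel({"sse2", "avx"}))         # a Sandy/Ivy Bridge -> "avx_kernel"
```

A benchmark built this way is fairer across vendors than one with a single hand-written path, which is exactly the favoritism the post describes.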
                        Last edited by Sidicas; 10-23-2012, 05:21 PM.

                        Comment


                        • #57
                          Originally posted by bug77 View Post
                          The problem is, you don't need 8 cores unless you're doing a lot of 3D rendering or movie encoding.
                          make -j8

                          .
                          .
                          .

                          Comment


                          • #58
I also created an account just to point something out: a lot of people are complaining about the max power this chip draws under max load, and yet most figures I have seen put its idle power consumption below IB's - and, generally, a CPU idles a hell of a lot more often than it pegs at 100%.
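That idle-versus-load trade-off is a one-line weighted average. A sketch using the load figures quoted earlier in the thread; the idle figures below are invented placeholders, so substitute real measurements:

```python
def avg_power_watts(idle_w, load_w, load_fraction):
    """Average draw for a machine under load load_fraction of the time."""
    return load_w * load_fraction + idle_w * (1 - load_fraction)

# Load figures from the XBitLabs numbers quoted in this thread;
# idle figures here are made up for illustration only.
for name, idle, load in (("3770K (idle assumed)", 45, 132),
                         ("FX-8350 (idle assumed)", 40, 213)):
    print(name, round(avg_power_watts(idle, load, 2 / 24), 1))
```

At 2 hours of load per day, even an 80 W load-power gap shrinks to a few watts of average draw, which is the poster's point.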

                            Comment


                            • #59
                              Actually, I don't think there is a single thing I do on my computer that's single threaded, can't be multi-threaded, and where performance is an issue. Of course such workloads exist, but they don't play a role for me.

I compile a lot, do lots of image processing, run scientific simulations, and encode or decode video occasionally - that's about it. All of that is easily parallelisable, often embarrassingly so. Also, I run a multi-seat setup, so lots of parallel processes.

                              I rarely run one process on one core only and have to wait for it to finish. I know that many games suffer from this, but I don't play games.

Lots of work has gone into making parallel algorithms easier. Most workloads can gain from more cores, and it is trivially easy to implement nowadays (OpenMP). A process running on only one core in this day and age is usually a sign of developer laziness (though there are some algorithms where there's not much you can do). I expect this trend to continue.
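The post names OpenMP; the same embarrassingly-parallel pattern sketched in Python (swapped in here only to keep the example self-contained) is a process-pool map over independent chunks:

```python
from multiprocessing import Pool

def work(n):
    """Stand-in for one independent chunk of image processing or simulation."""
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    jobs = [100_000] * 16
    with Pool() as pool:                # one worker per core by default
        results = pool.map(work, jobs)  # chunks run on all cores at once
    print(len(results))                 # 16 results, one per job
```

Because the chunks share no state, this scales with core count, which is exactly the kind of workload where an 8-core FX chip earns its keep.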

                              Comment


                              • #60
                                Originally posted by Marc Driftmeyer View Post
My daily grind has me wanting more and more cores, and more and more GPGPU cores/streams, while working on multiple projects in parallel.

I relish the notion of running background processes for finite element analysis and x264, writing in LaTeX/XeTeX/ePub on OS X and Linux, and working in the likes of Inkscape, GIMP, HandBrake, Blender, or Maya.

                                All from home.
And there I was, thinking users were just firing up a browser, sometimes Word and Excel, and a game when they had time on their hands. Silly me - apparently the average user is a prodigious engineer/artist these days.

                                Comment
