AMD FX-8350 "Vishera" Linux Benchmarks


  • #51
    I'll echo pingufunkybeat. Beating Intel's flagship at even a few things, at close to half the price, is excellent.

    Does anyone know whether Piledriver Opterons are available yet?

    Originally posted by phred14 View Post
    Because the Core-X line has been hammering AMD to the edge of existence, we're now seeing "revenue maximizing" stunts like disabling on-chip features unless you've paid extra.
    It's not new; it's been going on since at least the Pentium II days. It's just that back then it was done with something slightly more justifiable (L2 cache) than a blown e-fuse, but it was still way overpriced.



    • #52
      Am I the only one who noticed that virtually all of the tests where the Intel chip had any significant lead were tests where the software optimization would necessarily handicap one CPU to the advantage of the other?



      • #53
        Originally posted by bug77 View Post
        The problem is, you don't need 8 cores unless you're doing a lot of 3D rendering or movie encoding. And nobody does this too often at home. I'd settle for an upper-range quad core and even that will be overkill for browsing and many games. Single core performance is still pretty important.
        I had to register just to reply to this. I have to laugh out loud at this assertion.

        As an avid LLVM/Clang user and former NeXT/Apple engineer, it never ceases to amaze me how many Intel fans know nothing of what Intel itself is working on regarding SMP/OpenCL, or of how important multi-core designs are to its future and to the future of everything from operating systems to application design, at all levels of computing.

        My daily grind has me wanting more and more cores, and more and more GPGPU cores/streams, while working on multiple projects in parallel.

        I relish being able to run finite element analysis and x264 encodes as background processes while writing in LaTeX/XeTeX/ePub on OS X and Linux, working in the likes of Inkscape, GIMP and Handbrake, and working in Blender or Maya.

        All from home.

        Why home should make any difference versus an office in a globally interconnected world is beyond me. Many of Apple's engineering teams work remotely, all from HOME. Same with Intel, AMD, and on and on.

        Sure, flying or driving in for the necessary group hugs and the big meetings, instead of remote conferencing, still happens, but unless you absolutely have to be at the centre of the daily grind, much of the work is flexible and done from HOME.

        Sorry, but Vishera is a WIN/WIN on Price/Performance and even Power relative to one's own HOME POWER CONSUMPTION.

        My home theatre or Bosch reheat unit draws more power. Don't whine to me about splitting hairs over kWh consumption when people spend more on smoking or drinking in a week than they would ever spend in a year on the power difference between Intel and AMD, whatever your local rates and VAT.

        More and more apps are moving to a multi-threaded, multi-core-aware world, and benchmarking with single-threaded apps is truly pathetic.

        AMD has a winner.



        • #54
          @Marc, yeah, pretty much my feelings. I want to add that AMD may be under par, but I like their products and buy them regardless of whether a $500 Intel chip can destroy them... However, as you're saying, GPU computing (OpenCL) is coming into its own, and the APU (SoC, if it materializes) line may start bringing parity quicker than we would all think.



          • #55
            Originally posted by droidhacker View Post
            Am I the only one who noticed that virtually all of the tests where the Intel chip had any significant lead were tests where the software optimization would necessarily handicap one CPU to the advantage of the other?
            But he recompiled all of the benchmarks for all processors using -march=native, so it's the best each processor can do, given processor-specific optimisations.
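
            Roughly speaking (this loop and the flags are just my own toy illustration, not one of the actual test profiles), that means every benchmark gets built something like this:

            ```c
            /* saxpy.c -- a trivially vectorisable loop.
             *
             *   generic build: gcc -O3 -std=gnu99 -o saxpy saxpy.c
             *   tuned build:   gcc -O3 -std=gnu99 -march=native -o saxpy saxpy.c
             *
             * With -march=native GCC may use whatever the build machine supports
             * (AVX on both Vishera and Ivy Bridge, plus FMA/XOP on Vishera), so
             * each CPU gets its own best-case code for the same source. */
            #include <stdio.h>

            static void saxpy(float a, const float *x, float *y, int n)
            {
                for (int i = 0; i < n; ++i)
                    y[i] = a * x[i] + y[i];
            }

            int main(void)
            {
                static float x[4096], y[4096];
                for (int i = 0; i < 4096; ++i) { x[i] = (float)i; y[i] = 1.0f; }
                saxpy(2.0f, x, y, 4096);
                printf("y[4095] = %f\n", y[4095]);
                return 0;
            }
            ```

            So neither chip is running generic code; the compiler does the best it knows how for each one.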



            • #56
              Originally posted by pingufunkybeat View Post
              But he recompiled all of the benchmarks for all processors using -march=native, so it's the best each processor can do, given processor-specific optimisations.
              Yes, but the compiler's automated tuning is nothing compared to what somebody can do in assembly language.

              For these kinds of workloads it's common to write the critical section in assembly language targeting a specific CPU capability (e.g. AVX or FMA), because the compiler can't do a good job of compiling it in every situation; there is simply too much analysis to do.

              If these apps do have such assembly language in them, then of course they're going to show a huge boost in performance on one vendor's chips and nothing on the other's, simply because the feature sets differ: Piledriver adds FMA and XOP extensions that current Intel chips don't support, while Intel's next-generation chips will add AVX2 and FMA of their own. If the devs hand-write their critical section for one vendor's extensions and let the compiler fend for itself on the other's (or vice versa), guess who's going to win the benchmarks?

              You'll find platform favoritism (AMD vs. Intel) in software development just as you would find it here on these forums.

              Since the CPU feature sets are different, you should expect performance to be all over the place in comparisons of AMD vs. Intel chips. Choosing the best CPU will be down to what workloads you're doing and how they're optimized.
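
              A rough sketch of the dispatch pattern I mean (the function names and the AVX path below are my own illustration, not taken from any of the benchmarked apps; real hot paths are usually hand-written assembly rather than intrinsics):

              ```c
              /* dispatch.c -- pick a hand-tuned path when the CPU supports it.
               * Build with a reasonably new GCC: gcc -O2 -std=gnu99 -o dispatch dispatch.c */
              #include <immintrin.h>
              #include <stdio.h>

              /* Stand-in for the hand-tuned kernel: only this function is allowed
               * to use AVX, via the target attribute. */
              __attribute__((target("avx")))
              static void add_avx(const float *a, const float *b, float *out, int n)
              {
                  int i = 0;
                  for (; i + 8 <= n; i += 8) {
                      __m256 va = _mm256_loadu_ps(a + i);
                      __m256 vb = _mm256_loadu_ps(b + i);
                      _mm256_storeu_ps(out + i, _mm256_add_ps(va, vb));
                  }
                  for (; i < n; ++i)
                      out[i] = a[i] + b[i];
              }

              /* Plain C fallback the compiler tunes however it sees fit. */
              static void add_generic(const float *a, const float *b, float *out, int n)
              {
                  for (int i = 0; i < n; ++i)
                      out[i] = a[i] + b[i];
              }

              int main(void)
              {
                  float a[16], b[16], out[16];
                  for (int i = 0; i < 16; ++i) { a[i] = (float)i; b[i] = 2.0f * i; }

                  /* Runtime feature check (GCC built-in); older code does raw CPUID. */
                  if (__builtin_cpu_supports("avx"))
                      add_avx(a, b, out, 16);
                  else
                      add_generic(a, b, out, 16);

                  printf("out[15] = %f\n", out[15]);
                  return 0;
              }
              ```

              Whichever vendor's extensions the hand-tuned path targets is the vendor that wins that benchmark; the other side is left with whatever the compiler managed on the fallback.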
              Last edited by Sidicas; 23 October 2012, 05:21 PM.



              • #57
                Originally posted by bug77 View Post
                The problem is, you don't need 8 cores unless you're doing a lot of 3D rendering or movie encoding.
                make -j8

                .
                .
                .



                • #58
                  I also created an account just to point something out: a lot of people are complaining about the power this chip draws under maximum load, yet most figures I have seen put its idle power consumption below Ivy Bridge's, and, generally, a CPU spends a hell of a lot more time idling than pegged at 100%.
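
                  Back-of-the-envelope (every wattage and the duty cycle below are made-up placeholders, not measurements from the review):

                  ```c
                  /* duty.c -- why idle draw matters more than peak draw for a desktop
                   * that spends most of its life idle. All numbers are hypothetical. */
                  #include <stdio.h>

                  int main(void)
                  {
                      double idle_w = 75.0;         /* assumed whole-system idle draw (W)   */
                      double load_w = 220.0;        /* assumed whole-system loaded draw (W) */
                      double load_fraction = 0.10;  /* assumed: pegged 10% of the time      */

                      double avg_w = load_w * load_fraction + idle_w * (1.0 - load_fraction);
                      double kwh_per_day = avg_w * 24.0 / 1000.0;

                      printf("average draw: %.1f W, %.2f kWh/day\n", avg_w, kwh_per_day);
                      return 0;
                  }
                  ```

                  With those made-up numbers, every watt saved at idle is worth nine watts saved at full load.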



                  • #59
                    Actually, I don't think there is a single thing I do on my computer that's single threaded, can't be multi-threaded, and where performance is an issue. Of course such workloads exist, but they don't play a role for me.

                    I compile a lot, do lots of image processing, run scientific simulations, and encode or decode video occasionally; that's about it. All of that is easily parallelisable, often embarrassingly so. I also run a multi-seat set-up, so there are lots of parallel processes anyway.

                    I rarely run one process on one core only and have to wait for it to finish. I know that many games suffer from this, but I don't play games.

                    Lots of work has gone into making parallel algorithms easier. Most workloads can gain from more cores, and exploiting them is trivially easy nowadays (OpenMP). A process running on only one core in this day and age is usually a sign of developer laziness (though there are some algorithms where there's not much you can do). I only expect this trend to continue.
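
                    To show what I mean by trivially easy, here is a minimal OpenMP sketch (an illustrative loop, nothing from my actual workloads):

                    ```c
                    /* omp_sum.c -- build with: gcc -O2 -std=gnu99 -fopenmp -o omp_sum omp_sum.c
                     * One pragma spreads an embarrassingly parallel loop across every core
                     * the FX-8350 (or anything else) has. */
                    #include <omp.h>
                    #include <stdio.h>

                    int main(void)
                    {
                        const int n = 1 << 24;
                        double sum = 0.0;

                        #pragma omp parallel for reduction(+:sum)
                        for (int i = 0; i < n; ++i)
                            sum += 1.0 / (1.0 + (double)i);

                        printf("sum = %f (up to %d threads)\n", sum, omp_get_max_threads());
                        return 0;
                    }
                    ```

                    One pragma, no locks, no thread management, and it scales with however many cores you give it.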



                    • #60
                      Originally posted by Marc Driftmeyer View Post
                      My daily grind has me wanting more and more cores, and more and more GPGPU cores/streams, while working on multiple projects in parallel.

                      I relish being able to run finite element analysis and x264 encodes as background processes while writing in LaTeX/XeTeX/ePub on OS X and Linux, working in the likes of Inkscape, GIMP and Handbrake, and working in Blender or Maya.

                      All from home.
                      And there I was, thinking users were just firing up a browser, Word and Excel, and sometimes a game when they had time on their hands. Silly me; apparently the average user is a prodigious engineer/artist these days.

