AMD Phenom II X6 1100T versus FX-8120 Performance Guide

  • #16
    Originally posted by curaga View Post
    The AMD GCC patches I've seen usually appeared a year before the hw was for sale. Late

    Kaveri GCC, Oct 2012: http://www.phoronix.com/scan.php?pag...tem&px=MTIwNDY
    Kaveri for sale: Feb 2014

    Trinity GCC, Jul 2011: http://gcc.gnu.org/ml/gcc-patches/2011-07/msg00842.html
    Trinity for sale: Oct 2012
    You're right. I was left with that impression, perhaps because of worse support for some (video) hardware, and I assumed the CPU side was the same story. I'm glad they are really better now! My last AMD CPU was a Phenom with a bug; after that I bought one without the bug, but support still lagged behind at that time. I once bought an AMD video card in an Intel laptop, and on Linux I use just the integrated Intel graphics, as it runs much better.

    So once again, you're right, I stand corrected.



    • #17
      On Linux, Bulldozer beats the Phenom II x4 big time in Kdenlive

      In Kdenlive, the processor load comes from libx264 encoding and decoding when the source and destination files are H.264 video. On my FX-8120 overclocked to 4.4 GHz I can do two simultaneous renders from the same project, one to 1080p for archiving and one to 720p for publication, in just over 2x realtime. A single render out to 1080p takes about 1.4x realtime and does not fully load the CPU. On the Phenom II x4 overclocked to just under 3.8 GHz, it takes barely under 2x realtime to render out a single 1080p video from a Kdenlive project, and that almost fully loads the CPU. I forget exactly how long a double render took on the Phenom, but it was close to 4x realtime, which makes sense as a single render that fully loads the CPU gives about 2x realtime.
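      The realtime multiples above can be turned into a throughput comparison with a little arithmetic. This sketch just re-expresses the figures quoted in this post (nothing here is a new measurement):

```python
# Sketch: convert "Nx realtime" render times into throughput so the
# FX-8120 vs Phenom II x4 comparison is explicit. All numbers are the
# rough figures quoted in the post above.

def throughput(jobs, realtime_multiple):
    """Videos finished per unit of video duration.

    realtime_multiple: wall-clock time divided by clip duration,
    e.g. 1.4x realtime means a 10-minute clip takes 14 minutes.
    """
    return jobs / realtime_multiple

fx_single = throughput(1, 1.4)   # one 1080p render, CPU not fully loaded
fx_double = throughput(2, 2.0)   # 1080p + 720p rendered simultaneously
ph_single = throughput(1, 2.0)   # Phenom II x4, CPU nearly fully loaded
ph_double = throughput(2, 4.0)   # "close to 4x realtime" for two renders

# Fully loaded, the FX pushes about twice the video per hour:
print(f"FX-8120 vs Phenom II x4, both loaded: {fx_double / ph_double:.1f}x")
# → 2.0x
```

The double-render numbers are the fair comparison, since only they fully load both CPUs.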

      A lot of this comes from clock speed: I tested the FX-8120 in "one core per module" 4-thread mode and also with two modules disabled. From the difference in results, it appears that for Kdenlive rendering to H.264, the two-cores-per-module "hyperthreading" is about 35% faster than those same modules at one core per module. I also got a 15%+ higher clock speed from overclocking on top of that.

      It's hard to fully load Bulldozer, which is why it can't do a single rendering job in realtime. Comparing full load to full load with all threads in use, it seems almost twice as fast as my Phenom II x4. Theoretically, if the 8 cores were as good as 8 true discrete cores, then combined with the higher clock speed it would be 2.3x as fast; either figure beats the just-over-1.5x of a Phenom II x4 that I would expect from a Phenom II x6. This is very different from the Windows results posted.

      Keep in mind that video rendering (my most demanding use) is one of the applications Bulldozer and Piledriver are really good at. They are claimed to suck for gaming, but the newest Intel chip I have for comparison is a Pentium 4, so I cannot evaluate that.



      • #18
        Anyway, FX CPUs are not as good as the Phenom II was. Just imagine the performance a 32nm 8-core Phenom with FX's memory controller could have had.



        • #19
          Originally posted by leonmaxx View Post
          Anyway, FX CPUs are not as good as the Phenom II was. Just imagine the performance a 32nm 8-core Phenom with FX's memory controller could have had.
          Care to explain why my old FX-8120 is 50% faster than my Phenom II X6 1100T machines at x264 encoding (and a lot of other heavy computational tasks) while having the same number of transistors per core? The main flaws with FX were having too little cache and a weak decode unit, and both of those issues have been addressed in Steamroller. Per core, FX-8350 and Phenom II spend the same number of transistors, yet the FX architecture can scale to much higher frequencies and supports more instruction set extensions. Considering an FX-8350 is a good 15-30% faster per MHz than my FX-8120, and Steamroller is a further unknown margin faster than that, it sounds like FX isn't so bad after all. AMD simply needs to release a good 16-core FX CPU.



          • #20
            Originally posted by mmstick View Post
            Care to explain why my old FX-8120 is 50% faster than my Phenom II X6 1100T machines at x264 encoding.
            Better memory controller?

            Originally posted by mmstick View Post
            Considering an FX-8350 is a good 15-30% faster per MHz than my FX-8120, and Steamroller is a further unknown margin faster than that, it sounds like FX isn't so bad after all.
            Compare single-core performance of FX and Phenom II at the same clock speed: FX still loses to Phenom II in everything except memory-bound tests.

            Check out the review on THG: http://www.tomshardware.com/reviews/...ew,3328-3.html
            The old Phenom II beats the FX-8350 and i7-3770K in the 3D Studio Max rendering test.

            Intel made a mistake with the NetBurst architecture in the past, but dropped it in favor of the Core 2 CPUs. Maybe AMD will do the same; time will tell.



            • #21
              Originally posted by leonmaxx View Post
              Compare single-core performance of FX and Phenom II at the same clock speed: FX still loses to Phenom II in everything except memory-bound tests.
              You must be new to CPU architectures. Rule #1: clock frequency means nothing when comparing different architectures. Some architectures are incapable of high frequencies but execute more work per cycle; others are capable of incredibly high frequencies at the cost of executing slightly less per cycle. The goal of CPU architecture design is not something as single-minded as chasing the highest possible IPC, as that can ruin the potential of the design.

              The goal is not to achieve the highest IPC, but to achieve the best performance -- a balance between frequency and IPC -- in the same amount of real clock time. If you can execute 20 instructions in 5 cycles within 1 second, but at the cost of doing so you could not run your processor at higher frequencies, why would that be better than a design executing 30 instructions in 10 cycles within 1 second?
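              That tradeoff can be worked through numerically. The two designs below use the hypothetical instruction/cycle counts from the paragraph above; the point is simply that what matters is throughput, which is IPC times frequency:

```python
# Toy comparison of two hypothetical CPU designs: what matters is
# instructions retired per second, not IPC or frequency alone.

def instructions_per_second(instructions, cycles, seconds):
    ipc = instructions / cycles   # instructions per clock cycle
    freq = cycles / seconds       # clock cycles per second
    return ipc * freq             # = instructions per second

# Design A: 20 instructions in 5 cycles, those 5 cycles taking 1 second.
a = instructions_per_second(20, 5, 1.0)   # IPC 4.0 at 5 Hz -> 20 ins/s

# Design B: 30 instructions in 10 cycles in the same 1 second.
b = instructions_per_second(30, 10, 1.0)  # IPC 3.0 at 10 Hz -> 30 ins/s

assert b > a  # lower IPC, higher frequency, yet better actual throughput
```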

              In other words, you can't make the amateur mistake of comparing two entirely different architectures with one measurement alone. There's more to a CPU than IPC...

              The rest of your junk is nonsense. This is Linux, and this is Phoronix -- this is not the place for your Windows-based website which is more than likely already running an Intel-biased Microsoft OS with Intel-biased benchmarking software.



              • #22
                Originally posted by mmstick View Post
                The goal is not to achieve the highest IPC, but to achieve the best performance -- a balance between frequency and IPC -- in the same amount of real clock time. If you can execute 20 instructions in 5 cycles within 1 second, but at the cost of doing so you could not run your processor at higher frequencies, why would that be better than a design executing 30 instructions in 10 cycles within 1 second?
                Power ∝ freq * voltage^2, remember?

                Frequency (and so IPC) clearly matters. Especially so as higher frequencies also need higher voltages to achieve. You can only discount freq if your power happens to be free, but that's a rare situation to be in.

                It's also a historical fact that the losers are generally the high frequency/low ipc ones, while the winners are the opposite. Power tends to matter.
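                As a back-of-the-envelope sketch of that point, here is the standard CMOS dynamic-power proportionality with invented numbers (the 1.2 V / 1.45 V and GHz figures below are made-up round values for illustration, not measurements from any of the CPUs discussed):

```python
# Dynamic CMOS power scales roughly as P ~ C * f * V^2. Because higher
# frequencies typically require higher voltages, power grows much faster
# than linearly with clock speed.

def relative_power(freq_ghz, volts, base_freq=3.0, base_volts=1.2):
    """Power relative to the baseline operating point."""
    return (freq_ghz / base_freq) * (volts / base_volts) ** 2

p_stock = relative_power(3.0, 1.2)   # baseline: 1.0 by definition
p_oc    = relative_power(4.2, 1.45)  # 40% more clock needing ~21% more voltage

print(f"{p_oc / p_stock:.2f}x the power for 1.40x the frequency")
# → 2.04x the power for 1.40x the frequency
```

This is why a 40% frequency advantage can cost roughly double the power budget, which is the chase-frequency downside being argued here.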



                • #23
                  Originally posted by leonmaxx View Post
                  AMD made a bad decision switching to the Bulldozer architecture, and now they are deep in the ass. Bulldozer/Vishera/Steamroller are all losing to Core i* in multi-core performance. It also seems Vishera (FX-8350) is their last 8-core CPU, as they stated there will be no Steamroller for the AM3+ socket; they lost this fight. And soon they'll lose the discrete video card market to Nvidia.

                  R.I.P. AMD.
                  Lmao, if I had a dollar for every time I've heard this... People said the same thing in 1993 when Intel released the new Pentium processor. It took AMD three full years to respond, with the AMD K5 in 1996.

                  If you recall, AMD had performance wins over Intel with the K6-2 processor, the Athlon, and the early Opterons. Remember, AMD invented x86-64, which effectively killed Intel's IA-64, aka Itanium.

                  Intel had performance wins over AMD with the Pentium, C2D, i7, and later Xeon models.

                  Also, absolute performance doesn't mean as much as the benchmark sites would have you believe. Consumers and businesses alike want value. Compare the price/performance of comparable AMD and Intel chips, and the AMD chip always costs less. In the case of Opteron vs. Xeon, AMD comes in way, way cheaper for the same level of performance.

                  All this tells us is that there's healthy competition between the two, each one repeatedly leap-frogging the other over the years. Intel happens to have the absolute performance crown at this point in time. Don't forget that the consumers (that's you and me) are the winners in all of this.
                  Last edited by torsionbar28; 03-10-2014, 11:52 AM.



                  • #24
                    Originally posted by curaga View Post
                    Power ∝ freq * voltage^2, remember?

                    Frequency (and so IPC) clearly matters. Especially so as higher frequencies also need higher voltages to achieve. You can only discount freq if your power happens to be free, but that's a rare situation to be in.

                    It's also a historical fact that the losers are generally the high frequency/low ipc ones, while the winners are the opposite. Power tends to matter.
                    In other words, you've never heard of IBM's processors; FX came out of joint research between IBM and AMD, who found higher frequencies to be a more worthwhile approach. It's blatantly obvious that running at a higher frequency requires a higher voltage, but how much higher is entirely dependent on the architecture of the processor. You can't compare two entirely different architectures on frequency alone.



                    • #25
                      Lmao, if I had a dollar for every time I've heard this... People said the same thing in 1993 when Intel released the new Pentium processor. It took AMD three full years to respond, with the AMD K5 in 1996.

                      If you recall, AMD had performance wins over Intel with the K6-2 processor, the Athlon, and the early Opterons. Remember, AMD invented x86-64, which effectively killed Intel's IA-64, aka Itanium.

                      Intel had performance wins over AMD with the Pentium, C2D, i7, and later Xeon models.
                      Don't get me wrong, I'm not an AMD hater. I still remember the K6-III (Sharptooth), Athlon XP (Barton), and Athlon 64 (Clawhammer and Venice); all of them were excellent CPUs from AMD.
                      For the last 10 years I have used mostly AMD hardware, and I now have 2 PCs with AMD CPUs, one with an FX-8350 and the other with a Phenom II X6, both with AMD video cards (and I'm using Ubuntu and Linux Mint, not M$ Windows like someone said). Both PCs are used for software development, and their value (i.e. price/performance) is highly acceptable.

                      absolute performance doesn't mean as much as the benchmark sites would have you believe.
                      3D Studio Max is not a synthetic benchmark; it is a real-world application. Also, I can confirm that Blender renders some scenes faster on Phenom II than on FX (depending on scene complexity).

                      All this tells us is that there's healthy competition between the two, each one repeatedly leap-frogging the other over the years. Intel happens to have the absolute performance crown at this point in time. Don't forget that the consumers (that's you and me) are the winners in all of this.
                      But AMD has decided not to release any new CPUs for the AM3+ socket until 2015-2016 (see the AMD product roadmap), which could mean that Piledriver is the last 8-core workstation CPU line from AMD. My colleagues and I, who need fast workstation CPUs, will soon have no choice but to buy Intel CPUs.

                      Also, their graphics card drivers for Linux are quite buggy. It's very annoying when, in the middle of work, my cursor pointer gets corrupted and I have to save all my work and reboot the PC (relogging in doesn't help); this bug has annoyed me for about 4 months, through the Catalyst 13.9 betas, 14.1 beta, and 14.2 beta. If they don't fix the driver bugs, I'll have no choice but to switch to Nvidia graphics cards, as most of my colleagues already have.

                      Sorry for my bad english.
                      Last edited by leonmaxx; 03-10-2014, 01:20 PM.

