AMD Releases FX-Series Bulldozer Desktop CPUs

  • deanjo
    replied
    Originally posted by Kano View Post
    @deanjo

    that means you will switch to Ivy Bridge next year or what?
    Maybe, maybe not. Still undecided; it depends on whether Intel stops putting artificial restrictions on their products for items like virtualization. For my workloads core count does seem to be a greater factor than anything else, and I may just go with a workstation setup and skip the consumer level altogether. I may even just say screw it and pick up a next-gen Mac Pro, since it is workstation class and would allow me to continue Win/Lin/OS X and iOS development with minimal hassle. Right now, going with another AMD system would be really hard to justify without a much improved chipset.

  • deanjo
    replied
    Originally posted by droidhacker View Post
    i3/i5/i7 are all the same architecture with different performance grades.
    I realize that; that is why I wrote i3/i5/i7 instead of i3, i5, i7. I should have just said i-series to avoid confusion.

    Intel has also been known in the past for building-to-benchmarks. AMD has a history of ignoring the benchmarks and just building the best chip they can.
    That is fine and all, but when those benchmarks are not synthetic and instead reflect real-world usage, it does matter, and BD falls short in all areas.

    K7 is a different kind of case, from a different era. It wasn't just an architectural change, it was also just plain MUCH MUCH FASTER. You'll recall that it had a DDR FSB. The benchmarks at that time certainly would have been made for K6/P2, but even with that against it, it was ***SO MUCH FASTER*** that it didn't matter.
    Right, and that is what typically goes along with a new architecture. Truthfully, this is the first new architecture I can recall, going all the way back to the 8088, that did not outperform its predecessor even without code optimization.

  • Fenrin
    replied
    Originally posted by blackshard View Post
    But this doesn't really explain why BD single-core performance is much worse than Stars single-core performance.
    If I remember correctly, AMD said about a year ago that one Bulldozer core would offer, at the same frequency, about 90% of the throughput of a Phenom (K10) core. This CPU is clearly not for people who want single-core performance.
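
    Taking that 90% figure at face value, the bet is on aggregate throughput rather than per-core speed. A rough back-of-the-envelope comparison, assuming equal clocks, an 8-core FX-8150 against a 6-core Phenom II X6, and ideal scaling (which real workloads won't deliver):

    ```latex
    % Rough aggregate-throughput comparison at equal clocks, using the
    % "~90% of a K10 core per Bulldozer core" figure quoted above.
    \[
      \text{FX-8150: } 8 \times 0.9 = 7.2
      \qquad
      \text{Phenom II X6: } 6 \times 1.0 = 6.0
      \qquad \text{(K10-core equivalents)}
    \]
    ```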

  • Alejandro Nova
    replied
    Originally posted by mcirsta View Post
    Taking a look at the numbers, I wish they had just launched some AM3+ Phenom II X4s and X6s ... manufactured at 32 nm, these could actually have been better than Bulldozer. It's sad because the Phenom II is very old stuff, but that's the way it is.
    AMD screwed up big time and they should do their best to fix it. Intel did the same with the P4 way back, but they could afford to; I'm not sure AMD can. And when they did fix it they came up with something that's very good, the Core CPUs.
    AFAIK the Pentium 4 was never fixed, but entirely scrapped. The Core CPUs are based on the Pentium III, not the Pentium 4. So this is worrisome for AMD.

  • mirv
    replied
    Originally posted by locovaca View Post
    We've been hearing this since the Pentium 4 got Hyperthreading in 2002. That's a decade of "everything will be written with parallelism in mind". Most applications do not need to be massively parallel, nor deal with the complexities that come with it. Fast single threaded performance will remain one of the most important aspects for years to come on the desktop.
    It's one of those things that's gradual, not instant. Also, for anything that is single-threaded, single-process, and serially dependent, anything on the market right now is more than sufficient. Having a super-powerful beast that does text editing is kind of a waste of time; having a super-powerful beast running various virtual machines, web servers, etc., is another thing entirely.
    There's a good deal of middle ground too: games, video encode/decode, web browsers, etc. are still increasing their multithreaded/multiprocess capabilities.
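
    As a rough illustration of what "writing with parallelism in mind" actually involves, here is a minimal pthreads sketch (the worker/chunk names are made up for this example, not taken from any benchmark discussed here) that splits a trivially parallel sum across a fixed number of threads. The point is that the work has to be explicitly partitioned and joined, which is exactly the effort many desktop applications still skip:

    ```c
    /* Minimal sketch: an embarrassingly parallel sum split across threads.
     * Illustrative names and sizes; build with: gcc -O2 -pthread sum.c */
    #include <pthread.h>
    #include <stdio.h>

    #define NTHREADS 4
    #define N (1 << 22)

    static double data[N];

    struct chunk { size_t begin, end; double partial; };

    static void *worker(void *arg)
    {
        struct chunk *c = arg;
        double sum = 0.0;
        for (size_t i = c->begin; i < c->end; i++)
            sum += data[i];
        c->partial = sum;          /* each thread writes only its own slot */
        return NULL;
    }

    int main(void)
    {
        pthread_t tid[NTHREADS];
        struct chunk chunks[NTHREADS];

        for (size_t i = 0; i < N; i++)
            data[i] = 1.0;

        /* Partition the index range and hand one slice to each thread. */
        for (int t = 0; t < NTHREADS; t++) {
            chunks[t].begin = (size_t)t * N / NTHREADS;
            chunks[t].end   = (size_t)(t + 1) * N / NTHREADS;
            pthread_create(&tid[t], NULL, worker, &chunks[t]);
        }

        /* Join the threads and combine their partial results. */
        double total = 0.0;
        for (int t = 0; t < NTHREADS; t++) {
            pthread_join(tid[t], NULL);
            total += chunks[t].partial;
        }

        printf("sum = %.0f\n", total);
        return 0;
    }
    ```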

  • blackshard
    replied
    Originally posted by skies View Post
    Take these tests by X-bit Labs, AnandTech, etc. with a big grain of salt.

    Most of the tools used for testing (SiSoftware Sandra, various games, etc.) are compiled using Intel's C/C++ compiler, which generates fast, optimized codepaths for Intel's own processors but very bad, inefficient codepaths for AMD processors. Very unfair to AMD and Bulldozer.

    Of course these tests will show Intel as a big leader over AMD, as the Intel code runs optimized and the AMD code does not.

    Do the tests using AMD's own Open64 C/C++ compiler and you will get a different result.
    But this doesn't really explain why BD single-core performance is much worse than Stars single-core performance.
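
    For reference, the kind of vendor-string check being alleged is easy to sketch. This is a minimal, illustrative C example using GCC's <cpuid.h> (it is not code from any actual compiler runtime); it shows how a dispatcher could branch on "GenuineIntel" rather than on the feature flags a CPU actually reports:

    ```c
    /* Minimal sketch of a CPUID vendor check, the kind of test a runtime
     * dispatcher can use to pick a code path by vendor string instead of by
     * the feature bits the CPU reports. Build with gcc on x86/x86-64. */
    #include <cpuid.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        unsigned int eax, ebx, ecx, edx;
        char vendor[13] = {0};

        if (!__get_cpuid(0, &eax, &ebx, &ecx, &edx))
            return 1;

        /* CPUID leaf 0 returns the 12-byte vendor string in EBX, EDX, ECX order. */
        memcpy(vendor + 0, &ebx, 4);
        memcpy(vendor + 4, &edx, 4);
        memcpy(vendor + 8, &ecx, 4);

        if (strcmp(vendor, "GenuineIntel") == 0)
            puts("vendor check: would take the vendor-specific fast path");
        else
            puts("vendor check: would fall back to a generic path");

        return 0;
    }
    ```

    A feature-based dispatcher would instead test the SSE2/SSE3/AVX bits from CPUID leaf 1, which any vendor's CPU can report.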

  • locovaca
    replied
    Originally posted by mirv View Post
    Bulldozer was designed with heavily threaded/multi-process environments (aka server systems) in mind. Many desktop applications are headed that way too, but they're not there yet.
    We've been hearing this since the Pentium 4 got Hyperthreading in 2002. That's a decade of "everything will be written with parallelism in mind". Most applications do not need to be massively parallel, nor deal with the complexities that come with it. Fast single threaded performance will remain one of the most important aspects for years to come on the desktop.

  • psycho_driver
    replied
    Most of you are missing the point of this processor. Look at the die size compared to performance (just ignore per core/process performance for the moment). Bulldozer actually beats current Sandy Bridges by a good margin here, and this is a very important metric in the server market. The only downside is that the Interlagos chips must be pulling more power than their Xeon competitors.

    The Bulldozer looks pretty lackluster as a desktop part, but it will probably get AMD some big wins in the server market which, along with the no-brainer Llano wins for the mobile segment, will keep them humming along in their usual role as Intel's red-headed stepchild.

    Overall, PC enthusiasts represent a pretty small market. If they didn't, AMD would have gained a lot more market share than they did in the Athlon64/P4 days. It sucks for us because it will keep retail prices for the good stuff from Intel higher, and their low-end chips artificially crippled.
    Last edited by psycho_driver; 10-12-2011, 11:36 AM.

  • a7v-user
    replied
    Originally posted by droidhacker View Post
    You don't necessarily need to optimize *all* your binaries. Probably the kernel by itself will make a big difference.
    That's not going to save BD unless the kernel does something very stupid when it encounters an unknown AMD CPU.
    BD is slower than an equally clocked Phenom II in single-threaded applications as well as most multithreaded applications.
    That is by design, as BD was designed to have lower IPC than previous generations but a higher clock rate to compensate.
    Unfortunately, the reality is that the transistor tech and materials used today don't allow BD to clock that much higher than a Phenom II.
    [Edit:] Sweclockers.com reached 4.7-4.8 GHz with watercooling. (Google translation to English / Swedish original text)

    I bet a 32nm Phenom II X6 would reach just as high clock rates but would outperform BD in all but a few specialized applications.
    If you're a full-time user of those applications then BD will be better than a Phenom II, but shouldn't you be looking at the server segment of the CPU market instead?
    Compile-time or runtime optimization of the workload sounds good on paper (we heard the same when Intel talked about the P4), but unless you have a very static workload it takes a lot of work to optimize everything.

    Sorry, I didn't mean to sound so harsh and negative, but I didn't expect my Phenom II X6 to be just as good as or better than BD.
    Last edited by a7v-user; 10-12-2011, 11:43 AM. Reason: Added overclocking result

  • Raven3x7
    replied
    Originally posted by droidhacker View Post
    i3/i5/i7 are all the same architecture with different performance grades.
    Intel has also been known in the past for building-to-benchmarks. AMD has a history of ignoring the benchmarks and just building the best chip they can.

    K7 is a different kind of case, from a different era. It wasn't just an architectural change, it was also just plain MUCH MUCH FASTER. You'll recall that it had a DDR FSB. The benchmarks at that time certainly would have been made for K6/P2, but even with that against it, it was ***SO MUCH FASTER*** that it didn't matter.
    With the craptastic performance the cache seems to have, I highly doubt optimizations are going to get you that much. For BD to even be a decent arch it needs to improve by at least 20%, which is not going to happen. I mean, an 8-core 32nm "Llano-core" (is Stars the codename?) CPU would have performed significantly better while probably also consuming less power. The only explanation I can think of is that there is some bottleneck that didn't show up in the simulations and caused AMD to miss their performance target by a large margin; otherwise I really can't understand why they didn't simply cancel the project. BTW, a while back the guy who came up with the MCM idea commented that the design didn't really come out that great.
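
    As an aside on the cache point: load-to-use latency (as opposed to bandwidth) is usually measured with a pointer-chasing loop in which every load depends on the previous one. Below is a minimal, illustrative sketch; the buffer size, step count, and timing are deliberately simple and would need far more careful methodology to say anything definitive about BD's caches:

    ```c
    /* Minimal pointer-chasing sketch: each load depends on the previous one,
     * so the average time per step approximates load-to-use latency for the
     * level of the hierarchy the buffer lands in. Illustrative only.
     * Build with: gcc -O2 chase.c (add -lrt on older glibc). */
    #define _POSIX_C_SOURCE 199309L
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define ELEMS (1 << 22)          /* 4M entries (~32 MB): larger than L3 */
    #define STEPS (1 << 24)

    int main(void)
    {
        size_t *next = malloc(ELEMS * sizeof *next);
        if (!next)
            return 1;

        /* Sattolo's algorithm: a random permutation with a single cycle,
         * so the chase visits every entry and prefetchers get little help. */
        for (size_t i = 0; i < ELEMS; i++)
            next[i] = i;
        srand(1);
        for (size_t i = ELEMS - 1; i > 0; i--) {
            size_t j = (size_t)rand() % i;
            size_t tmp = next[i]; next[i] = next[j]; next[j] = tmp;
        }

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);

        size_t idx = 0;
        for (size_t s = 0; s < STEPS; s++)
            idx = next[idx];         /* serial dependency chain */

        clock_gettime(CLOCK_MONOTONIC, &t1);

        double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
        printf("%.1f ns per dependent load (idx=%zu)\n", ns / STEPS, idx);
        free(next);
        return 0;
    }
    ```

    Shrinking ELEMS so the buffer fits in L2 or L1 shows the latency of each cache level in turn.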
