
AMD Releases FX-Series Bulldozer Desktop CPUs


  • #71
    Originally posted by duby229 View Post
    Let's just say I disagree heartily and leave it at that... The past may not be a mirror for the future, but it certainly has an uncanny resemblance.

    And you're exactly right that AMD's customers are not you and me; they are the OEMs and Channel Partners. But they are going to buy what they feel is best for THEIR customers... Their customers simply buy what they have available. And that is my point about mindshare...

    Oh, and the reason the 6150 is still being sold today is nothing more than that there is still stock... That's nVidia's fault and no one else's.

    EDIT: If AMD wants to improve product lineups, then they need to get their marketing guys off the engineering floor and onto the marketing floor. And instead of targeting consumers they need to target their customers. If they want to improve consumer interest, then they need to build a word-of-mouth campaign... As the situation stands right now, when some kid's mom says, "Hey son, what do you think about this computer?", he's gonna look at it and say, "Oh, that has a Bulldozer processor, don't even think about it."

    And that is what mindshare does. It's far more pervasive and powerful than people think. A word-of-mouth campaign is powerful, but it needs a worthy product to support it.

    I'll simply point you to the marketshare trends in the months following new product releases. If you look at the line graphs, you'll see a clear relationship between a product's perceived performance, good or poor, and its subsequent marketshare. That is the power of mindshare.

    You are giving far too much credit to massive multinational megacorp OEMs that do everything in their power to cut corners at all costs. Just look at the whole "Vista Ready" debacle: incredibly old hardware still being sold as new with at most 512MB of RAM, causing heaps of hate mistakenly placed on MS. Not that Vista wasn't a turd sandwich, but a lot of its problems were sorted at 2GB of RAM and a dual-core 64-bit CPU.

    As for the 6150, I don't buy that. It's a woefully outdated chipset that, if I'm not mistaken, has, along with its GeForce 7 derived 7050 sibling, been put on a life-support-only driver set, while the 8200/ION and 9200/ION2 chipsets are compatible with the same hardware and offer much better performance across the board, with updated video codec acceleration, Flash scaling acceleration, and a boost to 3D that makes mass-market games like The Sims playable. So no, I don't believe that marketing plays any role at all, especially with OEMs, as they only want to cut another penny out of the cost, be it in crappy mobos, shitty PSUs, janky RAM, or the lives of the people they hire in 3rd-world countries whose working conditions are so bad they feel death is their only choice.
    Last edited by Kivada; 10-13-2011, 11:07 PM.

    Comment


    • #72
      Originally posted by Kivada View Post
      You are giving far too much credit to massive multinational megacorp OEMs that do everything in their power to cut corners at all costs. Just look at the whole "Vista Ready" debacle: incredibly old hardware still being sold as new with at most 512MB of RAM, causing heaps of hate mistakenly placed on MS. Not that Vista wasn't a turd sandwich, but a lot of its problems were sorted at 2GB of RAM and a dual-core 64-bit CPU.
      No, Vista was the first OS with a GPU scheduler. It sucked so badly because it's a slow, latency-ridden turd sandwich nightmare, not just a regular turd sandwich. The problems weren't sorted by 2GB of RAM and dual-core 64-bit CPUs; the problems were sorted by throwing 8 times more processing power at the turd sandwich scheduler. Windows 7 is supposed to be a whole lot better. It at least manages sub-100-microsecond latencies with modern overclocked CPUs and GPUs, something Vista probably can't manage to this date.

      The problem with both Vista and Windows 7 is that when there isn't a driver for the turd sandwich hardware, it runs home to momma and runs whatever function it can't do in silicon off a software rasterizer. So Windows Vista pretty much ate your CPU for the first year it was out, and Windows 7 for the first 9 months. Now it just eats your CPU whenever new hardware comes out or some new game engine starts using junk that the hardware driver people haven't gotten around to fixing just right yet.

      Which is why I still run XP: 5-to-7-microsecond latency beats the crap out of 80-to-120-microsecond latency, and I don't give one rat's ass about DirectX 10 or 11, because DirectX 10 is pretty much nothing, 10.1 does low-ball tessellation, and 11 won't work worth a crap till 14 or maybe 20nm GPUs. And we were supposed to get 28nm GPUs in 2010, I mean 2011, I mean 2012.

      Comment


      • #73
        Originally posted by alexThunder View Post
        I'm planning to get one of those. Can anyone recommend a good AM3+ board (i.e. with fully supported onboard audio) for Linux? Right now I have an Asus Striker Extreme (plz don't ask me why), which actually works fine, but I always have some sound issues :/
        Unfortunately, all onboard audio is crappy on every newer MB for Linux, and even some in Win*.
        You're much better off getting a cheap (older) used PCI sound card if you really want quality (some of these actually have really good op-amps, and much better/more stable drivers).
        I have generally had better quality/results using those than the absolute shite onboard audio that ALL MB manufacturers stick on their new MBs.
        Last edited by scjet; 10-14-2011, 05:19 AM.

        Comment


        • #74
          Now I'm just quoting almost every review out there, like Anandtech, ...
          But they all make it sound like Bulldozer is a confirmed flop.

          I don't mind if AMD is behind Intel a bit, because AMD's price (bang for buck) ALWAYS made up for it. But this time, apparently my Phenom II X6-1090T is both better and cheaper than the newest Bulldozer. wtf!?
          Well, AMD had better be making some money with their GPUs/Llanos/and hopefully Trinitys, 'cause this desktop-CPU decision for me is now over.

          Unless, of course, Michael can get a good sample Bulldozer from AMD.
          Last edited by scjet; 10-14-2011, 05:15 AM.

          Comment


          • #75
            Originally posted by scjet View Post
            Now I'm just quoting almost every review out there, like Anandtech, ...
            But they all make it sound like Bulldozer is a confirmed flop.

            I don't mind if AMD is behind Intel a bit, because AMD's price (bang for buck) ALWAYS made up for it. But this time, apparently my Phenom II X6-1090T is both better and cheaper than the newest Bulldozer. wtf!?
            Well, AMD had better be making some money with their GPUs/Llanos/and hopefully Trinitys, 'cause this desktop-CPU decision for me is now over.

            Unless, of course, Michael can get a good sample Bulldozer from AMD.
            We'll see about Bulldozer on servers before calling it a flop - but it's definitely looking bad on the desktop. There's some hope of performance improvements (it's a new architecture, so drivers/schedulers/whatever still need tweaking for it), but otherwise, for a desktop, wait for the next set of revisions (aka Trinity).

            Comment


            • #76
              Originally posted by Dresdenboy View Post
              There is one French review of the FX where they used the Phoronix Test Suite (PTS in their table):
              http://www.pcinpact.com/articles/amd-fx-8150/420-5.htm
              Uh, I see. This has been linked one page before.

              Anyway - recompiled code seems to show a significant advantage for BD. Same in HT4U's c-ray test (German).

              BD has strong rules for instruction grouping, which might lower decode bandwidth to 1-2 inst/cycle, not only limiting the performance of one thread but also indirectly reducing the decode throughput of the second thread.

              So if AMD includes their "Branch Redirect Recovery Cache" (a µop buffer) in Trinity, this might even help legacy code once it gets past the decode-stage bottleneck.
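The grouping-rule effect can be put in back-of-the-envelope form. The 4-wide decoder, round-robin sharing, and the per-turn cap below are illustrative assumptions for the sketch, not measured Bulldozer figures:

```python
# Toy model of one module's decoder shared round-robin by two threads.

def per_thread_decode(decoder_width, active_threads, grouping_limit=None):
    """Average instructions decoded per cycle for each thread.

    grouping_limit models instruction-grouping rules that cap how many
    instructions a single thread can decode in its turn.
    """
    per_turn = decoder_width if grouping_limit is None else min(decoder_width, grouping_limit)
    return per_turn / active_threads

best = per_thread_decode(4, 2)                       # ideal sharing: 2 inst/cycle/thread
limited = per_thread_decode(4, 2, grouping_limit=2)  # grouping rules: 1 inst/cycle/thread
print(best, limited)
```

With these toy numbers, a grouping cap on one thread halves the decode throughput available to both threads, which is the "indirectly reducing the second thread" point.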

              Comment


              • #77
                Originally posted by Dresdenboy View Post
                Uh, I see. This has been linked one page before.

                Anyway - recompiled code seems to show a significant advantage for BD. Same in HT4U's c-ray test (German).

                BD has strong rules for instruction grouping, which might lower decode bandwidth to 1-2 inst/cycle, not only limiting the performance of one thread but also indirectly reducing the decode throughput of the second thread.

                So if AMD includes their "Branch Redirect Recovery Cache" (a µop buffer) in Trinity, this might even help legacy code once it gets past the decode-stage bottleneck.
                If 4-core CPUs had come out in 2003, they would have flopped just as badly. The problem is that 8 cores is not twice as hard to get out of lockdown; it's orders of magnitude harder. It'll eventually get better, and Bulldozer could probably run 70 percent faster than 4 cores, but most likely only 50 percent faster. Everyone on this thread will be dead before 16-core desktops actually work well in general computing environments.
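The 50-to-70-percent guess above can be sanity-checked against Amdahl's law; the parallel fractions below are illustrative assumptions, not measurements:

```python
# Amdahl's law: why 8 cores don't run twice as fast as 4.

def speedup(parallel_fraction, cores):
    """Speedup over one core when only parallel_fraction of the work scales."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

for p in (0.90, 0.95):
    gain = speedup(p, 8) / speedup(p, 4) - 1.0
    print(f"{p:.0%} parallel work: 8 cores beat 4 by {gain:.0%}")
# With these assumed fractions the gain comes out to roughly 53% and 70%,
# bracketing the 50-70% range guessed above.
```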
                Look at GPUs. As the core counts go up on them, they keep simultaneously forcing higher and higher resolution screens, because they can't improve performance much on normal resolution screens, but they can give the GPUs more to do on each frame.

                Computer manufacturers are stuck between 2 strategies: doing what is feasible and workable and hoping it's accepted by buyers, or working towards nearly unachievable goals and simply lying about progress as they go, with customers buying and giving a "son, I am disappoint" reaction at every stage of it. So it's either 2560x1600 3D screens that give you headaches, or nihilism and converting your early-adopter groups into wait-and-see groups over time.

                The problem is they went many-core to get around the clock speed problem. Now they need clock speed to get around the many-core problem, because only speed will bring the buses out of lockout faster. So you need what you can't get to fix what you got but can't get.

                http://www.youtube.com/watch?v=FTeWGD4Q9T4

                Comment


                • #78
                  Originally posted by Kivada View Post
                  Do you even know how CPU cache works? Do you realize that Intel has put as much as 12MB of cache on their Core series CPUs? The chips are fully capable of that 4.5GHz, just the same as Intel's: via speed step detecting that you are running a heavy single-threaded app and nothing else, it will clock one module up to that range. Though you can just put on high-end cooling and push as much as 4.9GHz on really good air or on a self-contained liquid kit in a box. You can likely do much better with a DIY liquid kit that is overbuilt to the point you never go more than 5C over ambient temperatures.

                  Intel mobos are still generally more expensive than AMD boards; add to that the fact that if your AM3 board has a BIOS update to support these, it'll be a drop-in upgrade from anything in the Phenom II line.

                  Go google it before repeating canned crap.
                  Pretty sure I know more than you do. I've seen the overclocking results, but Intel chips can OC as well. The point is that AMD doesn't feel comfortable shipping CPUs at that speed, whether it's due to manufacturing issues, power requirements, or whatever. I can guarantee you they aren't specifically crippling their hardware just for the fun of it. And what does MB cost have to do with this discussion? I've already stated Bulldozer is fairly cheap.

                  As for the cache situation - cache isn't bad because there's so much of it (obviously - can't believe I have to make this point). It's bad because it adds to latency when you have to access memory outside the cache. If the latency of the L3 cache is 30ns, that automatically adds 30ns each and every time the application needs to grab data from system memory. L3 cache is great for server applications that tend to access the same code over and over again. Many desktop apps have random access patterns that can't utilize the L3 cache very well, so latency ends up being much more important than capacity. Intel's cache latencies are noticeably faster than Bulldozer's, by the way.
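The latency arithmetic above can be sketched with a toy average-memory-access-time (AMAT) model. All latencies and miss rates below are made-up illustrative numbers, not measured Bulldozer or Intel figures:

```python
# Toy AMAT model: every access that falls through a cache level still
# pays that level's latency before moving on, which is the "the L3 adds
# 30ns to every DRAM trip" point made above.

def amat(levels, dram_ns):
    """levels: list of (hit_latency_ns, miss_rate) ordered from L1 outward."""
    total, reach = 0.0, 1.0
    for hit_ns, miss_rate in levels:
        total += reach * hit_ns   # accesses reaching this level pay its latency
        reach *= miss_rate        # only the misses continue outward
    return total + reach * dram_ns  # survivors pay the full DRAM latency on top

# Server-ish workload: the L3 hits often (30% miss), so it pays for itself.
server = amat([(1, 0.10), (5, 0.40), (30, 0.30)], dram_ns=60)
# Random-access desktop workload: the L3 misses often (80% miss)...
desktop = amat([(1, 0.10), (5, 0.40), (30, 0.80)], dram_ns=60)
# ...and can end up slower than having no L3 at all.
no_l3 = amat([(1, 0.10), (5, 0.40)], dram_ns=60)
print(server, no_l3, desktop)  # server < no_l3 < desktop with these numbers
```

With these toy numbers the L3 helps the cache-friendly workload but actively hurts the random-access one, which is exactly the server-vs-desktop distinction drawn above.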

                  Comment


                  • #79
                    Originally posted by smitty3268 View Post
                    Pretty sure I know more than you do. I've seen the overclocking results, but Intel chips can OC as well. The point is that AMD doesn't feel comfortable shipping CPUs at that speed, whether it's due to manufacturing issues, power requirements, or whatever. I can guarantee you they aren't specifically crippling their hardware just for the fun of it. And what does MB cost have to do with this discussion? I've already stated Bulldozer is fairly cheap.

                    As for the cache situation - cache isn't bad because there's so much of it (obviously - can't believe I have to make this point). It's bad because it adds to latency when you have to access memory outside the cache. If the latency of the L3 cache is 30ns, that automatically adds 30ns each and every time the application needs to grab data from system memory. L3 cache is great for server applications that tend to access the same code over and over again. Many desktop apps have random access patterns that can't utilize the L3 cache very well, so latency ends up being much more important than capacity. Intel's cache latencies are noticeably faster than Bulldozer's, by the way.
                    Maybe it's because you forget that loading from the CPU cache is faster than loading from system RAM? If I remember right (it's been a few years), all other things being equal, doubling the amount of CPU cache should improve performance by 3-5% across the board. So those of you claiming that it might perform better without the L3 cache are basing your arguments on what, exactly?
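The "doubling cache buys a few percent" rule of thumb can be roughly reproduced with the empirical power law for cache misses (miss rate falling as roughly the square root of size) and a toy CPI model. Every constant below is an illustrative assumption, not a measured figure:

```python
# Toy check of the "doubling cache -> 3-5% faster" rule of thumb.

def relative_perf(cache_mb, base_mb=4.0, base_miss=0.05,
                  base_cpi=1.0, miss_penalty_cycles=3.0):
    """Performance (1/CPI) of a toy machine with the given cache size."""
    miss = base_miss * (base_mb / cache_mb) ** 0.5  # power law of cache misses
    cpi = base_cpi + miss * miss_penalty_cycles     # cycles per instruction
    return 1.0 / cpi

gain = relative_perf(8.0) / relative_perf(4.0) - 1.0
print(f"doubling 4MB -> 8MB cache: about {gain:.1%} faster")
```

With these assumed constants the gain lands around 4%, inside the 3-5% band quoted above; different miss rates and penalties would shift it, but the order of magnitude holds.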

                    As before, these weren't designed for the desktop market. CPU performance on the desktop has been "good enough" for several years now; servers are where the money is these days, so there's no point in designing a CPU specifically for the consumer market when your server CPU will work just fine for the task. Why bother pouring time and money into chasing a stagnant market where the difference in perceived performance by the end user will be essentially identical to a machine from 2006?

                    Seriously, put any of your non-tech relatives in front of a Core2 or Athlon II system and an i5 or i7 system: can they tell the difference, especially if both machines have identical graphics drivers and amounts of RAM? My guess would be no, they can't tell the machines apart; the seconds shaved off by the i7 in their day-to-day tasks would go completely unnoticed. Welcome to 95% of the computing market. Unless they are suffering with a terrible GPU and no SSD, either of which would have a more noticeable impact than a faster CPU.

                    Comment


                    • #80
                      Originally posted by smitty3268 View Post
                      The point is that AMD doesn't feel comfortable shipping CPUs at that speed, whether it's due to manufacturing issues, power requirements, or whatever.
                      Not?
                      http://www.youtube.com/watch?v=8rDwXuAINJk
                      See AMD FX Processors with stunning performance go head to head in gaming, multi-tasking, video processing, and image processing. This video shows AMD FX processors against a number of competitive processors. Unlocked. Unrivaled. Unbelievable.

                      Comment
