
Preview: AMD's FX-9590 Eight-Core At Up To 5.0GHz On Linux


  • #31
    I had a board DESTROYED by VRM heat issues with an FX8150

    Originally posted by ParticleBoard View Post
    I can confirm this; I have an 8350 as well with a GA-990FXA-UD3, and the VRMs get BLAZING hot. The inadequate stock heatsinks, which are a borderline joke, will not cut it (go over to the 990FXA-UD3 thread on Overclockers if you want hard proof). I ended up buying an H60 closed-loop water cooler, which barely cuts it (tiny OC to a rock-solid 4.4), and a copper VRM heatsink with a fan right over top just to prevent throttling. Luckily this thing is on tight and works to prevent warping, which sadly is a real issue. All in all I am happy with it, as I splurged and bought all the cooling day one (wanted the quiet route, didn't even know about the heat issues at the time). And with Intel being used for Apple products, fawk supporting that mess...
    In early 2012 I had one of these boards die and take out the FX8150 on it. It was obvious what had happened: one of the voltage regulator ICs was blown apart! I suspect it was due to that heatsink being held on only with two spring pins and thermal tape, then being bumped out of position during a teardown (my machines are often apart for changes). It was overclocked to 4.6 GHz, and could be stable at 4.7, but that made too much heat even for the Frio OCK six-heatpipe cooler to control at full load. The bottom of the "voltage wall" was 4.5 GHz, with much less heat.

    I got the same board again and an FX8120 (a lot cheaper at the time), and decided to limit overclocks to the bottom of the "voltage wall" to control heat, and to always push the VRM heatsink down on the board after any removal of the motherboard or of the cooler exhaust shroud I use to force all hot air directly out of the case. This one usually won't go over 4.4 GHz unless I push the hell out of the voltage AND the fans, as stability becomes very temperature dependent; this is as expected on a lower-binned chip. When I got it, it could run 4.5 GHz without going over 1.39 V, but after some usage it needs to stay at 4.4 to hold a 100% load on all 8 cores. Staying at the bottom of the voltage wall would not have allowed more than 4.5 GHz on the 8150 I previously had either, so no great loss: another 100 MHz would not have been worth another $50. At maximum stable overclocks there would be a 200 MHz difference, still not a lot.

    BTW, during the half of the year the house is heated, Bulldozer can heat your room; this gives a nice warm spot in winter that could allow turning down the furnace. On the other hand, rendering a long video in summer requires letting the whole room get hot if you don't want to add A/C power wastage to Bulldozer's appetite at full throttle. A small room can be heated by anything bigger than a laptop; for summer use I should go with water cooling plumbed to a radiator outside the house. At idle the power use is supposed to be only half of what a Phenom II X4 idles at, but I still notice the heat in the warm season. I got rid of 300 W worth of incandescent bulbs for about 30 W worth of LEDs, so the whole room should use less power except when rendering video, a job never exceeding 20 minutes or so. Still feels like more heat with the computer than with the bulbs, though.



    • #32
      well

      We need a strong AMD to compete with Intel in all markets, and this is not the way. Bulldozer is a flop; AMD needs something new, fast. AMD, please stop with this nonsense.



      • #33
        I don't see myself willingly buying Intel ever, at least in the foreseeable future. Don't get me wrong, I'm sure Intel's performance and power efficiency aren't bad at all, but I like AMD's policies (if that's what you'd call them) much better.

        Intel has upset me two different times. The first was long ago, back when the Intel 950GMA and 965GMA chips were pretty mainstream. A volunteer group I worked with modified Intel's Windows graphics driver to boost performance and add some features for some of Intel's graphics hardware (the 950GMA most notably, IIRC). Note, it was mainly just small stuff, as simple as modifying the INF so the 965GMA driver would install on 950GMA hardware; no reverse engineering.

        We offered the driver to others for free. Intel threatened to sue, though, and told us to discontinue it. Pretty BS in my opinion, but I guess Intel is within their legal rights to do so.

        The second time was when I ordered my 4670K. First, I didn't realize that paying a premium for an unlocked CPU also meant losing out on virtualization features (VT-d, Intel's IOMMU), and I got screwed on that. The Phenom II X3 720 and ASRock 970 Extreme3 motherboard I had supported IOMMU. How does a CPU way newer and more expensive than both of those things not support such a feature?
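
        For anyone wanting to check whether their own CPU and board actually expose hardware virtualization and an IOMMU under Linux, here is a minimal sketch (assuming the standard /proc/cpuinfo and /sys/kernel/iommu_groups layout; an empty group list can also just mean the IOMMU is present but not enabled in the BIOS or on the kernel command line):

        ```python
        #!/usr/bin/env python3
        """Rough check for hardware virtualization and IOMMU support on Linux."""
        from pathlib import Path

        def cpu_virt_flags():
            """Return the hardware-virtualization flags the CPU advertises."""
            for line in Path("/proc/cpuinfo").read_text().splitlines():
                if line.startswith("flags"):
                    flags = set(line.split(":", 1)[1].split())
                    # vmx = Intel VT-x, svm = AMD-V
                    return sorted(f for f in ("vmx", "svm") if f in flags)
            return []

        def iommu_groups():
            """List the IOMMU groups the kernel has created (empty if none active)."""
            groups = Path("/sys/kernel/iommu_groups")
            return sorted(p.name for p in groups.iterdir()) if groups.is_dir() else []

        if __name__ == "__main__":
            print("CPU virtualization flags:", cpu_virt_flags() or "none")
            print("IOMMU groups:", iommu_groups() or "none (absent or not enabled)")
        ```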

        I also kind of question why Intel doesn't let all of their CPUs do Hyper-Threading. I know there's an obvious marketing reason (you have to give people a reason to spend more money by setting artificial limits), but from a technical standpoint I see no reason.

        The second issue was the crap temperatures Intel caused on Haswell (the thick black glue, and using thermal paste instead of solder under the heat spreader). Skimping on the manufacturing process and still charging a premium is total BS.

        Another, more minor, issue is that Intel bothered to make both a Z97 and an H97 chipset. The Z97 is fine. The H97, however, is just a Z97 stripped of features. Wtf? OEMs have clever ways of re-implementing such features to make the H97 chipset more "viable", but this comes at the cost of weird conditions (PCI/PCIe slots running at different speeds depending on what's plugged in, questionable overclocking ability, etc.).

        So basically, I don't like the way Intel runs their business, and I won't willingly support them. So far AMD hasn't screwed me over, and that's still where I plan on throwing my money.

        (As for why I had a 4670K: I got a motherboard to review and needed a CPU for it.)

        Originally posted by ParticleBoard View Post
        I can confirm this; I have an 8350 as well with a GA-990FXA-UD3, and the VRMs get BLAZING hot. The inadequate stock heatsinks, which are a borderline joke, will not cut it (go over to the 990FXA-UD3 thread on Overclockers if you want hard proof). I ended up buying an H60 closed-loop water cooler, which barely cuts it (tiny OC to a rock-solid 4.4), and a copper VRM heatsink with a fan right over top just to prevent throttling. Luckily this thing is on tight and works to prevent warping, which sadly is a real issue. All in all I am happy with it, as I splurged and bought all the cooling day one (wanted the quiet route, didn't even know about the heat issues at the time). And with Intel being used for Apple products, fawk supporting that mess...
        What revision of the GA-990FXA-UD3 did you have? As I understand it, revisions 1 and 2 were garbage, revision 3 handles the FX series a bit better, and revision 4 (the current one) is the best of the bunch.



        • #34
          Those of you comparing Intel's and AMD's TDP figures... don't. They don't describe the same thing; the two companies define and measure TDP differently, so the numbers aren't directly comparable.

          Anyway, I'm fine with my A10-6800K, OC'd to 4.4 GHz at stock voltages (I had it undervolted, but it was a little unstable). I'm planning on eventually upgrading to a Kaveri or a later generation, but for now this is fine. I think I'll wait until they release a sub-100W A10 or prices drop under $100. o_o



          • #35
            Originally posted by CrystalGamma View Post
            I was just saying that this change in Intel policy is not THAT big IMO



            Hopefully SkyBridge will make them more competitive again ... and even if their new x86 cores are not much better, thanks to pin-compatibility (or so it was said) you can try out the ARM versions as well
            Only if the mobo manufacturers give you an ARM version of the BIOS ROM too. Otherwise pin-compatibility won't mean much to anyone.



            • #36
              Originally posted by highlandsun View Post
              Only if the mobo manufacturers give you an ARM version of the BIOS ROM too. Otherwise pin-compatibility won't mean much to anyone.
              PS: Down with closed-source BIOS! Mobo manufacturers should ship with coreboot and provide source code.



              • #37
                Originally posted by cbxbiker61 View Post
                These are my results for a stock FX-9590 in a Sabertooth 990FX R2.0 motherboard.

                [OpenBenchmarking.org result page embedded here]


                These results are more realistic in that they don't dumbly compare Intel processors running with the performance governor against AMD processors running with the ondemand governor. CFLAGS and CXXFLAGS are set to "-O3 -march=native", as per the settings in the original benchmark.

                Come on Michael! I'd accuse you of kickbacks from Intel, if I didn't know better.
                1. I use the kernel's default governor settings... Modern kernels default to Intel P-State for Intel CPUs, while AMD just uses CPUfreq and generally defaults to ondemand. Even so, I have a separate set of tests coming out tomorrow showing that, regardless of ondemand vs. performance governors on the 9590, the performance is still the same in these tests; only in the gaming tests was there any difference.

                2. I did use -O3 -march=native and rebuilt for each processor...
                Michael Larabel
                https://www.michaellarabel.com/
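
                For anyone who wants to verify which frequency scaling driver and governor their own kernel picked before comparing numbers, here is a minimal sketch using the standard cpufreq sysfs interface (nothing in it is specific to the FX-9590 test setup):

                ```python
                #!/usr/bin/env python3
                """Print the active cpufreq scaling driver and governor for each CPU."""
                from pathlib import Path

                def read(path):
                    """Return the file contents, or 'n/a' if the node doesn't exist."""
                    try:
                        return path.read_text().strip()
                    except OSError:
                        return "n/a"

                # Intel boxes using the P-State driver report "intel_pstate" here, while
                # most AMD systems of this era report "acpi-cpufreq" plus the distro's
                # default governor (often ondemand).
                for cpu in sorted(Path("/sys/devices/system/cpu").glob("cpu[0-9]*")):
                    cpufreq = cpu / "cpufreq"
                    print(f"{cpu.name}: driver={read(cpufreq / 'scaling_driver')} "
                          f"governor={read(cpufreq / 'scaling_governor')}")
                ```

                Writing "performance" into scaling_governor (as root) before re-running a benchmark is the usual way to rule the governor out as a variable.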



                • #38
                  Originally posted by Nobu View Post
                  Those of you comparing Intel's and AMD's TDP figures... don't. They don't describe the same thing; the two companies define and measure TDP differently, so the numbers aren't directly comparable.

                  Anyway, I'm fine with my A10-6800K, OC'd to 4.4 GHz at stock voltages (I had it undervolted, but it was a little unstable). I'm planning on eventually upgrading to a Kaveri or a later generation, but for now this is fine. I think I'll wait until they release a sub-100W A10 or prices drop under $100. o_o
                  Wait, everything I've heard recently says AMD has something new coming out soon, now that DDR4 is on the market.

                  With Intel's new chips having quad-channel DDR4 RAM, I wonder if AMD will put out a high-end APU with the same support. The APUs are generally memory-bandwidth starved, so it could be a way to increase performance, or is the increased latency going to hurt it?



                  • #39
                    Originally posted by Michael View Post
                    1. I use the kernel's default governor settings... Modern kernels default to Intel P-State for Intel CPUs, while AMD just uses CPUfreq and generally defaults to ondemand. Even so, I have a separate set of tests coming out tomorrow showing that, regardless of ondemand vs. performance governors on the 9590, the performance is still the same in these tests; only in the gaming tests was there any difference.

                    2. I did use -O3 -march=native and rebuilt for each processor...
                    And who is spending that much on their hardware to run it with stock software settings? Go look around at WHO is buying that hardware; they aren't running it stock, and to do so is just being lazy. It's also the reason Phoronix is a "dead man walking" if the Windows hardware review sites start benching SteamOS: I've seen growing support for Linux among the guys on those sites now that there's Steam, Desura and GOG, and those guys are far from lazy when it comes to tweaking things for optimal results.



                    • #40
                      Originally posted by Kivada View Post
                      Wait, everything I've heard recently says AMD has something new coming out soon, now that DDR4 is on the market.

                      With Intel's new chips having quad-channel DDR4 RAM, I wonder if AMD will put out a high-end APU with the same support. The APUs are generally memory-bandwidth starved, so it could be a way to increase performance, or is the increased latency going to hurt it?
                      Intel uses eDRAM on-package to essentially turn the HD 4600 into amazing Iris Pro graphics. AMD had mentioned doing this in the past, and they kinda already do something similar on the PS4 and Xbone.

                      Honestly, I'm not looking forward to DDR4, because it doesn't do much except lower the voltage and allow quad channel. That means sticking four memory sticks into your PC to get max performance, which is going to get really old really fast. They could have done something smart like increasing the per-module width to 128-bit; that way two sticks would be equivalent to four sticks of memory, but that obviously doesn't make as much money for RAM manufacturers.

