Another reason for me not to buy AMD anymore
I haven't looked at the packages, but I suspect the spec sheets say something similar about Turbo Core. Read up on T-states for the Intel equivalent (clock frequency stays the same but the clock is stopped in short bursts to give the same effect as reducing frequency).
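If anyone wants to see that effect from Linux, the APERF/MPERF counter pair is the usual way: MPERF ticks at the reference (nominal) clock while APERF ticks at the clock actually delivered, so their delta ratio exposes duty-cycle throttling even when the reported frequency never changes. A rough sketch, assuming root and the msr kernel module loaded (modprobe msr); 0xE7/0xE8 are the architectural IA32_MPERF/IA32_APERF addresses:

import struct, time

IA32_MPERF = 0xE7  # counts at the reference (nominal) clock
IA32_APERF = 0xE8  # counts at the clock actually delivered

def read_msr(cpu, reg):
    # /dev/cpu/N/msr comes from the msr kernel module; the register
    # number is the file offset, and each register is 8 bytes wide.
    with open("/dev/cpu/%d/msr" % cpu, "rb") as f:
        f.seek(reg)
        return struct.unpack("<Q", f.read(8))[0]

def effective_ratio(cpu=0, interval=1.0):
    a0, m0 = read_msr(cpu, IA32_APERF), read_msr(cpu, IA32_MPERF)
    time.sleep(interval)
    a1, m1 = read_msr(cpu, IA32_APERF), read_msr(cpu, IA32_MPERF)
    # 1.0 means the core ran at its nominal clock over the interval;
    # below 1.0 means throttling, above 1.0 means turbo.
    return float(a1 - a0) / (m1 - m0)

print("effective/nominal clock ratio: %.2f" % effective_ratio())

Run it on a loaded core and the ratio will dip below 1.0 whenever the hardware is clock-gating, regardless of what the advertised frequency says.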
Originally posted by Vim_User: In reality, it should be as simple as that: if the CPU is not able to sustain its nominal clock speed at 100% load under default conditions (as described by me above), then don't sell it as a CPU with exactly that clock speed, since it simply isn't such a CPU.
I think it is that simple with Turbo Core disabled. That's what the article suggests, anyway.
Let's see if our CPU folks respond any further to the article.
EDIT -- looks like the article was already updated and your response was to the AMD feedback. Sorry, I missed that.
Last edited by bridgman; 05 April 2013, 01:56 AM.
-
Originally posted by bridgman: More precisely, processor manufacturers decide what applications people are *likely* to run during the product's market window and optimize for those applications and workloads. This includes power consumption as well as a lot of other aspects. [...] if you turn Turbo Core off then the processor runs at full speed.
In reality, it should be as simple as that: if the CPU is not able to sustain its nominal clock speed at 100% load under default conditions (as described by me above), then don't sell it as a CPU with exactly that clock speed, since it simply isn't such a CPU.
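For what it's worth, testing that is easy on Linux: with the acpi-cpufreq driver, Turbo Core (and Intel's Turbo Boost) is exposed as a single sysfs switch. A small sketch, assuming that driver is in use (writing needs root):

BOOST = "/sys/devices/system/cpu/cpufreq/boost"

def boost_enabled():
    with open(BOOST) as f:
        return f.read().strip() == "1"

def set_boost(on):
    # "1" allows boosting above the nominal clock; "0" pins the CPU
    # to its advertised frequency table.
    with open(BOOST, "w") as f:
        f.write("1" if on else "0")

print("boost enabled:", boost_enabled())
# set_boost(False)  # then rerun the benchmark at the nominal clock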
-
Originally posted by Grogan: Don't tell me some benchmark stresses the CPU more than a "make -j10" where all cores are 100% utilized. (Try that for a Chromium build... it's relentless for the whole job, for a good 25 minutes.)
Originally posted by Grogan: Since when does the processor manufacturer decide what applications people are going to run?
Originally posted by Grogan: I do not give one tapered turd about power consumption... I don't care how clever it is. The bottom line is that the processors deliver lower performance than advertised.
Last edited by bridgman; 05 April 2013, 12:28 AM.
-
When I've got all cores blazing during a compile job, the last thing I want is for my CPU to drop its frequency. Don't tell me some benchmark stresses the CPU more than a "make -j10" where all cores are 100% utilized. (Try that for a Chromium build... it's relentless for the whole job, for a good 25 minutes. I chose that example because it has a lot of objects that can be compiled in parallel, unlike some builds that wait more on dependencies when you use more jobs.) I compile software pretty much every day. If that's outside my CPU's "market segment" then I want my money back. Since when does the processor manufacturer decide what applications people are going to run?
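If anyone wants to watch it happen during a build, here's a quick sketch that samples every core's current cpufreq frequency once a second (assuming the standard sysfs cpufreq layout; scaling_cur_freq reports kHz):

import glob, re, time

# One scaling_cur_freq file per logical CPU, sorted by CPU number.
paths = sorted(
    glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_cur_freq"),
    key=lambda p: int(re.search(r"cpu(\d+)", p).group(1)))

while True:
    freqs = []
    for p in paths:
        with open(p) as f:
            freqs.append(int(f.read()) // 1000)  # kHz -> MHz
    print(" ".join("%4d" % mhz for mhz in freqs), "MHz")
    time.sleep(1)

Leave it running in another terminal while the make -j10 job goes; any core that can't hold the nominal clock under sustained load shows up immediately.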
I don't believe for a second their PR bullshit that it's only Linpack. I too consider that akin to "fraud". Also disingenuous is the statement that "it would be a problem and unfair if the tests ruined the lifespan of consumers' CPUs". It's more of a problem, and unfair, that AMD sells CPUs that can't even sustain their rated clock speeds. If that "ruins the lifespan" of the product, then it's a faulty product.
I do not give one tapered turd about power consumption... I don't care how clever it is. The bottom line is that the processors deliver lower performance than advertised.
-
The drop to 3.4 GHz happens only in Linpack, which is an extreme case.
They got it from intel.com, which means who knows what it does when it detects an AMD CPU.
-
Goodness, what is happening in the world that CPU manufacturers are pulling out any excuse they can to use some pretty number for marketing purposes? I remember the good old days, when... oh yeah, wait a minute, they have always done that. Nothing new here, moving right along.
-
Originally posted by varikonniemi: For the same reason you don't advertise the turbo clock as standard, you don't advertise the standard clock as standard in this case; the processor cannot sustain it in all workloads under manufacturer-specified operating conditions. It is a lie. It is a 3.4 GHz processor, not 3.8 GHz.
OK, so you're saying "standard" should be defined as "minimum under any and all conditions, even with degraded cooling and running a synthetic stress test" rather than "normal/usual/whatever"? That would mean you basically have minimum** and maximum clocks with no indication of where the processor normally runs, which seems like a big step backwards to me.
Server parts (GPU and CPU) do tend to have more conservative specs than client parts (for all vendors, AFAICS), and I believe operation under sustained workloads is part of the reason for that.
** I know "minimum" isn't quite the right word here either, because the processor can also be put into lower power states, but it's tough even finding the right words given how aggressively modern hardware manages its own clocks.
Last edited by bridgman; 04 April 2013, 12:09 PM.