Originally posted by Kivada
Cache only makes things faster if it's actually being used. Adding more cache doesn't automatically add 3-5% across the board; that figure is just a rough estimate of how much typical apps have historically gained from extra cache. As caches grow larger, fewer and fewer apps have working sets that spill out of the old cache size but fit in the new one, so the average gain keeps shrinking. Also note that the rule of thumb is about doubling the amount of cache at the same level, which adds no latency that wasn't already there. In this case you are adding an entire additional level of cache, which increases latency, so some of the speedup is offset by that slowdown.
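To make that tradeoff concrete, here's a back-of-the-envelope sketch using the standard average-memory-access-time formula. All latencies and hit rates below are invented for illustration, not measured numbers for Phenom, Bulldozer, or any other real chip:

```c
/* Back-of-the-envelope AMAT (average memory access time) comparison.
 * All latencies and hit rates are made-up illustrative numbers, not
 * measured values for any real CPU. */
#include <stdio.h>

int main(void)
{
    const double l2_lat  = 12.0;   /* cycles for an L2 hit             */
    const double l3_lat  = 40.0;   /* extra cycles when we miss to L3  */
    const double mem_lat = 200.0;  /* extra cycles for a DRAM access   */
    const double l2_hit  = 0.90;   /* fraction of accesses hitting L2  */

    /* No L3: every L2 miss goes straight to memory. */
    double amat_no_l3 = l2_lat + (1.0 - l2_hit) * mem_lat;
    printf("no L3            : %5.1f cycles\n", amat_no_l3);

    /* With L3: an L2 miss always pays the L3 latency, and only the
     * L3 misses go on to memory.  Whether that wins depends entirely
     * on how many of the L2 misses the L3 actually catches. */
    for (double l3_hit = 0.0; l3_hit <= 1.0; l3_hit += 0.25) {
        double amat = l2_lat + (1.0 - l2_hit) *
                      (l3_lat + (1.0 - l3_hit) * mem_lat);
        printf("L3 hit rate %.2f : %5.1f cycles\n", l3_hit, amat);
    }
    return 0;
}
```

With these made-up numbers, a workload whose L2 misses rarely hit in the L3 ends up slightly slower than having no L3 at all, while one whose working set mostly fits in the L3 gets a big win.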
Some apps will get no gain at all while others will see massive gains. It all depends on their memory access patterns. If they are randomly accessing memory, no amount of cache will help, because nothing is cached the first (and last) time you touch it. On the other hand, if you have an app whose working set is larger than 1MB but smaller than 8MB, this extra cache is going to be a huge help. Lots of server apps fall into that category; not many desktop apps do. L3 cache is also used in modern designs to share data directly across multiple cores, which again is more relevant to server workloads than to desktop ones.
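A quick way to see the working-set effect for yourself is a pointer-chasing microbenchmark: build a random cycle through a buffer and time dependent loads as the buffer grows. This is a generic sketch, not anything from these benchmarks; the buffer sizes and step count are arbitrary, and the exact numbers will vary by machine and compiler flags:

```c
/* Minimal pointer-chasing sketch: time dependent loads over working
 * sets of different sizes.  The point is only that latency jumps
 * when the working set spills out of a given cache level. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static double chase(size_t bytes, size_t steps)
{
    size_t n = bytes / sizeof(size_t);
    size_t *next = malloc(n * sizeof(size_t));
    if (!next) return -1.0;

    /* Build a random single-cycle permutation (Sattolo's algorithm)
     * so every load depends on the previous one and the hardware
     * prefetcher can't guess the next address. */
    for (size_t i = 0; i < n; i++) next[i] = i;
    for (size_t i = n - 1; i > 0; i--) {
        size_t j = (size_t)rand() % i;          /* j in [0, i) */
        size_t tmp = next[i]; next[i] = next[j]; next[j] = tmp;
    }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    volatile size_t idx = 0;
    for (size_t s = 0; s < steps; s++) idx = next[idx];
    clock_gettime(CLOCK_MONOTONIC, &t1);
    free(next);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    return ns / (double)steps;                  /* ns per dependent load */
}

int main(void)
{
    const size_t sizes_kb[] = { 256, 1024, 4096, 8192, 65536 };
    for (size_t i = 0; i < sizeof sizes_kb / sizeof *sizes_kb; i++)
        printf("%6zu KB working set: %.1f ns per load\n",
               sizes_kb[i], chase(sizes_kb[i] * 1024, 10 * 1000 * 1000));
    return 0;
}
```

On a typical machine the ns-per-load figure stays flat while the buffer fits in a cache level and jumps each time it spills into the next one. A working set between the L2 and L3 sizes is exactly the case where the extra cache pays off, while a buffer far bigger than any cache shows the random-access worst case where the L3 barely matters.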
I'm not saying the extra L3 cache is actually hurting anything in these tests, just that it might be in some of them; I'm sure it's helping in a few as well.
I know a lot of games, for example, are extremely dependent on memory access timings. They loved the fast L2 speeds the old Pentium M and later Core processors had, and didn't particularly take advantage of the L3 cache Phenoms added.
AMD has already said they are planning to release a Bulldozer chip with the L3 cache removed or reduced for the consumer market, so they know all of this far better than you or I do. I'm not sure when it will come out, but my guess is that everything outside the FX series will probably have it removed.
As before, these weren't designed for the desktop market. CPU performance on the desktop has been "good enough" for several years now, and servers are where the money is these days, so there's no point in designing a CPU specifically for the consumer market when your server CPU will work just fine for the task. Why bother pouring time and money into chasing a stagnant market where the performance perceived by the end user will be essentially identical to that of a machine from 2006?
Seriously, put any of your non-tech relatives in front of a Core 2 or Athlon II system and then an i5 or i7 system. Can they tell the difference, especially if both machines have identical graphics drivers and the same amount of RAM? My guess would be no, they can't tell the machines apart; the seconds the i7 shaves off their day-to-day tasks would go completely unnoticed. Welcome to 95% of the computing market. Seriously, most of them are suffering with a terrible GPU and no SSD, and upgrading those would have a far more noticeable impact than a faster CPU.