A few months ago, AMD refined their Socket 939 line of processors
with the E3 and E4 revisions, codenamed Venice and San Diego, respectively,
to replace the D0 Winchester. The Venice and San Diego feature an improved
memory controller, and most notably, AMD doubled the amount of L2 cache on
the San Diego compared to its Venice counterpart. The San Diego immediately
attracted considerable attention among the enthusiast crowd for its large L2 cache;
in fact, the Athlon 64 FX-57 is based upon the San Diego. While the San Diego
looks great on paper, premium performance also comes at a premium price: the
cheapest San Diego, the 3700+ at 2.2GHz, runs around $330, while a Venice
3500+ at the same clock speed costs $275, and the cheaper Venice 3200+ and
3000+ often overclock easily to 2.6GHz and beyond. Is the price premium
justified? Does the extra 512KB of L2 cache help that much in real-world performance?
Just how well do these two cores compare clock-for-clock? In this review we
will be running two of these CPUs at 2.0, 2.2, and 2.6GHz to see the performance
difference between the two AMD cores.
For our performance comparison, we used an AMD 3700+ San Diego
(2.2GHz) and a 3200+ Venice (2.0GHz). To compare the cores at the same frequency,
three frequencies of 2.0 (200x10), 2.2 (220x10), and 2.6 (260x10) GHz were used.
The San Diego ran at a VCore of 1.414V for all three frequencies; the Venice,
on the other hand, required a voltage bump to 1.586V to reach 2.6GHz.
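The overclocked speeds above follow directly from multiplying the HTT reference clock by the CPU multiplier. A minimal sketch of that arithmetic (the function name is ours, not anything from the review):

```python
def core_clock_ghz(htt_mhz, multiplier):
    """Athlon 64 core clock = HTT reference clock (MHz) x CPU multiplier."""
    return htt_mhz * multiplier / 1000.0

# The three test points used in this comparison:
for htt in (200, 220, 260):
    print(f"{htt} x 10 = {core_clock_ghz(htt, 10):.1f}GHz")
```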
The RAM used was a pair of 512MB Corsair XMS PC3200C2 v1.1 (Winbond
BH-6 chips), and the memory timings were maintained at 2-2-2-5 throughout the
test. To achieve such timings, we kept the VDimm at 3.0V for 200 HTT, 3.2V for
220 HTT, and 3.6V for 260 HTT. To verify stability at these settings, we ran
Memtest86 extensively before benchmarking to confirm the memory was free from
errors. The CPU was cooled by a Thermalright XP-90 with an Enermax 92mm fan,
while a second 92mm fan mounted above the modules actively cooled the memory. The
video card was an XFX GeForce 6600GT left at stock core and memory speeds.
Motherboard: DFI LanParty UT nF4 Ultra-D
Memory: 2 x 512MB Corsair XMS PC3200C2
Video Card: XFX 6600GT PCI-E
Hard Drive: Maxtor DiamondMax Plus 8 40GB 7200RPM
Optical Drive: Lite-On DVDRW SOHW-832S
The slew of benchmarks we used to distinguish the performance of the two
cores concentrates on gaming, archiving, compiling, encoding, and
floating-point performance. The first benchmark to take the stage in this
Venice vs. San Diego showdown was id Software's Doom 3. We used the latest
version of Doom 3 at the time of writing, v1.3.1302. As usual, we ran the
standard demo1 timedemo at three different visual settings: 800 x 600 medium
quality, 1024 x 768 high quality, and 1280 x 1024 high quality.
To get in some compiling action, we measured the time it took to compile
LAME 3.96.1 from source using GCC 4.0. For encoding, we measured the time
required to encode a WAV file to MP3 format using LAME 3.96.1; the WAV file
was a typical song weighing in at 34.3MB. For archiving and extracting, we
measured the time required to archive and then extract the 637.2MB Fedora
Core 4 x86_64 disc 1 ISO. The rest of the benchmarks are all centered on testing the
floating-point performance of the CPU with a highly optimized, math-heavy
set of benchmarks. For this CPU core comparison, we ran all three of the BlueSail
Software Opstone benchmarks: Vector Scalar Product, Sparse Vector Scalar Product,
and Singular Value Decomposition. For the Vector Scalar Product we took the
single-precision mean, while for the last two we used the double-precision
mean. As always, higher frame rates are better, lower completion times are
better, and higher Mflops/Gflops figures are better.
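The two kinds of metrics used above, wall-clock time for a task and Mflops for a math kernel, can be illustrated with a short sketch. This is not the review's actual harness: the commented command line is a hypothetical stand-in, and a plain interpreted dot product will report far lower Mflops than the hand-optimized Opstone code.

```python
import subprocess
import sys
import time

def time_command(cmd):
    """Wall-clock a command, as done for the compile, encode, and archive tests."""
    start = time.perf_counter()
    subprocess.run(cmd, check=True)
    return time.perf_counter() - start

def dot_product_mflops(n=200_000, reps=3):
    """Time a plain dot product and report Mflops.

    Each element costs one multiply and one add, so one pass is 2*n flops.
    Returns (Mflops for the best of `reps` passes, dot-product result).
    """
    x = [0.5] * n
    y = [2.0] * n
    best = float("inf")
    total = 0.0
    for _ in range(reps):
        start = time.perf_counter()
        total = 0.0
        for a, b in zip(x, y):
            total += a * b
        best = min(best, time.perf_counter() - start)
    return (2.0 * n) / best / 1e6, total

# Hypothetical invocation mirroring the encoding test (file names are made up):
#   elapsed = time_command(["lame", "song.wav", "song.mp3"])
elapsed = time_command([sys.executable, "-c", "pass"])  # trivial stand-in command
mflops, checksum = dot_product_mflops()
```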