Originally posted by Imroy
First off, you would need 10-15 PCIe lanes to cover the bandwidth of a single DIMM (more in dual-channel or triple-channel configurations). Combine that with 4-way SLI, and you'll run out of PCIe lanes pretty quickly. I haven't seen any system with 100+ PCIe lanes yet, at least not in the consumer market.
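A quick back-of-envelope check of that lane count, using assumed figures (DDR3-1600 DIMM peak bandwidth, PCIe 3.0 usable throughput per lane; neither number is from the post itself):

```python
# Assumed figures, not from the post:
# - One DDR3-1600 DIMM: 1600 MT/s on a 64-bit (8-byte) bus.
# - PCIe 3.0: ~0.985 GB/s usable per lane per direction (128b/130b encoding).
dimm_bandwidth_gbps = 1600e6 * 8 / 1e9   # ~12.8 GB/s peak per DIMM
pcie3_lane_gbps = 0.985                  # ~GB/s per lane

lanes_needed = dimm_bandwidth_gbps / pcie3_lane_gbps
print(f"Lanes to match one DIMM: {lanes_needed:.1f}")  # ~13, inside the 10-15 range
```

Triple-channel roughly triples that, which is where the 100+ lane figure starts to look plausible once four GPUs at x16 are added.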
Second, PCIe latency is horrible. The biggest performance hit today's CPUs take is cache misses, because the CPU has to idle for around a hundred cycles until new data arrives. Going through a PCIe link increases that latency by a factor of around 5.
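The factor-of-5 claim can be sanity-checked with rough numbers (both latencies below are common ballpark assumptions, not measurements from the post):

```python
# Assumed ballpark figures:
# - Local DRAM access on a last-level cache miss: ~100 ns.
# - Added PCIe link round-trip (switch hops, serialization): ~400 ns.
dram_access_ns = 100
pcie_round_trip_ns = 400

remote_access_ns = dram_access_ns + pcie_round_trip_ns
print(remote_access_ns / dram_access_ns)  # factor of 5
```

At ~3 GHz, 100 ns is already on the order of 300 idle cycles; multiplying that by 5 makes every cache miss dramatically more expensive.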
So they boast about 4-way SLI, but don't allow SLI bridges, which are critical to performance on high-end GPUs.
They add RAM that has latencies like it's 1995.
And for all that they need a PCIe chip that either doesn't exist, or is a specialized piece for server hardware with a massive price tag.
Until I see more than rendered graphics, I consider this system to be vaporware.