I've seen a terrible SSD - it was so slow that to get reasonable performance with it the file system had to be mounted with nobarrier. It was slower than a hard disk. And within the 120GB class there is quite a lot of variance between drives.
ATM there is a bandwidth limitation on Intel onboard SATA ports of about 2 GB/s total - even if you have 6 ports that can each do 500 MB/s, you won't be able to read from them all at full speed at once. In the real world it seems to be more like 1600 MB/s from what I could gather. The same limit applies from Sandy Bridge through Haswell.
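To put numbers on that, a quick back-of-the-envelope calculation (using the rough figures above, not datasheet values):

[CODE]
/* rough maths: six SATA ports behind a shared ~2 GB/s chipset uplink,
 * numbers taken from the post above, not from a datasheet */
#include <stdio.h>

int main(void)
{
    const double per_port = 500.0;   /* MB/s a decent SSD can sustain */
    const int    ports    = 6;
    const double uplink   = 2000.0;  /* MB/s total for the onboard controller */
    const double observed = 1600.0;  /* MB/s seen in practice */

    double aggregate = per_port * ports;                        /* 3000 MB/s on paper */
    double usable    = aggregate < uplink ? aggregate : uplink; /* capped at 2000 */

    printf("theoretical aggregate: %.0f MB/s\n", aggregate);
    printf("capped by the uplink:  %.0f MB/s (%.0f per drive)\n",
           usable, usable / ports);
    printf("real-world estimate:   %.0f MB/s (%.0f per drive)\n",
           observed, observed / ports);
    return 0;
}
[/CODE]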
Maybe PCIe SSDs will mean Intel finally moves to PCIe v3 for peripherals, which would fix most of these bandwidth limitations.
I'd like to see more of the SSD's inner workings exposed - and more predictability in the system. ATM garbage collection can be somewhat unpredictable and can add random latency spikes while it runs. I'd also like to see data placed directly into application memory from the disk, like InfiniBand and similar interconnects support. As bandwidth goes up, all these memory copies start to hurt performance.
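To be clear about what I mean by fewer copies: the nearest thing you can do today on Linux is O_DIRECT, which at least DMAs straight into an application buffer instead of bouncing through the page cache. A rough sketch (path and sizes are placeholders; real code has to match the device's alignment requirements):

[CODE]
/* Sketch: read straight into an application buffer with O_DIRECT,
 * skipping the page cache copy. Not the RDMA-style placement I'm
 * asking for, just the nearest thing Linux offers today. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    const size_t align = 4096, len = 1 << 20;   /* 1 MB, 4 KB aligned */
    void *buf;
    if (posix_memalign(&buf, align, len) != 0) return 1;

    int fd = open("/tmp/testfile", O_RDONLY | O_DIRECT);  /* placeholder path */
    if (fd < 0) { perror("open"); return 1; }

    ssize_t n = pread(fd, buf, len, 0);         /* DMA lands in buf directly */
    if (n < 0) perror("pread");
    else printf("read %zd bytes without going through the page cache\n", n);

    close(fd);
    free(buf);
    return 0;
}
[/CODE]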
Even if bandwidth stays the same, latency can most likely come down a little. And most applications aren't designed to issue many simultaneous requests; even when they are, SATA's limit of 31 outstanding requests isn't enough to maximise performance.
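For what "many simultaneous requests" looks like in practice, here's a sketch using POSIX AIO to keep 31 reads in flight at once (file name and sizes are made up for illustration; whether the requests actually pile up at the device depends on the kernel and filesystem path, and older glibc needs -lrt):

[CODE]
/* Sketch: queue several reads before waiting on any of them, since one
 * request at a time never fills even SATA's 31-deep queue. */
#include <aio.h>
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define DEPTH 31            /* the SATA NCQ limit mentioned above */
#define CHUNK 65536

int main(void)
{
    int fd = open("/tmp/testfile", O_RDONLY);  /* placeholder path */
    if (fd < 0) { perror("open"); return 1; }

    static struct aiocb cbs[DEPTH];
    static char bufs[DEPTH][CHUNK];

    /* submit all 31 reads up front */
    for (int i = 0; i < DEPTH; i++) {
        memset(&cbs[i], 0, sizeof cbs[i]);
        cbs[i].aio_fildes = fd;
        cbs[i].aio_buf    = bufs[i];
        cbs[i].aio_nbytes = CHUNK;
        cbs[i].aio_offset = (off_t)i * CHUNK;
        if (aio_read(&cbs[i]) != 0) perror("aio_read");
    }

    /* reap completions after everything is in flight */
    for (int i = 0; i < DEPTH; i++) {
        const struct aiocb *list[1] = { &cbs[i] };
        while (aio_error(&cbs[i]) == EINPROGRESS)
            aio_suspend(list, 1, NULL);
        printf("request %2d: %zd bytes\n", i, aio_return(&cbs[i]));
    }

    close(fd);
    return 0;
}
[/CODE]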