gamer2k completely misses both the point and the benchmark again, rehashing an old "latency vs. throughput" argument that isn't relevant here. So if you increase latency on a C64, does it get more throughput? (*laughs*)
And indeed, financial data needs computation power and throughput as well. Some pretty heavy convolution is often run on it too. Start multiplying that by 1000 stocks and you need the kind of power that Intel sees a market for.
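As a rough illustration of that throughput demand (every number here is made up for the sketch), here is what running a simple moving-average convolution over 1000 price series looks like with NumPy:

```python
import numpy as np

# Hypothetical workload: smooth 1000 instruments' tick series with a
# 50-sample moving-average kernel. Real financial pipelines run far
# heavier filters than this, but the shape of the work is the same.
n_stocks = 1000
n_ticks = 10_000
kernel = np.ones(50) / 50  # assumed kernel; any FIR filter works here

rng = np.random.default_rng(0)
prices = rng.random((n_stocks, n_ticks))

# One convolution per instrument -- this is the part that multiplies
# up with the number of stocks and eats CPU throughput.
smoothed = np.array([np.convolve(p, kernel, mode="valid") for p in prices])
```

With `mode="valid"` each output series has `n_ticks - 50 + 1` samples, so the result is a `(1000, 9951)` array; scaling the kernel size or tick count up is what turns this into a serious compute job.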
Obviously some people think jitter is "disk I/O". If the reported "disk I/O" time went down while the actual disk I/O stayed the same, then what went down was the jitter. (lol)
Indeed, if jitter on a C64 were 20 ms, very little sensible would happen on screen, reducing "throughput" in a ridiculous way.
And of course OS jitter blocks CPU time, meaning the amount of blocked CPU goes up with the number of cores. Imagine the loss on a 1000-CPU machine.
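A toy model of why that loss grows with core count (all parameters here are hypothetical): in a bulk-synchronous program every core waits at a barrier, so a jitter event on any one core stalls all of them, and the chance of at least one hit per step rises quickly with the number of cores:

```python
def expected_step_time(cores, base=1.0, jitter=0.5, p=0.01):
    """Expected time of one synchronized step under OS jitter.

    Assumed model: each core is independently hit by a jitter event
    with probability p during a step, adding `jitter` to its time.
    The barrier waits for the slowest core, so one hit delays all.
    """
    p_any_hit = 1 - (1 - p) ** cores  # chance at least one core stalls
    return base + jitter * p_any_hit

# On one core the expected penalty is tiny (0.5 * 0.01 = 0.005);
# on 1000 cores a stall is almost guaranteed every single step.
print(expected_step_time(1))
print(expected_step_time(1000))
```

The model is deliberately crude, but it captures the quoted point: the blocked CPU time is not fixed per machine, it compounds with the core count.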
Peace Be With You.