Look at it this way: when you minimize latency, you guarantee that every thread gets to run as soon as possible whenever it isn't blocked. The trade-off is that tasks take longer to finish overall, because each thread spends less of its time actually running and more of the CPU goes to switching between threads.
When you maximize throughput, the task that currently holds the CPU gets its work done a lot faster, but other background threads spend a LOT more time waiting to run, which increases overall latency.
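Here's a minimal sketch of that trade-off (hypothetical numbers, simple round-robin, fixed context-switch cost): a short time slice keeps every task's wait between turns small but pays switching overhead constantly, while a long slice finishes the whole batch sooner at the cost of long waits.

```python
from collections import deque

def simulate(burst_times, quantum, switch_cost=0.1):
    """Round-robin the given CPU bursts; return (avg wait per turn, total finish time)."""
    queue = deque((i, t) for i, t in enumerate(burst_times))
    clock = 0.0
    waits = []                              # time each task sat in the queue before a turn
    last_ran = {i: 0.0 for i, _ in queue}
    while queue:
        i, remaining = queue.popleft()
        waits.append(clock - last_ran[i])
        run = min(quantum, remaining)
        clock += run + switch_cost          # every turn pays the context-switch overhead
        last_ran[i] = clock
        if remaining > run:
            queue.append((i, remaining - run))
    return sum(waits) / len(waits), clock

tasks = [50.0] * 8                          # eight identical CPU-bound tasks
for q in (1.0, 10.0, 50.0):
    avg_wait, total = simulate(tasks, q)
    print(f"quantum={q:5.1f}  avg wait={avg_wait:7.1f}  total time={total:7.1f}")
```

Running it shows the pattern: the 1.0 quantum gives tiny per-turn waits but the longest total time, and the 50.0 quantum finishes everything fastest while individual tasks wait hundreds of time units for their turn.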
Windows is designed to maximize throughput for the highest-priority tasks, while still allowing background threads to run within a "reasonable" period of time (via priority boosts as threads spend time waiting to run). RIFS tries to minimize latency, which is great for multimedia, but not as great when you're running a single intensive task while a large collection of other tasks are also running.
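The "boost while waiting" idea looks roughly like this sketch (names and numbers are illustrative, not the actual Windows algorithm): a thread's effective priority grows the longer it sits in the ready queue, so starved background work eventually outranks a busy foreground thread for a turn.

```python
from dataclasses import dataclass

@dataclass
class Thread:
    name: str
    base_priority: int
    waiting_ticks: int = 0

def effective_priority(t: Thread, boost_per_tick: int = 1, max_boost: int = 15) -> int:
    # Base priority plus a capped bonus for time spent waiting.
    return t.base_priority + min(t.waiting_ticks * boost_per_tick, max_boost)

def pick_next(ready: list[Thread]) -> Thread:
    chosen = max(ready, key=effective_priority)
    # The chosen thread's boost resets; everyone else keeps accumulating wait time.
    for t in ready:
        t.waiting_ticks = 0 if t is chosen else t.waiting_ticks + 1
    return chosen

threads = [Thread("foreground", base_priority=10), Thread("background", base_priority=4)]
for tick in range(10):
    print(tick, pick_next(threads).name)
```

The foreground thread wins most ticks, but once the background thread has waited long enough its boosted priority wins it a slot, which is the "reasonable period of time" guarantee described above.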
Throughput vs latency is a fundamental trade-off in computing, not just in scheduling; RAM/cache access times vs total bandwidth, for instance.