I understand that a lot of computations are done on the CPU, then the results are sent to the GPU. I was talking more about stuff like this though:
Watching a 1080p video normally, and watching a 1080p video with "hardware acceleration".
I assume the first means that all decoding and graphics processing is done on the CPU (assuming a non-OGL rendering method), while the second means using the GPU for both operations. If that's true, why wouldn't the GPU be used in the first place? It's obviously built for tasks like these, whereas the CPU (for the most part) is not.