"Ask ATI" dev thread
Originally posted by deanjo: Ya, that's been tried in other OSes and it doesn't work well. It's not as simple as it sounds with the current hardware. Alternatively you could use an IGP for 2D and a separate card for 3D (hmmm, sounds like a Voodoo card scenario, doesn't it?).
The real problem, however, at least with desktop PCs, is that the IGP also has to be hooked up to the screen, or has to (the proper solution) dump its output to another framebuffer: the one on the high-end graphics card. But then you can't shut down that card.
OK, now that I've come to think about it, I realise this is not really going to be easy xD
Originally posted by bridgman: Yeah, the only approaches that seem to work are (a) add programmable switches to select between the outputs of the IGP and discrete GPUs as needed, or (b) hook the displays up to the IGP full time and blit results from the discrete GPU to the IGP framebuffer for display.
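In case it helps picture approach (b), here is a minimal sketch of the per-frame data flow, assuming 32-bit pixels. The struct and function names are made up for illustration (not a real driver API), and a real driver would do the copy with a GPU-side DMA blit rather than a CPU memcpy; the point is only that the display scans out of the IGP's memory while the discrete GPU does the rendering.
[CODE]
#include <stdint.h>
#include <string.h>

/* Hypothetical stand-in type -- not a real driver structure. */
struct framebuffer {
    uint32_t *pixels;      /* 32-bit pixels      */
    size_t    size_bytes;  /* width * height * 4 */
};

/* Approach (b): the display is wired to the IGP, so each frame the
 * discrete GPU finishes rendering is copied ("blitted") into the
 * IGP's scanout framebuffer.  memcpy only illustrates the data flow;
 * a real driver would issue this as a GPU DMA transfer. */
void present_via_igp(const struct framebuffer *rendered,
                     struct framebuffer *igp_scanout)
{
    memcpy(igp_scanout->pixels, rendered->pixels, rendered->size_bytes);
}
[/CODE]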
The transfer would be done by the GPU (it would take much too long on the CPU), so the performance hit would mostly come from competition for the GPU. A simple implementation could have an overhead of 25% or higher (a 2560x1600 screen at a 60 fps refresh means moving almost 1 GB/sec to the IGP), but I think it could be optimized to a lot less. I think you could hide the latency if you were triple-buffering, but probably not if you were double-buffering.
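For reference, a quick back-of-the-envelope check of that bandwidth figure, assuming a 32-bit (4-byte) pixel format:
[CODE]
#include <stdio.h>

int main(void)
{
    /* Bandwidth needed to copy every rendered frame from the discrete
     * GPU into the IGP framebuffer at 2560x1600, 60 fps, 4 bytes/pixel. */
    const double width = 2560.0, height = 1600.0;
    const double bytes_per_pixel = 4.0, fps = 60.0;

    double frame_bytes = width * height * bytes_per_pixel;  /* ~16.4 MB  */
    double per_second  = frame_bytes * fps;                 /* ~983 MB/s */

    printf("per frame:  %.1f MB\n",   frame_bytes / 1e6);
    printf("per second: %.2f GB/s\n", per_second  / 1e9);
    return 0;
}
[/CODE]
That works out to roughly 0.98 GB/sec, which matches the "almost 1 GB/sec" figure above.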
Originally posted by energyman: Qaridarium, don't get excited too soon. Evergreen is made by TSMC.
And their 40nm process is extremely leaky. So seeing good power numbers would mean that TSMC has solved a problem that has been haunting them for the last 12 months.
Switching between IGP & discrete has never seemed to work well unless it involved a way to physically shut down the discrete GPU, which AFAIK was only ever really done on the ASUS netbook with an Intel IGP & an nVidia discrete 9300M via a physical/soft switch.
I've read about other designs attempting to meld same-company IGP & discrete GPUs together to improve performance, but I can't recall any notebook that actually implemented this.
(Oh God, my eyes are bleeding worse than reading World of Gothic English forums...)
I am not saying that nvidia's problems are caused by TSMC's leaky 40nm.
I never said that. But hey, you are free to not read whatever you want.
Fact is, all cards produced at 40nm use a lot more power than most people expected. TSMC's process is known to be leaky - even TSMC admitted that. Leaky means hot. Leaky is BAD.
This problem is well known.
And it has nothing to do with Nvidia's bumpgate CF.
I don't expect things to get a lot easier from an IP perspective.
The main improvement will be that now we are "caught up" with new GPU introductions and able to work on open source docs and support while our hardware and software engineers still have a good chance of remembering what they had to do in order to make the chip work.
Originally posted by bridgman: I don't expect things to get a lot easier from an IP perspective.
The main improvement will be that now we are "caught up" with new GPU introductions and able to work on open source docs and support while our hardware and software engineers still have a good chance of remembering what they had to do in order to make the chip work.
So there are no separate DRM circuits and such?