"Ask ATI" dev thread

  • deanjo
    Senior Member
    • May 2007
    • 6501

    Originally posted by V!NCENT View Post
    You know what I want? A function in Gallium3D that lets the desktop be run entirely by my onboard ATI GPU and activates my HD4870x2 whenever extra power is needed. Otherwise just switch off the freaking card.

    Now that would be proper power management!
    Ya, that's been tried in other OSes and doesn't work well. It's not as simple as it sounds with the current hardware. Alternatively you could use an IGP for 2D and a separate card for 3D (hmmm, sounds like a Voodoo card scenario, doesn't it?).

    • V!NCENT
      Senior Member
      • Aug 2009
      • 2226

      Originally posted by deanjo View Post
      Ya, that's been tried in other OSes and doesn't work well. It's not as simple as it sounds with the current hardware. Alternatively you could use an IGP for 2D and a separate card for 3D (hmmm, sounds like a Voodoo card scenario, doesn't it?).
      That's why I was bringing Gallium3D into it. X.org and DRI2 just keep on talking to Gallium3D, and Gallium3D decides which state tracker to talk to.

      The real problem, however, at least with desktop PCs, is that the IGP either has to be hooked up to the screen as well, or (the proper solution) has to dump its output into another framebuffer: the one on the high-end graphics card. But then you can't shut that card down.

      OK, now that I've thought about it, I realise this is not really going to be easy xD

      • bridgman
        AMD Linux
        • Oct 2007
        • 13188

        Yeah, the only approaches that seem to work are (a) add programmable switches to select between the outputs of the IGP and discrete GPUs as needed, or (b) hook the displays up to the IGP full time and blit results from the discrete GPU to the IGP framebuffer for display.
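
        A rough sketch of what approach (b) could look like as a per-frame loop is below. It is only a toy model of the data flow, not AMD's implementation; every function and type name in it is a hypothetical placeholder rather than a real driver API.
        Code:
        #include <stdio.h>

        /* Toy model of approach (b): the IGP owns the display, the discrete
           GPU renders, and a blit engine copies each finished frame into the
           IGP's scanout buffer. All names here are hypothetical placeholders. */
        typedef struct { int id; } framebuffer;

        static framebuffer *dgpu_render(framebuffer *fb)
        {
            /* Pretend the discrete GPU drew a frame into fb. */
            printf("dGPU: rendered frame into buffer %d\n", fb->id);
            return fb;
        }

        static void blit_engine_copy(const framebuffer *src, framebuffer *dst)
        {
            /* Runs on the GPU's blit engine; a CPU copy would be far too slow. */
            printf("blit: buffer %d -> IGP scanout %d\n", src->id, dst->id);
        }

        static void igp_page_flip(const framebuffer *fb)
        {
            /* Only the IGP is wired to the display, so it performs the flip. */
            printf("IGP: scanning out buffer %d\n", fb->id);
        }

        int main(void)
        {
            framebuffer dgpu_buf = { 1 }, igp_scanout = { 2 };

            for (int frame = 0; frame < 3; frame++) {
                blit_engine_copy(dgpu_render(&dgpu_buf), &igp_scanout);
                igp_page_flip(&igp_scanout);
            }
            return 0;
        }
        The point of the blit step is that it never touches the CPU; as noted in the next post, the cost shows up as competition for GPU time instead.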

        • V!NCENT
          Senior Member
          • Aug 2009
          • 2226

          Originally posted by bridgman View Post
          Yeah, the only approaches that seem to work are (a) add programmable switches to select between the outputs of the IGP and discrete GPUs as needed, or (b) hook the displays up to the IGP full time and blit results from the discrete GPU to the IGP framebuffer for display.
          How much performance loss or latency would you get transferring a pixmap/image/whatever from the high-end card to the IGP's framebuffer? Is it significant? And what if you optimised that in assembly?

          • bridgman
            AMD Linux
            • Oct 2007
            • 13188

            The transfer would be done by the GPU (it would take much too long on CPU) so the performance hit would mostly come from competition for the GPU. A simple implementation could have overhead of 25% or higher (2560 x 1600 screen, 60 fps refresh means moving almost 1GB/sec to the IGP) but I think it could be optimized to a lot less. I think you could hide the latency if you were triple-buffering but probably not if you were double-buffering.
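
            That figure is simple to verify; the short C program below just reproduces the arithmetic (4 bytes per pixel assumed for 32-bit color):
            Code:
            #include <stdio.h>

            int main(void)
            {
                const double width  = 2560;          /* pixels */
                const double height = 1600;
                const double bytes_per_pixel = 4.0;  /* assuming 32-bit color */
                const double fps = 60.0;             /* refresh rate */

                double per_frame = width * height * bytes_per_pixel;
                double per_sec   = per_frame * fps;

                printf("%.1f MB per frame\n", per_frame / (1024.0 * 1024.0));
                printf("%.2f GB/s to the IGP\n",
                       per_sec / (1024.0 * 1024.0 * 1024.0));
                /* Prints ~15.6 MB per frame and ~0.92 GB/s: "almost 1GB/sec". */
                return 0;
            }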

            • cutterjohn
              Senior Member
              • Mar 2009
              • 341

              Originally posted by energyman View Post
              Qaridarium, don't get excited too soon. Evergreen is made by TSMC.

              And their 40nm process is extremely leaky, so seeing good power numbers would mean TSMC has solved a problem that has haunted them for the last 12 months.
              *snicker* Right, the root cause of the nVidia GPU problem (Pb-free solder, kind of) fixes their half---ed 40nm process... right.

              Switching between an IGP and a discrete GPU has never seemed to work well unless there was a way to physically shut down the discrete GPU, which AFAIK was only ever really done on the ASUS netbook with an Intel IGP and a discrete nVidia 9300M via a physical/soft switch.

              I've read about other designs attempting to meld a same-company IGP and discrete GPU together to improve performance, but I can't recall any notebook that actually implemented this.

              (Oh God, my eyes are bleeding worse than when reading the World of Gothic English forums...)

              • energyman
                Senior Member
                • Jul 2008
                • 1755

                I am not saying that nVidia's problems are caused by TSMC's leaky 40nm.

                I never said that. But hey, you are free to not read whatever you want.

                Fact is, all cards produced at 40nm use a lot more power than most people expected. TSMC's process is known to be leaky, and even TSMC has admitted that. Leaky means hot. Leaky is BAD.

                This problem is well known.

                And it has nothing to do with nVidia's bumpgate CF.

                • V!NCENT
                  Senior Member
                  • Aug 2009
                  • 2226

                  Hey, I have a question:
                  Does ATI have any plans to make future cards more documentation- and FLOSS-friendly? Or is that confidential?

                  I'm asking because releasing documentation was said to be kind of a tricky thing that had to be done very carefully.

                  • bridgman
                    AMD Linux
                    • Oct 2007
                    • 13188

                    I don't expect things to get a lot easier from an IP perspective.

                    The main improvement will be that we are now "caught up" with new GPU introductions and able to work on open source docs and support while our hardware and software engineers still have a good chance of remembering what they had to do to make the chip work.

                    • V!NCENT
                      Senior Member
                      • Aug 2009
                      • 2226

                      Originally posted by bridgman View Post
                      I don't expect things to get a lot easier from an IP perspective.

                      The main improvement will be that we are now "caught up" with new GPU introductions and able to work on open source docs and support while our hardware and software engineers still have a good chance of remembering what they had to do to make the chip work.
                      Thanks for the info! So there are no separate DRM circuits and such?
