TTM-based OpenChrome In A Working State

  • TTM-based OpenChrome In A Working State

    Phoronix: TTM-based OpenChrome In A Working State

    With VIA Technologies delivering on their promises by finally releasing 2D/3D documentation and driver code, and Tungsten Graphics creating a new VIA 3D stack for a client, there has been a lot to report on in the VIA Linux scene. Tungsten Graphics and VIA are both interested in creating a Gallium3D driver for the Chrome 9 series, Tungsten already created a feature-rich DRM and Mesa driver, and there is a lot of other work going on too...

    http://www.phoronix.com/vr.php?view=NzAxMQ

  • #2
    What does it mean for a driver to only use TTM? I thought that either GEM or TTM runs partially in the kernel, or is that only needed with KMS?

    Lol, I'm one of those who are getting kind of confused by all of this.

    Either way, this is great news. Hope to see some testing and benchmarks soon.

    Comment


    • #3
      Originally posted by Knuckles
      What does it mean for a driver to only use TTM? I thought that either GEM or TTM runs partially in the kernel, or is that only needed with KMS?

      I'm a bit confused on the real differences myself. TTM and GEM are both memory managers for graphics memory, which comprise both an API and an actual implementation (I think). So the driver is coded against the TTM or GEM APIs... I think. Kind of like the difference between Qt or GTK+ -- they do the same thing, but they're totally different codebases.

      What confuses me is why drivers can't just have a few bits rewritten to work against GEM, especially given that it's in the kernel now and TTM is not. What does TTM do that makes it so hard to switch to GEM?
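To put the "coded against the TTM or GEM APIs" idea in concrete terms, here's a minimal C sketch. All the names here are invented for illustration; real TTM and GEM expose much richer interfaces, but from the driver's side each one is roughly a set of buffer-object operations like this:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical sketch: the driver programs against a small set of
 * operations (create a buffer object, map it for CPU access, destroy
 * it), and TTM or GEM each provide their own, incompatible
 * implementation of those operations. */
struct mm_ops {
    void *(*bo_create)(size_t size);
    void *(*bo_map)(void *bo);
    void  (*bo_destroy)(void *bo);
};

/* Trivial stand-in "backend": a real manager also handles placement,
 * migration, and fencing, but the driver-facing shape is similar. */
static void *fake_bo_create(size_t size) { return calloc(1, size); }
static void *fake_bo_map(void *bo)       { return bo; }
static void  fake_bo_destroy(void *bo)   { free(bo); }

static const struct mm_ops fake_mm = {
    .bo_create  = fake_bo_create,
    .bo_map     = fake_bo_map,
    .bo_destroy = fake_bo_destroy,
};

/* The driver only ever calls through the ops table, so in principle it
 * could be retargeted from one manager to the other; in practice the
 * two managers' semantics (where buffers live, when they can move)
 * differ, which is the hard part. */
static int upload_vertices(const struct mm_ops *mm, const float *v, size_t n)
{
    void *bo = mm->bo_create(n * sizeof(float));
    if (!bo)
        return -1;
    float *map = mm->bo_map(bo);
    memcpy(map, v, n * sizeof(float));
    mm->bo_destroy(bo);
    return 0;
}
```

So "porting a driver to GEM" means swapping out which ops table (and which semantics) sit behind calls like these, not just renaming a few functions.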

      Comment


      • #4
        GEM is only in the kernel for Intel, not for any other GPU family. The implementation for Intel IGPs is felt to be fairly IGP-specific and not suitable for GPUs with dedicated video memory (which is most of them). This is, of course, hotly debated. Also, the GEM API defines many of its calls as driver-specific, so even "having the GEM API implemented" doesn't mean you have something directly useful or portable to another GPU family.

        Finally, the changes to make use of a full memory manager tend to be fairly significant. The big issue is that buffers can move around dynamically and you don't know where they are until you actually go to use them; most drivers were written assuming that the buffers stay put once they are allocated.
        Last edited by bridgman; 01-21-2009, 10:56 AM.
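        A rough sketch of that last point, with entirely invented names: under a full memory manager the driver keeps a handle rather than a stable address, and has to re-validate the buffer on every submission to learn where it currently lives, instead of caching an offset at allocation time the way older drivers did.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical buffer object under a memory manager: the driver holds
 * a handle, not a fixed GPU address. */
struct bo {
    uint32_t handle;
    uint64_t gpu_offset;   /* only meaningful between validate and use */
};

/* Stand-in for the manager deciding placement at submission time; a
 * real TTM-style validate may migrate the buffer between VRAM and
 * system RAM before pinning it. */
static uint64_t mm_validate(struct bo *b, uint64_t chosen_placement)
{
    b->gpu_offset = chosen_placement;
    return b->gpu_offset;
}

/* Managed pattern: look the address up at use time, every time, and
 * patch it into the command stream. The old pattern -- caching
 * gpu_offset once at allocation -- breaks as soon as the buffer moves. */
static uint64_t emit_draw(struct bo *vertices, uint64_t placement)
{
    uint64_t addr = mm_validate(vertices, placement);
    /* ... write addr into the command buffer here ... */
    return addr;
}
```

The same buffer can come back at a different address on the next submission, which is exactly the assumption older drivers never had to handle.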

        Comment


        • #5
          OK, so what's the decision? It seems like a back-and-forth. Is TTM going away, with GEM gaining TTM-like features? Or does TTM "survive" and get into the kernel for ATI, VIA, S3, perhaps NVIDIA, etc., alongside GEM? Are they going to try to get TTM into the kernel anyway?

          Comment


          • #6
            Originally posted by bridgman
            GEM is only in the kernel for Intel, not for any other GPU family. The implementation for Intel IGPs is felt to be fairly IGP-specific and not suitable for GPUs with dedicated video memory (which is most of them). This is, of course, hotly debated. Also, the GEM API defines many of its calls as driver-specific, so even "having the GEM API implemented" doesn't mean you have something directly useful or portable to another GPU family.
            I haven't seen anything on any mailing lists or forums detailing the specific issues. Got any links? I'm just interested in knowing why GEM is so Intel/IGP specific or why the API is so unsuitable to other drivers. Looking at the actual i915 GEM code isn't making it particularly clear to me, but then I've never hacked GPU drivers before.

            Thank you!

            Comment


            • #7
              GEM makes a lot of assumptions that the CPU and GPU share a single pool of RAM, which is true on IGPs but not on discrete cards.
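              To illustrate what that assumption buys you (a purely invented sketch, not real driver code): on an IGP an upload can be a plain CPU write into GPU-visible memory, while a discrete card needs the data staged in system RAM and then transferred across the bus into its own VRAM.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

static uint8_t shared_pool[64];    /* IGP: CPU and GPU both see this */
static uint8_t discrete_vram[64];  /* discrete card's private memory */

/* IGP-style upload: just a CPU write -- roughly what a unified-memory
 * design like GEM's can assume. */
static void igp_upload(const uint8_t *src, size_t n)
{
    memcpy(shared_pool, src, n);
}

/* Discrete-style upload: stage in CPU-visible memory, then copy into
 * VRAM (the second memcpy stands in for a DMA transfer the manager
 * would have to schedule and fence). */
static void discrete_upload(const uint8_t *src, size_t n)
{
    uint8_t staging[64];
    memcpy(staging, src, n);
    memcpy(discrete_vram, staging, n);
}
```

A manager built around the first pattern has nothing to say about when and how the second copy happens, which is where the "IGP-specific" criticism comes from.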

              Comment


              • #8
                NVIDIA now sucks even more

                Wow! Those guys suck big time now.
                How long do they plan to be lame like that?

                Comment
