Intel Working On TTM Integration For Discrete vRAM Management

  • Intel Working On TTM Integration For Discrete vRAM Management

    Phoronix: Intel Working On TTM Integration For Discrete vRAM Management

    More than a decade ago, when the open-source graphics driver stack was being modernized with kernel mode-setting and better support for OpenGL, composited desktops, and the like, TTM (Translation Table Maps) was born for managing GPU video RAM in the kernel's Direct Rendering Manager drivers. While Intel initially expressed interest in TTM, they ultimately decided to create GEM, the Graphics Execution Manager, to handle their video memory management needs. Now in 2021, with Intel aggressively pursuing discrete graphics, they are working on TTM support...

  • #2
    Is TTM as good as GEM?
    Does GEM have advantages over TTM?

    The article says that GEM was developed after TTM with lessons learned from TTM.

    • #3
      Originally posted by uid313 View Post
      Is TTM as good as GEM?
      Does GEM have advantages over TTM?

      The article says that GEM was developed after TTM with lessons learned from TTM.
      TTM is actively used by other discrete GPU drivers, so it possibly just boils down to suitability for discrete GPUs vs. integrated GPUs, which share a memory bus with the CPU.
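
      To make that concrete, here's a rough userspace-level sketch of how the difference shows up through the GEM create ioctls. This is my own illustration, not something from the article: the struct and ioctl names are from the kernel UAPI headers as I remember them, and /dev/dri/renderD128 is just a placeholder for whichever render node your card exposes, so treat it as a sketch rather than reference code. On a TTM-backed discrete driver like amdgpu the caller picks a memory domain (VRAM, GTT), while the classic i915 GEM create takes only a size, because everything is shmem-backed system memory anyway.

      /* Rough illustration only; double-check against the headers in your tree. */
      #include <fcntl.h>
      #include <stdio.h>
      #include <sys/ioctl.h>
      #include <unistd.h>
      #include <drm/amdgpu_drm.h>   /* union drm_amdgpu_gem_create, AMDGPU_GEM_DOMAIN_* */
      #include <drm/i915_drm.h>     /* struct drm_i915_gem_create */

      int main(void)
      {
          int fd = open("/dev/dri/renderD128", O_RDWR); /* placeholder render node */
          if (fd < 0)
              return 1;

          /* amdgpu (TTM underneath): ask for device-local VRAM, with GTT
           * (system memory mapped through the GPU's page tables) as a fallback. */
          union drm_amdgpu_gem_create amd = { 0 };
          amd.in.bo_size   = 1 << 20;
          amd.in.alignment = 4096;
          amd.in.domains   = AMDGPU_GEM_DOMAIN_VRAM | AMDGPU_GEM_DOMAIN_GTT;
          if (ioctl(fd, DRM_IOCTL_AMDGPU_GEM_CREATE, &amd) == 0)
              printf("amdgpu BO %u placed in VRAM|GTT\n", amd.out.handle);

          /* Classic i915 GEM create: a size and nothing else, no notion of where
           * the pages live; fine for an iGPU, not enough for a dGPU with VRAM. */
          struct drm_i915_gem_create gem = { .size = 1 << 20 };
          if (ioctl(fd, DRM_IOCTL_I915_GEM_CREATE, &gem) == 0)
              printf("i915 BO %u in system memory\n", gem.handle);

          close(fd);
          return 0;
      }

      Only the ioctl matching the driver behind the fd will actually succeed, of course. And if I recall correctly, i915 has been growing a create_ext ioctl with memory regions for exactly this reason, which is where the TTM work in the article ties in.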

      • #4
        Originally posted by uid313 View Post
        Is TTM as good as GEM?
        Does GEM have advantages over TTM?

        The article says that GEM was developed after TTM with lessons learned from TTM.
        If they're moving to TTM now, they must feel that TTM is on par with GEM or better, and that TTM is important for good dGPU performance. Otherwise they wouldn't be making this change. Anyway, we'll have to wait and see.

        • #5
          Some fun history facts:

          Thomas Hellström was the original creator of TTM (working for Tungsten Graphics at the time), and its maintainer for many years.

          TTM was originally created in 2006, around 2 years before KMS (or GEM).

          TTM was originally created for and first worked with the i915 kernel driver. For $REASONS, Intel decided to invent GEM instead of going with TTM, and the i915 TTM support was never merged into the upstream Linux kernel tree. Instead, the first upstream kernel driver to use TTM was radeon in 2009.

          • #6
            Are they moving completely to TTM, or will only their dGPU part use TTM while the iGPU keeps using GEM?

            • #7
              Originally posted by MrCooper View Post
              Some fun history facts:

              Thomas Hellström was the original creator of TTM (working for Tungsten Graphics at the time), and its maintainer for many years.

              TTM was originally created in 2006, around 2 years before KMS (or GEM).

              TTM was originally created for and first worked with the i915 kernel driver. For $REASONS, Intel decided to invent GEM instead of going with TTM, and the i915 TTM support was never merged into the upstream Linux kernel tree. Instead, the first upstream kernel driver to use TTM was radeon in 2009.
              Oh, the good times. Back then I gave it 5-8 years before they gave up and figured out that going lone wolf in the Linux world isn't the best way. Nice to see more cross-vendor code sharing.
              But then I thought the same thing when they introduced NIR, which pretty much consumed everything.

              • #8
                Originally posted by Serafean View Post
                Oh, the good times. Back then I gave it 5-8 years before they gave up and figured out that going lone wolf in the Linux world isn't the best way. Nice to see more cross-vendor code sharing.
                But then I thought the same thing when they introduced NIR, which pretty much consumed everything.
                Isn't there lots of evidence that "going lone wolf" pays off on a regular basis? It's not like it cost anyone else anything that they developed GEM, and they got the benefit of having the original developers be on their team (not to be underestimated).

                I feel like a lot of claims of "fragmentation" and "NIH syndrome" come from people who have never worked on a professional software development team, and who don't understand the tradeoffs between adapting something you didn't write, and writing something different.
                Last edited by microcode; 18 May 2021, 11:53 AM.

                • #9
                  Originally posted by microcode View Post

                  Isn't there lots of evidence that "going lone wolf" pays off on a regular basis? It's not like it cost anyone else anything that they developed GEM, and they got the benefit of having the original developers be on their team (not to be underestimated).
                  It always pays off in the short to mid term. When you start talking long term, I'd like to give the example of the r300g and r600g Mesa drivers. Those would be long dead without being completely integrated into shared infrastructure.
                  Same with WiFi drivers: all the drivers that once used custom 802.11 stacks are dead (looking at madwifi), while those using the shared {mac,cfg}80211 stack are still ticking along just fine (looking at my rt2800usb-based router running linux-5.10).

                  Regarding cost, I'd like to take a page out of the economist's textbook: foregone improvements, by analogy with foregone earnings. Had they been sharing from the beginning, issues might have been identified and fixed more quickly, and new use cases supported earlier.

                  • #10
                    Originally posted by Serafean View Post
                    Regarding cost, I'd like to take a page out of the economist's textbook: foregone improvements, by analogy with foregone earnings. Had they been sharing from the beginning, issues might have been identified and fixed more quickly, and new use cases supported earlier.
                    It really doesn't work like that with software, nor in business. If that were some ironclad law of economics, nobody would be self-insured and no insurance company would be without reinsurance. But lo and behold, U-Haul self-insures its vehicles: over multiple vehicle lifecycles, retaining certificates of self-insurance ends up less expensive than third-party auto insurance, helped along by registering the vehicles in a low-cost jurisdiction (Arizona). Here in Wyoming, a $200,000 USD cash deposit, surety bond, or security can entitle you to self-insure 25 vehicles, then $100 per vehicle in excess of 25. If your business can take on debt, sell equity, or simply has the cash on hand to resolve civil disputes arising from auto accidents, then it makes sense to take on the risk directly.

                    There are many, many cases where shared infrastructure is more of a risk than an asset; you can't apply some blanket judgement to the entire idea.
                    Last edited by microcode; 18 May 2021, 01:49 PM.
