Kernel Mode-Setting, GEM, DRI Progresses On FreeBSD


  • #11
    Originally posted by icyh
    Well now that FreeBSD is getting support, it should be about time for the linux folks to scrap it and write something new and completely incompatible... like they always do.
    I'm sure that if code were flowing in both directions, i.e. FreeBSD devs actually contributing to the development of the common code rather than only taking Linux code and porting/adapting it to run on FreeBSD, then Linux devs would think twice before writing something new and incompatible.



    • #12
      The DRM code has always been MIT-licensed, since it started out as shared code with the BSDs.



      • #13
        Originally posted by Shining Arcanine
        The other option is to modify the Radeon and Nouveau drivers to use GEM.
        Using this as an excuse to ask what is probably a rather basic question - why do modern graphics cards need such complicated heap management, rather than just, say, telling the kernel which areas of VRAM are available for textures and the like and letting the kernel set up a heap there itself? Or, for that matter, what makes GEM a better fit for some cards and TTM a better fit for others?



        • #14
          I'm not a DRM developer so this may be completely wrong, but I think most of this is accurate. I'm sure I'll be corrected if it isn't.

          Originally posted by michael-vb
          why do modern graphics cards need such complicated heap management, rather than just, say, telling the kernel which areas of VRAM are available for textures and the like and letting the kernel set up a heap there itself?
          This is exactly what TTM/GEM (kernel-mode graphics memory management in general) accomplishes. Previously, the user-space DRM bits managed allocations on the GPU, which meant that when you switched to a VT, everything had to be unloaded and marshalled back to the host so it could be restored when you came back to X. As for the kernel managing allocations on the GPU, I believe the TTM code includes a fair amount of common memory-management code, with hooks so that card-specific needs can be addressed.
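          To make that a bit more concrete, here is a rough sketch (my own illustration, not taken from any real driver) of user space asking the kernel DRM driver to allocate and map a buffer for it, using the generic "dumb buffer" ioctls exposed through libdrm. It assumes a KMS-capable driver behind /dev/dri/card0; note that the kernel, not the client, decides where the buffer actually lives.

          Code:
          /* Build with: cc demo.c $(pkg-config --cflags --libs libdrm) */
          #include <fcntl.h>
          #include <stdio.h>
          #include <string.h>
          #include <sys/mman.h>
          #include <unistd.h>
          #include <xf86drm.h>   /* drmIoctl(), struct drm_mode_*_dumb via drm.h */

          int main(void)
          {
              int fd = open("/dev/dri/card0", O_RDWR | O_CLOEXEC);
              if (fd < 0) { perror("open"); return 1; }

              /* Ask the kernel to allocate a buffer; it picks the placement. */
              struct drm_mode_create_dumb creq = { .width = 640, .height = 480, .bpp = 32 };
              if (drmIoctl(fd, DRM_IOCTL_MODE_CREATE_DUMB, &creq)) { perror("create"); return 1; }

              /* Ask for an offset we can mmap() through the DRM device node. */
              struct drm_mode_map_dumb mreq = { .handle = creq.handle };
              if (drmIoctl(fd, DRM_IOCTL_MODE_MAP_DUMB, &mreq)) { perror("map"); return 1; }

              void *map = mmap(NULL, creq.size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, mreq.offset);
              if (map == MAP_FAILED) { perror("mmap"); return 1; }
              memset(map, 0, creq.size);   /* CPU writes go through the kernel-managed mapping */

              munmap(map, creq.size);
              struct drm_mode_destroy_dumb dreq = { .handle = creq.handle };
              drmIoctl(fd, DRM_IOCTL_MODE_DESTROY_DUMB, &dreq);
              close(fd);
              return 0;
          }

          Before kernel memory management, the equivalent bookkeeping lived in the user-space DRI/DDX bits, which is part of what made VT switching so painful.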

          Originally posted by michael-vb
          Or, for that matter, what makes GEM a better fit for some cards and TTM a better fit for others?
          Work on TTM started before GEM, but its complexity meant development took a while, so Intel produced GEM in the interim as an alternative for their own hardware. GEM is simpler because it doesn't handle memory that may be accessible only to the device (i.e. dedicated VRAM), which doesn't exist on Intel IGPs anyway.

          Let the corrections begin.



          • #15
            Originally posted by michael-vb
            Using this as an excuse to ask what is probably a rather basic question - why do modern graphics cards need such complicated heap management, rather than just, say, telling the kernel which areas of VRAM are available for textures and the like and letting the kernel set up a heap there itself? Or, for that matter, what makes GEM a better fit for some cards and TTM a better fit for others?
            One thing to remember is that on some cards, different regions of memory have very different properties. For instance, frame buffers might be required to be allocated in one region, while read-only textures can be put in another region that allows significantly faster read access. Simple hardware like Intel's, which just uses system RAM, lacks many of these optimization features. There are also synchronization issues: the GPU can potentially be doing some complex scheduling of its own, and the CPU-side OS and the GPU need to make sure they don't get in each other's way.
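            As a rough illustration of the synchronization side (just a sketch, using the radeon driver's GEM ioctls as the example), the kernel exposes a "wait idle" call so the CPU doesn't touch a buffer the GPU is still working on; fd and bo_handle are assumed to come from an earlier open/GEM-create.

            Code:
            #include <stdint.h>
            #include <xf86drm.h>          /* drmIoctl() */
            #include <drm/radeon_drm.h>   /* struct drm_radeon_gem_wait_idle */

            /* Block until the GPU has finished every command that references the
             * buffer, so CPU reads/writes afterwards see consistent data. */
            static int wait_for_gpu(int fd, uint32_t bo_handle)
            {
                struct drm_radeon_gem_wait_idle req = { .handle = bo_handle };
                return drmIoctl(fd, DRM_IOCTL_RADEON_GEM_WAIT_IDLE, &req);
            }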



            • #16
              Almost all modern GPUs can address several types of memory, each with its own set of limitations. Some examples:

              - VRAM. This is local memory attached to the GPU. It's fast for the GPU to access, but relatively slow for the CPU (especially for reads). Additionally, the CPU cannot necessarily access the entire amount of VRAM; only the GPU can. As such, if a buffer is in a non-CPU-accessible region of VRAM, it has to be migrated by the GPU to a region of VRAM that the CPU can access, or to another memory type that the CPU can access.
              - AGP GART memory. These are pinned system pages that are mapped into a contiguous aperture provided by the AGP GART (Graphics Address Remapping Table) mechanism on the northbridge. Unfortunately, lots of AGP chipsets are buggy and AGP pages need to be uncached, so they are slow to access with the CPU.
              - GPU GART memory. These are pinned system pages that are mapped into a contiguous aperture provided by GART hardware on the GPU. Usually the GPU GART can support both cached and uncached pages by doing a snooped request for cached pages. Both kinds have their advantages: cached pages are faster for the CPU to access but slower for the GPU, and uncached pages are the opposite. Both are slower than VRAM from the GPU's perspective, but faster for the CPU.

              Taken together, the acceleration drivers have to decide what types of memory are best for the specific task at hand and make requests to the memory manager accordingly (a rough sketch of such a request follows below). But the pools are of limited size, so the memory manager has to do its best to fulfill those requests by migrating data around to keep everything running as optimally as possible, which is not an easy task.
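              To put that in code terms (again just a sketch using the radeon GEM create ioctl; the domain constants are real, but which domain a driver prefers for a given buffer is my own illustration), the driver passes an initial placement hint and the kernel memory manager is still free to migrate the buffer later if that pool fills up:

              Code:
              #include <stdint.h>
              #include <xf86drm.h>          /* drmIoctl() */
              #include <drm/radeon_drm.h>   /* RADEON_GEM_DOMAIN_*, drm_radeon_gem_create */

              /* Ask the kernel for a buffer object, hinting at a preferred pool. */
              static uint32_t create_bo(int fd, uint64_t size, uint32_t domain)
              {
                  struct drm_radeon_gem_create req = {
                      .size           = size,
                      .alignment      = 4096,
                      .initial_domain = domain,   /* a preference, not a guarantee */
                  };
                  if (drmIoctl(fd, DRM_IOCTL_RADEON_GEM_CREATE, &req))
                      return 0;
                  return req.handle;
              }

              /* A scanout/render target wants fast GPU access, so prefer VRAM;
               * something the CPU rewrites every frame is better off in GART/GTT:
               *   uint32_t scanout = create_bo(fd, 8 << 20, RADEON_GEM_DOMAIN_VRAM);
               *   uint32_t upload  = create_bo(fd, 1 << 20, RADEON_GEM_DOMAIN_GTT);
               */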


