Going Over DRM Render/Mode-Set Nodes


  • Going Over DRM Render/Mode-Set Nodes

    Phoronix: Going Over DRM Render/Mode-Set Nodes

    For those curious about the DRM render/mode-set nodes work that was successfully accomplished via GSoC by David Herrmann, he has written another blog post at length about his accomplishments...


  • #2
    David's articles are top-notch. Because he writes the articles with such clarity and expertise, they're quite easy and quick to read. If you have not read his articles on VT, sessions, seats, and DRM nodes, but were always interested, I highly recommend doing so. Start with https://dvdhrm.wordpress.com/2013/08...ment-on-linux/.

Don't fear that these articles may be too in-depth for you. They do go into depth as the articles progress, but David sets a solid foundation.

    Great job, David!



    • #3
      Originally posted by FourDMusic View Post
      David's articles are top-notch. Because he writes the articles with such clarity and expertise, they're quite easy and quick to read. If you have not read his articles on VT, sessions, seats, and DRM nodes, but were always interested, I highly recommend doing so. Start with https://dvdhrm.wordpress.com/2013/08...ment-on-linux/.

Don't fear that these articles may be too in-depth for you. They do go into depth as the articles progress, but David sets a solid foundation.

      Great job, David!
      Hey David... will this help to enable Optimus? Use the Mode-Set nodes on the intel graphics, use the render nodes on the nvidia graphics, dma-buf to handle zero-copy? I was originally just thinking of this to help with GPGPU but it should also help to compartmentalize Optimus / PowerXPress, no?
      All opinions are my own not those of my employer if you know who they are.



      • #4
        Originally posted by Ericg View Post
        Hey David... will this help to enable Optimus? Use the Mode-Set nodes on the intel graphics, use the render nodes on the nvidia graphics, dma-buf to handle zero-copy? I was originally just thinking of this to help with GPGPU but it should also help to compartmentalize Optimus / PowerXPress, no?
Since I will soon get my first Optimus laptop, can you explain this zero-copy thing? I can't imagine the discrete GPU (nvidia) doing its rendering in its dedicated memory and the intel GPU scanning out that memory without first copying the framebuffer into its own memory region (in main memory).
Thanks.



        • #5
          Originally posted by Drago View Post
Since I will soon get my first Optimus laptop, can you explain this zero-copy thing? I can't imagine the discrete GPU (nvidia) doing its rendering in its dedicated memory and the intel GPU scanning out that memory without first copying the framebuffer into its own memory region (in main memory).
Thanks.
True, the memory on the graphics card itself wouldn't be, but the buffer it submits to could be shared between devices.



          • #6
            Originally posted by Ericg View Post
True, the memory on the graphics card itself wouldn't be, but the buffer it submits to could be shared between devices.
Can you explain that? The intel GPU drives the connector and needs the scanout buffer to be in its address space (physically in main memory). The nvidia GPU does its rendering in its dedicated memory. How can this be zero-copy? Thanks.



            • #7
              Originally posted by Drago View Post
Can you explain that? The intel GPU drives the connector and needs the scanout buffer to be in its address space (physically in main memory). The nvidia GPU does its rendering in its dedicated memory. How can this be zero-copy? Thanks.
Well, the image has to be copied to the backbuffer before it can be displayed, does it not? I may be leaning more towards Wayland's needs and situations than X's, as Wayland just wants buffers full of pixels to display.



              • #8
                Originally posted by Ericg View Post
                Hey David... will this help to enable Optimus? Use the Mode-Set nodes on the intel graphics, use the render nodes on the nvidia graphics, dma-buf to handle zero-copy? I was originally just thinking of this to help with GPGPU but it should also help to compartmentalize Optimus / PowerXPress, no?
Well, it kind of helps. But Optimus is a black box to me, and I shouldn't pretend to know it and spread wrong facts. But at least render nodes simplify the decision of which node to use for rendering. In the optimal case, you would have three nodes: one for the display controller, one for the integrated GPU, and one for the power-hungry GPU. But without any more information on Optimus internals, we will probably never really support this.

                But zero-copy is totally dma-buf's job. If either GPU supports scanning out of a shared buffer, you get zero-copy. If the display-controller cannot do that, you obviously need to copy it.



                • #9
                  Originally posted by dvdhrm View Post
Well, it kind of helps. But Optimus is a black box to me, and I shouldn't pretend to know it and spread wrong facts. But at least render nodes simplify the decision of which node to use for rendering. In the optimal case, you would have three nodes: one for the display controller, one for the integrated GPU, and one for the power-hungry GPU. But without any more information on Optimus internals, we will probably never really support this.

                  But zero-copy is totally dma-buf's job. If either GPU supports scanning out of a shared buffer, you get zero-copy. If the display-controller cannot do that, you obviously need to copy it.
This is exactly why I can't imagine how the intel GPU can scan out the nvidia GPU's dedicated memory. Can part of the dedicated memory be mapped into a range of main memory, so the intel GPU can scan it out without a copy?



                  • #10
                    Originally posted by Drago View Post
This is exactly why I can't imagine how the intel GPU can scan out the nvidia GPU's dedicated memory. Can part of the dedicated memory be mapped into a range of main memory, so the intel GPU can scan it out without a copy?
To further complicate things, you have AMD's HSA computing, where "system memory" and "GPU memory" are one and the same. Intel (to my knowledge) doesn't have anything similar YET, but if AMD supports it then Intel should, unless they want to be at a distinct disadvantage.

