Damage Rectangle Interface Proposed For Atomic DRM Drivers


  • #11
    Originally posted by Hi-Angel View Post
    I think it's possible to make a heuristic that chooses whether it's worth sending a few damaged regions or rather whole screen.
    XDamage has been around since 2003... just to give a hint that application-side infrastructure has been around a long time and we're talking about this like it's something new... well... it is, in DRM. And the original concept of XDamage wasn't entirely about saving bandwidth - it was also about discarding work that would never be seen, in the same way occlusion culling works in GPUs. So both client-side (X) and hardware side (GPU) have had tiling and occlusion to control bandwidth, redundant work, and latency, and only now 1.5 decades later is DRM getting on board... that should tell you how likely it is that the end-to-end benefits of this technology will be rapidly adopted... :-(

    It's a lot trickier than you might think. As I pointed out, these techniques have been used for a long time in GPUs and in rendering libraries client side, but it also requires some help from the application to get any benefit from it.

    Back to my browser example - Firefox has decided to start rendering full frames and pushing them to the surface regardless of how much of the screen was altered. When Firefox is fullscreen, it's going to be pushing full-resolution frames at 60 Hz non-stop regardless of whether anything has changed at all. So if the OS wants to reduce power or latency by controlling bandwidth, then DRM is going to need at the very least some frame-to-frame comparisons to detect the regions to send. Same thing if the desktop compositor is sending in full frames all the time regardless of how much has changed. (My 5-year-old Panther Point i965 supports Display Link Power Management and Framebuffer Compression to minimise bandwidth so the DLPM can engage as long as possible, and I assume it differences the frames as well as compressing the differences.)
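    To make the frame-differencing idea concrete, here's a rough sketch of what a compositor (or anything stuck behind a client that always pushes full frames) could do: diff the new frame against the previous one tile by tile, and if only a small fraction changed, hand the driver a short list of dirty rectangles, otherwise just mark the whole frame. The tile size, the 60% threshold and the helper names are invented for illustration, and it targets the long-standing dirty-FB ioctl that libdrm exposes as drmModeDirtyFB() (honoured by only some drivers), not the new atomic damage property this article is about.

    Code:
/* Sketch only.  Build with: cc -c -I/usr/include/libdrm damage.c */
#include <stdint.h>
#include <string.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

#define TILE      64
#define MAX_CLIPS 64

/* Compare the current frame against the previous one tile by tile and emit
 * one clip rectangle per changed tile.  Returns the clip count, or -1 when
 * the damage is too fragmented to be worth enumerating. */
static int find_damage(const uint32_t *prev, const uint32_t *cur,
                       int width, int height, drmModeClip *clips)
{
    int n = 0;
    for (int ty = 0; ty < height; ty += TILE) {
        for (int tx = 0; tx < width; tx += TILE) {
            int th = (ty + TILE > height) ? height - ty : TILE;
            int tw = (tx + TILE > width)  ? width  - tx : TILE;
            int changed = 0;
            for (int y = 0; y < th && !changed; y++) {
                size_t off = (size_t)(ty + y) * width + tx;
                changed = memcmp(prev + off, cur + off,
                                 (size_t)tw * sizeof(uint32_t)) != 0;
            }
            if (!changed)
                continue;
            if (n == MAX_CLIPS)
                return -1;                      /* too fragmented, give up */
            clips[n].x1 = tx;      clips[n].y1 = ty;
            clips[n].x2 = tx + tw; clips[n].y2 = ty + th;
            n++;
        }
    }
    return n;
}

/* Heuristic: send a handful of dirty clips when little changed, otherwise
 * just flush the whole frame as a single rectangle. */
void flush_damage(int drm_fd, uint32_t fb_id, const uint32_t *prev,
                  const uint32_t *cur, int width, int height)
{
    drmModeClip clips[MAX_CLIPS];
    int n = find_damage(prev, cur, width, height, clips);

    if (n == 0)
        return;                                 /* nothing changed at all */

    long damaged = 0;
    for (int i = 0; i < n; i++)
        damaged += (long)(clips[i].x2 - clips[i].x1) *
                   (clips[i].y2 - clips[i].y1);

    if (n < 0 || damaged * 10 > (long)width * height * 6) {
        drmModeClip full = { 0, 0, (uint16_t)width, (uint16_t)height };
        drmModeDirtyFB(drm_fd, fb_id, &full, 1);           /* whole screen */
    } else {
        drmModeDirtyFB(drm_fd, fb_id, clips, (uint32_t)n); /* changed tiles only */
    }
}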

    One of the reasons I've never used Unity or GNOME 3+ for any significant time is that their compositors use far more CPU than KWin, XFCE or Compton. I haven't looked, but if they're burning 14% of one core and keeping the GPU warm when literally nothing is changing on my 5520x2160 dual-monitor screen, then they are very plainly not limiting themselves to updating only the changed screen regions.

    On the Application front, Firefox is definitely stepping in the wrong direction. They're going to screw users' mobile battery life pushing all that data to the compositor. Or maybe they assume the compositor will bypass them like a video player or a game.... hard to say... but on top of that they're also moving to a layout engine that uses 3-4x the compute to get 2x the performance so that is also going to wreck battery life.

    It'll be interesting to see how DRM damage rectangles get properly leveraged by applications. I really look forward to it, because there are a lot of gains to be made if the display stack and app developers are prepared to educate themselves and do the hard work to make it deliver results. Firefox is headed the wrong direction, but an example of an app that's about as un-optimal as can be is Skype. Its notification area icon updates thousands (!) of times per second as you log in and for a while after, driving your compositor into a meltdown. If you really want to lock up your UI, try disconnecting the internet a fraction of a second after you start up Skype, and for fun leave top running so you can watch your compositor doing backflips.

    Yes, Skype is rendering its entire UI unresponsive... but the compositor is equally to blame for actually rendering all those redundant updates off-screen instead of discarding the occluded ones. It gets 16+ updates to the exact same rectangle every single frame, and yet the compositor doesn't choose to ignore the first 15 redundant events. If your GPU saw 15 updates to the same rectangle, it would simply discard the occluded ones.

    Why does the GPU discard them? Because someone cared and took the time to make that optimisation and it made their product more competitive as a result.
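    For what it's worth, the coalescing I'm describing isn't rocket science. Here's an illustrative sketch (the structure and names are invented, this isn't any real compositor's code) of how damage reports arriving between two vblanks could be merged, so a client hammering the same tray-icon rectangle hundreds of times per frame costs at most one repaint of that rectangle:

    Code:
#include <stdbool.h>

typedef struct { int x1, y1, x2, y2; } rect_t;

#define MAX_PENDING 32

static rect_t pending[MAX_PENDING];
static int    npending;

static bool contains(const rect_t *a, const rect_t *b)
{
    return a->x1 <= b->x1 && a->y1 <= b->y1 &&
           a->x2 >= b->x2 && a->y2 >= b->y2;
}

/* Called for every damage report a client sends during the current frame. */
void damage_add(rect_t r)
{
    for (int i = 0; i < npending; i++)
        if (contains(&pending[i], &r))
            return;             /* redundant: that area is already queued */

    if (npending < MAX_PENDING) {
        pending[npending++] = r;
    } else {
        /* too fragmented: collapse to one huge rect, i.e. a full repaint */
        pending[0] = (rect_t){ 0, 0, 1 << 30, 1 << 30 };
        npending = 1;
    }
}

/* Called once per vblank: repaint the accumulated regions, then reset.
 * Everything the client reported in between costs nothing extra. */
void frame_tick(void (*repaint)(const rect_t *rects, int count))
{
    if (npending)
        repaint(pending, npending);
    npending = 0;
}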

    How do we get everyone at all levels of the stack to collaborate and make the effort to make these optimisations... because really they do need to be considered at all levels. I have no clue. I don't think most developers care to understand the whole stack, to worry about watching what pixels are being pushed in what order and whether any redundant work is being done. That's basically the wheelhouse of those involved with rasterization libraries for fonts and widgets and games. I know my own attitude many times has been along the lines of "It runs nicely on my workstation, on my iPhones, on my ICS tablet, and hardware just keeps getting faster, so I'd rather focus on features than tweaking."




    • #12
      Originally posted by linuxgeex View Post

      XDamage has been around since 2003... just to give a hint that application-side infrastructure has been around a long time and we're talking about this like it's something new... well... it is, in DRM. And the original concept of XDamage wasn't entirely about saving bandwidth - it was also about discarding work that would never be seen, in the same way occlusion culling works in GPUs. So both client-side (X) and hardware side (GPU) have had tiling and occlusion to control bandwidth, redundant work, and latency, and only now 1.5 decades later is DRM getting on board... that should tell you how likely it is that the end-to-end benefits of this technology will be rapidly adopted... :-(
      The timeline is kind of important. Before 2007 was the era of user mode setting (UMS); kernel mode setting (KMS) started in 2007, with production forms in 2008. XDamage entered the X.Org X11 server in October 2003. UMS was a time when large sections of graphics driver code were in fact running in userspace and could snoop across into the X11 server to get XDamage information. DRM development started in 1999, so UMS and KMS are two different stages of DRM development.

      So by the timeline, DRM did not need to be extended to support a damage rectangle interface until after 2007 with the change to KMS, as the early UMS DRM drivers could access the XDamage information and use it if they wanted to. So it's not 1.5 decades late, it's 1 decade late: it's a feature that should have been carried over when KMS was made, but was not.

      A lot of the remote desktop software options on Linux, like x11vnc and rdesktop, were started in the age of UMS; yes, x11vnc and rdesktop started in 2001. So why does Wayland have such remote desktop trouble? Most remote desktop software on Linux is older in design and is designed around how UMS drivers work, not the newer KMS, and a feature it needs was never ported to KMS DRM. Yes, you find rdesktop and x11vnc accessing the XDamage information, like a lot of the other options. Maybe if the damage rectangle interface gets in, we will see remote desktop software on Linux updated to a KMS design.
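      For reference, this is roughly how those tools pull damage information out of the X server. A minimal sketch (not taken from x11vnc itself) that asks XDamage to report the changed rectangles on the root window, which is exactly the data a VNC-style server forwards; build with -lX11 -lXdamage:

      Code:
#include <stdio.h>
#include <X11/Xlib.h>
#include <X11/extensions/Xdamage.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy)
        return 1;

    int ev_base, err_base;
    if (!XDamageQueryExtension(dpy, &ev_base, &err_base))
        return 1;

    /* Track damage on the root window; every report carries one rectangle. */
    Damage damage = XDamageCreate(dpy, DefaultRootWindow(dpy),
                                  XDamageReportRawRectangles);

    /* Watch the first 100 damage events; a real remote-desktop server would
     * copy just these regions out of the framebuffer and send them on. */
    for (int i = 0; i < 100; i++) {
        XEvent ev;
        XNextEvent(dpy, &ev);
        if (ev.type == ev_base + XDamageNotify) {
            XDamageNotifyEvent *d = (XDamageNotifyEvent *)&ev;
            printf("damage %dx%d at %d,%d\n",
                   d->area.width, d->area.height, d->area.x, d->area.y);
        }
    }

    XDamageDestroy(dpy, damage);
    XCloseDisplay(dpy);
    return 0;
}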

      The UMS design means drivers can be very dependent on the X11 server to work, so there is no clean separation between X11 server and driver. KMS started providing a clean separation between driver and X11 server.

      The more I look, the less the Wayland protocol needs network transparency, if things like how remote desktops hook in were moved to compositor/server-neutral locations.



      • #13
        Originally posted by oiaohm View Post
        The timeline is kind of important.
        You made a very good point there about when the need for DRM to adopt Damage Rectangle support first appeared, essentially with KMS. I agree that work proceeds along a path of least resistance a lot of the time.

        That doesn't change my belief that they should have adopted it into the initial design. "Direct Rendering Manager" vs "Framebuffer Driver", to me, implies some intelligence in arbitrating access to the display hardware. Basically everything that XRANDR, XCOMPOSITE and XDAMAGE do seems like a minimum feature set for a "Direct Rendering Manager" (I might even throw GLX in there, in this age where even the majority of embedded devices have GPUs.) If DRM did what it implies then SDL should run directly on top of it, plus DRI/EGL, ALSA, and libinput, and there should be no need for protocol-driven display managers for something as simple as a fullscreen game.

        In some ways I feel like we've taken a step backwards from 1994 when you could easily write an app using svgalib and avoid loading X at all. Isn't that the point of DRM? To migrate the hardware management out of userspace and into the kernel, where it can be optimised, audited, and reused instead of userspace insecurely reinventing the wheel, so that there's a single API, and multiple users and/or processes can share the display hardware without necessitating an X11 or Wayland? Wouldn't it be great if your RPI or server could avoid loading all that cruft when all you want is to push a few pixels to visualise run state? Well... looks like we're finally getting there but I can't write an app to set the display mode and push some pixels inside 100 lines of code any more.
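        For what it's worth, with KMS dumb buffers it isn't far over that mark today. A rough sketch, assuming the first connected connector already has an encoder/CRTC bound (as it typically does on a plain VT), DRM master permission, and essentially no error handling; build with -I/usr/include/libdrm -ldrm:

        Code:
/* "Push some pixels" with KMS dumb buffers, in the spirit of the old
 * svgalib hello-world.  Sketch only: first device, first connected
 * connector, preferred mode, no error handling. */
#include <fcntl.h>
#include <stdint.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

int main(void)
{
    int fd = open("/dev/dri/card0", O_RDWR | O_CLOEXEC);

    /* Find a connected connector and take its first (preferred) mode. */
    drmModeRes *res = drmModeGetResources(fd);
    drmModeConnector *conn = NULL;
    for (int i = 0; i < res->count_connectors; i++) {
        conn = drmModeGetConnector(fd, res->connectors[i]);
        if (conn->connection == DRM_MODE_CONNECTED && conn->count_modes > 0)
            break;
        drmModeFreeConnector(conn);
        conn = NULL;
    }
    if (!conn)
        return 1;
    drmModeModeInfo mode = conn->modes[0];
    drmModeEncoder *enc = drmModeGetEncoder(fd, conn->encoder_id);
    uint32_t crtc_id = enc->crtc_id;

    /* Create, register and map a dumb (CPU-accessible) framebuffer. */
    struct drm_mode_create_dumb creq = {
        .width = mode.hdisplay, .height = mode.vdisplay, .bpp = 32,
    };
    drmIoctl(fd, DRM_IOCTL_MODE_CREATE_DUMB, &creq);

    uint32_t fb_id;
    drmModeAddFB(fd, creq.width, creq.height, 24, 32, creq.pitch,
                 creq.handle, &fb_id);

    struct drm_mode_map_dumb mreq = { .handle = creq.handle };
    drmIoctl(fd, DRM_IOCTL_MODE_MAP_DUMB, &mreq);
    uint32_t *px = mmap(NULL, creq.size, PROT_READ | PROT_WRITE,
                        MAP_SHARED, fd, mreq.offset);

    /* "Hello world": grey background, red square in the middle. */
    memset(px, 0x40, creq.size);
    for (uint32_t y = creq.height / 4; y < creq.height * 3 / 4; y++)
        for (uint32_t x = creq.width / 4; x < creq.width * 3 / 4; x++)
            px[y * (creq.pitch / 4) + x] = 0x00FF0000;

    /* Point the CRTC at our framebuffer with the chosen mode and show it. */
    drmModeSetCrtc(fd, crtc_id, fb_id, 0, 0, &conn->connector_id, 1, &mode);
    sleep(5);

    drmModeFreeConnector(conn);
    close(fd);
    return 0;
}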
        Last edited by linuxgeex; 07 January 2018, 02:13 PM.



        • #14
          Originally posted by linuxgeex View Post
          That doesn't change my belief that they should have adopted it into the initial design. "Direct Rendering Manager" vs "Framebuffer Driver", to me, implies some intelligence in arbitrating access to the display hardware. Basically everything that XRANDR, XCOMPOSITE and XDAMAGE do seems like a minimum feature set for a "Direct Rendering Manager" (I might even throw GLX in there, in this age where even the majority of embedded devices have GPUs.) If DRM did what it implies then SDL should run directly on top of it, plus DRI/EGL, ALSA, and libinput, and there should be no need for protocol-driven display managers for something as simple as a fullscreen game.
          DRM was not designed to set up the screen.

          KMS is in fact designed to set up the screen. If you are using open source drivers, you can use it.

          GLX is OpenGL for X11; to operate without X11 you require EGL. Yes, the X in GLX is X11.
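          To be concrete about "without X11 you require EGL": a minimal sketch, assuming Mesa's EGL with the GBM platform extension (EGL_KHR_platform_gbm), that just brings up an EGL display on a DRM render node with no display server anywhere; build with -lEGL -lgbm:

          Code:
#include <fcntl.h>
#include <stdio.h>
#include <gbm.h>
#include <EGL/egl.h>
#include <EGL/eglext.h>

int main(void)
{
    /* A render node needs no X11, no Wayland and no VT ownership. */
    int fd = open("/dev/dri/renderD128", O_RDWR);
    if (fd < 0)
        return 1;
    struct gbm_device *gbm = gbm_create_device(fd);

    /* eglGetPlatformDisplayEXT is an extension entry point, so look it up. */
    PFNEGLGETPLATFORMDISPLAYEXTPROC get_platform_display =
        (PFNEGLGETPLATFORMDISPLAYEXTPROC)
            eglGetProcAddress("eglGetPlatformDisplayEXT");
    if (!get_platform_display)
        return 1;

    EGLDisplay dpy = get_platform_display(EGL_PLATFORM_GBM_KHR, gbm, NULL);
    EGLint major, minor;
    if (!eglInitialize(dpy, &major, &minor)) {
        fprintf(stderr, "eglInitialize failed\n");
        return 1;
    }
    printf("EGL %d.%d on GBM, no X11 in sight\n", major, minor);

    /* From here you would pick an EGLConfig, create a context and either a
     * gbm_surface-backed window surface or a pbuffer, exactly as under X11. */
    eglTerminate(dpy);
    gbm_device_destroy(gbm);
    return 0;
}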

          Thanks to all the people who contributed code and feedback, SDL 2.0.6 is now available! http://www.libsdl.org/download-2.0.php

          Added an experimental KMS/DRM video driver for embedded development
          You are not exactly up to date; this is from the September 2017 SDL 2.0.6 release.

          So yes, an SDL that can sit directly on the system graphics and other parts is coming, and this should hopefully have full EGL support on everything in time due to the pressure being put on Nvidia. Yes, EGL works with open source drivers.
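          A sketch of what using that experimental backend could look like, assuming an SDL 2.0.6+ build with the KMS/DRM video driver compiled in; run it from a bare console and select the backend with the SDL_VIDEODRIVER=KMSDRM environment variable. Build with: cc demo.c $(sdl2-config --cflags --libs)

          Code:
#include <SDL2/SDL.h>

int main(void)
{
    if (SDL_Init(SDL_INIT_VIDEO) != 0) {
        SDL_Log("SDL_Init failed: %s", SDL_GetError());
        return 1;
    }
    /* Should report the KMS/DRM backend when run outside X11/Wayland. */
    SDL_Log("video driver in use: %s", SDL_GetCurrentVideoDriver());

    SDL_DisplayMode dm;
    SDL_GetCurrentDisplayMode(0, &dm);
    SDL_Window *win = SDL_CreateWindow("kmsdrm demo",
                                       SDL_WINDOWPOS_UNDEFINED,
                                       SDL_WINDOWPOS_UNDEFINED,
                                       dm.w, dm.h, SDL_WINDOW_FULLSCREEN);
    SDL_Renderer *ren = SDL_CreateRenderer(win, -1, SDL_RENDERER_ACCELERATED);

    /* Clear to a solid colour for a few seconds, then exit cleanly. */
    SDL_SetRenderDrawColor(ren, 30, 30, 120, 255);
    SDL_RenderClear(ren);
    SDL_RenderPresent(ren);
    SDL_Delay(5000);

    SDL_DestroyRenderer(ren);
    SDL_DestroyWindow(win);
    SDL_Quit();
    return 0;
}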

          Originally posted by linuxgeex View Post
          In some ways I feel like we've taken a step backwards from 1994 when you could easily write an app using svgalib and avoid loading X at all. Isn't that the point of DRM?
          This is a rose-colored-glasses problem. With svgalib, when your application went wrong, it went badly wrong. Please remember svgalib is like using the framebuffer; basically forget having graphical acceleration.

          Something that people miss is that the Linux kernel framebuffer drivers existed before svgalib did. svgalib was pretty much "let's follow the X11 UMS model", with all its evil issues and with no GPU vendor support, leading to it dying an early death. I really do see SDL replacing what svgalib did in time, but this time actually working correctly.

          SDL also has Linux kernel framebuffer support that is the equal of what svgalib did. Yes, no working graphical acceleration, or only minimal working acceleration.

          This leads us back to the same universal problem: Nvidia. Trying to get drivers that use the same interfaces as all the other graphics drivers on Linux is a huge game of pulling teeth.

          I understand why. Nvidia has wanted to keep their driver code closed and secret. To achieve this they have wanted everything in user-space interfaces; this is where EGLStreams comes from. It has required the Wayland compositor makers to push back. People ask why we intentionally fragment into many different compositors instead of one server: doing this makes the only common point the kernel, and kills Nvidia patching userspace as an effective option. It should also make those doing remote desktop really think about what they are doing.



          • #15
            Originally posted by oiaohm View Post
            DRM was not designed to set up the screen.
            KMS is part of DRM... at least according to kernel.org. Maybe they're wrong, lol.

            GLX is OpenGL for X11; to operate without X11 you require EGL. Yes, the X in GLX is X11.
            Yes, I know, just sloppy of me. I used EGL later, I'm sure you noticed. :-P

            SDL 2.0.6 is now available! http://www.libsdl.org/download-2.0.php

            You are not exactly up to date; this is from the September 2017 SDL 2.0.6 release.
            Thanks, that's heartening for me. It's not exactly supported yet, but it puts a big warm grin on my face knowing it's coming, because I enjoy writing close to the hardware when I can, and this will encourage me to start some new projects.

            This is a rose-colored-glasses problem. With svgalib, when your application went wrong, it went badly wrong. Please remember svgalib is like using the framebuffer; basically forget having graphical acceleration.
            Yes, svgalib didn't even support a linear framebuffer for many cards, let alone acceleration, and we were using VESA 2.0 BIOS calls to set up a linear framebuffer... GGI was better, I was really hoping that would catch on.

            Yes NVidia is being a problem. Linus already shared the appropriate gesture with them, lol.
            Last edited by linuxgeex; 23 January 2018, 01:11 AM.



            • #16
              Originally posted by linuxgeex View Post

              KMS is part of DRM... at least according to kernel.org. Maybe they're wrong, lol.
              Please, I said 'was', meaning in history, not currently. UMS and KMS were both done as add-ons to DRM. UMS was only a general guideline, so Nvidia and other closed source graphics drivers did whatever they liked in that time frame to be as fast as possible.

              When you go back to when svgalib was in dominant use, you are in the wild mess before mode setting was defined by the DRM standard, yes, before even the UMS guidelines. The first draft of the UMS guidelines only appeared in 2001, and KMS started when it was worked out that no matter how you write UMS guidelines, you are always going to have fights between programs wanting to use the graphics card. Yes, the Unix X11 stack was a truly wild, out-of-control place when it came to how to set up a video card.

              Now of course it makes sense for DRM and KMS today to be managed by the same people, right? Note that the Nvidia closed source driver currently supports KMS but does not yet support DRM; instead it says you have to use OpenGL ES or their custom userspace.

              With DRM you use two user-space-facing parts: mode setting, either KMS or UMS (with UMS on the totally deprecated path), and DRI (Direct Rendering Infrastructure).

              DRM is not in fact as old as many would think; it first appears in Linux kernel 2.3.18, that is September 1999. So with svgalib we're referring to usage before DRM even existed.

              Also, DRI started that year as well, 1999. Only a very limited number of the early 1999 DRI drivers in fact supported DRM; again, this is another gap. DRM refers to the kernel-based Direct Rendering Manager; early DRI had an SRM as well, a Software Rendering Manager, which basically lacks a clean standard.



              The above is a good read. Notice they talk about GLX in the form the Nvidia closed source driver follows, not requiring X11 server modifications. That is using the old software rendering manager, where it is done inside the OpenGL stack and the driver module that plugs into X11 itself, and most of the time in vendor-unique ways.

              There was a dispute over whether the SRM-based or DRM-based approach was the best path. There are still embedded drivers that are SRM. Yes, what Android does with its user-space graphics drivers follows the SRM path, where all the management is in user-space, not the kernel. And yes, since SRM does not have a clearly defined standard, whatever Android implements in its userspace graphics drivers is a 'perfect' implementation.



              • #17
                Originally posted by oiaohm View Post
                Please, I said 'was', meaning in history, not currently. UMS and KMS were both done as add-ons to DRM. [...]
                You're bringing tears to my eyes now, lol. Yes, I was offended that they were even adding DRI/DRM to X and the kernel, and having an eventual goal of KMS and rootless X. I thought those were all terrible ideas back in those days, and I was very wrong.

                When EGL started to gain traction I did wish that GLX would be rebased onto it though.

                Today I wish that OpenGL, EGL, and Vulkan were all display-manager-agnostic and worked directly with a framebuffer regardless of who set it up or whether it was a display framebuffer. And things like parallel renderers and multiple seats would be easy software-defined solutions, no more politics and insane reinventing of interfaces and techniques for something that should be child's play... like displaying the output of a GPU from a discrete card on the monitor attached to an integrated one... I mean hello this should never have been challenging to begin with let alone been years in the making and never officially adopted by any major distro a decade after it started to work... and we still don't have sane multiseat even though we have the obscenely more complicated LXC that can easily slice the computer up for N users with entirely different environments... but share a display card with those environments... when it's really just some framebuffers... noooooooo.

                When all the magic is happening off-screen with triple-buffered compositing, and all the "hard" issues are resolved, like OS-timer VSYNC synchronisation so there are no busy-waits for frame swap (tearing? what's that?), sound always perfectly in sync and always serviced on time so there's no crackle and minimal latency, and hooks for compression and remote presentation... with the same drivers and libraries regardless of whether it's used with XOrg, Wayland, the Linux console, Mir, or Magenta... then the world will be a better place.

                TBH I wish OpenGL, DirectX, and Metal were re-implemented as fat layers over Vulkan on DRM: kiss 95% of the non-display-management features of display managers goodbye, have almost all of the challenging underpinnings of compositors already solved, and get a variety of well-understood APIs with super-reliable back-ends, so we can just get on with making the most gloriously beautiful and productive user interfaces possible.

                And if we can still write "hello world" to display framebuffer in less than 100 lines of code, that would be nice too, lol.
                Last edited by linuxgeex; 28 January 2018, 08:34 AM.

