Linux GPU Drivers Prepare For Global Thermo-Nuclear War


  • #11
    Originally posted by sarmad View Post
    This shows how messy the design of Linux's graphics stack is. Apparently whoever designed it in the first place had never played a video game in his life, nor read about graphics programming. Even back in the 80s, game developers on the C64, Amiga, MSX, etc. were utilizing v-sync correctly to create perfect frames, yet in 2014 Linux is still trying to figure out how to create perfect frames!
    Not entirely surprising, as X11 was not designed as a video game system.

    The decision to use front-buffer rendering was fairly deliberate: memory was much more expensive back then. Yes, you can do vsync'd front-buffer rendering, but trust me, it would have been much more of a mess if every GUI app had needed to know about vsync.
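    For what it's worth, the vsync dance every app would have had to carry looks roughly like the sketch below on today's stack, using libdrm's vblank wait. This is a minimal sketch, not anything from the era under discussion; fd, draw_frame, and front_buffer are hypothetical placeholders.

    ```c
    #include <stdio.h>
    #include <xf86drm.h>

    extern void draw_frame(void *fb);   /* hypothetical drawing routine */
    extern void *front_buffer;          /* hypothetical mapped front buffer */

    /* Block until the next vertical blank, then race the beam: draw into
     * the front buffer before scanout catches up. fd is an open DRM
     * device node (e.g. /dev/dri/card0). */
    void vsynced_front_buffer_draw(int fd)
    {
        drmVBlank vbl = { .request = { .type = DRM_VBLANK_RELATIVE,
                                       .sequence = 1 } };

        if (drmWaitVBlank(fd, &vbl) != 0) {
            perror("drmWaitVBlank");
            return;
        }
        /* Miss the window and the user sees a partially drawn frame. */
        draw_frame(front_buffer);
    }
    ```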

    Comment


    • #12
      Originally posted by DrYak View Post
      Yes.
      Atomic modesetting: should have less flicker and less risk of corrupted visual output.
      Nuclear page-flipping: less flicker and tearing.
      My understanding of AMS (atomic modesetting) is that it won't make any user-facing changes unless you are using more than one monitor. For a single monitor, I think, the process is already atomic.

      Comment


      • #13
        Originally posted by sarmad View Post
        This shows how messy the design of Linux's graphics stack is. Apparently whoever designed it in the first place had never played a video game in his life, nor read about graphics programming. Even back in the 80s, game developers on the C64, Amiga, MSX, etc. were utilizing v-sync correctly to create perfect frames, yet in 2014 Linux is still trying to figure out how to create perfect frames!
        Exactly. In 1984 the Amiga, C64, and MSX were just toys to the IT pros using UNIX. X11 was meant to solve the needs of computers costing hundreds of thousands of dollars, which needed to display lots of terminals on an expensive monochrome 15" CRT. 1024x768x2 was the goal then. No games. You were not supposed to game; you were supposed to work.

        Comment


        • #14
          Originally posted by liam View Post
          My understanding of AMS (atomic modesetting) is that it won't make any user-facing changes unless you are using more than one monitor. For a single monitor, I think, the process is already atomic.
          fwiw, it is mostly an issue with hw composition (i.e., using overlays/planes to bypass the gpu for some windows/surfaces). I'm not sure how much it will benefit traditional desktop gpus. It will definitely help anyone with overlays (i.e., pretty much all SoCs, plus at least some intel), so we can actually use that hw in the display controller to offload the gpu, helping performance (reducing latency) and power.

          Basically, wayland (a central compositor) lets us start taking advantage of these extra features in hw... but now we need a kernel API that lets us update them all in sync so you don't see things in the wrong place (remember moving windows with video back in the old Xv / non-compositing window manager days, when the video and window position would get momentarily out of sync as you moved the window?).

          On the desktop side of things, atomic modeset is probably the bigger deal. For example, some generations of intel can drive 3 displays but only with 2 PLLs, so two of the displays have to "match". Various generations of nv/amd have similarly weird multi-display constraints. Atomic will give us a way to tell userspace which combinations of resolutions will work together so the UI can do the right thing. Much easier than trying to explain the magic sequence of xrandr commands to disable and then re-enable all three displays in the correct order with a set of resolutions that will work across them.
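          To make the "tell userspace which combinations work" point concrete: the atomic userspace API that eventually grew out of this work exposes it as a test-only commit. Below is a minimal sketch with libdrm; the CRTC IDs, the MODE_ID property IDs, and the mode blob IDs are placeholders that real code would look up with drmModeObjectGetProperties() and create with drmModeCreatePropertyBlob().

          ```c
          #include <stdint.h>
          #include <xf86drm.h>
          #include <xf86drmMode.h>

          /* Sketch: ask the kernel whether a combination of modes across three
           * CRTCs is valid, without touching the hardware. All IDs are
           * placeholders supplied by the caller. */
          int test_display_combo(int fd, const uint32_t crtc_ids[3],
                                 const uint32_t mode_prop_ids[3],
                                 const uint64_t mode_blob_ids[3])
          {
              drmModeAtomicReqPtr req;
              int i, ret;

              if (drmSetClientCap(fd, DRM_CLIENT_CAP_ATOMIC, 1))
                  return -1;  /* kernel/driver doesn't do atomic */

              req = drmModeAtomicAlloc();
              if (!req)
                  return -1;

              /* Stage a mode change on each CRTC in one request. */
              for (i = 0; i < 3; i++)
                  drmModeAtomicAddProperty(req, crtc_ids[i],
                                           mode_prop_ids[i], mode_blob_ids[i]);

              /* TEST_ONLY: validate the whole configuration, commit nothing. */
              ret = drmModeAtomicCommit(fd, req,
                                        DRM_MODE_ATOMIC_TEST_ONLY |
                                        DRM_MODE_ATOMIC_ALLOW_MODESET, NULL);
              drmModeAtomicFree(req);
              return ret;  /* 0: these resolutions can work together */
          }
          ```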

          Comment


          • #15
            Originally posted by robclark View Post
            fwiw, it is mostly an issue with hw composition (i.e., using overlays/planes to bypass the gpu for some windows/surfaces). I'm not sure how much it will benefit traditional desktop gpus. It will definitely help anyone with overlays (i.e., pretty much all SoCs, plus at least some intel), so we can actually use that hw in the display controller to offload the gpu, helping performance (reducing latency) and power.

            Basically, wayland (a central compositor) lets us start taking advantage of these extra features in hw... but now we need a kernel API that lets us update them all in sync so you don't see things in the wrong place (remember moving windows with video back in the old Xv / non-compositing window manager days, when the video and window position would get momentarily out of sync as you moved the window?).

            On the desktop side of things, atomic modeset is probably the bigger deal. For example, some generations of intel can drive 3 displays but only with 2 PLLs, so two of the displays have to "match". Various generations of nv/amd have similarly weird multi-display constraints. Atomic will give us a way to tell userspace which combinations of resolutions will work together so the UI can do the right thing. Much easier than trying to explain the magic sequence of xrandr commands to disable and then re-enable all three displays in the correct order with a set of resolutions that will work across them.
            Heh, I started to mention that this was also supposed to aid embedded display architectures' decoupling of hardware (relative to a discrete GPU, that is), but I didn't recall the details. I read about this in an LWN article discussing atomic modesetting, nuclear page-flip and, in general, the path forward for the stack. I know you were mentioned, along with a Google engineer who had his own solution for replacing KMS. His reasoning, iirc, was that KMS was too complex, so companies would likely not do a good job implementing it, but also that embedded architectures have combinations of hardware that just don't exist (or are at least quite uncommon) elsewhere. Things like display controllers, scalers, encoders, buffers, and hardware compositors can be combined in odd ways.

            Best/Liam

            Comment


            • #16
              Originally posted by liam View Post
              Heh, I started to mention that this was also supposed to aid embedded display architectures' decoupling of hardware (relative to a discrete GPU, that is), but I didn't recall the details. I read about this in an LWN article discussing atomic modesetting, nuclear page-flip and, in general, the path forward for the stack. I know you were mentioned, along with a Google engineer who had his own solution for replacing KMS. His reasoning, iirc, was that KMS was too complex, so companies would likely not do a good job implementing it, but also that embedded architectures have combinations of hardware that just don't exist (or are at least quite uncommon) elsewhere. Things like display controllers, scalers, encoders, buffers, and hardware compositors can be combined in odd ways.

              Best/Liam
              At least the way it is shaping up, adding atomic support should be relatively easy for most drm drivers. Most of the mobile display controllers I have seen have "GO" bits (i.e., they let you program a bunch of registers to set up the next frame, but the changes don't take effect until you write some flush/go bit(s) in some register), which makes what we want to do rather easy from a hw perspective. So far, it seems the hardest SoC driver to implement this for will be i915 :-P
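              The "GO" bit pattern looks roughly like the kernel-style sketch below. The register offsets, names, and frame_state struct are entirely hypothetical; every SoC's display controller has its own layout.

              ```c
              #include <linux/io.h>

              /* Hypothetical register map, purely for illustration. */
              #define PLANE0_FB_ADDR  0x0100  /* scanout address, plane 0 */
              #define PLANE0_POS      0x0104  /* packed x/y position, plane 0 */
              #define PLANE1_FB_ADDR  0x0200
              #define PLANE1_POS      0x0204
              #define DC_FLUSH        0x0010  /* the "GO" bit */

              struct frame_state {  /* hypothetical: one frame's worth of config */
                      u32 fb0_addr, pos0;
                      u32 fb1_addr, pos1;
              };

              static void commit_frame(void __iomem *mmio, const struct frame_state *f)
              {
                      /* These writes land in shadow registers only; the controller
                       * keeps scanning out the old configuration for now. */
                      writel(f->fb0_addr, mmio + PLANE0_FB_ADDR);
                      writel(f->pos0, mmio + PLANE0_POS);
                      writel(f->fb1_addr, mmio + PLANE1_FB_ADDR);
                      writel(f->pos1, mmio + PLANE1_POS);

                      /* One write latches everything at the next vblank: all
                       * planes update together, or not at all. */
                      writel(1, mmio + DC_FLUSH);
              }
              ```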

              His reasoning is partially correct, in that the changes in drm core are not so straightforward. (And it is more than just atomic... we've been busy adding drm_bridge, drm_panel, etc. to better handle complex video paths, share encoder and panel drivers, and so on.) And getting all these changes upstream takes some time. But I think the end result will be better: easier to implement for drivers, more shared code, etc. And it won't require throwing away all the good parts of KMS (modesetting helpers, hotplug infrastructure, EDID handling (incl. all the quirks for buggy EDIDs), etc.), like you would with ADF.

              My impression from the outside looking in: when it comes to doing things with existing driver infrastructure, the Google Android folks tend to look at it for 3 minutes, then give up and invent their own thing. I suppose it is easier in the short term. And it isn't Google who has to deal with writing two different display drivers for their SoC now :-(

              Comment


              • #17
                Amazing how people have forgotten that we would've had a property-based system to begin with, like 8 years ago, if I hadn't been shot down as I was: http://libv.livejournal.com/13443.html

                Comment


                • #18
                  Originally posted by robclark View Post
                  fwiw, it is mostly an issue with hw composition (i.e., using overlays/planes to bypass the gpu for some windows/surfaces). I'm not sure how much it will benefit traditional desktop gpus. It will definitely help anyone with overlays (i.e., pretty much all SoCs, plus at least some intel), so we can actually use that hw in the display controller to offload the gpu, helping performance (reducing latency) and power.

                  Basically, wayland (a central compositor) lets us start taking advantage of these extra features in hw... but now we need a kernel API that lets us update them all in sync so you don't see things in the wrong place (remember moving windows with video back in the old Xv / non-compositing window manager days, when the video and window position would get momentarily out of sync as you moved the window?).

                  On the desktop side of things, atomic modeset is probably the bigger deal. For example, some generations of intel can drive 3 displays but only with 2 PLLs, so two of the displays have to "match". Various generations of nv/amd have similarly weird multi-display constraints. Atomic will give us a way to tell userspace which combinations of resolutions will work together so the UI can do the right thing. Much easier than trying to explain the magic sequence of xrandr commands to disable and then re-enable all three displays in the correct order with a set of resolutions that will work across them.
                  Will nuclear page-flipping support more than one monitor? Say I have a 3x3 monitor setup, so a total of 9 monitors, driven by 3 separate display controllers (each display controller drives 3 monitors). Could I use nuclear page-flipping to get a smooth, tear-free video stretched across this ridiculous setup?

                  Comment


                  • #19
                    Originally posted by amehaye View Post
                    Will nuclear page-flipping support more than one monitor? Say I have a 3x3 monitor setup, so a total of 9 monitors, driven by 3 separate display controllers (each display controller drives 3 monitors). Could I use nuclear page-flipping to get a smooth, tear-free video stretched across this ridiculous setup?
                    Well, Wayland will help there, as it handles each output with a separate render loop and buffer. So you won't have the sort of tearing-on-all-but-one-monitor issue that you have with X11. Atomic/nuclear doesn't really help there.

                    Atomic could hypothetically help synchronize flips across multiple displays **if** the hw supports synchronizing the vblanks. Maybe you get that between three displays on one card. I doubt it is possible across different cards.
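                    In the atomic API as it eventually landed, such a multi-display flip is a single commit, as sketched below; whether the flips also land on the same vblank still depends on the hardware sync caveat above. All IDs here are placeholders that real code would discover via drmModeObjectGetProperties().

                    ```c
                    #include <stdint.h>
                    #include <xf86drm.h>
                    #include <xf86drmMode.h>

                    /* Sketch: flip new framebuffers on several planes (and thus
                     * several CRTCs/monitors) in one ioctl. plane_ids, the planes'
                     * "FB_ID" property IDs, and the new framebuffer IDs are all
                     * placeholders discovered earlier by the caller. */
                    int flip_all(int fd, int n, const uint32_t plane_ids[],
                                 const uint32_t fb_prop_ids[],
                                 const uint32_t new_fb_ids[])
                    {
                        drmModeAtomicReqPtr req = drmModeAtomicAlloc();
                        int i, ret;

                        if (!req)
                            return -1;

                        for (i = 0; i < n; i++)
                            drmModeAtomicAddProperty(req, plane_ids[i],
                                                     fb_prop_ids[i], new_fb_ids[i]);

                        /* One commit: either every plane flips or none does. An
                         * event per CRTC tells us when each flip completed. */
                        ret = drmModeAtomicCommit(fd, req,
                                                  DRM_MODE_ATOMIC_NONBLOCK |
                                                  DRM_MODE_PAGE_FLIP_EVENT, NULL);
                        drmModeAtomicFree(req);
                        return ret;
                    }
                    ```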

                    Comment


                    • #20
                      This is all cool, but as long as the KMS symbols are _GPL-only, it sucks (at least for NVIDIA card owners; the open-source Radeon drivers are good, but I can't say the same for Nouveau).

                      Comment
