Linux GPU Drivers Prepare For Global Thermo-Nuclear War


  • Linux GPU Drivers Prepare For Global Thermo-Nuclear War

    Phoronix: Linux GPU Drivers Prepare For Global Thermo-Nuclear War

    Atomic mode-setting and nuclear page-flipping are becoming a reality within the open-source Linux graphics stack...

    http://www.phoronix.com/vr.php?view=MTcxMTQ
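
For a concrete picture of what "atomic" means here, the following is a minimal sketch of roughly what an atomic mode-set ended up looking like from userspace through libdrm's atomic request API. It is illustrative only, not the exact interface discussed in the article (which was still being designed at the time); the connector/CRTC lookup, the mode blob, and the property IDs are assumed to have been resolved by the caller.

/*
 * Hedged sketch of an atomic mode-set via libdrm. Assumes the caller has
 * opened the DRM device, enabled DRM_CLIENT_CAP_ATOMIC with
 * drmSetClientCap(), and looked up the connector, CRTC, mode blob, and
 * property IDs (all parameters here are placeholders for that setup).
 */
#include <stdint.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

int atomic_modeset(int fd, uint32_t connector_id, uint32_t crtc_id,
                   uint32_t prop_conn_crtc_id,  /* connector "CRTC_ID" */
                   uint32_t prop_crtc_mode_id,  /* CRTC "MODE_ID"      */
                   uint32_t prop_crtc_active,   /* CRTC "ACTIVE"       */
                   uint32_t mode_blob_id)
{
    drmModeAtomicReq *req = drmModeAtomicAlloc();
    if (!req)
        return -1;

    /* Queue every change for the display pipe in a single request... */
    drmModeAtomicAddProperty(req, connector_id, prop_conn_crtc_id, crtc_id);
    drmModeAtomicAddProperty(req, crtc_id, prop_crtc_mode_id, mode_blob_id);
    drmModeAtomicAddProperty(req, crtc_id, prop_crtc_active, 1);

    /* ...then apply it as a whole: either the complete new configuration
     * takes effect or none of it does. */
    int ret = drmModeAtomicCommit(fd, req, DRM_MODE_ATOMIC_ALLOW_MODESET, NULL);
    drmModeAtomicFree(req);
    return ret;
}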

  • #2
    Thanks for the blog link. It's a nice read.



    • #3
      So what will atomic mode-setting and nuclear page-flipping mean for end users? Will it perform better, have greater quality (the linked blog seemed to suggest this), or are they just necessary to get Wayland to work properly?



      • #4
        Originally posted by Prescience500 View Post
        So what will atomic mode-setting and nuclear page-flipping mean for end users?
        Yes.
        Atomic mode-setting: should have less flicker, and less risk of corruption of visual output.
        Nuclear page-flipping: less flicker and tearing.
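
To illustrate what DrYak describes, here is a hedged sketch of a "nuclear" page-flip through libdrm's atomic API: new framebuffers for several planes are queued and flipped in one vblank-synchronised commit, which is where the reduced flicker and tearing would come from. The plane, framebuffer, and property IDs are hypothetical placeholders the caller is assumed to have resolved elsewhere.

/* Sketch only: flip the primary and cursor planes together so both show
 * their new buffers on the same vblank. The IDs would come from lookups
 * such as drmModeGetPlaneResources() and drmModeObjectGetProperties(). */
#include <stdint.h>
#include <xf86drm.h>
#include <xf86drmMode.h>

int nuclear_flip(int fd,
                 uint32_t primary_plane_id, uint32_t cursor_plane_id,
                 uint32_t prop_fb_id,   /* the planes' "FB_ID" property */
                 uint32_t new_primary_fb, uint32_t new_cursor_fb)
{
    drmModeAtomicReq *req = drmModeAtomicAlloc();
    if (!req)
        return -1;

    /* Both planes get their new buffers in the same request... */
    drmModeAtomicAddProperty(req, primary_plane_id, prop_fb_id, new_primary_fb);
    drmModeAtomicAddProperty(req, cursor_plane_id, prop_fb_id, new_cursor_fb);

    /* ...and the non-blocking commit flips them together at vblank,
     * signalling completion through a page-flip event on the DRM fd. */
    int ret = drmModeAtomicCommit(fd, req,
                                  DRM_MODE_ATOMIC_NONBLOCK |
                                  DRM_MODE_PAGE_FLIP_EVENT, NULL);
    drmModeAtomicFree(req);
    return ret;
}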



        • #5
          This shows how messy the design of Linux's graphics stack is. Apparently whoever designed it in the first place had never played a video game in his life, nor read about graphics programming. Even back in the 80s, game developers on the C64, Amiga, MSX, etc. were utilizing v-sync correctly to create perfect frames, yet in 2014 Linux is still trying to figure out how to create perfect frames!



          • #6
            Originally posted by sarmad View Post
            This shows how messy the design of Linux's graphics stack is. Apparently whoever designed it in the first place had never played a video game in his life, nor read about graphics programming. Even back in the 80s, game developers on the C64, Amiga, MSX, etc. were utilizing v-sync correctly to create perfect frames, yet in 2014 Linux is still trying to figure out how to create perfect frames!
            The frames we get through Xorg are good enough, with or without VSync. Be sure to limit FPS to something reasonable.
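
As a rough illustration of the "limit FPS" advice (a sketch, not tied to any particular game or engine): a render loop can sleep out the rest of each frame period against an absolute deadline so it never runs faster than a target rate.

/* Sketch of a simple frame limiter: call once per frame after rendering.
 * 'deadline' should be initialised with clock_gettime(CLOCK_MONOTONIC, ...)
 * before the render loop starts. */
#include <time.h>

void limit_fps(struct timespec *deadline, long target_fps)
{
    deadline->tv_nsec += 1000000000L / target_fps;
    if (deadline->tv_nsec >= 1000000000L) {
        deadline->tv_nsec -= 1000000000L;
        deadline->tv_sec += 1;
    }
    /* Sleeping to an absolute deadline avoids accumulating drift per frame. */
    clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, deadline, NULL);
}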



            • #7
              Originally posted by Calinou View Post
              The frames we get through Xorg are good enough, with or without VSync. Be sure to limit FPS to something reasonable.
              Totally disagree. Ubuntu with Compiz is not a tear-free experience, and if I take Xubuntu into account, where there is no compositor, tearing is everywhere...



              • #8
                Originally posted by sarmad View Post
                This shows how messy the design of Linux's graphics stack is. Apparently whoever designed it in the first place had never played a video game in his life, nor read about graphics programming. Even back in the 80s, game developers on the C64, Amiga, MSX, etc. were utilizing v-sync correctly to create perfect frames, yet in 2014 Linux is still trying to figure out how to create perfect frames!
                Every recent desktop OS has tearing issues; game consoles have them too.

                Nice troll attempt, though...



                • #9
                  Originally posted by Prescience500 View Post
                  So what will atomic mode-setting and nuclear page-flipping mean for end users? Will it perform better, have greater quality (the linked blog seemed to suggest this), or are they just necessary to get Wayland to work properly?
                  Enables graphic meltdown.



                  • #10
                    Originally posted by log0 View Post
                    Every recent desktop OS has tearing issues; game consoles have them too.

                    Nice troll attempt, though...
                    Not true. On Windows it's usually the user's choice whether to enable or disable v-sync. When you enable v-sync in a Windows game, you actually get perfect frames. On Linux, not so much: depending on your hardware and driver, v-sync might or might not work.
                    On consoles, it's usually the developer's choice. If a developer thinks it's worth taking some tearing in return for extra performance, they do that (Uncharted on PS3, for example), but the console and its OS are perfectly capable of supporting perfect frames.
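
For reference, this is roughly how a GL game on Linux asks for v-sync (a sketch assuming a current GLX context and the GLX_EXT_swap_control extension); whether the driver honours the request is exactly the per-driver variability being complained about here.

/* Sketch: request a swap interval of one vblank via GLX_EXT_swap_control.
 * Assumes a GLX context is already current on 'drawable'. */
#include <GL/glx.h>

typedef void (*swap_interval_fn)(Display *, GLXDrawable, int);

void request_vsync(Display *dpy, GLXDrawable drawable)
{
    swap_interval_fn glx_swap_interval = (swap_interval_fn)
        glXGetProcAddressARB((const GLubyte *)"glXSwapIntervalEXT");

    if (glx_swap_interval)
        glx_swap_interval(dpy, drawable, 1);  /* 1 = wait for one vblank */
    /* If the extension is absent, the app has no portable way to force it. */
}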



                    • #11
                      Originally posted by sarmad View Post
                      This shows how messy the design of Linux's graphics stack is. Apparently whoever designed it in the first place had never played a video game in his life, nor read about graphics programming. Even back in the 80s, game developers on the C64, Amiga, MSX, etc. were utilizing v-sync correctly to create perfect frames, yet in 2014 Linux is still trying to figure out how to create perfect frames!
                      Not entirely surprising, as X11 was not designed as a video game system.

                      The decision to go with front-buffer rendering was fairly deliberate; memory was much more expensive back then. Yes, you can do vsync'd front-buffer rendering, but trust me, it would have been much more of a mess if GUI apps needed to know about vsync.



                      • #12
                        Originally posted by DrYak View Post
                        Yes.
                        Atomic mode-setting: should have less flicker, and less risk of corruption of visual output.
                        Nuclear page-flipping: less flicker and tearing.
                        My understanding of ams is that it won't make any user-facing changes unless you are using more than one monitor. For a single monitor, I think, the process is already atomic.



                        • #13
                          Originally posted by sarmad View Post
                          This shows how messy the design of Linux's graphics stack is. Apparently whoever designed it in the first place had never played a video game in his life, nor read about graphics programming. Even back in the 80s, game developers on the C64, Amiga, MSX, etc. were utilizing v-sync correctly to create perfect frames, yet in 2014 Linux is still trying to figure out how to create perfect frames!
                          Exactly. In 1984 the Amiga, C64, and MSX were just toys to the IT pros using UNIX. X11 was going to solve the needs of computers costing hundreds of thousands of dollars that needed to display lots of terminals on an expensive monochrome 15" CRT. 1024x768x2 was the goalpost then. No games. You were not supposed to game, you were supposed to work.



                          • #14
                            Originally posted by liam View Post
                            My understanding of ams is that it won't make any user-facing changes unless you are using more than one monitor. For a single monitor, I think, the process is already atomic.
                            fwiw, it is mostly an issue with hw composition (i.e. using overlays/planes to bypass the GPU for some windows/surfaces). I'm not sure how much it will benefit traditional desktop GPUs. It will definitely help anyone with overlays (i.e. pretty much all SoCs, plus at least some Intel hardware), so we can actually use that hw in the display controller to offload the GPU, helping performance (reducing latency) and power.

                            Basically, wayland (central compositor) lets us start taking advantage of these extra features in hw.. but now we need a kernel API that lets us update them all in sync so you don't see things in the wrong place (ie. remember moving windows with video back in the old Xv / non-compositing window mgr days, when the video and window position would get momentarily out of sync as you moved the window?)

                            On the desktop side of things, atomic modeset is probably the bigger deal. For example, some generations of Intel hardware can drive 3 displays but only with 2 PLLs, so two of the displays have to "match". Various generations of nv/amd have similar weird multi-display constraints. Atomic will give us a way to tell userspace which combinations of resolutions will work together so the UI can do the right thing. Much easier than trying to explain the magic sequence of xrandr commands to disable then re-enable all three displays in the correct order with a set of resolutions that will work across them.
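
A hedged sketch of the "ask the kernel first" idea robclark describes: userspace builds the full proposed configuration and commits it with TEST_ONLY, so the driver validates whether the combination can work without touching the hardware. How the request is populated (CRTC, connector, and plane properties) is assumed to happen elsewhere.

/* Sketch: validate a proposed multi-display configuration without applying
 * it. 'proposed' is an atomic request already filled with the properties
 * for the configuration being considered. */
#include <xf86drm.h>
#include <xf86drmMode.h>

int config_is_possible(int fd, drmModeAtomicReq *proposed)
{
    /* TEST_ONLY: the driver checks the whole configuration (shared PLLs,
     * bandwidth, plane limits, ...) but nothing reaches the screen. */
    int ret = drmModeAtomicCommit(fd, proposed,
                                  DRM_MODE_ATOMIC_TEST_ONLY |
                                  DRM_MODE_ATOMIC_ALLOW_MODESET, NULL);
    return ret == 0;  /* 0 means the kernel would accept this configuration */
}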



                            • #15
                              Originally posted by robclark View Post
                              fwiw, it is mostly an issue with hw composition (i.e. using overlays/planes to bypass the GPU for some windows/surfaces). I'm not sure how much it will benefit traditional desktop GPUs. It will definitely help anyone with overlays (i.e. pretty much all SoCs, plus at least some Intel hardware), so we can actually use that hw in the display controller to offload the GPU, helping performance (reducing latency) and power.

                              Basically, wayland (central compositor) lets us start taking advantage of these extra features in hw.. but now we need a kernel API that lets us update them all in sync so you don't see things in the wrong place (ie. remember moving windows with video back in the old Xv / non-compositing window mgr days, when the video and window position would get momentarily out of sync as you moved the window?)

                              On the desktop side of things, atomic modeset is probably the bigger deal. For example, some generations of Intel hardware can drive 3 displays but only with 2 PLLs, so two of the displays have to "match". Various generations of nv/amd have similar weird multi-display constraints. Atomic will give us a way to tell userspace which combinations of resolutions will work together so the UI can do the right thing. Much easier than trying to explain the magic sequence of xrandr commands to disable then re-enable all three displays in the correct order with a set of resolutions that will work across them.
                              Heh, I started to mention that this was also supposed to aid with embedded display architectures' decoupling of hardware (relative to a discrete GPU, that is), but I didn't recall the details. I read about this in an LWN article talking about ams, nuclear page-flip and, in general, the path forward for the stack. I know you were mentioned, along with a Google engineer who had his own solution for replacing KMS. His reasoning, iirc, was that it was too complex, so companies would likely not do a good job implementing it, but also that embedded architectures have combinations of hardware that just don't exist (or are at least quite uncommon) elsewhere. Things like display controllers, scalers, encoders, buffers, and hardware compositors can be combined in odd ways.

                              Best/Liam

