OpenGL Frame Latency / Jitter Testing On Linux


  • OpenGL Frame Latency / Jitter Testing On Linux

    Phoronix: OpenGL Frame Latency / Jitter Testing On Linux

    Beyond there finally being Team Fortress 2 benchmarks on Linux, Phoronix now also has support for OpenGL frame latency benchmarks! It's another much sought-after feature request for graphics hardware and driver testing...

    http://www.phoronix.com/vr.php?view=MTQxNDI

  • #2
    Nice to have this. Combined with capturing an APITrace, it could really make performance tuning graphics drivers easier.



    • #3
      I have questions about how this works.

      On the Windows side, we've been told it's impossible to capture this information reliably without a piece of dedicated hardware that NVIDIA recently released to some review sites. This seems to be done completely in software, though.

      Is this going to run into the same issues that Windows games had? Is it the same kind of less accurate test that came out on Windows a while back, which can show some obvious problems but not the real details of what's going on? Or is OpenGL somehow different from D3D in this regard?



      • #4
        Originally posted by smitty3268 View Post
        On the Windows side, we've been told it's impossible to capture this information reliably without a piece of dedicated hardware that NVIDIA recently released to some review sites. This seems to be done completely in software, though.

        Is this going to run into the same issues that Windows games had? Is it the same kind of less accurate test that came out on Windows a while back, which can show some obvious problems but not the real details of what's going on? Or is OpenGL somehow different from D3D in this regard?
        It's what's reported by the engine. Some references:

        http://www.iddevnet.com/doom3/

        http://phoronix.com/forums/showthrea...ysis-on-Doom-3
        Michael Larabel
        http://www.michaellarabel.com/



        • #5
          Awesome, finally a really good measure of how choppy a game feels. Really cool work there, Michael.



          • #6
            Originally posted by Michael View Post
            Hmm, the documentation doesn't really seem to say anything about how it calculates that number. I guess I'll need to dive into the source code if I really want to know. Most likely it does the same thing the old Windows tests did and just adds a callback event before the frame is displayed.

            As such, I think we need to take these tests with a grain of salt, although they are still very useful to see.



            • #7
              Originally posted by smitty3268 View Post
              Hmm, the documentation doesn't really seem to say anything about how it calculates that number. I guess I'll need to dive into the source code if I really want to know. Most likely it does the same thing the old Windows tests did and just adds a callback event before the frame is displayed.

              As such, I think we need to take these tests with a grain of salt, although they are still very useful to see.
              Why? If it just calls gettimeofday() after drawing each frame and presents the difference from the last frame, then it is showing frame latency. Sure, double or triple buffering will smooth things out for the user, but if you have peaks at the level of 30 ms, there's no way you won't notice it while gaming. So it does show how smooth the game is, and that's what matters most.
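              A minimal sketch of that kind of in-engine measurement (this is not the actual Doom 3 or PTS code; the nanosleep() just stands in for the real rendering work and buffer swap):

              #include <stdio.h>
              #include <time.h>

              static double now_ms(void)
              {
                  struct timespec ts;
                  clock_gettime(CLOCK_MONOTONIC, &ts);   /* gettimeofday() would work too */
                  return ts.tv_sec * 1000.0 + ts.tv_nsec / 1.0e6;
              }

              int main(void)
              {
                  double prev = now_ms();
                  for (int i = 0; i < 100; i++) {
                      /* a real engine would render the frame and call glXSwapBuffers() here */
                      struct timespec work = { 0, 16 * 1000 * 1000 };   /* pretend ~16 ms of work */
                      nanosleep(&work, NULL);

                      double cur = now_ms();
                      printf("frame %3d: %6.2f ms\n", i, cur - prev);   /* app-side frame time */
                      prev = cur;
                  }
                  return 0;
              }

              The per-frame numbers printed there are exactly the series a latency graph plots; spikes in them are the stutter you feel, whatever the driver later does with its buffers.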



              • #8
                Originally posted by smitty3268 View Post
                On the Windows side, we've been told it's impossible to capture this information reliably without a piece of dedicated hardware that NVIDIA recently released to some review sites. This seems to be done completely in software, though.

                Is this going to run into the same issues that Windows games had? Is it the same kind of less accurate test that came out on Windows a while back, which can show some obvious problems but not the real details of what's going on? Or is OpenGL somehow different from D3D in this regard?
                Although Windows-related, this video was interesting.



                • #9
                  Originally posted by tomato View Post
                  Why? If it just calls gettimeofday() after drawing each frame and presents the difference from the last frame, then it is showing frame latency. Sure, double or triple buffering will smooth things out for the user, but if you have peaks at the level of 30 ms, there's no way you won't notice it while gaming. So it does show how smooth the game is, and that's what matters most.
                  Because there are various queues, buffers, and schedulers that can distort things.

                  See http://www.anandtech.com/show/6857/a...-roadmap-fraps for a good overview of some of the issues.
                  Edit - here's another: http://www.anandtech.com/show/6862/f...rking-part-1/2

                  However, I will note that much of the issue has to do with numbering frames precisely and detecting tearing, which is tough to do in an overlay but easy if you modify the actual game code. It seems like that may be exactly what the Doom 3 code does, so maybe it's not an issue after all; it would only become one if Michael tried to make this a generic feature of PTS applied to other applications that don't directly support it.
                  Last edited by smitty3268; 07-18-2013, 07:39 PM.



                  • #10
                    Originally posted by smitty3268 View Post
                    However, I will note that much of the issue has to do with numbering frames precisely and detecting tearing, which is tough to do in an overlay but easy if you modify the actual game code. It seems like that may be exactly what the Doom 3 code does, so maybe it's not an issue after all; it would only become one if Michael tried to make this a generic feature of PTS applied to other applications that don't directly support it.
                    PTS only supports it if the data can be queried from the actual application/game directly. Only once that data is exposed to PTS is it then handled in a generic way.
                    Michael Larabel
                    http://www.michaellarabel.com/



                    • #11
                      Are the values reliable? The latest Catalyst has huge frame latency problems, mostly on setups with a fast CPU and a slower GPU - it often reaches hundreds of milliseconds or more (I've read someone mention even a one-second delay) - see here. I'm interested in whether this test can provide some insight.



                      • #12
                        Price / performance please

                        http://www.cpubenchmark.net/cpu.php?...+A10-6800K+APU
                        http://www.cpubenchmark.net/cpu.php?....50GHz&id=1919

                        150 USD vs 337 USD

                        5328 vs 10153 CPU PassMark points - unfortunately Phoronix does not want to make a normalized score

                        GPU Radeon HD 8610G vs Intel HD 4600

                        http://www.videocardbenchmark.net/gp...+8610G&id=2568
                        http://www.videocardbenchmark.net/gp...D+4600&id=2451

                        669 vs 592



                        • #13
                          I like this, hmm.



                          • #14
                            Re inaccuracy:

                            Measuring frame time at the application side like this is an exact indicator of app latency. If a frame takes too long, the app cannot begin processing input for the next frame, causing visible input lag for the user.

                            This happens regardless of what buffering or queue the driver is using. It has a direct, provable correlation to input latency.

                            Quoting from AnandTech:
                            Simply put, FRAPS cannot tell you the frame interval at the end of the pipeline, it can only infer it from what it’s seeing.
                            This is a different measurement. AMD is correct in that it does not measure how often frames come out of the GPU, but AMD is incorrect in saying the latter number matters more. By buffering frames appropriately, the driver can ensure the frames come out evenly spaced in time. That does not help the input latency the app sees; it actually adds to it.

                            So while out-of-the-GPU latency means smoother movement, it does not mean smoother response to input events - quite the opposite.


                            @Michael

                            It's completely possible to measure this for all OpenGL apps, just like the id engines do for their own frames. I've been sitting on an 84-line library that does just that; maybe I should publish it on GitHub or something. Linux-only, though.
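
                            As a rough sketch of how such an interposer could work (this is not the poster's actual library, and it only catches apps that resolve glXSwapBuffers() through normal symbol lookup rather than glXGetProcAddress()):

                            /* frametime.c - log the time between successive buffer swaps.
                             * Build: gcc -shared -fPIC frametime.c -o frametime.so -ldl
                             * Use:   LD_PRELOAD=./frametime.so ./some-gl-game
                             */
                            #define _GNU_SOURCE
                            #include <dlfcn.h>
                            #include <stdio.h>
                            #include <time.h>
                            #include <GL/glx.h>

                            void glXSwapBuffers(Display *dpy, GLXDrawable drawable)
                            {
                                static void (*real_swap)(Display *, GLXDrawable);
                                static double prev_ms;
                                struct timespec ts;

                                if (!real_swap)   /* look up the driver's real entry point once */
                                    real_swap = (void (*)(Display *, GLXDrawable))
                                                dlsym(RTLD_NEXT, "glXSwapBuffers");

                                real_swap(dpy, drawable);

                                clock_gettime(CLOCK_MONOTONIC, &ts);
                                double cur_ms = ts.tv_sec * 1000.0 + ts.tv_nsec / 1.0e6;
                                if (prev_ms != 0.0)
                                    fprintf(stderr, "frametime %.3f ms\n", cur_ms - prev_ms);
                                prev_ms = cur_ms;
                            }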

                            The other thing: please also add the other common representation of latency graphs - a sorted graph, with marker lines at the 50%, 90%, and 95% thresholds.
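
                            For what it's worth, the thresholds on such a sorted graph are just percentiles of the per-frame times; a minimal sketch with made-up sample numbers:

                            #include <stdio.h>
                            #include <stdlib.h>

                            static int cmp_double(const void *a, const void *b)
                            {
                                double x = *(const double *)a, y = *(const double *)b;
                                return (x > y) - (x < y);
                            }

                            int main(void)
                            {
                                /* made-up frame times in milliseconds */
                                double ms[] = { 16.6, 16.8, 17.1, 16.5, 33.4, 16.7, 16.9, 45.0, 16.6, 16.8 };
                                size_t n = sizeof(ms) / sizeof(ms[0]);
                                double marks[] = { 0.50, 0.90, 0.95 };

                                qsort(ms, n, sizeof(double), cmp_double);       /* the sorted "latency curve" */
                                for (size_t i = 0; i < 3; i++) {
                                    size_t idx = (size_t)(marks[i] * (n - 1));  /* nearest-rank style index */
                                    printf("%2.0f%% of frames took <= %.2f ms\n", marks[i] * 100.0, ms[idx]);
                                }
                                return 0;
                            }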



                            • #15
                              This will work well on Wayland, no? Seeing how all frames there get a timestamp?

