Benchmarking The Ubuntu "Low-Jitter" Linux Kernel

  • #11
    Heh, it's one of those "quality articles" on Phoronix where Michael has no clue what it is he's testing :-P Instead of testing jitter, he tests throughput.

    Ah, well.



    • #12
      Me: I concocted this really great variety of beer that's designed to keep for long periods of time without refrigeration (it's got loads of preservatives) and doesn't lose its flavor or spoil. Good to have with you if you're stranded on an island for months and need some alcohol.

      Michael: Let's try this purported new variety of beer... ech, it tastes nothing like a German beer at all!

      Me: ...did you miss the point that it keeps for a long time?



      • #13
        Well, to be fair,

        the poster who keeps talking about his jitter kernel keeps trying to convince everyone it massively boosts framerates in games. Or at least that's what it sounds like he is saying. Which is ridiculous, of course, as Michael has now shown.



        • #14
          Yeah, something measuring jitter would be nice. If the kernel reduces it enough in a 3D rendering engine, you might notice improved responsiveness and smoothness even though the total number of frames remains the same, because the frames are more evenly spaced out.

          There is usually a trade-off between responsiveness and throughput.
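
          Something like this rough sketch is what I mean by measuring jitter, instead of only counting frames: record every frame interval and report its spread. render_frame() is just a stand-in here (simulated with a ~16 ms sleep), so treat this as an illustration of the measurement, not a real benchmark harness.

          /* Measure frame-time jitter, not just average FPS. */
          #include <math.h>
          #include <stdio.h>
          #include <time.h>

          #define FRAMES 600

          /* Stand-in for one frame of real rendering work (~16 ms). */
          static void render_frame(void)
          {
              struct timespec req = { 0, 16 * 1000 * 1000 };
              nanosleep(&req, NULL);
          }

          static double now_ms(void)
          {
              struct timespec ts;
              clock_gettime(CLOCK_MONOTONIC, &ts);
              return ts.tv_sec * 1000.0 + ts.tv_nsec / 1e6;
          }

          int main(void)
          {
              double sum = 0.0, sumsq = 0.0, worst = 0.0;
              double prev = now_ms();

              for (int i = 0; i < FRAMES; i++) {
                  render_frame();
                  double t = now_ms();
                  double dt = t - prev;              /* frame interval in ms */
                  prev = t;
                  sum += dt;
                  sumsq += dt * dt;
                  if (dt > worst)
                      worst = dt;
              }

              double mean = sum / FRAMES;
              double var = sumsq / FRAMES - mean * mean;
              double stddev = sqrt(var > 0.0 ? var : 0.0);

              printf("average frame time: %.2f ms (%.1f FPS)\n", mean, 1000.0 / mean);
              printf("frame-time jitter:  %.2f ms std dev, %.2f ms worst frame\n", stddev, worst);
              return 0;
          }

          Run that on both kernels: if the averages match but the standard deviation and worst frame drop on the low-jitter kernel, that's the smoothness improvement a plain FPS number hides.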



          • #15
            Originally posted by smitty3268 View Post
            the poster who keeps talking about his jitter kernel keeps trying to convince everyone it massively boosts framerates in games. Or at least that's what it sounds like he is saying. Which is ridiculous, of course, as Michael has now shown.
            Michael just has no clue. He also tested Catalyst versions in the past that had massive mouse lag/mouse problems, and after the benchmarks he claimed the game ran well because of the very high FPS rate.

            But the game was in fact unplayable. I reported that to Michael in the IRC chat via private message and he answered that he doesn't care; he tested the FPS and the FPS is high... I think he only doesn't care because his workstation customers only care about rendering stuff; they just don't play, and they don't need interactive mouse input while rendering.

            Any serious gaming magazine would put a big fat warning over every FPS result like that: "warning: unplayable at any FPS rate because of mouse input lag/mouse problems".

            And now Michael has again tested an "interactive low-latency/low-jitter" kernel, and again he just shows people that he has no clue what he is testing.

            You can't test this like THIS! You need a pro gamer who plays clan matches with both kernels in highly responsive, highly interactive games like HoN or Unreal/UT or stuff like that.

            Then you play the map "deck16" at 200% speed with a 2400 DPI mouse as input.

            I'm sure the one with the low-latency/low-jitter/real-time kernel will win the pro match.

            But sure, you can benchmark like a noob and just record the FPS.



            • #16
              Hrm, I'm surprised that the low-jitter kernel had any overall performance boost at all. I'd expect performance to be down slightly, nearly across the board. Low jitter is nice for gaming and is also nice for GPGPU, as there is less chance for the GPU to go idle while waiting on the CPU. For GPGPU, it really doesn't take any CPU work at all to send work to the GPU, but if the CPU's work-dispatcher thread (the one feeding the GPU) gets delayed by even a single millisecond, it can mean losing a massive number of compute cycles on the GPU and a massive drop in GPU compute performance.

              When we wrote the original folding@home wrapper for nVidia GPUs (CUDA) on Linux, jitter was a big problem, I think partially because for whatever reason we had to use polling to check whether the GPU was done with the GPGPU work previously sent. The problem was that we would sleep the work-dispatcher thread to avoid wasting CPU, but it might be many times longer than the requested sleep duration before that thread got any CPU again. As a result, the GPU could go idle during that time and we'd lose massive amounts of GPU performance. I think some people later fixed our initial work on the GPGPU wrapper by going back and actively changing the polling interval to account for increasing jitter due to miscellaneous system load. Increasing the amount of polling caused CPU usage to skyrocket on a thread that wasn't even doing any computation, and the duration between each poll was still wildly variable because of jitter. I had recommended that people use low-jitter or real-time kernels for GPGPU work, as they were clearly more efficient: we didn't have to use such aggressive polling, and the thread would wake up from sleep exactly when it was supposed to. However, I think nVidia later had a solution in the drivers where it wasn't necessary to use polling for GPGPU anymore.

              The kernel is *VERY* good at scheduling things to get the most efficient use out of the CPU, but sometimes you don't want to schedule things that way because it could mean starving some very low CPU usage threads that are very time critical, and causing the GPU to idle for *far* too long. You can "kind-of" counter that by being very aggressive with the polling, but that causes CPU usage to skyrocket and it's a lot of wasted CPU cycles when the CPU isn't doing other work. It also doesn't absolutely guarantee the kernel will give you the CPU when your thread needs it most.
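
              To make that concrete, here is a stripped-down sketch of the sleep-then-poll dispatcher pattern I'm describing (not our actual folding@home code; gpu_work_done() and submit_more_work() are just placeholders): the thread sleeps between polls, and any oversleep past the requested interval is time the GPU can sit idle.

              #include <stdbool.h>
              #include <stdio.h>
              #include <time.h>

              #define POLL_INTERVAL_NS (1 * 1000 * 1000)    /* ask for 1 ms between polls */

              static bool gpu_work_done(void)    { return true; }   /* placeholder, not a real driver call */
              static void submit_more_work(void) { }                /* placeholder, not a real driver call */

              static long long now_ns(void)
              {
                  struct timespec ts;
                  clock_gettime(CLOCK_MONOTONIC, &ts);
                  return (long long)ts.tv_sec * 1000000000LL + ts.tv_nsec;
              }

              int main(void)
              {
                  struct timespec req = { 0, POLL_INTERVAL_NS };
                  long long worst_oversleep = 0;

                  for (int i = 0; i < 1000; i++) {
                      long long before = now_ns();
                      nanosleep(&req, NULL);            /* give the CPU back between polls */
                      long long oversleep = (now_ns() - before) - POLL_INTERVAL_NS;
                      if (oversleep > worst_oversleep)
                          worst_oversleep = oversleep;  /* wakeup jitter == potential GPU idle time */

                      if (gpu_work_done())
                          submit_more_work();
                  }
                  printf("worst oversleep: %.3f ms\n", worst_oversleep / 1e6);
                  return 0;
              }

              On a loaded stock kernel that worst oversleep can easily be many times the 1 ms you asked for; a low-jitter or real-time kernel wakes the thread much closer to on time, which is exactly the efficiency win I was getting at.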

              This is the kind of thing where I think Linux can really beat Windows at its own game (pun intended). Linux can very easily be customized for optimal gaming all the way down to the kernel level (real-time / low-jitter kernels).
              Last edited by Sidicas; 15 October 2012, 11:26 PM.



              • #17
                Originally posted by Sidicas View Post
                Hrm, I'm surprised that the low-jitter kernel had any overall performance boost at all. I'd expect performance to be down slightly, nearly across the board. [...]
                The best solution would be if the real-time kernel were merged into the mainline kernel.

                I think in the long run it's the only way to go. If people do this right, there is no need for a separate kernel any more.



                • #18
                  Originally posted by allquixotic View Post
                  Me: I concocted this really great variety of beer that's designed to keep for long periods of time without refrigeration (it's got loads of preservatives)
                  Noob. Beer is very good at preserving itself without any additives or refrigeration, which is exactly why it is not kept refrigerated at the supermarket. I know because I tried one - even an unfiltered one, where the guaranteed shelf life is shorter, but still in the range of months - about two years after that date, and it was as good as ever.
                  Prerequisites:
                  - bottled in a glass bottle, not an aluminium can (though that should also work), or worst of all, PET
                  - not opened
                  - preferably strong (the stronger the better, obviously, as fewer germs survive)
                  - the brewer has to know what he is doing

                  ... so in short, US beer is out
                  Last edited by YoungManKlaus; 16 October 2012, 02:28 AM.



                  • #19
                    Lol, very exciting. So my kernel performs as well as the standard kernel. If you recall, their argument was about throughput, and that pretty much goes invalid for their config. So now you can enable low latency in your kernel and not worry about a performance hit (well, if you have a particular workload you can). No more stuttering videos or games (or at least we're well on the way there).

                    Read also: https://lkml.org/lkml/2012/9/16/83
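
                    If you want to see what your running kernel was actually built with, here is a small sketch that just prints the preemption and timer options from the installed config file (it assumes your distro ships /boot/config-$(uname -r), as Ubuntu does):

                    #include <stdio.h>
                    #include <string.h>
                    #include <sys/utsname.h>

                    int main(void)
                    {
                        struct utsname u;
                        char path[512], line[512];

                        if (uname(&u) != 0) {
                            perror("uname");
                            return 1;
                        }
                        snprintf(path, sizeof(path), "/boot/config-%s", u.release);

                        FILE *f = fopen(path, "r");
                        if (!f) {
                            perror(path);
                            return 1;
                        }
                        while (fgets(line, sizeof(line), f)) {
                            /* Options of interest: CONFIG_PREEMPT*, CONFIG_HZ* */
                            if (strncmp(line, "CONFIG_PREEMPT", 14) == 0 ||
                                strncmp(line, "CONFIG_HZ", 9) == 0)
                                fputs(line, stdout);
                        }
                        fclose(f);
                        return 0;
                    }

                    On a low-latency build you would expect to see something like CONFIG_PREEMPT=y and CONFIG_HZ=1000 instead of the voluntary-preemption, 250 Hz defaults.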



                    Peace Be With You.
                    Last edited by Paradox Ethereal; 16 October 2012, 03:20 AM.



                    • #20
                      You know, I have never encountered any stuttering or mouse problems on standard kernels. So the low-jitter kernel is not needed for every gamer, that's for certain.

