A Low-Latency Kernel For Linux Gaming

  • A Low-Latency Kernel For Linux Gaming

    Phoronix: A Low-Latency Kernel For Linux Gaming

    Within the Phoronix Forums and elsewhere it has been brought up that using a low-latency kernel can improve Linux gaming performance, but is this really the case? This article presents some simple benchmarks comparing the stock Ubuntu 12.04 LTS "generic" Linux kernel to Ubuntu's low-latency kernel flavor.


  • #2
    It may be that what people mean by improved gaming "performance" is not the framerate (which, unsurprisingly, stays unchanged or degrades a bit with preemption enabled, let alone if one were to install a proper -rt kernel). Rather, overall latency decreases, which can improve responsiveness to input devices, and the overall "smoothness" of the game may feel more right. Call it trading excess fps for consistently low-latency input: who really cares if you are pushing 172 fps if your mouse is jerky, when it could all be smooth at, say, 160 fps?

    Comment


    • #3
      When I hear about low-latency kernels I think of audio and near-realtime industrial control. I think in terms of one or two milliseconds or less for latency, performance levels necessary for those applications, but at or beyond the bleeding edge of human reflexes. At the same time, there are very real compromises necessary in order to get latency this low, and those compromises can hurt throughput. There's a reason that realtime kernels are not used for general-purpose computing. There's a reason that the low-latency kernel is not the default, and it's not because Ubuntu doesn't think we're 133t enough to use it.

      In other words, unless you know exactly why you need a low-latency kernel, and what kinds of latencies you need, you don't really need one. "I'm really fast! I don't want my computer to limit me" just doesn't cut it (or even make sense in this context).

      Comment


      • #4
        Thanks for the info

        Comment


        • #5
          Why does this article not measure frame jitter instead? The entire point of a low-latency kernel combined with an effective I/O scheduler is that your game gets consistent throughput with low latency instead of random stalls.
          Why can't the Phoronix Test Suite measure frame jitter? TechReport already did it quite well.


          Edit:

          phred14, I have a small guess. Realtime is not standard in OSes because of legacy: in the old days of computing, a realtime (or simply less laggy) system had one major downside, namely that it consumed too much computing power, which made it unattractive.
          Then again, jitter from the OS side is usually <1 ms, which is not significant on its own. But if we can find an application with severe input issues and jitter, we might be able to show just how much a more preemptive kernel can remedy the problem.
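
          For what it's worth, frame jitter is straightforward to measure in principle. A rough sketch of my own (untested; render_frame() is a hypothetical stand-in for whatever the engine actually calls): timestamp every frame, then report the mean frame time and the standard deviation of the deltas, which is the jitter.

          Code:
          #include <math.h>
          #include <stdio.h>
          #include <time.h>

          #define FRAMES 1000

          /* monotonic wall-clock time in milliseconds */
          static double now_ms(void)
          {
              struct timespec ts;
              clock_gettime(CLOCK_MONOTONIC, &ts);
              return ts.tv_sec * 1e3 + ts.tv_nsec / 1e6;
          }

          int main(void)
          {
              static double delta[FRAMES];
              double prev = now_ms();

              for (int i = 0; i < FRAMES; i++) {
                  /* render_frame();  hypothetical engine call being timed */
                  double t = now_ms();
                  delta[i] = t - prev;   /* per-frame time in ms */
                  prev = t;
              }

              /* mean and standard deviation of the frame times */
              double mean = 0.0, var = 0.0;
              for (int i = 0; i < FRAMES; i++)
                  mean += delta[i];
              mean /= FRAMES;
              for (int i = 0; i < FRAMES; i++)
                  var += (delta[i] - mean) * (delta[i] - mean);
              var /= FRAMES;

              printf("mean frame time %.3f ms, jitter (stddev) %.3f ms\n",
                     mean, sqrt(var));
              return 0;
          }

          Two kernels with the same mean can have very different stddevs, and that second number is the one this thread is arguing about.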
          Last edited by del_diablo; 22 June 2012, 06:00 PM.

          Comment


          • #6
            Originally posted by phred14 View Post
            I think in terms of one or two milliseconds or less for latency, performance levels necessary for those applications, but at or beyond the bleeding edge of human reflexes.
            Just for the record, the median perception time is about 100 ms. The average reaction time is about 250 ms (usually more, depending on a number of factors).

            Comment


            • #7
              Originally posted by phred14 View Post
              At the same time, there are very real compromises necessary in order to get latency this low, and those compromises can hurt throughput.
              Lower latency == Faster response

              This is done by making the system switch contexts much faster and/or by allowing certain processes to interrupt anything else in the system.

              The more often you switch contexts and the more often processes get interrupted, the slower overall performance will be. When the kernel must switch rapidly between processes, the CPU caches get invalidated. Main memory is so much slower than CPU cache that you can spend a significant share of your CPU cycles just refilling the cache.

              Therefore, all else being equal, a very responsive system is going to be slower than a system with poor responsiveness. Slower as in taking longer to do actual work.

              But this is going to matter most on a busy system. You can put a rough number on the cost; see the sketch below.
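
              A sketch of mine (not benchmark-grade): two processes bounce one byte across a pair of pipes, which forces at least two context switches per round trip, and the loop is timed.

              Code:
              #include <stdio.h>
              #include <time.h>
              #include <unistd.h>
              #include <sys/wait.h>

              #define ROUNDS 100000

              int main(void)
              {
                  int p2c[2], c2p[2];   /* parent->child and child->parent pipes */
                  char b = 0;

                  if (pipe(p2c) == -1 || pipe(c2p) == -1)
                      return 1;

                  if (fork() == 0) {    /* child: echo every byte straight back */
                      for (int i = 0; i < ROUNDS; i++) {
                          read(p2c[0], &b, 1);
                          write(c2p[1], &b, 1);
                      }
                      _exit(0);
                  }

                  struct timespec t0, t1;
                  clock_gettime(CLOCK_MONOTONIC, &t0);
                  for (int i = 0; i < ROUNDS; i++) {
                      write(p2c[1], &b, 1);   /* wake the child ...              */
                      read(c2p[0], &b, 1);    /* ... then block until it replies */
                  }
                  clock_gettime(CLOCK_MONOTONIC, &t1);
                  wait(NULL);

                  double ns = (t1.tv_sec - t0.tv_sec) * 1e9
                            + (t1.tv_nsec - t0.tv_nsec);
                  printf("%.0f ns per round trip (two or more switches each)\n",
                         ns / ROUNDS);
                  return 0;
              }

              Every one of those switches is a chance for the caches to go cold, which is exactly the throughput tax described above.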

              Low latency is important for audio work because you are building a system to interact with in real time. That is, when you press a button on your MIDI synth you want to hear it right away. When you are playing with a band you don't want the music you are playing to suddenly be off by 200 ms because some system process decided it was time to flush the file system cache out to disk. This can also result in buffer underruns for realtime music, meaning your system has temporarily run out of information to give to your sound card, which produces skips, scratchy sounds, pops, and other audio artifacts. If you are doing live recording with multiple inputs it can be important to keep them all in sync.

              This matters because you will have a lot going on in your system at the same time: reading MIDI data in over USB, writing MIDI data out over MIDI connections, software synths processing in real time, sample loops playing, jackd running, and all sorts of other things in addition to your normal system processes.

              A normal kernel cannot guarantee that it can cycle through all those processes and still do enough work to keep everything flowing at a 30 ms response rate. It doesn't matter how fast it is; it's just not set up to allow that, because it's designed for best efficiency. With a realtime kernel you can still fall short, but that should technically be because your system just isn't fast enough, not because the kernel failed to keep everything running properly for a quarter of a second.
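
              To make the numbers concrete (my own worked example, using the usual jackd relation latency = periods x frames / rate):

              Code:
              #include <stdio.h>

              int main(void)
              {
                  const double rate = 48000.0;  /* sample rate in Hz */
                  const int periods = 2;        /* jackd-style double buffering */

                  /* total buffer latency = periods * frames / rate */
                  for (int frames = 64; frames <= 2048; frames *= 2)
                      printf("%4d frames x %d periods @ 48 kHz -> %6.2f ms\n",
                             frames, periods, periods * frames / rate * 1000.0);
                  return 0;
              }

              At 2 x 128 frames the kernel has roughly 2.7 ms to refill each period, every period, forever; one missed deadline is an audible pop.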

              Now with video games you have a single process going flat-out. You are going to let that process be dominant, and not much else will be going on. You can get pretty much the same effect as running a realtime kernel by just raising the process priority of X and your video game above everything else, without taking the hit you'd get from an -rt kernel. A sketch of what that looks like is below.
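
              Something like the following (an untested sketch; SCHED_FIFO and priority 50 are my illustrative choices, and it needs root or CAP_SYS_NICE):

              Code:
              #include <sched.h>
              #include <stdio.h>
              #include <stdlib.h>

              int main(int argc, char **argv)
              {
                  if (argc < 2) {
                      fprintf(stderr, "usage: %s <pid>\n", argv[0]);
                      return 1;
                  }

                  /* SCHED_FIFO tasks preempt all normal (SCHED_OTHER) tasks and
                     run until they block or yield; 50 is mid-range in the 1..99
                     realtime band, already above every regular process. */
                  struct sched_param sp = { .sched_priority = 50 };
                  if (sched_setscheduler(atoi(argv[1]), SCHED_FIFO, &sp) == -1) {
                      perror("sched_setscheduler");
                      return 1;
                  }
                  return 0;
              }

              The same thing without writing code: chrt -f -p 50 <pid> against the game and the X server.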

              Comment


              • #8
                Originally posted by 89c51 View Post
                Just for the record, the median perception time is about 100 ms. The average reaction time is about 250 ms (usually more, depending on a number of factors).
                Which is blatantly false. A monitor does not need input lag as severe as 100 ms before it becomes noticeable. All you need is jitter alternating 1 ms -> 10 ms -> 1 ms -> 10 ms, and it should be noticeable that the input is quite unsmooth, especially in an application where input matters (hardcore Quake FPS, anybody?).
                Never mind that once you have gotten your mind into "ready mode" and are in a "flow of actions", the static reaction times no longer apply. The 100 ms median is true if you are waiting for something to twitch. But if you are in a constant twitching movement, in a state where you have already processed all the information, 100 ms is not your reaction time.

                Comment


                • #9
                  Originally posted by del_diablo View Post
                  Why does this article not measure frame jitter instead? The entire point of a low-latency kernel combined with an effective I/O scheduler is that your game gets consistent throughput with low latency instead of random stalls.
                  Why can't the Phoronix Test Suite measure frame jitter? TechReport already did it quite well.
                  Yes, I did not understand that either.
                  I would expect a low-latency kernel to have a lower average framerate, but I would also expect less jitter.
                  I don't see the point of the article...

                  Comment


                  • #10
                    Most serious competitive gamers will attempt to reduce latency and minimise jitter in that latency. Of course, those gamers won't be using Linux because the platform simply doesn't support their games, but if it did they'd be looking for ways to minimise input lag.

                    It has nothing to do with FPS as such, but it's fairly trivial to feel the difference in input lag between 250 fps and 125 fps whilst playing a Q3-engine game. There may be more to it, but actual mouse-pointer consistency can get seriously messed up for a gamer reacting to what's happening on his screen without a consistent system.

                    The average person denies it's possible for a couple of ms of difference to actually affect performance, but almost every high-end gamer (i.e. the people who win money at tournaments) would agree with what I said above.

                    For the record, playing a Windows game with raw input usually means you've got a consistent gaming experience, which enables a much higher level of skill, and it's obviously noticeable when you watch the people playing the games.

                    On the other hand, without DirectInput / raw input the entire Windows pointer system seems to be awful, crappy, and inconsistent, to the point where I can be successful in XP with no acceleration fixes, but not so in Windows 7.

                    I'm hoping Steam for Linux games will have some kind of raw input, because despite being a Linux fanboy, I won't be gaming on it if I can't aim.

                    FYI, if we're seriously talking about 30 ms from the kernel, I'd suspect we're already way too slow. We're talking about a group of people who shorten their USB polling intervals from 8 ms to 1-2 ms, use monitors that need to have < 10 ms lag, and play at high FPS (thus < 10 ms between frames) with generally 100 Hz+ refresh rates. A 30 ms response time is already way too slow, though I suppose that's a maximum as opposed to an average. The arithmetic below shows how those pieces stack up.
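
                    Adding it up (my own illustrative budget, using the numbers above):

                    Code:
                    #include <stdio.h>

                    int main(void)
                    {
                        /* illustrative pieces of the input-to-photon chain */
                        double usb_ms     = 1000.0 / 500.0;  /* 500 Hz polling -> 2 ms */
                        double frame_ms   = 1000.0 / 125.0;  /* 125 fps        -> 8 ms */
                        double display_ms = 10.0;            /* monitor input lag      */

                        /* roughly 20 ms before the kernel adds anything; a 30 ms
                           kernel response would more than double the total */
                        printf("budget without the kernel: %.1f ms\n",
                               usb_ms + frame_ms + display_ms);
                        return 0;
                    }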
                    Last edited by ownagefool; 22 June 2012, 06:55 PM.

                    Comment
