A Low-Latency Kernel For Linux Gaming


  • kraftman
    replied
    Originally posted by ownagefool View Post
    FYI, if we're seriously talking about a 30ms kernel, I'd suspect we're seriously being way too slow already. We're talking about a group of people who raise their USB polling rates from 8ms to 1-2ms, use monitors that need to have < 10ms lag, and play at high FPS (thus meaning < 10ms delay between frames) with generally 100Hz+ refresh rates. A 30ms response time is already way too slow, though I suppose that's a maximum as opposed to an average.
    I think you're mixing things up. Where are we actually too slow? The Linux kernel is much more responsive than Windows, but whether that matters for the things you described, I'm not so sure. When I had stuttering in Skyrim under Windows after the 1.5 patch I had to limit FPS, and the game became playable again, so there are more important things affecting smoothness than kernel responsiveness. I bet your responsiveness will be messed up with a real-time kernel and games will simply be much worse. Just a bet, but I played a lot with custom kernels in the past and the generic one was usually the best when it comes to gaming.
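    The FPS cap described above is essentially frame pacing. A minimal sketch of such a limiter in C, assuming a fixed target rate; TARGET_FPS and render_frame() are illustrative placeholders, not anything from this thread:

```c
#include <time.h>

#define TARGET_FPS 60                        /* placeholder target rate */
#define FRAME_NS   (1000000000L / TARGET_FPS)

static void render_frame(void) { /* stand-in for the game's real work */ }

int main(void)
{
    struct timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);

    for (;;) {
        render_frame();

        /* Advance the deadline by exactly one frame period... */
        next.tv_nsec += FRAME_NS;
        if (next.tv_nsec >= 1000000000L) {
            next.tv_nsec -= 1000000000L;
            next.tv_sec += 1;
        }
        /* ...and sleep until that absolute time. Absolute deadlines avoid
         * the drift a relative sleep would accumulate frame after frame. */
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
    }
}
```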



  • 89c51
    replied
    Originally posted by del_diablo View Post
    Which is blatantly false. A monitor does not need input lag as severe as 100ms before it becomes noticeable. All you need is a jitter of 1ms --> 10ms --> 1ms --> 10ms, and it should be noticeable that the input is quite unsmooth, especially if you are testing an application where the input matters (hardcore Quake FPS, anybody?).
    Never mind that once you have gotten the mind into "ready mode" and are in a "flow of actions", the static reaction times are no longer there. Now... the 100ms median is true if you are waiting for something to twitch. But if you are in constant twitching movement, in a state where you have already processed all the information, 100ms is not your reaction time.
    First off, I wasn't referring to games or computers. The numbers come from the book "Introduction to Human Factors and Ergonomics for Engineers". 100ms as a reaction time is out of this world. Quite a few things happen between the time you get stimulated and the time you do what you have to do. The human body has its limits.



  • not.sure
    replied
    Not.sure if it makes sense to benchmark a low-latency system if you don't have a latency benchmark or understand what latency is.
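    As for what a latency benchmark might look like: a cyclictest-style sketch that sleeps for a fixed period and measures how late each wakeup arrives. The 1ms period and loop count are arbitrary choices for illustration:

```c
#include <stdio.h>
#include <time.h>

#define PERIOD_NS 1000000L   /* ask to sleep exactly 1ms */
#define LOOPS     10000

int main(void)
{
    long max_ns = 0, sum_ns = 0;

    for (int i = 0; i < LOOPS; i++) {
        struct timespec target, now;
        clock_gettime(CLOCK_MONOTONIC, &target);
        target.tv_nsec += PERIOD_NS;
        if (target.tv_nsec >= 1000000000L) {
            target.tv_nsec -= 1000000000L;
            target.tv_sec++;
        }
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &target, NULL);
        clock_gettime(CLOCK_MONOTONIC, &now);

        /* How late did the kernel actually wake us? That overshoot is
         * (mostly) timer plus scheduler latency. */
        long late_ns = (now.tv_sec - target.tv_sec) * 1000000000L
                     + (now.tv_nsec - target.tv_nsec);
        sum_ns += late_ns;
        if (late_ns > max_ns)
            max_ns = late_ns;
    }
    printf("avg wakeup latency: %ld ns, max: %ld ns\n", sum_ns / LOOPS, max_ns);
    return 0;
}
```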



  • set135
    replied
    Originally posted by ownagefool View Post
    FYI, if we're seriously talking about a 30ms kernel, I'd suspect we're seriously being way too slow already. We're talking about a group of people who raise their USB polling rates from 8ms to 1-2ms, use monitors that need to have < 10ms lag, and play at high FPS (thus meaning < 10ms delay between frames) with generally 100Hz+ refresh rates. A 30ms response time is already way too slow, though I suppose that's a maximum as opposed to an average.
    I believe the latency numbers for rt kernels on good hardware are measured in *microseconds*, say an average < 10us and a maximum < 30us on a Core 2 Duo E6300.
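    Numbers in that range are typically reported by a measuring thread that runs at real-time priority with its memory locked, as tools like cyclictest do. A minimal sketch of that setup on Linux; the priority value 80 is an arbitrary choice:

```c
#include <sched.h>
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    struct sched_param sp = { .sched_priority = 80 };  /* arbitrary choice */

    /* Lock all pages so a page fault can't add milliseconds to a wakeup. */
    if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0)
        perror("mlockall");

    /* SCHED_FIFO: run until we block or something higher-priority preempts. */
    if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0)
        perror("sched_setscheduler (needs root or CAP_SYS_NICE)");

    /* ...a measurement loop like cyclictest's would go here... */
    return 0;
}
```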



  • Dukenukemx
    replied
    Originally posted by DavidNielsen View Post
    It may be that what people are talking about in terms of improved gaming "performance" is not the framerate (which unsurprisingly stays unchanged or degrades a bit with preemption enabled - let alone if one were to install a proper -rt kernel). However, you might experience a situation where overall latency decreases, which might improve responsiveness to input devices, or the overall "smoothness" of the game might feel more right. Call it trading off excess fps for equally low-latency access (who really cares if you are pushing 172 fps if your mouse is jerky, when it could all be smooth at say 160 fps).
    That's what I thought when I read low latency. I expected better response from controls and better network latency. It would obviously have traded off fps and throughput to achieve this.



  • JanC
    replied
    Originally posted by del_diablo View Post
    Which is blatantly false. A monitor does not need input lag as severe as 100ms before it becomes noticeable. All you need is a jitter of 1ms --> 10ms --> 1ms --> 10ms, and it should be noticeable that the input is quite unsmooth, especially if you are testing an application where the input matters (hardcore Quake FPS, anybody?).
    I want your hardware that jitters between 100-1000 frames/s...



  • AnonymousCoward
    replied
    This article seems pointless. With maximum fps over 60 in all games, who cares about the exact number? Minimum frame rate or the already-mentioned frame jitter seems a much more suitable measure of the effect of an rt kernel to me.
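    For reference, "minimum frame rate" as benchmarks usually report it is just the worst one-second window of a capture. A toy sketch with fabricated timestamps:

```c
#include <stdio.h>

int main(void)
{
    /* fabricated frame timestamps in seconds over a 3-second capture */
    double t[] = { 0.01, 0.2, 0.4, 0.6, 0.8,       /* 5 frames in second 0 */
                   1.1, 1.15, 1.2, 1.5, 1.7, 1.9,  /* 6 frames in second 1 */
                   2.3, 2.8 };                     /* 2 frames in second 2 */
    int n = sizeof t / sizeof t[0];

    int counts[3] = { 0 };
    for (int i = 0; i < n; i++)
        counts[(int)t[i]]++;              /* bucket into one-second windows */

    int min_fps = counts[0];
    for (int s = 1; s < 3; s++)
        if (counts[s] < min_fps)
            min_fps = counts[s];

    /* The average may look fine; the worst window is what you feel. */
    printf("minimum frame rate: %d fps\n", min_fps);
    return 0;
}
```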



  • uid313
    replied
    A low-latency kernel does not mean it's good for gaming.
    It does not mean high performance.

    It means events are guaranteed to trigger with low latency; it doesn't mean it will be fast, just quick.

    Low latency is great for industrial applications, robotics, physical security systems, medical systems, and such. It's also good for audio production. It's not for gaming.
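    The "guaranteed to trigger" property is easiest to see in a hard periodic task of the kind robotics and audio code runs. A sketch of a 1kHz loop driven by a Linux timerfd, where a read() returning more than one expiration means a missed deadline; the period and iteration count are made up for illustration:

```c
#include <stdint.h>
#include <stdio.h>
#include <sys/timerfd.h>
#include <unistd.h>

int main(void)
{
    int fd = timerfd_create(CLOCK_MONOTONIC, 0);
    struct itimerspec its = {
        .it_interval = { .tv_sec = 0, .tv_nsec = 1000000 }, /* 1ms period */
        .it_value    = { .tv_sec = 0, .tv_nsec = 1000000 },
    };
    timerfd_settime(fd, 0, &its, NULL);

    for (int i = 0; i < 5000; i++) {
        uint64_t expirations;
        /* Blocks until the next tick; the value read is how many periods
         * elapsed, so anything above 1 means we missed deadlines. */
        if (read(fd, &expirations, sizeof expirations) != sizeof expirations)
            break;
        if (expirations > 1)
            printf("missed %llu deadline(s)\n",
                   (unsigned long long)(expirations - 1));
        /* control_step() or the audio callback would run here */
    }
    close(fd);
    return 0;
}
```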



  • ownagefool
    replied
    Most serious competitive gamers will attempt to reduce latency and minimise jitter in latency. Of course, those gamers won't be using Linux because the platform simply doesn't support their games, but if it did, they'd be looking for ways to minimise input lag.

    It has nothing to do with FPS, but it's fairly trivial to see the difference in input lag between 250fps and 125fps whilst playing a Q3-engine game. There may be more to it, but without a consistent system, actual mouse pointer behaviour can get seriously messed up for a gamer reacting to what's happening on his screen.

    The average person denies it's possible for a couple of ms difference to actually affect performance, but almost every high-end gamer (i.e., the people who win money at tournaments) would agree with what I said above.

    For the record, playing a Windows game with rawinput usually means you've got a consistent gaming experience, which leads to a much higher level of skill, and is obviously noticeable when you look at the people playing the games.

    On the other hand, without directinput / rawinput the entire Windows pointer system seems to be awful, crappy, and inconsistent, to the point in Windows 7 where I can be successful in XP with no acceleration fixes, but not so in Windows 7.

    I'm hoping Steam for Linux games have some kind of rawinput, because despite being a Linux fanboy, I won't be gaming with it if I can't aim. (See the evdev sketch after this post.)

    FYI, if we're seriously talking about a 30ms kernel, I'd suspect we're seriously being way too slow already. We're talking about a group of people who raise their USB polling rates from 8ms to 1-2ms, use monitors that need to have < 10ms lag, and play at high FPS (thus meaning < 10ms delay between frames) with generally 100Hz+ refresh rates. A 30ms response time is already way too slow, though I suppose that's a maximum as opposed to an average.
    Last edited by ownagefool; 06-22-2012, 06:55 PM.
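    On the rawinput point above: the rough Linux analogue is evdev, reading unaccelerated relative motion straight from the kernel's event device, bypassing pointer acceleration. A sketch; the device path is an assumption (event numbers vary per machine) and the node needs read permission:

```c
#include <fcntl.h>
#include <linux/input.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* hypothetical device node -- find the real mouse under /dev/input/ */
    int fd = open("/dev/input/event0", O_RDONLY);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    struct input_event ev;
    while (read(fd, &ev, sizeof ev) == sizeof ev) {
        /* EV_REL events carry raw relative counts, no pointer acceleration */
        if (ev.type == EV_REL && ev.code == REL_X)
            printf("dx: %d\n", ev.value);
        else if (ev.type == EV_REL && ev.code == REL_Y)
            printf("dy: %d\n", ev.value);
    }
    close(fd);
    return 0;
}
```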



  • geearf
    replied
    Originally posted by del_diablo View Post
    Why does this retarded article not measure frame jitter instead? The entire point of a low-latency kernel combined with an effective IO manager is that your game application gets consistent throughput with low latency instead of being random.
    Why can't the Phoronix suite measure frame jitter? TechReport already did it quite well.
    Yes, I did not understand that either.
    I would expect a low-latency kernel to have a lower average framerate, but I would also expect less jitter.
    I don't see the point of the article...
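    For what it's worth, frame jitter as a number is the spread of frame-to-frame times rather than their average. A toy sketch over fabricated frame times (steady ~10ms frames plus one 30ms hitch); link with -lm:

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    /* fabricated frame times in ms: steady ~10ms frames plus one 30ms hitch */
    double frame_ms[] = { 10.4, 10.2, 10.6, 30.1, 10.3, 10.5, 10.2, 10.4 };
    int n = sizeof frame_ms / sizeof frame_ms[0];

    double sum = 0, worst = 0;
    for (int i = 0; i < n; i++) {
        sum += frame_ms[i];
        if (frame_ms[i] > worst)
            worst = frame_ms[i];
    }
    double mean = sum / n;

    /* Jitter reported as the standard deviation of frame times. */
    double var = 0;
    for (int i = 0; i < n; i++)
        var += (frame_ms[i] - mean) * (frame_ms[i] - mean);
    double jitter = sqrt(var / n);

    printf("avg fps: %.1f  worst frame: %.1f ms  jitter: %.2f ms\n",
           1000.0 / mean, worst, jitter);
    return 0;
}
```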

