That's the question, isn't it? I guess it has something to do with process scheduling in the OS kernel, but really he spams the word "jitter" for many different things, like "browser-video jitter" (on http://paradoxuncreated.com/Blog/wordpress/?p=2268).
Originally Posted by 89c51
I guess what he mostly means is high-frequency FPS variation in the video frames rendered by an application. So really much the same as the electronics definition of jitter in a clock signal.
The thing is, he doesn't in any way demonstrate which of his kernel config changes accomplished which results in his tests, if he has actually done any repeatable testing at all.
Now, maybe you can't get FPS variation or standard deviation out of Doom 3 as-is, but shouldn't that be the starting point? Doom 3 is open source now, after all.
I dunno, maybe there's something to this but the documentation and presentation leave something to be desired.
Well, I commented on this before. Apparently it is not something everyone notices. On Windows you have these guys running all the services and daemons, and encouraging others to do the same, when there is a clear difference.
On my machine, the standard kernel can't even do 30 fps video without stutter.
In these discussions there seem to be several who understand what I am talking about, yet many who argue against it.
And one even argues "Yes, you will have ultrasmooth videos, but.." - What are you saying? You WANT your videos to chop, even if throughput is on average the same? That makes no sense.
Peace Be With You.
Which all leads to a single conclusion - we need a test that measures delay in kernel responses, not kernel throughput/raw performance.
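For what it's worth, the core idea of such a test can be sketched in a few lines: ask the kernel to wake you up after a fixed interval and measure how late the wakeup actually is. This is the same principle cyclictest uses, minus its real-time rigor. Everything below (function name, interval, iteration count) is my own illustration, not anything from the kernel tree:

```python
import time

def wakeup_latency_us(interval_ms=1.0, iterations=200):
    """Request a fixed sleep and measure the overshoot past the
    requested interval; that overshoot approximates scheduler
    wakeup latency (the quantity this thread cares about)."""
    overshoots = []
    for _ in range(iterations):
        start = time.monotonic_ns()
        time.sleep(interval_ms / 1000.0)
        elapsed_us = (time.monotonic_ns() - start) / 1000.0
        overshoots.append(elapsed_us - interval_ms * 1000.0)
    return max(overshoots), sum(overshoots) / len(overshoots)

worst, avg = wakeup_latency_us()
print(f"wakeup latency: max {worst:.0f} us, avg {avg:.0f} us")
```

A real test would pin the thread, use SCHED_FIFO, and run under load, which is exactly what cyclictest already does; this just shows the measurement itself.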
This was tuned entirely by looking at Doom 3 jitter, which I got rid of, and then by observing some simple numbers in glxgears. Lukasz Sokol said he was going to have a look at a benchmark this past weekend, so try mailing him: Lukasz Sokol <elGODesDAMNcrSPAMMERSgmail.com> remove the secret message.
Please create an automatic test for this jitter measurement. We have the Doom 3 source; it should be very easy.
1. Insert timing calls before and after each frame
2. Keep track of max, and average.
3. At the end of a timedemo, calculate max - avg.
4. Print that difference both as microseconds and as a percentage of the average frame time. "Jitter for timedemo1469 was 1500 usec, or 15%".
Please read my last post if you didn't see it. Also, Doom 3 is unnecessary for getting numbers on jitter. Taxing the CPU and GPU as little as possible, displaying only something simple, just to obtain jitter levels, would be best.
Peace Be With You.
Phoronix Test Suite is awesome sauce, of course, but it's no good if you run the wrong tests.
In the future, when testing latency, these are the benchmarks that would be most relevant:
- Add Cyclictest from the Real-Time Linux wiki to the PTS (if not already present). It is the standard Linux latency benchmark.
- For game benchmarks, report the minimum FPS, median, and standard deviation instead of the average FPS. Most importantly, you want to know the single longest frame time; the 5th percentile would be useful as well.
- Throughput benchmarks should only be included as an afterthought, if at all. They don't measure the important variable.
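To make the suggested reporting concrete, here is a small sketch of the statistics that list asks for, computed from a list of per-frame times. The function name and the sample numbers are mine, purely for illustration; the 5th-percentile FPS corresponds to the 95th-percentile frame time:

```python
import statistics

def frame_stats(frame_times_ms):
    """Summarize per-frame times the way the post suggests:
    worst frame, median, standard deviation, the slow-tail
    (95th percentile) frame time, and the resulting minimum FPS."""
    ordered = sorted(frame_times_ms)
    p95 = ordered[min(len(ordered) - 1, int(0.95 * len(ordered)))]
    return {
        "max_ms": ordered[-1],
        "median_ms": statistics.median(ordered),
        "stdev_ms": statistics.stdev(ordered),
        "p95_ms": p95,
        "min_fps": 1000.0 / ordered[-1],
    }

# one hitchy frame (33.4 ms) in an otherwise steady ~60 fps run
stats = frame_stats([16.7, 16.6, 16.9, 33.4, 16.8, 17.0, 16.5])
print(stats)
```

Note how the average here would look fine (~19 ms) while the max and percentile figures expose the hitch, which is exactly the argument against reporting average FPS alone.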
Cyclictest is nice, but measuring the signal path all the way to OpenGL is what I am most interested in.
Peace Be With You!
And as I said elsewhere, if it comes down to a choice between latency/max jitter and performance, 0.2 ms (200 µs) is where I stop caring.
Peace Be With You.
So you play your games on 5000 FPS?
Originally Posted by Paradox Uncreated