Benchmarking The Ubuntu "Low-Jitter" Linux Kernel

  • #31
    Please run these tests again with per-frame data points, not just average FPS in games.

    The same thing happened with the first SLI setups: micro-stuttering.

    FPS was higher, but there was lag on every frame.

    • #32
      Originally posted by Paradox Uncreated View Post
      No small jitter on startup of doom3 even? Then there are definitely differences somewhere I am not aware of. Do note that I want it to run completely perfectly. Not even small jitter on startup. No lost frames at all. No "jerking" AT ALL.
      I think that on startup it's actually disk I/O operations taking precedence over the graphics work (please feel free to replace non-tech words with the correct terms), so of course there will be some slowdowns in screen updates. It's a matter of priority. You'd be amazed how much of what you call jitter goes away with an SSD. You should also look at ways of raising the priority of graphics and user-input operations versus disk I/O; that's what Google is doing in the newer versions of Android (4.1+).
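For the priority side of that suggestion, Linux exposes CPU niceness (and, separately, I/O scheduling classes). A minimal Python sketch of the idea — the command and file names are purely illustrative, not from this thread:

```python
import os
import subprocess

def run_background(cmd):
    """Start `cmd` at the lowest CPU priority (nice 19) so interactive
    work (the game, the X server) wins scheduling fights against it.
    For the disk side, wrapping the command with util-linux's
    `ionice -c 3` would additionally put it in the idle I/O class."""
    return subprocess.Popen(cmd, preexec_fn=lambda: os.nice(19))

# Hypothetical example: a large copy running while a game plays.
# job = run_background(["cp", "big.iso", "/mnt/backup/"])
```

Lowering a background job's priority is unprivileged; raising the game's priority above the default (negative nice values) requires root.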

      • #33
        Originally posted by WorBlux View Post
        We want deviation measured from one frame to the next within a single benchmark, not one benchmark run to the next.
        Oh, yes, you're right. Making that information available shouldn't be difficult to implement...
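Given a per-frame timing log (assumed here to be one frame time in milliseconds per frame), the within-run frame-to-frame deviation WorBlux asks for is straightforward to compute; a sketch:

```python
import statistics

def frame_jitter(frame_times_ms):
    """Summarize variation between consecutive frame times in ONE run
    (not between benchmark runs)."""
    deltas = [abs(b - a) for a, b in zip(frame_times_ms, frame_times_ms[1:])]
    return {
        "mean_ms": statistics.mean(frame_times_ms),
        "stdev_ms": statistics.stdev(frame_times_ms),
        "worst_delta_ms": max(deltas),
    }

# A single 40 ms hitch in an otherwise steady ~16.6 ms run:
print(frame_jitter([16.6, 16.7, 16.5, 40.1, 16.6]))
```

The `worst_delta_ms` figure is what the micro-stuttering complaints earlier in the thread are about: a run can have a good mean while still containing large frame-to-frame jumps.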

        • #34
          Originally posted by Paradox Uncreated View Post
          Peace Be With You.
          Why? Why? Why?

          • #35
            @Paradox Uncreated

            You completely disable compositing in the instructions on your page. I don't think that will help any Unity user. KDE would still work that way, but really it is enough to turn the effects off when you play games or videos. Your nick is somehow "right": what you are doing really is a paradox, and your tuning efforts are completely out of control.

            It is impossible to get rid of disk-I/O-related issues just by raising the priority of X. Where should the new data magically appear from? If you play games that buffer everything at level start, you don't see that effect, of course. Look at Rage, which can be run with Wine as well (use: winetricks xact_jun2010 directx9): it streams textures from disk all the time. Best get yourself an SSD and try it without your hacks. And next time don't claim that you can't play properly while a copy process runs in the background. I personally think your hacks for Doom 3/Prey are overkill for most graphics cards (until you get a really fast one) and should never be used.

            • #36
              Did you ever play Rage? I mean, for Doom 3/Prey in single-player mode it is completely unimportant: you will never die because of a small lag, you can save everywhere, and you could even use god mode.

              • #37
                You can be sure that it is possible to play Rage with Wine. And in case you want to play it on Windows, you need absolutely no tweaks. Just use a 64-bit OS; then you get at least 4 GB of address space for the application (not 3 GB as on a 32-bit one). What is the point of using XP for new games? You own a DX10 card. OK, for OpenGL that would not matter, but XP is restricted to DX9. If there is something you could do wrong, you do it wrong.

                • #38
                  Originally posted by Paradox Uncreated View Post
                  Well a lot of people believe that and maybe even an SSD helps. But what would you say if I am running completely without those lost frames?

                  Peace Be With You.
                  Excuse me while I call BS. Since you have yet to measure frame LATENCY, you can't make that claim. Sure, SLI/CF can spit out hundreds of FPS, but frames get dropped ALL THE TIME. (see: http://techreport.com/review/21516/i...e-benchmarking)

                  Secondly, any modern scheduler should be smart enough to do something else while the HDD is busy with I/O (heck, any I/O operation is reason enough to preempt the current thread, since it's going to be waiting for several thousand CPU cycles anyway...).

                  Thirdly, you simply can't "remove" the time it takes to go to the HDD, get the necessary data, load it into RAM, and then get it to the CPU, just by changing a few numbers in the kernel. You can somewhat reduce the latency of getting your thread running again, but that's about it.

                  Finally, I note that if you turn every non-essential service off (no matter how useful), latency will naturally decrease. I wonder how well your kernel works when attempting multiple heavy-workload tasks at once. I would wager that all the extra context switching would significantly increase the execution time of most tasks.
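The Tech Report-style point about frame latency is that an average FPS number hides the tail: two runs with the same average can have very different worst frames. A sketch with made-up sample data (the nearest-rank percentile helper is an illustrative choice, not a standard):

```python
def percentile(values, p):
    """Nearest-rank percentile of a list of frame times (ms)."""
    s = sorted(values)
    k = max(0, min(len(s) - 1, round(p / 100 * len(s)) - 1))
    return s[k]

smooth = [16.7] * 100                  # steady ~60 FPS
spiky = [10.0] * 95 + [144.0] * 5      # same total time, five big hitches

for name, run in [("smooth", smooth), ("spiky", spiky)]:
    avg_fps = 1000 * len(run) / sum(run)
    print(name, round(avg_fps), "FPS avg,", percentile(run, 99), "ms 99th pct")
```

Both runs report the same average FPS, yet the 99th-percentile frame time differs by almost an order of magnitude — which is exactly why "FPS was higher, but there was lag" is a coherent complaint.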

                  • #39
                    Originally posted by Paradox Uncreated View Post
                    PS: Intel is doing work on low-latency trading and networking, so I probably don't have to talk much about this anyway, and few need to understand.

                    When I see low-latency traders talk about 800 ns (nanosecond) latency, I am at least happy.

                    That means development on low latency/low jitter is fully ongoing, without the need for a lot of nutjobs calling it "BS" or whatever.

                    They should go back to their jittering desktops and, I don't know... not quite be present. I actually had an uncle like this. While on vacation once, everyone complained about flies and mosquito bites. But no, he didn't feel a thing.

                    It's like the thousand tweak apps on Windows, and the arguments and work on BFS that inspired CFS are about the same. But this uncle is 1 out of 10, I guess, on a vacation, and that makes quite a few thousand online.

                    Peace Be With You.
                    Except trades are RIDICULOUSLY low-throughput tasks. The lower you make the latency, the less throughput you are going to have.

                    • #40
                      So how do we benchmark this feature?

                      Since the article's benchmarks don't show anything (and the full set on openbenchmarking aren't showing anything either), how do you know there is a reduction in jitter?

                      Once you know that jitter is better, how can you measure it with a tool (preferably automated)?

                      If you can't measure it (or measure it with reasonable accuracy), then how can we ever know if we're getting better at reducing it, or if we've gotten worse?
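One automatable proxy that needs no game at all is scheduler wake-up jitter: ask to sleep for a fixed interval many times and record how late each wake-up is. A minimal sketch (the interval and iteration count are arbitrary choices, and this measures only timer/scheduler latency, not rendering):

```python
import time

def wakeup_jitter_us(interval_s=0.001, iterations=200):
    """Request `interval_s` sleeps and return each wake-up's lateness
    in microseconds. The spread of this list is a crude, repeatable
    jitter metric that a benchmark harness could track over kernels."""
    lateness = []
    for _ in range(iterations):
        start = time.perf_counter()
        time.sleep(interval_s)
        lateness.append((time.perf_counter() - start - interval_s) * 1e6)
    return lateness

samples = wakeup_jitter_us()
print(f"worst wake-up lateness: {max(samples):.0f} us")
```

Running the same loop under different kernels (or under background load) and comparing the distributions would give the kind of automated, comparable number this post asks for.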
