Is Compiz On Its Deathbed?


  • #51
    Originally posted by ninez

    I think you're making the grand assumption that my 3 (or even 4) cores aren't already busy with other processing. It is quite typical on my desktop to be running 30-40 applications (all pro-audio, using/requiring RT scheduling). Right now, *idling*, I am using 34% of my CPU(s); keep in mind that I am running 28 pro-audio applications, not including Firefox, gedit and MuseScore. Not one of my cores is idling at zero (or even close to zero). As for the latter part of your comment, I'm not sure, and wouldn't want to speculate either.
    Wow! That's more overhead than even I thought rt would have, based on our earlier discussions.
    Have you tried adjusting the slack timer so it can avoid some wakeups?



    • #52
      Originally posted by liam
      Wow! That's more overhead than even I thought rt would have, based on our earlier discussions.
      Have you tried adjusting the slack timer so it can avoid some wakeups?
      Hey liam, let me clarify a little.

      When I say 'idle', I really mean something along the lines of 'as idle as it can be', i.e. even in an idle state, there is a lot of processing going on. So it's not so much overhead for RT specifically; this is just the overhead of running certain programs that will use CPU regardless of whether or not you can actually hear what they are doing (or processing).

      I haven't played around with /proc/<procid>/task/<task_id>/timer_slack_ns (if that is what you mean).

      I've seen that cgroups can manage this stuff, but I haven't investigated it.

      The best RT-related tunables I have played with in the past that yielded good results (though this was latency-related) were these three:

      /proc/sys/kernel/sched_latency_ns
      /proc/sys/kernel/sched_min_granularity_ns
      /proc/sys/kernel/sched_wakeup_granularity_ns

      I had to run tests over and over until I got the right mix between them (using standard RT-related tools, like cyclictest). But these days things run very smoothly, and I haven't felt the need to be as crazy about those sorts of tweaks, as my box has been working very well.
      Last edited by ninez; 03-06-2012, 10:42 PM.



      • #53
        Originally posted by ninez
        Hey liam, let me clarify a little.

        When I say 'idle', I really mean something along the lines of 'as idle as it can be', i.e. even in an idle state, there is a lot of processing going on. So it's not so much overhead for RT specifically; this is just the overhead of running certain programs that will use CPU regardless of whether or not you can actually hear what they are doing (or processing).

        I haven't played around with /proc/<procid>/task/<task_id>/timer_slack_ns (if that is what you mean).

        I've seen that cgroups can manage this stuff, but I haven't investigated it.

        The best RT-related tunables I have played with in the past that yielded good results (though this was latency-related) were these three:

        /proc/sys/kernel/sched_latency_ns
        /proc/sys/kernel/sched_min_granularity_ns
        /proc/sys/kernel/sched_wakeup_granularity_ns

        I had to run tests over and over until I got the right mix between them (using standard RT-related tools, like cyclictest). But these days things run very smoothly, and I haven't felt the need to be as crazy about those sorts of tweaks, as my box has been working very well.
        Hey ninez,

        That's what I assumed you meant by idle, namely, programs that were still in active memory but not actively processing.
        Assuming the programs aren't just terribly written, or need to do synchronous logging even when idle (I think FF still does this), that percentage is likely due to RT_PREEMPT (I'd guess timer setup/teardown is the main culprit, which is why I mentioned the slack timer... BTW, cgroup management is going to be easier than doing it per-process: https://lkml.org/lkml/2011/10/11/246). However, if your system is working well enough, I sure as hell wouldn't screw with it.

        My tuning experience has been very hit or miss. The problem, of course, is defining the problem and creating tests. Unfortunately, for ordinary desktop usage (i.e., encompassing more than IRQ request delay), it is very hard to measure, and people don't even agree on what we should be measuring. If we did, there would be no ambiguity about which scheduler is better for general desktop usage.

        BTW, I don't recall if I told you but I had to uninstall that RT kernel I got from the stanford repos. It was wreaking a wide swathe of destruction all through my desktop.

        Best/Liam



        • #54
          Originally posted by liam
          That's what I assumed you meant by idle, namely, programs that were still in active memory but not actively processing.
          That's not entirely correct: some stuff is 'actively processing' even when everything is pretty much idle. For example, I have several instruments (live inputs), i.e. guitar, a couple of mics, a Kaossilator, which even when not being played will be passing a signal through to other clients (such as DSP effects)... so some of these apps are always using CPU, which is why my CPU usage seemed high.

          Originally posted by liam
          Assuming the programs aren't just terribly written, or need to do synchronous logging even when idle (I think FF still does this), that percentage is likely due to RT_PREEMPT (I'd guess timer setup/teardown is the main culprit, which is why I mentioned the slack timer... BTW, cgroup management is going to be easier than doing it per-process: https://lkml.org/lkml/2011/10/11/246). However, if your system is working well enough, I sure as hell wouldn't screw with it. My tuning experience has been very hit or miss. The problem, of course, is defining the problem and creating tests. Unfortunately, for ordinary desktop usage (i.e., encompassing more than IRQ request delay), it is very hard to measure, and people don't even agree on what we should be measuring. If we did, there would be no ambiguity about which scheduler is better for general desktop usage.
          For the most part, I stay away from cgroups (with RT/jackd); I don't think I really need them at this point. I probably could get it working, but it doesn't seem worth the effort. My system is very reliable and solid, and although I do like to screw things up sometimes ;p ... this machine always needs to be working well.


          Originally posted by liam
          BTW, I don't recall if I told you but I had to uninstall that RT kernel I got from the stanford repos. It was wreaking a wide swathe of destruction all through my desktop.
          Really?
