The v2 Rotary Interactivity Favor Scheduler


  • kernelOfTruth
    replied
    lol - something nasty is going on:


    [40851.104643] BUG: Bad rss-counter state mm:ffff880230e2bb80 idx:1 val:-2
    [40851.104648] BUG: Bad rss-counter state mm:ffff880230e2bb80 idx:2 val:2
    [41915.438826] BUG: Bad rss-counter state mm:ffff8802350c8000 idx:1 val:-1
    [41915.438834] BUG: Bad rss-counter state mm:ffff8802350c8000 idx:2 val:1
    [41915.440299] BUG: Bad rss-counter state mm:ffff880231aa1f80 idx:1 val:-1
    [41915.440307] BUG: Bad rss-counter state mm:ffff880231aa1f80 idx:2 val:1
    [41915.440454] BUG: Bad rss-counter state mm:ffff8802223a6c80 idx:1 val:-1
    [41915.440459] BUG: Bad rss-counter state mm:ffff8802223a6c80 idx:2 val:1
    [41915.442346] BUG: Bad rss-counter state mm:ffff8802351b0380 idx:1 val:-1
    [41915.442352] BUG: Bad rss-counter state mm:ffff8802351b0380 idx:2 val:1
    [41915.593699] BUG: Bad rss-counter state mm:ffff880184a08380 idx:1 val:-1
    [41915.593704] BUG: Bad rss-counter state mm:ffff880184a08380 idx:2 val:1
    [41915.607687] BUG: Bad rss-counter state mm:ffff8801842e6580 idx:1 val:-2
    [41915.607692] BUG: Bad rss-counter state mm:ffff8801842e6580 idx:2 val:2
    [41915.617876] BUG: Bad rss-counter state mm:ffff8801842e5400 idx:1 val:-2
    [41915.617880] BUG: Bad rss-counter state mm:ffff8801842e5400 idx:2 val:2
    [41915.621546] BUG: Bad rss-counter state mm:ffff8801842e4600 idx:1 val:-1
    [41915.621550] BUG: Bad rss-counter state mm:ffff8801842e4600 idx:2 val:1
    [41915.621988] BUG: Bad rss-counter state mm:ffff880184a0ec80 idx:1 val:-2
    [41915.621992] BUG: Bad rss-counter state mm:ffff880184a0ec80 idx:2 val:2
    [41915.622856] BUG: Bad rss-counter state mm:ffff8801842e5b00 idx:1 val:-1
    [41915.622860] BUG: Bad rss-counter state mm:ffff8801842e5b00 idx:2 val:1
    [41915.651114] BUG: Bad rss-counter state mm:ffff8802351e2680 idx:1 val:-2
    [41915.651118] BUG: Bad rss-counter state mm:ffff8802351e2680 idx:2 val:2
    [41915.652672] BUG: Bad rss-counter state mm:ffff8802351e5400 idx:1 val:-2
    [41915.652677] BUG: Bad rss-counter state mm:ffff8802351e5400 idx:2 val:2
    [41915.655617] BUG: Bad rss-counter state mm:ffff8801842e2300 idx:1 val:-1
    [41915.655621] BUG: Bad rss-counter state mm:ffff8801842e2300 idx:2 val:1
    [41915.660228] BUG: Bad rss-counter state mm:ffff8802351e3b80 idx:1 val:-1
    [41915.660231] BUG: Bad rss-counter state mm:ffff8802351e3b80 idx:2 val:1
    [41915.662536] BUG: Bad rss-counter state mm:ffff8801842e0a80 idx:1 val:-1
    [41915.662540] BUG: Bad rss-counter state mm:ffff8801842e0a80 idx:2 val:1
    [41915.728599] BUG: Bad rss-counter state mm:ffff880235277380 idx:1 val:-2
    [41915.728602] BUG: Bad rss-counter state mm:ffff880235277380 idx:2 val:2
    [41922.881771] BUG: Bad rss-counter state mm:ffff8802223a3b80 idx:1 val:-1
    [41922.881777] BUG: Bad rss-counter state mm:ffff8802223a3b80 idx:2 val:1
    [41924.393537] BUG: Bad rss-counter state mm:ffff8802351e0700 idx:1 val:-2
    [41924.393543] BUG: Bad rss-counter state mm:ffff8802351e0700 idx:2 val:2
    [41925.418184] BUG: Bad rss-counter state mm:ffff8802223a7a80 idx:1 val:-1
    [41925.418190] BUG: Bad rss-counter state mm:ffff8802223a7a80 idx:2 val:1
    [41925.419895] BUG: Bad rss-counter state mm:ffff8802223a5e80 idx:1 val:-1
    [41925.419902] BUG: Bad rss-counter state mm:ffff8802223a5e80 idx:2 val:1
    [41926.007780] BUG: Bad rss-counter state mm:ffff8801842e2a00 idx:1 val:-1
    [41926.007788] BUG: Bad rss-counter state mm:ffff8801842e2a00 idx:2 val:1
    [41926.012419] BUG: Bad rss-counter state mm:ffff880184a0db00 idx:1 val:-1
    [41926.012427] BUG: Bad rss-counter state mm:ffff880184a0db00 idx:2 val:1
    [41926.015712] BUG: Bad rss-counter state mm:ffff880184a0a680 idx:1 val:-1
    [41926.015722] BUG: Bad rss-counter state mm:ffff880184a0a680 idx:2 val:1
    [41926.045152] BUG: Bad rss-counter state mm:ffff880230e29f80 idx:1 val:-1
    [41926.045158] BUG: Bad rss-counter state mm:ffff880230e29f80 idx:2 val:1
    [41926.092410] BUG: Bad rss-counter state mm:ffff8801842e1180 idx:1 val:-1
    [41926.092415] BUG: Bad rss-counter state mm:ffff8801842e1180 idx:2 val:1

    I'll tweak dirty ratios, swap, etc. and see how it goes

    actually it shouldn't be necessary but in this case there's no other way since I need the box
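
    For reference, a minimal sketch of the kind of sysctl tweak meant here - the values are purely illustrative, not the settings actually used:

    # make writeback kick in earlier (illustrative values, run as root)
    sysctl -w vm.dirty_background_ratio=5
    sysctl -w vm.dirty_ratio=10

    # check the current values
    sysctl vm.dirty_background_ratio vm.dirty_ratio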


    thanks !



  • kernelOfTruth
    replied
    Originally posted by 3766691 View Post
    So maybe RIFS-ES-LOW-SPEC could help you make the system responsive.
    Anyway, many people complain about the unresponsiveness problem with RIFS-ES and many people like RIFS-ES-LOW-SPEC, so RIFS-ES-LOW-SPEC will become the official version of RIFS-ES.
    RIFS will still be kept; it is used for comparison.
    actually my system just got very unresponsive (even worse than BFS or CFS !) while playing 2 streams (I use it while studying & relaxing), backing up data (ext4 -> ext4 on luks partitions) and having around 10-20 okular instances open,

    20-30 tabs in chromium, 2-4 tabs in firefox, 4-6 tabs in nautilus, xfce4 desktop with compiz-fusion


    and this was only while rsyncing the whole partition (around 1.3 TB)

    afaik it didn't get this extreme in the past - with CFS there were small stops of sound but then it immediately continued (0.5 - 2 seconds), and that was with only 1 audio stream so far

    it can't be that the 2nd stream causes that much trouble

    swap usage is around 1.6 GB, RAM usage is around 6 GB of 8

    vfs_cache_pressure around 50, swappiness 60
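
    For reference, these two knobs live under /proc/sys/vm and can be read or set like this (a sketch, using the values mentioned above):

    cat /proc/sys/vm/vfs_cache_pressure    # 50 here
    cat /proc/sys/vm/swappiness            # 60 here
    sysctl -w vm.vfs_cache_pressure=50     # setting them needs root
    sysctl -w vm.swappiness=60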





    I'll try using the original (v2 ?) implementation without lowres of RIFS (which worked best for me so far) and see whether that improves things - hopefully it does since I really need the box 1000% available right now



    Chen, do you think this could be several issues ? (some more tweaks needed in RIFS, or major issues in the sound, ata, graphics, etc. subsystems ?)

    thanks !



    edit:

    ok, tweaked the cpu governor a bit - let's see how it goes
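
    A sketch of how the governor is usually inspected and switched via the cpufreq sysfs interface - the exact governor chosen isn't stated here:

    # show the available governors and the one currently in use
    cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors
    cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor

    # switch all CPUs to e.g. the performance governor (as root)
    for c in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do
        echo performance > "$c"
    done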


    edit2:

    it's not even only under heavy i/o

    now it's all the time

    it seems some apps are constantly keeping the cpu / scheduler busy

    and load is around 2-3 - weird :/
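
    A quick way to see which apps are keeping the cpu busy (a sketch using standard procps tools):

    # top CPU consumers right now
    ps -eo pid,comm,%cpu --sort=-%cpu | head -n 15
    # current load averages
    uptime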


    I definitely have to try out the non-ES RIFS scheduler and compare
    Last edited by kernelOfTruth; 22 June 2012, 02:33 PM.



  • 3766691
    replied
    Originally posted by kernelOfTruth View Post
    nice


    I believe I know what is causing trouble for me: it's pulseaudio !

    it seems it's a real PITA when running 2 streams (I started this week listening to two and more streams - I love mixing) and htop shows between 100-300% cpu load - which is ridiculous btw


    that not only affects fluidity of video playback but also overall smoothness & fluidity of my composited desktop (compiz-fusion) - argh !


    the system is still smooth but it's at least noticeable that the cpu is kept busy ...
    So maybe RIFS-ES-LOW-SPEC could help you make the system responsive.
    Anyway, many people complain about the unresponsiveness problem with RIFS-ES and many people like RIFS-ES-LOW-SPEC, so RIFS-ES-LOW-SPEC will become the official version of RIFS-ES.
    RIFS will still be kept; it is used for comparison.



  • kernelOfTruth
    replied
    Originally posted by 3766691 View Post
    nice


    I believe I know what is causing trouble for me: it's pulseaudio !

    it seems it's a real PITA when running 2 streams (I started this week listening to two and more streams - I love mixing) and htop shows between 100-300% cpu load - which is ridiculous btw


    that not only affects fluidity of video playback but also overall smoothness & fluidity of my composited desktop (compiz-fusion) - argh !


    the system is still smooth but it's at least noticeable that the cpu is kept busy ...



  • 3766691
    replied
    Originally posted by kernelOfTruth View Post
    man, that's impressive to say the least


    BFS & CFS "scale" linearly toward worse latency, while RIFS-ES (already better) keeps improving beyond 60-70 clients as more and more clients are added

    fascinating !


    no idea about scheduling but there's surely some way to improve it even more (interestingly it scales similarly to BFS up to 50, and between 55 and around 72 it's worse, but then gets significantly better - might this be an issue
    with parts from mainline ? - you're partly using stuff from mainline - right ?)

    [I'm referring to Result-001.PNG, "Benchmark between RIFS-ES, CFS, BFS(latt, 64-200 clients)"]



    oh - kudos where kudos are due:





    I'll get around in about a week to test it again under very heavy load

    thanks !
    More benchmarks with RIFS-ES (Low Spec), CFS, BFS:


    RIFS-ES-Low-Spec has been posted.



  • 3766691
    replied
    Originally posted by kernelOfTruth View Post
    did some data backup and at the same time used the opportunity to watch 1080p video + 2 sound streams (including the hd video's stream)

    at least 5-10 times it stopped for 1-2 seconds


    observations so far:

    1) there seem to be issues with the sound system (probably pulseaudio related)

    would renice help ?

    2) with heavy writing + HD video it lags pretty much - so there's issues with i/o, I'm already using BFQ but seemingly there's still room for improvements

    3) amd64: there seem to be more issues with heavy i/o (see the amd64 gentoo subforum on forums.gentoo.org)


    so I'll see during the next days on regular / everyday usage how it goes


    will at earliest be able to compare in a week with the non-ES



    impression so far: it might only be a feeling, but the non-ES RIFS felt a little more fluid, dunno (especially compared to the total halt of sound + video during HD video streaming - weird, gotta check next time whether the video streaming stopped or whether it really was due to heavy i/o)


    thanks !
    Yeah, it is caused by the sleep tracking.
    So here I have posted the no-sleep-tracking version


    and the benchmarks are done with the non-sleep-tracking version


    Thanks for reporting this (sleep tracking broken).
    CHEN



  • kernelOfTruth
    replied
    did some data backup and at the same time used the opportunity to watch 1080p video + 2 sound streams (including the hd video's stream)

    at least 5-10 times it stopped for 1-2 seconds


    observations so far:

    1) there seem to be issues with the sound system (probably pulseaudio related)

    would renice help ? (see the sketch after this list)

    2) with heavy writing + HD video it lags pretty much - so there's issues with i/o, I'm already using BFQ but seemingly there's still room for improvements

    3) amd64: there seem to be more issues with heavy i/o (see the amd64 gentoo subforum on forums.gentoo.org)
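
    On the renice question in 1): a minimal sketch of how pulseaudio could be given a higher priority, assuming it is indeed the sound daemon at fault here - the -11 value is just illustrative:

    pid=$(pgrep -x -o pulseaudio)       # oldest matching pulseaudio process
    renice -n -11 -p "$pid"             # lower nice value = higher priority; negative values need root
    ps -o pid,ni,comm -p "$pid"         # verify the new nice value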


    so I'll see during the next days on regular / everyday usage how it goes


    will at earliest be able to compare in a week with the non-ES



    impression so far: it might only be a feeling, but the non-ES RIFS felt a little more fluid, dunno (especially compared to the total halt of sound + video during HD video streaming - weird, gotta check next time whether the video streaming stopped or whether it really was due to heavy i/o)


    thanks !



  • 3766691
    replied
    Originally posted by kernelOfTruth View Post
    man, that's impressive to say the least


    BFS & CFS "scale" linearly toward worse latency, while RIFS-ES (already better) keeps improving beyond 60-70 clients as more and more clients are added

    fascinating !


    no idea about scheduling but there's surely some way to improve it even more (interestingly it scales similarly to BFS up to 50, and between 55 and around 72 it's worse, but then gets significantly better - might this be an issue
    with parts from mainline ? - you're partly using stuff from mainline - right ?)

    [I'm referring to Result-001.PNG, "Benchmark between RIFS-ES, CFS, BFS(latt, 64-200 clients)"]



    oh - kudos where kudos are due:





    I'll get around in about a week to test it again under very heavy load

    thanks !
    For each benchmark, I make sure that the system is truly idle first.
    Then, run latt -cN sleep 10 > sched_name.resultN
    After one measurement is finished, I idle my system (not moving the mouse) for 20 s and then run the next benchmark.

    If I don't idle my system before running the benchmark, the latency result for latt -c200 is still lower than with both BFS and CFS, but the latency is much higher than with an idle start, because the benchmark gets treated as an io-bound task.
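
    A sketch of that procedure as a script - the latt invocation is copied from the description above, and the client counts are just illustrative steps within the 64-200 range:

    #!/bin/sh
    # run latt with increasing client counts, letting the system settle between runs
    sched_name=rifs-es                   # label for the scheduler under test
    for N in 64 100 150 200; do
        sleep 20                         # let the system go idle first (don't touch the mouse)
        latt -c"$N" sleep 10 > "${sched_name}.result${N}"
    done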



  • 3766691
    replied
    Originally posted by kernelOfTruth View Post
    man, that's impressive to say the least


    BFS & CFS "scale" linearly toward worse latency, while RIFS-ES (already better) keeps improving beyond 60-70 clients as more and more clients are added

    fascinating !


    no idea about scheduling but there's surely some way to improve it even more (interestingly it scales similarly to BFS up to 50, and between 55 and around 72 it's worse, but then gets significantly better - might this be an issue
    with parts from mainline ? - you're partly using stuff from mainline - right ?)

    [I'm referring to Result-001.PNG, "Benchmark between RIFS-ES, CFS, BFS(latt, 64-200 clients)"]



    oh - kudos where kudos are due:





    I'll get around in about a week to test it again under very heavy load

    thanks !
    This is the effect of the ES feature. If a task has not run for a long time, it can gain high priority faster.
    If a task sleeps and wakes up very often it can still gain high priority, but its priority will be lower than that of a task like:

    /* a task that mostly sleeps and barely uses the CPU */
    #include <unistd.h>

    int main(void)
    {
            int i;
            for (i = 0; i < 60; i++)
                    sleep(1);
            return 0;
    }



  • kernelOfTruth
    replied
    Originally posted by 3766691 View Post
    Also I have posted the newest latency-related benchmark.
    man, that's impressive to say the least


    BFS & CFS "scale" linearly toward worse latency, while RIFS-ES (already better) keeps improving beyond 60-70 clients as more and more clients are added

    fascinating !


    no idea about scheduling but there's surely some way to improve it even more (interestingly it scales similarly to BFS up to 50, and between 55 and around 72 it's worse, but then gets significantly better - might this be an issue
    with parts from mainline ? - you're partly using stuff from mainline - right ?)

    [I'm referring to Result-001.PNG, "Benchmark between RIFS-ES, CFS, BFS(latt, 64-200 clients)"]



    oh - kudos where kudos are due:





    I'll get around in about a week to test it again under very heavy load

    thanks !
    Last edited by kernelOfTruth; 19 June 2012, 01:08 PM.

