Fixed: The Linux Desktop Responsiveness Problem?


  • squirrl
    replied
    backports to all the kernels up to now?

    "Cute", when is kernel.org going to start backports to all the old kernels?


  • nerdopolis
    replied
    piotr: I'm not exactly sure, but dd is probably pegging your CPU.

    What happens to your performance when you copy an actual 5 GB file instead?
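One hedged way to check whether dd itself is the bottleneck is simply to time it (a sketch; the path and size here are arbitrary, the thread's test used ~5 GiB). If user+sys time is close to real (wall) time, dd is CPU-bound; if real time dominates, the run is waiting on the disk.

```shell
#!/bin/bash
# Time a dd run to see where the time goes. The `time` keyword reports
# real/user/sys on stderr; dd's own transfer summary is discarded.
time dd if=/dev/zero of=/tmp/ddtime.test bs=1M count=64 2>/dev/null
rm -f /tmp/ddtime.test
```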


  • damentz
    replied
    Patches have been available in the zen-kernel.org git repository for some time.

    2.6.35: http://git.zen-kernel.org/?p=kernel/...df2807fc826af5

    2.6.34: http://git.zen-kernel.org/?p=kernel/...fdf3ebcc2a62f1


  • FunkyRider
    replied
    Having a dedicated system/VM disk would help a lot.

    I run my root and swap on a separate small SSD and keep all other data on a 640 GB HDD; copying an xxGB file feels like nothing.

    The fundamental problem lies in the disk's mechanical actuator arm: it just can't serve OS read requests while it is constantly writing somewhere else.


  • piotr
    replied
    Well, it's not working for me, I think.

    Code:
    dd if=/dev/zero of=test bs=1M count=5024 && rm test -f
    in a while loop, and Firefox lags and the mouse gets laggy from time to time. HARDCORE. :<

    Also, I think that after a fresh boot, starting Firefox took way more time than normal.
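For reference, the loop described above presumably looks something like the sketch below. The original ran `dd ... count=5024 && rm test -f` (about 5 GiB per pass) in an endless while loop; this version uses a few small iterations so it terminates.

```shell
#!/bin/bash
# Bounded sketch of the write-pressure test described above: repeatedly
# write a zero-filled file, then delete it, to keep dirty pages flowing
# to disk while the desktop is in use.
for i in 1 2 3; do
    dd if=/dev/zero of=/tmp/stress.test bs=1M count=8 2>/dev/null
    rm -f /tmp/stress.test
done
```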


  • yotambien
    replied
    Originally posted by kernelOfTruth View Post
    you're welcome

    right now I'm copying several hundreds of GiBs back and forth between different filesystems and it really feels snappier - and this without BFS or BFQ !


    I had to change a few names, since a variable was renamed between 2.6.35 and the kernel the patch authors were working against
    Wait a sec, aren't you the dude who said in another thread that there was no problem about this at all? And now you post a patch to solve it? Strange worlds, I'm telling you. Anyway, good stuff.

    Any other reports from people who had noticed the problem before?


  • kernelOfTruth
    replied
    Originally posted by piotr View Post
    Thanks for code.

    Code:
    File to patch: mm/vmscan.c
    patching file mm/vmscan.c
    Hunk #1 succeeded at 1112 (offset -1 lines).
    Hunk #2 succeeded at 1242 (offset -1 lines).
    Time to test it. ;-)
    you're welcome

    right now I'm copying several hundreds of GiBs back and forth between different filesystems and it really feels snappier - and this without BFS or BFQ !


    I had to change a few names, since a variable was renamed between 2.6.35 and the kernel the patch authors were working against

    more info:

    http://forums.gentoo.org/viewtopic-t...-start-50.html


  • piotr
    replied
    Originally posted by kernelOfTruth View Post
    kudos to Wu Fengguang and KOSAKI Motohiro

    I haven't tested the patch yet (compiling NOW) so use it at your own risk

    this applies to 2.6.35
    Thanks for code.

    Code:
    File to patch: mm/vmscan.c
    patching file mm/vmscan.c
    Hunk #1 succeeded at 1112 (offset -1 lines).
    Hunk #2 succeeded at 1242 (offset -1 lines).
    Time to test it. ;-)


  • kernelOfTruth
    replied
    Code:
    --- /usr/src/sources/kernel/zen-upstream/mm/vmscan.c	2010-07-21 17:01:20.911512995 +0200
    +++ mm/vmscan.c	2010-08-04 22:11:43.663379966 +0200
    @@ -1113,6 +1113,47 @@
     }
     
     /*
    + * Returns true if the caller should wait to clean dirty/writeback pages.
    + *
    + * If we are direct reclaiming for contiguous pages and we do not reclaim
    + * everything in the list, try again and wait for writeback IO to complete.
    + * This will stall high-order allocations noticeably. Only do that when really
    + * need to free the pages under high memory pressure.
    + */
    +static inline bool should_reclaim_stall(unsigned long nr_taken,
    +					unsigned long nr_freed,
    +					int priority,
    +					struct scan_control *sc)
    +{
    +	int lumpy_stall_priority;
    +
    +	/* kswapd should not stall on sync IO */
    +	if (current_is_kswapd())
    +		return false;
    +
    +	/* Only stall on lumpy reclaim */
    +	if (!sc->lumpy_reclaim_mode)
    +		return false;
    +
    +	/* If we have reclaimed everything on the isolated list, no stall */
    +	if (nr_freed == nr_taken)
    +		return false;
    +
    +	/*
    +	 * For high-order allocations, there are two stall thresholds.
    +	 * High-cost allocations stall immediately whereas lower
    +	 * order allocations such as stacks require the scanning
    +	 * priority to be much higher before stalling.
    +	 */
    +	if (sc->order > PAGE_ALLOC_COSTLY_ORDER)
    +		lumpy_stall_priority = DEF_PRIORITY;
    +	else
    +		lumpy_stall_priority = DEF_PRIORITY / 3;
    +
    +	return priority <= lumpy_stall_priority;
    +}
    +
    +/*
      * shrink_inactive_list() is a helper for shrink_zone().  It returns the number
      * of reclaimed pages
      */
    @@ -1202,15 +1243,8 @@
     		nr_scanned += nr_scan;
     		nr_freed = shrink_page_list(&page_list, sc, PAGEOUT_IO_ASYNC);
     
    -		/*
    -		 * If we are direct reclaiming for contiguous pages and we do
    -		 * not reclaim everything in the list, try again and wait
    -		 * for IO to complete. This will stall high-order allocations
    -		 * but that should be acceptable to the caller
    -		 */
    -		if (nr_freed < nr_taken && !current_is_kswapd() &&
    -		    sc->lumpy_reclaim_mode) {
    -			congestion_wait(BLK_RW_ASYNC, HZ/10);
    +		/* Check if we should synchronously wait for writeback */
    +		if (should_reclaim_stall(nr_taken, nr_freed, priority, sc)) {
     
     			/*
     			 * The attempt at page out may have made some
    kudos to Wu Fengguang and KOSAKI Motohiro

    I haven't tested the patch yet (compiling NOW) so use it at your own risk

    this applies to 2.6.35
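For anyone unfamiliar with applying such a diff: from the kernel source root the usual invocation is `patch -p1 < file.patch`, where `-p1` strips the first path component from the `---`/`+++` headers (the `File to patch:` prompt in piotr's output above is what patch prints when the strip level doesn't match). A self-contained sketch of that workflow, using a fabricated one-line tree rather than the real kernel source:

```shell
#!/bin/bash
# Demo of the `patch -p1` workflow on a fabricated mini-tree.
# From a real kernel root one would similarly run:
#   patch -p1 < vmscan-stall.patch   (hypothetical patch file name)
set -e
dir=$(mktemp -d)
cd "$dir"
mkdir -p a/mm b/mm
printf 'old line\n' > a/mm/vmscan.c
printf 'new line\n' > b/mm/vmscan.c
# diff exits 1 when the files differ, so guard it under `set -e`
diff -u a/mm/vmscan.c b/mm/vmscan.c > demo.patch || true
cd a
patch -p1 < ../demo.patch   # strips the leading "a/" / "b/" component
cat mm/vmscan.c
```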


  • V!NCENT
    replied
    Originally posted by curaga View Post
    Michael, please fix this sentence:



    To /dev/zero? The mail clearly states from it to the SSD...
    And I thought it was /dev/null =x
