Results 21 to 30 of 61

Thread: Fixed: The Linux Desktop Responsiveness Problem?

  1. #21
    Join Date
    Aug 2008
    Posts
    233


    Quote Originally Posted by Hephasteus View Post
    I still can't figure out which is more annoying: Linux going unresponsive under intense disk pressure, or Windows staying "low responsive" and never finishing its I/O tasks as they go from 6 minutes to complete to 30. Both ways seem equally annoying to me.
    The author of the Windows 'file copy' dialog visits some friends


  2. #22
    Join Date
    Aug 2009
    Posts
    2,264


    Quote Originally Posted by curaga View Post
    Michael, please fix this sentence:



    To /dev/zero? The mail clearly states from it to the ssd...
    And I thought it was /dev/null =x

  3. #23
    Join Date
    Jan 2009
    Location
    Vienna, Austria; Germany; hello world :)
    Posts
    629


    Code:
    --- /usr/src/sources/kernel/zen-upstream/mm/vmscan.c	2010-07-21 17:01:20.911512995 +0200
    +++ mm/vmscan.c	2010-08-04 22:11:43.663379966 +0200
    @@ -1113,6 +1113,47 @@
     }
     
     /*
    + * Returns true if the caller should wait to clean dirty/writeback pages.
    + *
    + * If we are direct reclaiming for contiguous pages and we do not reclaim
    + * everything in the list, try again and wait for writeback IO to complete.
    + * This will stall high-order allocations noticeably. Only do that when we
    + * really need to free pages under high memory pressure.
    + */
    +static inline bool should_reclaim_stall(unsigned long nr_taken,
    +					unsigned long nr_freed,
    +					int priority,
    +					struct scan_control *sc)
    +{
    +	int lumpy_stall_priority;
    +
    +	/* kswapd should not stall on sync IO */
    +	if (current_is_kswapd())
    +		return false;
    +
    +	/* Only stall on lumpy reclaim */
    +	if (!sc->lumpy_reclaim_mode)
    +		return false;
    +
    +	/* If we have reclaimed everything on the isolated list, no stall */
    +	if (nr_freed == nr_taken)
    +		return false;
    +
    +	/*
    +	 * For high-order allocations, there are two stall thresholds.
    +	 * High-cost allocations stall immediately whereas lower
    +	 * order allocations such as stacks require the scanning
    +	 * priority to be much higher before stalling.
    +	 */
    +	if (sc->order > PAGE_ALLOC_COSTLY_ORDER)
    +		lumpy_stall_priority = DEF_PRIORITY;
    +	else
    +		lumpy_stall_priority = DEF_PRIORITY / 3;
    +
    +	return priority <= lumpy_stall_priority;
    +}
    +
    +/*
      * shrink_inactive_list() is a helper for shrink_zone().  It returns the number
      * of reclaimed pages
      */
    @@ -1202,15 +1243,8 @@
     		nr_scanned += nr_scan;
     		nr_freed = shrink_page_list(&page_list, sc, PAGEOUT_IO_ASYNC);
     
    -		/*
    -		 * If we are direct reclaiming for contiguous pages and we do
    -		 * not reclaim everything in the list, try again and wait
    -		 * for IO to complete. This will stall high-order allocations
    -		 * but that should be acceptable to the caller
    -		 */
    -		if (nr_freed < nr_taken && !current_is_kswapd() &&
    -		    sc->lumpy_reclaim_mode) {
    -			congestion_wait(BLK_RW_ASYNC, HZ/10);
    +		/* Check if we should synchronously wait for writeback */
    +		if (should_reclaim_stall(nr_taken, nr_freed, priority, sc)) {
     
     			/*
     			 * The attempt at page out may have made some
    kudos to Wu Fengguang and KOSAKI Motohiro

    I haven't tested the patch yet (compiling NOW), so use it at your own risk.

    This applies to 2.6.35.
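
    For anyone who hasn't applied a raw diff before: save the text above to a file and feed it to patch(1), doing a --dry-run first so a failing hunk doesn't leave the tree half-patched. Here's a throwaway, self-contained demo of that workflow (the file names below are made up, nothing kernel-specific):

```shell
# Self-contained demo of the diff/patch workflow. For the real thing you'd
# cd into your 2.6.35 source tree and run `patch -p1` with the saved patch.
cd "$(mktemp -d)"
printf 'line one\nline two\n' > demo.c
printf 'line one\nline TWO\n' > demo.c.fixed
# diff exits 1 when the files differ, so don't let that abort a `set -e` shell
diff -u demo.c demo.c.fixed > demo.patch || true
patch -p0 --dry-run < demo.patch   # check that every hunk applies cleanly
patch -p0 < demo.patch             # then apply for real
grep TWO demo.c                    # the change is now in place
```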

  4. #24
    Join Date
    Aug 2010
    Posts
    4


    Quote Originally Posted by kernelOfTruth View Post
    kudos to Wu Fengguang and KOSAKI Motohiro

    I haven't tested the patch yet (compiling NOW), so use it at your own risk.

    This applies to 2.6.35.
    Thanks for the code.

    Code:
    File to patch: mm/vmscan.c
    patching file mm/vmscan.c
    Hunk #1 succeeded at 1112 (offset -1 lines).
    Hunk #2 succeeded at 1242 (offset -1 lines).
    Time to test it. ;-)

  5. #25
    Join Date
    Jan 2009
    Location
    Vienna, Austria; Germany; hello world :)
    Posts
    629


    Quote Originally Posted by piotr View Post
    Thanks for the code.

    Code:
    File to patch: mm/vmscan.c
    patching file mm/vmscan.c
    Hunk #1 succeeded at 1112 (offset -1 lines).
    Hunk #2 succeeded at 1242 (offset -1 lines).
    Time to test it. ;-)
    you're welcome

    right now I'm copying several hundred GiB back and forth between different filesystems and it really feels snappier - and this without BFS or BFQ!


    I had to change a few names, since a variable name changed between 2.6.35 and the kernel tree the patch authors were working against

    more info:

    http://forums.gentoo.org/viewtopic-t...-start-50.html

  6. #26
    Join Date
    Jan 2008
    Location
    Have a good day.
    Posts
    678


    Quote Originally Posted by kernelOfTruth View Post
    you're welcome

    right now I'm copying several hundred GiB back and forth between different filesystems and it really feels snappier - and this without BFS or BFQ!


    I had to change a few names, since a variable name changed between 2.6.35 and the kernel tree the patch authors were working against
    Wait a sec, aren't you the dude who said in another thread that there was no problem here at all? And now you post a patch to solve it? Strange worlds, I'm telling you. Anyway, good stuff.

    Any other reports from people who had noticed the problem before?

  7. #27
    Join Date
    Aug 2010
    Posts
    4


    Well, it's not working for me, I think.

    Code:
    dd if=/dev/zero of=test bs=1M count=5024 && rm test -f
    Running that in a while loop: Firefox lags and the mouse goes laggy from time to time, HARDCORE. :<

    Also, I think that after a fresh boot, starting Firefox took way more time than normal.
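
    For anyone who wants to reproduce this, a bounded version of that loop is below. PASSES and SIZE_MB are my own knobs so the script terminates (the post above ran bs=1M count=5024 in an endless while loop):

```shell
# Bounded variant of the stress test: repeatedly write a file and delete it
# to build up dirty-page / writeback pressure. PASSES and SIZE_MB are my
# additions so the loop terminates; scale SIZE_MB up (the original used
# 5024) to actually feel the desktop stall.
PASSES=${PASSES:-2}
SIZE_MB=${SIZE_MB:-16}
i=0
while [ "$i" -lt "$PASSES" ]; do
    dd if=/dev/zero of=ddtest bs=1M count="$SIZE_MB" 2>/dev/null
    rm -f ddtest
    i=$((i + 1))
done
echo "completed $PASSES passes of ${SIZE_MB}MB each"
```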

  8. #28
    Join Date
    Aug 2007
    Posts
    437


    Having a dedicated system/VM disk would help a lot.

    I run my root and swap on a separate small SSD and keep all other data on a 640GB HDD; copying a xxGB file feels like nothing.

    The fundamental problem lies in the mechanical arm inside the disk. It just can't serve the OS's read requests while it is constantly writing somewhere else.

  9. #29
    Join Date
    Apr 2007
    Posts
    99


    Patches have been available in the zen-kernel.org git repository for some time.

    2.6.35: http://git.zen-kernel.org/?p=kernel/...df2807fc826af5

    2.6.34: http://git.zen-kernel.org/?p=kernel/...fdf3ebcc2a62f1

  10. #30
    Join Date
    May 2010
    Posts
    165


    piotr, I'm not exactly sure, but dd is probably pegging your CPU.

    See what happens to your performance when you copy that 5GB file.
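
    One quick sanity check for the CPU theory (my own suggestion, not something from the thread): take the disk out of the picture entirely and see how fast dd moves data on its own.

```shell
# If /dev/zero -> /dev/null runs at several GB/s, dd itself is nowhere near
# pegging a core at typical ~100 MB/s disk speeds, and the stalls are coming
# from I/O and writeback rather than from dd burning CPU.
# GNU dd prints its throughput summary on stderr; keep just the last line.
dd if=/dev/zero of=/dev/null bs=1M count=1024 2>&1 | tail -n 1
```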
