Fixed: The Linux Desktop Responsiveness Problem?


  • #16
    Did anyone else think the article was gonna talk about BFS or ck patches when they read the title?



    • #17
      I still can't figure out which is more annoying: Linux going unresponsive under intense disk pressure, or Windows staying "low responsive" and never finishing its I/O tasks as the time to complete climbs from 6 minutes to 30. Both seem equally annoying to me.



      • #18
        Originally posted by anonymous
        Did anyone else think the article was gonna talk about BFS or ck patches when they read the title?
        Nope :P



        • #19
          In my experience, Windows 7 also has responsiveness problems when doing heavy I/O. I had thought that it was just unavoidable with current computer architecture. I would love to be wrong about that.



          • #20
            Michael, please fix this sentence:

            On his system when writing 300M to the /dev/zero file the problem was even worse.
            To /dev/zero? The mail clearly states the data goes from it to the SSD...
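
            For what it's worth, a dd of the shape the mail describes reads from /dev/zero and writes to the SSD; you can't write "to" /dev/zero. A minimal sketch of such a command (the output path is my assumption, not the reporter's actual one):

            Code:
            # reads zeros FROM /dev/zero and writes 300M TO a file on the SSD
            # /mnt/ssd/testfile is a hypothetical path
            dd if=/dev/zero of=/mnt/ssd/testfile bs=1M count=300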



            • #21
              Originally posted by Hephasteus
              I still can't figure out which is more annoying: Linux going unresponsive under intense disk pressure, or Windows staying "low responsive" and never finishing its I/O tasks as the time to complete climbs from 6 minutes to 30. Both seem equally annoying to me.
              The author of the Windows 'file copy' dialog visits some friends



              • #22
                Originally posted by curaga
                Michael, please fix this sentence:

                On his system when writing 300M to the /dev/zero file the problem was even worse.

                To /dev/zero? The mail clearly states the data goes from it to the SSD...
                And I thought it was /dev/null =x



                • #23
                  Code:
                  --- /usr/src/sources/kernel/zen-upstream/mm/vmscan.c	2010-07-21 17:01:20.911512995 +0200
                  +++ mm/vmscan.c	2010-08-04 22:11:43.663379966 +0200
                  @@ -1113,6 +1113,47 @@
                   }
                   
                   /*
                  + * Returns true if the caller should wait to clean dirty/writeback pages.
                  + *
                  + * If we are direct reclaiming for contiguous pages and we do not reclaim
                  + * everything in the list, try again and wait for writeback IO to complete.
                  + * This will stall high-order allocations noticeably. Only do that when we
                  + * really need to free the pages under high memory pressure.
                  + */
                  +static inline bool should_reclaim_stall(unsigned long nr_taken,
                  +					unsigned long nr_freed,
                  +					int priority,
                  +					struct scan_control *sc)
                  +{
                  +	int lumpy_stall_priority;
                  +
                  +	/* kswapd should not stall on sync IO */
                  +	if (current_is_kswapd())
                  +		return false;
                  +
                  +	/* Only stall on lumpy reclaim */
                  +	if (!sc->lumpy_reclaim_mode)
                  +		return false;
                  +
                  +	/* If we have reclaimed everything on the isolated list, no stall */
                  +	if (nr_freed == nr_taken)
                  +		return false;
                  +
                  +	/*
                  +	 * For high-order allocations, there are two stall thresholds.
                  +	 * High-cost allocations stall immediately whereas lower
                  +	 * order allocations such as stacks require the scanning
                  +	 * priority to be much higher before stalling.
                  +	 */
                  +	if (sc->order > PAGE_ALLOC_COSTLY_ORDER)
                  +		lumpy_stall_priority = DEF_PRIORITY;
                  +	else
                  +		lumpy_stall_priority = DEF_PRIORITY / 3;
                  +
                  +	return priority <= lumpy_stall_priority;
                  +}
                  +
                  +/*
                    * shrink_inactive_list() is a helper for shrink_zone().  It returns the number
                    * of reclaimed pages
                    */
                  @@ -1202,15 +1243,8 @@
                   		nr_scanned += nr_scan;
                   		nr_freed = shrink_page_list(&page_list, sc, PAGEOUT_IO_ASYNC);
                   
                  -		/*
                  -		 * If we are direct reclaiming for contiguous pages and we do
                  -		 * not reclaim everything in the list, try again and wait
                  -		 * for IO to complete. This will stall high-order allocations
                  -		 * but that should be acceptable to the caller
                  -		 */
                  -		if (nr_freed < nr_taken && !current_is_kswapd() &&
                  -		    sc->lumpy_reclaim_mode) {
                  -			congestion_wait(BLK_RW_ASYNC, HZ/10);
                  +		/* Check if we should synchronously wait for writeback */
                  +		if (should_reclaim_stall(nr_taken, nr_freed, priority, sc)) {
                   
                   			/*
                   			 * The attempt at page out may have made some
                  Kudos to Wu Fengguang and KOSAKI Motohiro.

                  I haven't tested the patch yet (compiling NOW), so use it at your own risk.

                  This applies to 2.6.35.
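
                  For anyone wanting to try it, this is roughly how it can be applied (a sketch; the patch file name and tree path are my assumptions, adjust to your setup):

                  Code:
                  # from the top of a 2.6.35 source tree; dry-run first to check it applies
                  cd /usr/src/linux-2.6.35
                  patch -p0 --dry-run < /path/to/reclaim-stall.patch
                  patch -p0 < /path/to/reclaim-stall.patch
                  # if the header paths don't match your tree, patch will prompt "File to patch:"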



                  • #24
                    Originally posted by kernelOfTruth
                    Kudos to Wu Fengguang and KOSAKI Motohiro.

                    I haven't tested the patch yet (compiling NOW), so use it at your own risk.

                    This applies to 2.6.35.
                    Thanks for the code.

                    Code:
                    File to patch: mm/vmscan.c
                    patching file mm/vmscan.c
                    Hunk #1 succeeded at 1112 (offset -1 lines).
                    Hunk #2 succeeded at 1242 (offset -1 lines).
                    Time to test it. ;-)



                    • #25
                      Originally posted by piotr
                      Thanks for the code.

                      Code:
                      File to patch: mm/vmscan.c
                      patching file mm/vmscan.c
                      Hunk #1 succeeded at 1112 (offset -1 lines).
                      Hunk #2 succeeded at 1242 (offset -1 lines).
                      Time to test it. ;-)
                      you're welcome

                      right now I'm copying several hundred GiB back and forth between different filesystems and it really feels snappier - and that's without BFS or BFQ!


                      I had to change a few names, since a variable was renamed between 2.6.35 and the tree the kernel developers wrote the patch against; a hypothetical example of such a rename is sketched below.
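
                      To illustrate (hypothetical names only - check your own mm/vmscan.c for the real identifiers), such a rename fix-up can be as simple as:

                      Code:
                      # hypothetical: replace the upstream field name with the one 2.6.35 uses;
                      # "upstream_field_name" is a placeholder, not a real identifier
                      sed -i 's/upstream_field_name/lumpy_reclaim_mode/g' reclaim-stall.patch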

                      more info:

                      http://forums.gentoo.org/viewtopic-t...-start-50.html



                      • #26
                        Originally posted by kernelOfTruth
                        you're welcome

                        right now I'm copying several hundred GiB back and forth between different filesystems and it really feels snappier - and that's without BFS or BFQ!


                        I had to change a few names, since a variable was renamed between 2.6.35 and the tree the kernel developers wrote the patch against.
                        Wait a sec, aren't you the dude who said in another thread that there was no problem with this at all? And now you post a patch to solve it? Strange worlds, I'm telling you. Anyway, good stuff.

                        Any other reports from people who had noticed the problem before?



                        • #27
                          Well, it's not working for me, I think.

                          Code:
                          dd if=/dev/zero of=test bs=1M count=5024 && rm test -f
                          run in a while loop (sketched below), and Firefox lags and the mouse goes laggy from time to time, HARDCORE. :<

                          Also, I think starting Firefox after a fresh boot took way more time than normal.
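
                          The loop I mean is something like this (a sketch, same count and file name as above):

                          Code:
                          # keep rewriting and deleting a ~5GB file to hold the disk under write pressure
                          while true; do
                              dd if=/dev/zero of=test bs=1M count=5024 && rm -f test
                          done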



                          • #28
                            Having a dedicated system/VM disk would help a lot.

                            I run my root and swap on a separate small SSD and keep all other data on a 640GB HDD; copying an xxGB file feels like nothing.

                            The fundamental problem lies in the disk's mechanical arm: it simply can't serve OS read requests while it is constantly writing somewhere else.



                            • #29
                              Patches have been available in the zen-kernel.org git repository for some time.

                              2.6.35: http://git.zen-kernel.org/?p=kernel/...df2807fc826af5

                              2.6.34: http://git.zen-kernel.org/?p=kernel/...fdf3ebcc2a62f1



                              • #30
                                piotr, I'm not exactly sure, but dd is probably pegging your CPU.

                                See what happens to your performance when you copy that 5GB file instead (one way to check is sketched below).
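
                                A sketch of how to tell the two apart (vmstat and top are standard tools):

                                Code:
                                # run the dd loop in one terminal and watch in another:
                                vmstat 1   # high 'wa' (iowait) points at the disk, high 'us'/'sy' at the CPU
                                top        # shows whether dd itself is sitting near 100% CPU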

