The Linux Desktop Responsiveness Patches Are Feeling Good


  • The Linux Desktop Responsiveness Patches Are Feeling Good

    Phoronix: The Linux Desktop Responsiveness Patches Are Feeling Good

    As was reported on Phoronix yesterday, the Linux desktop responsiveness problem may be fixed. This is the issue that has affected many Linux desktop users for numerous months: when dealing with large file transfers or other disk operations, the desktop interface (regardless of whether it's GNOME, KDE, Xfce, etc.) would become unresponsive, and it could be a good number of seconds before a simple action like clicking a menu item was processed...

    http://www.phoronix.com/vr.php?view=ODQ3OQ

  • #2
    Your assumption is wrong. KernelofTruth's backport only contains 2 of the 7 patches. There are 5 additional ones that he did not backport:

    http://forums.gentoo.org/viewtopic-p...0.html#6377520

    Please try to fact-check your assumptions in the future before reporting them.

    • #3
      Oh, there are 7 patches that together fix the issue?

      • #4
        Originally posted by DeepDayze View Post
        Oh, there are 7 patches that together fix the issue?
        indeed, but they are more tricky to backport

        at least starting with the 3rd patch - I tried it, and the parts of the code it touches are spread all over the file

        since I'm not that experienced, I'm waiting for someone else to backport them (most favorably the zen-kernel devs)

        • #5
          since I'm pretty busy, I unfortunately can't do the further backporting

          anyway: the results are already pretty impressive, considering that it's only a small amount of code that's been added/removed (those 2 patches)

          • #6
            I thought this was FUD, but it was true? Linux could not handle things without lagging sometimes? And now, is the situation better or does it still occur?

            • #7
              Originally posted by kebabbert View Post
              I thought this was FUD, but it was true? Linux could not handle things without lagging sometimes? And now, is the situation better or does it still occur?
              it was true for some time in the past (for me)

              from my observation it most probably got worse / was introduced after 2.6.34, since that kernel was working excellently for me back then - even under heavy traffic


              luckily, with all those changes and the other improvements coming (e.g. reduced barrier writes, a unified slab allocator (v3), and more), the future looks bright

              • #8
                2.6.22 (?) with Con's SD or RSDL CPU scheduler was another masterpiece in terms of performance & responsiveness

                • #9
                  Originally posted by kebabbert View Post
                  I thought this was FUD, but it was true? Linux could not handle things without lagging sometimes? And now, is the situation better or does it still occur?
                  Sometimes it happened and sometimes it didn't.
                  For example, yesterday I put a 2.8 GB file on my local lighttpd and downloaded it with aria2. That has to be about the most I/O one would normally produce, and my desktop was perfectly responsive...

                  • #10
                    Originally posted by kernelOfTruth View Post
                    indeed, but they are more tricky to backport

                    at least starting with the 3rd patch - I tried it, and the parts of the code it touches are spread all over the file

                    since I'm not that experienced, I'm waiting for someone else to backport them (most favorably the zen-kernel devs)
                    Damentz (he's one of the zen devs) has some experience with backporting patches, so he might take a crack at backporting all 7 patches to 2.6.35 (and maybe 2.6.34).

                    • #11
                      If someone has the patches running, it would be interesting to see how it fares with this little test (adjust MEM_SIZE to match your physical RAM):

                      For a long time, this has managed to bring almost all my Linux systems to a crawl, which is not surprising since it basically forces excessive swapping. My worry, though, and the reason I investigated it, is that occasionally some application is bound to go haywire and do something like this.

                      On some systems the result is so bad that I can't even regain control of the system in a reasonable time. IMHO, this is a plausible denial-of-service attack on any multi-user system, or on any service which can be triggered into excessive RAM consumption.

                      /* Stupid program aiming to eat the swap alive */
                      #include <stdlib.h>
                      #include <string.h>

                      size_t MEM_SIZE = 1024UL*1024*1024;   /* adjust to match your physical RAM */
                      float MEM_USE = 2;                    /* how many times MEM_SIZE to allocate */
                      size_t ALLOC_SIZE = 4*1024*1024;      /* 4 MB per chunk */

                      int main(void) {
                          char **lists;
                          size_t i;
                          size_t lists_len = (size_t)(MEM_SIZE * MEM_USE) / ALLOC_SIZE;

                          /* Allocate MEM_USE times the physical RAM in ALLOC_SIZE chunks. */
                          lists = malloc(lists_len * sizeof(*lists));
                          for (i = 0; i < lists_len; i++)
                              lists[i] = malloc(ALLOC_SIZE);

                          /* Keep copying between chunks so every page stays hot,
                             forcing the kernel into constant swapping. */
                          while (1) {
                              for (i = 0; i < lists_len; i++)
                                  memcpy(lists[i], lists[(i+1) % lists_len], ALLOC_SIZE);
                          }
                          return 0;
                      }

                      • #12
                        Try to open a text file bigger than your RAM (2-4 GB) with gedit and that'll be enough. Happened to me already (:

                        • #13
                          I rolled my own 2.6.35 kernel with this and the bfq patches

                          2.6.35-amd64-iowait-bfq #1 SMP PREEMPT

                          And the general responsiveness of the system under I/O load is night and day compared to how it was before. Copying a 3.7 GB file from one of my NTFS drives (I know, I know, a leftover from my switch from Windows) used to bring the system to a halt, and programs would freeze up until it had finished, etc.

                          Now everything keeps going, and copying speed is just as good (I haven't timed it to compare).

                          (this is on a Debian squeeze/testing base btw)

                          • #14
                            Originally posted by rawler View Post
                            #include <stdlib.h>
                            #include <string.h>

                            size_t MEM_SIZE = 1024*1024*1024;
                            float MEM_USE = 2;
                            size_t ALLOC_SIZE = 4*1024*1024;

                            int main() {
                                char **lists;
                                char *list;
                                size_t i, j;
                                size_t lists_len = (MEM_SIZE*MEM_USE) / ALLOC_SIZE;

                                lists = malloc(lists_len * sizeof(list));
                                for (i = 0; i < lists_len; i++)
                                    lists[i] = malloc(ALLOC_SIZE);

                                while (1) {
                                    for (i = 0; i < lists_len; i++)
                                        memcpy(lists[i], lists[(i+1)%lists_len], ALLOC_SIZE);
                                }
                                return 0;
                            }
                            Not sure what you are trying to prove, nor do I understand what this has to do with the subject at hand...
                            This test will bring down ANY (Unix/Linux) OS - as long as the admin is stupid enough to set up a multi-user system without using limits.conf to limit per-user resource consumption. (Win2K3/8 suffer from the same problem, and use a comparable solution - though MS' solution makes limits.conf look user-friendly.)
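
                            For illustration, here is a minimal sketch (not from the original post) of the per-process counterpart of such a limit: capping the address space with setrlimit(RLIMIT_AS). In limits.conf the equivalent would be an "as" entry in kilobytes, e.g. a hypothetical line like "someuser hard as 2097152" to cap address space at 2 GB; the user name and the 1 GB / 2 GB values below are just example numbers.

                            /* Sketch only: cap this process's address space so runaway
                               allocations fail cleanly instead of forcing the machine
                               into swap. The 1 GB cap is an arbitrary example value. */
                            #include <stdio.h>
                            #include <stdlib.h>
                            #include <sys/resource.h>

                            int main(void) {
                                struct rlimit cap;
                                cap.rlim_cur = 1UL << 30;   /* soft limit: 1 GB of address space */
                                cap.rlim_max = 1UL << 30;   /* hard limit: 1 GB of address space */

                                if (setrlimit(RLIMIT_AS, &cap) != 0) {
                                    perror("setrlimit");
                                    return 1;
                                }

                                /* A 2 GB request is now refused instead of being swapped out. */
                                void *p = malloc((size_t)2 * 1024 * 1024 * 1024);
                                printf("2 GB allocation %s\n", p ? "succeeded" : "was refused");
                                free(p);
                                return 0;
                            }

                            With a cap like this in place (whether set via limits.conf, ulimit -v, or setrlimit()), the swap-eater above starts getting NULL back from malloc() once it hits the limit, so it can no longer drag the whole machine into swap.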

                            - Gilboa
                            DEV: Intel S2600C0, 2xE52658V2, 32GB, 4x2TB + 2x3TB, GTX780, F21/x86_64, Dell U2711.
                            SRV: Intel S5520SC, 2xX5680, 36GB, 4x2TB, GTX550, F21/x86_64, Dell U2412..
                            BACK: Tyan Tempest i5400XT, 2xE5335, 8GB, 3x1.5TB, 9800GTX, F21/x86-64.
                            LAP: ASUS N56VJ, i7-3630QM, 16GB, 1TB, 635M, F21/x86_64.

                            • #15
                              Originally posted by rewind View Post
                              Try to open a text file bigger than your RAM (2-4 GB) with gedit and that'll be enough. Happened to me already (:
                              Are you for real?
                              You don't protect your machine by setting the right limits, and you expect what? That the kernel will magically transform your ultra-slow HD into a RAM-like speed demon? Maybe it should kill your process with an EIDIOTPROTECTION error? Come on!

                              - Gilboa
                              DEV: Intel S2600C0, 2xE52658V2, 32GB, 4x2TB + 2x3TB, GTX780, F21/x86_64, Dell U2711.
                              SRV: Intel S5520SC, 2xX5680, 36GB, 4x2TB, GTX550, F21/x86_64, Dell U2412..
                              BACK: Tyan Tempest i5400XT, 2xE5335, 8GB, 3x1.5TB, 9800GTX, F21/x86-64.
                              LAP: ASUS N56VJ, i7-3630QM, 16GB, 1TB, 635M, F21/x86_64.
