
2 cores vs. 4 cores in Linux


  • #11
    Originally posted by frantaylor View Post
    Did you see the status of that bug report: RESOLVED INSUFFICIENT_DATA

    Did you read the comments in the bug report?

    "this bug has long past the point where it is useful.
    There are far too many people posting with different issues.
    There is too much noise to filter through to find a single bug.
    There aren't any interested kernel developers following the bug."

    It is not even a bug report, it is just a random flame fest.
    Yeah, but that's only one of the reports. And it seems you didn't even read it: Jens Axboe is still working on it, and his patch is there too. Your reaction is very funny.

    This is a problem in Linux with scheduling I/O and many cores. One process gets all the bandwidth and others can't get a word in edgewise.
    It seems it's not:

    Fair queuing would allow many processes demanding large levels of disk IO to each get fair access to the device, preventing any one process from denying the others.
    Even SFQ allowed this, and CFQ went even further. However, I don't expect you to know this (you don't even understand kernel versioning...). You and your friend already proved that in another thread.
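
    If you want to see that per-process fairness for yourself, CFQ exposes per-process I/O priorities through ionice from util-linux. A rough sketch; the file names are only examples:

        # two competing sequential readers; under CFQ each gets a fair slice of the disk
        ionice -c 2 -n 0 cat bigfile1 > /dev/null &
        ionice -c 2 -n 7 cat bigfile2 > /dev/null &
        # class 2 is best-effort; the -n 0 reader should see noticeably more bandwidth than -n 7

    Without fair queuing, whichever reader hit the queue first would simply starve the other.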

    Have a nice trolling there:

    http://www.phoronix.com/forums/showt...t=18073&page=7
    Last edited by kraftman; 05 August 2009, 11:51 AM.



    • #12
      My desktop applications freeze when there's heavy I/O. Of course, others might pretend everything is OK, either because they don't understand the issue and think these freezes are normal/acceptable/unavoidable, or because they don't get them at all.

      I do. There's a problem with Linux I/O and graphics. I don't know who's at fault, but the problem is there. I've always had it, with every PC I've ever used. Windows does not have this problem; the GUI is always fluid no matter how heavy the I/O load is.



      • #13
        Originally posted by RealNC View Post
        My desktop applications freeze when there's heavy I/O. Of course, others might pretend everything is OK, either because they don't understand the issue and think these freezes are normal/acceptable/unavoidable, or because they don't get them at all.

        I do. There's a problem with Linux I/O and graphics. I don't know who's at fault, but the problem is there. I've always had it, with every PC I've ever used. Windows does not have this problem; the GUI is always fluid no matter how heavy the I/O load is.
        Yes, there's definitely a problem with I/O on some configurations. It's been there since 2.6.18, as mentioned in the bug report (but it's due to a bug, not to design like some trolls want to prove; a long-standing one, though not everyone is affected). Graphics is another case.

        An easy way to check whether you're affected is to copy a file bigger than your RAM. The system becomes unresponsive for some amount of time.
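
        A rough recipe, if you want to try it (assuming about 2 GB of RAM; the sizes are only examples):

            # create a test file larger than RAM, then copy it while using the desktop
            dd if=/dev/zero of=bigfile bs=1M count=4096    # ~4 GB of zeroes
            cp bigfile bigfile2
            # if you're affected, the GUI stalls until writeback catches up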
        Last edited by kraftman; 05 August 2009, 02:53 PM.



        • #14
          Originally posted by kraftman View Post
          Yeah, but that's only one of the reports. And it seems you didn't even read it: Jens Axboe is still working on it, and his patch is there too. Your reaction is very funny.

          It seems it's not:

          Even SFQ allowed this, and CFQ went even further. However, I don't expect you to know this (you don't even understand kernel versioning...). You and your friend already proved that in another thread.

          Have a nice trolling there:

          http://www.phoronix.com/forums/showt...t=18073&page=7
          I work with bugs every day. When the developer marks the bug as closed, that means "I'm not working on this any more."



          • #15
            Originally posted by frantaylor View Post
            I work with bugs every day. When the developer marks the bug as closed, that means "I'm not working on this any more."
            So check the date it was closed against when Jens uploaded the patch. There are also other reports like this one. If they close all the reports, new ones will be filed, because the bug is still there... Believe it or not, I'll probably switch to FreeBSD or Solaris because of this (if it really starts to piss me off). However, I don't copy big files very often and I have Windows installed, so it's a hard decision.
            Last edited by kraftman; 05 August 2009, 04:55 PM.



            • #16
              "CLOSED NEEDINFO" and "CLOSED WORKSFORME" doesn't mean there's no problem. It just means that "The Bazaar" failed.



              • #17
                @kraftman

                I was experiencing an issue something like the one in that bug report on 2.6.30, but it seems to have improved in 2.6.31-rc5. Have you tried that?



                • #18
                  Originally posted by krazy View Post
                  @kraftman

                  I was experiencing an issue something like the one in that bug report on 2.6.30, but it seems to have improved in 2.6.31-rc5. Have you tried that?
                  No, I'm only using distro-provided kernels right now (2.6.30.4), but I'll try that one, and 2.6.30 with Jens' patch.

                  @RealNC

                  "CLOSED NEEDINFO" and "CLOSED WORKSFORME" doesn't mean there's no problem. It just means that "The Bazaar" failed.
                  Exactly :>
                  Last edited by kraftman; 06 August 2009, 03:53 AM.



                  • #19
                    EDITED:

                    I compiled 2.6.31-rc5, but X doesn't start. However, I did this test in a vt:

                    I copied a big file from an ntfs partition (via ntfs-3g) to my home directory, ran "top -d 0.2" as root in another vt (to notice any slowdowns), and then started copying a file from home back to the ntfs partition, so both copies ran simultaneously. There was not a single visible latency! (I can do the same with previous kernels, but after some time the system becomes unresponsive.)

                    It seems rc5 behaves much better, or the bug is even fixed. However, I need to try this in some DE, because it can be hard to catch latencies in a vt.
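
                    For reference, this is roughly what I ran, with /mnt/windows standing in for my actual ntfs mount point:

                        # vt1: copy a big file from the ntfs-3g mount into home
                        cp /mnt/windows/bigfile ~/bigfile
                        # vt2 (as root): watch for stalls at a 0.2 s refresh
                        top -d 0.2
                        # vt3: copy another big file the other way, so both copies run at once
                        cp ~/otherfile /mnt/windows/otherfile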
                    Last edited by kraftman; 06 August 2009, 10:45 AM.



                    • #20
                      Regression testing?

                      Originally posted by RealNC View Post
                      "CLOSED NEEDINFO" and "CLOSED WORKSFORME" doesn't mean there's no problem. It just means that "The Bazaar" failed.
                      Apparently "The Bazaar" does not do regression testing, either.

                      How do bugs like this make it into "RC" kernels? Does not "RC" mean "we have tested this and we think it is good"?

                      This is one reason why Linux has crummy market share. There are so many regressions. Normal non-hacker type people do not want to deal with regressions. They want to turn their computers on and get to work.

                      I wonder what can be done to deal with the regressions. Linux has no central testing lab and no formal process.

                      With a formal process, you are not even half done when you fix the bug. Next you have to write the regression test for the bug and then test the regression test. This usually takes more effort and more resources than fixing the bug. And then you have to run all the regression tests all the time. This requires an automated framework to run the regression tests and report the results. This is all enormous work but it needs to be done if you want to ship a quality product every time.
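
                      As a sketch of the missing piece, the simplest possible runner looks something like this (the tests/ layout here is made up for illustration):

                          # run every regression test and report the results; a real framework
                          # would add scheduling, result history, and notification on top of this
                          for t in tests/*.sh; do
                              if "$t"; then echo "PASS $t"; else echo "FAIL $t"; fi
                          done

                      Even this trivial loop has to run somewhere, on real hardware, after every change, and somebody has to read the output.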

                      When you look at bugzilla.kernel.org, there is not even a bug status for "needs testing". When something is "fixed" it gets marked as RESOLVED and that is that.

                      RedHat etc. have to do this testing and their kernels have hundreds of patches to the stock kernels to fix the problems that are not caught by the "Bazaar" process. When you look at these patches you see that most of them are fixes for regressions, things that used to work and then stopped working for some reason, and the regression was not caught. Or else they are driver patches for new drivers that never worked right in the first place because they did not get tested well. Some of the distribution patches stick around for years because they do not get accepted upstream for one reason or another. These patches need to be maintained as the code changes and that requires even more effort.

                      I don't know how it can be fixed. Nobody "owns" Linux, so nobody wants to take the responsibility to do all the regression testing that should be done. The distributions "own" their kernels, but if they all do their own regression testing then there is enormous duplicated effort.

                      I worry that Linux is going to turn into even more of a chaotic mess as it gets bigger and gets more features. It is not the slim and trim kernel that it was back in the 90's.

