2 cores vs. 4 cores in Linux

  • 2 cores vs. 4 cores in Linux

    Hello all.

    I intend to purchase a new desktop, and would like your opinion/advice on whether a 2-core 3 GHz system or a slower quad-core is better suited for me.

    I'm a physics grad student, so at the office I usually use software like LyX, Octave and such. I know that Octave currently cannot make good use of a multi-core processor (the way MATLAB can), and LyX is not demanding software. So when it comes to office work, I am more concerned with how Linux deals with more than two cores. Is there any advantage in using more than two cores? If I use a kernel compiled with SMP support, how much will I benefit from a quad-core processor? Can Linux use it to run several programs more efficiently? What about Linux applications in general?

    Thanks in advance
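    The multi-core question above is easy to probe empirically: an SMP kernel spreads independent processes across all cores with no application support. A minimal Python sketch, purely illustrative (the workload and sizes are arbitrary):

```python
# Sketch: check how many cores the kernel exposes, then compare a
# CPU-bound job run serially vs. in one worker process per core.
import os
import time
from multiprocessing import Pool

def burn(n):
    # Small CPU-bound task: sum of squares 0..n-1.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    cores = os.cpu_count() or 1
    print("cores visible to the kernel:", cores)

    work = [500_000] * cores

    t0 = time.perf_counter()
    for n in work:
        burn(n)
    serial = time.perf_counter() - t0

    t0 = time.perf_counter()
    with Pool(cores) as pool:  # one worker process per core
        pool.map(burn, work)
    parallel = time.perf_counter() - t0

    print("serial: %.3fs  parallel: %.3fs" % (serial, parallel))
```

    If the parallel time is close to serial divided by the core count, the kernel is distributing the worker processes across the cores, which is exactly the "several programs at once" case asked about.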

  • #2
    I recommend a faster 2-core because there are freezing issues in Linux due to I/O blockage, meaning that even though a program would run on its own core, it would get slowed down when there's heavy I/O load. A faster 2-core will make these freezes shorter due to its higher clock speed.

    • #3
      Originally posted by RealNC View Post
      I recommend a faster 2-core because there are freezing issues in Linux due to I/O blockage, meaning that even though a program would run on its own core, it would get slowed down when there's heavy I/O load. A faster 2-core will make these freezes shorter due to its higher clock speed.
      It seems like an I/O bug when copying large files, rather than anything related to the number of cores.

      • #4
        Related to clock speed, which is why I recommend a fast 2-core rather than a slow 4-core.

        • #5
          I have an 8-core machine for running many simultaneous VMware images, and it runs without a hitch. My VMware images are single files of about 30 GB, and I have never had any problems copying them.
          I use XFS with optimization tweaks on my disk and it is VERY fast.

          pbzip2 can use all your cores to compress or decompress files; it is very impressive with 8 cores.

          The Adobe flash plugin appears to be multithreaded because it chews up several cores when I play hi-res youtube videos at full screen 1920x1200.

          If your crunching jobs use lots of memory then memory speed will be more important than CPU clock speed. My 2.5 GHz machine stomps all over my 2.6 GHz machine because its RAM is twice as fast.
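          The pbzip2 point above can be sketched in a few lines: split the input into independent chunks and compress them with one worker per core. A hedged illustration (this is NOT pbzip2's real on-disk format, just the parallelization idea):

```python
# Minimal sketch of the idea behind pbzip2: split the input into chunks
# and bzip2-compress them in parallel, one worker process per core.
# (Illustrative only; not compatible with the real pbzip2 file format.)
import bz2
import os
from multiprocessing import Pool

def compress_chunk(chunk):
    return bz2.compress(chunk)

def parallel_bzip2(data, chunk_size=1 << 20):
    # The chunks are independent, which is why the job scales with cores.
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with Pool(os.cpu_count()) as pool:
        return pool.map(compress_chunk, chunks)

if __name__ == "__main__":
    blocks = parallel_bzip2(b"example data " * 100_000)  # ~1.3 MB input
    print(len(blocks), "compressed blocks")
```

          The real pbzip2 also pipelines reading, compressing and writing, but the chunk independence shown here is the reason it scales so well across 8 cores.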

          • #6
            Originally posted by kraftman View Post
            It seems like an I/O bug when copying large files, rather than anything related to the number of cores.
            I have run processes that saturate the drive I/O. You can see huge numbers in gkrellm. If you try to run another process that touches the same drive, it will bog down terribly. For example, I put my /home on its own drive. If I start a backup of a big subdirectory of my home directory and then launch a gnome program, the gnome program will take FOREVER to boot up because the OS will not give it the I/O bandwidth it needs to read its ~/.gnome-whatever file.

            This is a problem in Linux with scheduling I/O and many cores. One process gets all the bandwidth and others can't get a word in edgewise.
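            One common mitigation, assuming an I/O scheduler that honors priorities (CFQ does), is to run the heavy job in the idle I/O class via `ionice` so interactive programs can still get their reads in. A sketch with hypothetical paths:

```python
# Sketch: run a heavy backup at "idle" I/O priority so interactive
# programs on the same disk are not starved. Requires the ionice tool
# (util-linux) and an I/O scheduler that honors priorities (e.g. CFQ).
import shutil
import subprocess

def idle_io_command(src, dst):
    # -c 3 selects the "idle" I/O scheduling class: the copy only gets
    # disk bandwidth when nothing else wants it.
    return ["ionice", "-c", "3", "cp", "-a", src, dst]

def backup_with_idle_io(src, dst):
    if shutil.which("ionice") is None:
        raise RuntimeError("ionice not found (install util-linux)")
    return subprocess.call(idle_io_command(src, dst))

if __name__ == "__main__":
    # Hypothetical example paths -- substitute your own.
    print(" ".join(idle_io_command("/home/me/big-dir", "/backup/big-dir")))
```

            This doesn't fix the underlying scheduling problem, but it keeps the bulk copy from monopolizing the drive while a desktop program is trying to read its config files.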

            • #7
              @Mickeydi

              For desktop use, a 3 GHz dual-core is usually enough (I'm using an E8400). If you get an Intel quad at 2.5 GHz it is usually no problem to OC it to 3 GHz too; that's what I did with my Q9300. Once you are used to 3 GHz you never want anything slower again, but more cores mainly pay off for compiling and for apps that scale well with extra cores, like 7-Zip. If I had to buy a new CPU today I would go for a Q9550 (2.83 GHz) and OC it a bit. OCing requires a good board, so go for a P45, or wait until next month for the first i5 systems. I guess those will rock too.

              • #8
                Originally posted by frantaylor View Post
                I have run processes that saturate the drive I/O. You can see huge numbers in gkrellm. If you try to run another process that touches the same drive, it will bog down terribly. For example, I put my /home on its own drive. If I start a backup of a big subdirectory of my home directory and then launch a gnome program, the gnome program will take FOREVER to boot up because the OS will not give it the I/O bandwidth it needs to read its ~/.gnome-whatever file.

                This is a problem in Linux with scheduling I/O and many cores. One process gets all the bandwidth and others can't get a word in edgewise.
                It seems it's the bug I was talking about :/ It's not really related to the number of cores. It's a very well-known and strange bug:

                http://bugzilla.kernel.org/show_bug.cgi?id=12309

                Thankfully not everyone is affected, but sadly I am... There's a workaround, but I haven't tried it. If you're interested, let me know. You see, it's probably not a design problem, but a bug.
                Last edited by kraftman; 08-05-2009, 08:52 AM.

                • #9
                  A slower 4-core will get you further. If anything you will have more cache (true of Phenom IIs, where the cache is shared, but not of Core 2 Quads), and clock speeds aren't much lower. The only advantage of dual-cores is that they are cheaper at the very low end, but apart from that...

                  Also, all current Core 2s and Phenom IIs overclock really well, so you could do that if you wanted to match the 3.0 GHz Core 2 Duos.

                  • #10
                    Originally posted by kraftman View Post
                    It seems it's the bug I was talking about :/ It's not really related to the number of cores. It's a very well-known and strange bug:

                    http://bugzilla.kernel.org/show_bug.cgi?id=12309

                    Thankfully not everyone is affected, but sadly I am... There's a workaround, but I haven't tried it. If you're interested, let me know. You see, it's probably not a design problem, but a bug.
                    Did you see the status of that bug report? RESOLVED INSUFFICIENT_DATA

                    Did you read the comments in the bug report?

                    "this bug has long past the point where it is useful.
                    There are far too many people posting with different issues.
                    There is too much noise to filter through to find a single bug.
                    There aren't any interested kernel developers following the bug."

                    It is not even a bug report, it is just a random flame fest.

                    • #11
                      Originally posted by frantaylor View Post
                      Did you see the status of that bug report? RESOLVED INSUFFICIENT_DATA

                      Did you read the comments in the bug report?

                      "this bug has long past the point where it is useful.
                      There are far too many people posting with different issues.
                      There is too much noise to filter through to find a single bug.
                      There aren't any interested kernel developers following the bug."

                      It is not even a bug report, it is just a random flame fest.
                      Yeah, but it's only one of the reports. And it seems you didn't even read it: Jens Axboe is still working on it, and his patch is in there too. Your reaction is very funny.

                      This is a problem in Linux with scheduling I/O and many cores. One process gets all the bandwidth and others can't get a word in edgewise.
                      It seems it's not:

                      Fair queuing would allow many processes demanding large levels of disk IO to each get fair access to the device, preventing any one process from denying the others.
                      Even SFQ allowed this, and CFQ went even further. However, I don't expect you to know this (you don't even understand kernel versioning...). You and your friend already proved as much in another thread.

                      Have a nice trolling there:

                      http://www.phoronix.com/forums/showt...t=18073&page=7
                      Last edited by kraftman; 08-05-2009, 11:51 AM.
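                      Incidentally, which I/O scheduler a disk is actually using can be read from sysfs; the active one appears in brackets in /sys/block/&lt;dev&gt;/queue/scheduler. A small sketch (the device name is just an example):

```python
# Sketch: report the active I/O scheduler for a block device by reading
# /sys/block/<dev>/queue/scheduler, where the active scheduler is the
# bracketed entry, e.g. "noop anticipatory deadline [cfq]".
import re

def active_scheduler(line):
    m = re.search(r"\[(\w+)\]", line)
    return m.group(1) if m else ""

def scheduler_for(device="sda"):
    # Device name is an example; adjust for your system.
    path = "/sys/block/%s/queue/scheduler" % device
    with open(path) as f:
        return active_scheduler(f.read())

if __name__ == "__main__":
    print(active_scheduler("noop anticipatory deadline [cfq]"))  # -> cfq
```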

                      • #12
                        My desktop applications freeze when there's heavy I/O. Of course others might pretend everything is OK simply because they don't understand the issue and think these freezes are normal/acceptable/unavoidable or they don't get them at all.

                        I do. There's a problem with Linux I/O and graphics. I don't know who's at fault. The problem is there. I always had it, with every PC I ever used. Windows does not have this problem; the GUI is always fluid no matter how heavy the I/O load is.

                        • #13
                          Originally posted by RealNC View Post
                          My desktop applications freeze when there's heavy I/O. Of course others might pretend everything is OK simply because they don't understand the issue and think these freezes are normal/acceptable/unavoidable or they don't get them at all.

                          I do. There's a problem with Linux I/O and graphics. I don't know who's at fault. The problem is there. I always had it, with every PC I ever used. Windows does not have this problem; the GUI is always fluid no matter how heavy the I/O load is.
                          Yes, there's definitely a problem with I/O on some configurations. It has been there since 2.6.18, as mentioned in the bug report (but it's due to a bug, not due to design like some trolls want to prove; it's long-standing, but not everyone is affected). Graphics are a separate issue.

                          An easy way to check whether you're affected is to copy a file that is bigger than your RAM. The system becomes unresponsive for some amount of time.
                          Last edited by kraftman; 08-05-2009, 02:53 PM.
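                          That check can be scripted: stream a file to disk without ever holding it in memory, then copy it while watching desktop responsiveness. A sketch (the demo size here is tiny; raise it past your RAM size to actually reproduce the stall):

```python
# Sketch: create a large test file by streaming zero-filled blocks (so
# the script itself never holds the whole file in RAM), then copy it.
# While the copy runs, try using the desktop to see if it stalls.
import os
import shutil

def write_test_file(path, size_bytes, block=1 << 20):
    written = 0
    buf = b"\0" * block
    with open(path, "wb") as f:
        while written < size_bytes:
            n = min(block, size_bytes - written)
            f.write(buf[:n])
            written += n
    return written

if __name__ == "__main__":
    # Demo size: 8 MiB. To reproduce the bug, use a size larger than RAM.
    src, dst = "/tmp/io-test.bin", "/tmp/io-test-copy.bin"
    n = write_test_file(src, 8 * (1 << 20))
    shutil.copy(src, dst)
    print("copied", n, "bytes")
    os.remove(src)
    os.remove(dst)
```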

                          • #14
                            Originally posted by kraftman View Post
                            Yeah, but it's only one of the reports. And it seems you didn't even read it: Jens Axboe is still working on it, and his patch is in there too. Your reaction is very funny.

                            It seems it's not:

                            Even SFQ allowed this, and CFQ went even further. However, I don't expect you to know this (you don't even understand kernel versioning...). You and your friend already proved as much in another thread.

                            Have a nice trolling there:

                            http://www.phoronix.com/forums/showt...t=18073&page=7
                            I work with bugs every day. When the developer marks the bug as closed, that means "I'm not working on this any more"

                            • #15
                              Originally posted by frantaylor View Post
                              I work with bugs every day. When the developer marks the bug as closed, that means "I'm not working on this any more"
                              So check the "closed" date against when Jens uploaded the patch. There are also other reports like this one. If they close all the reports there will just be new ones, because the bug is still there... Believe it or not, I'll probably switch to FreeBSD or Solaris because of this (if it really starts to piss me off). However, I don't copy big files very often and I have Windows installed, so it's a hard decision.
                              Last edited by kraftman; 08-05-2009, 04:55 PM.
