2 cores vs. 4 cores in Linux


  • 2 cores vs. 4 cores in Linux

    Hello all.

    I intend to purchase a new desktop, and I would like your opinion/advice on whether a 2-core 3 GHz system or a slower quad-core is better suited for me.

    I'm a physics grad student, so at the office I usually use software like LyX, Octave and such. I know that Octave currently can't make good use of a multi-core processor (the way MATLAB can), and LyX is not demanding software. So when it comes to office work, I'm mostly wondering how Linux deals with more than two cores. Is there any advantage in using more than two cores? If I use a kernel compiled with SMP support, how much will I benefit from a quad-core processor? Can Linux use it to run several programs more efficiently? What about Linux applications in general? (A rough sketch of the kind of batch run I have in mind is at the end of this post.)

    Thanks in advance
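
    To be concrete about the "run several programs" part: below is a rough sketch (Python; the Octave script names are made up) of the kind of embarrassingly parallel batch I have in mind, i.e. several independent, single-threaded Octave runs launched side by side, one per core.

```python
# Rough sketch: launch several independent, single-threaded Octave jobs in
# parallel, one worker per CPU core. The script names are placeholders.
import os
import subprocess
from concurrent.futures import ProcessPoolExecutor

JOBS = ["run1.m", "run2.m", "run3.m", "run4.m"]  # independent simulation scripts

def run_octave(script):
    # Each job is its own process, so the kernel is free to put it on its own
    # core even though Octave itself only uses one core per run.
    return subprocess.call(["octave", "-q", script])

if __name__ == "__main__":
    workers = os.cpu_count() or 1
    with ProcessPoolExecutor(max_workers=workers) as pool:
        print("exit codes:", list(pool.map(run_octave, JOBS)))
```

    Even if each Octave process only ever uses one core, a quad could run four of them at once, which is the scenario I am asking about.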

  • #2
    I recommend a faster 2-core because there are freezing issues in Linux due to I/O blockage, meaning that even though a program would run on its own core, it would get slowed down when there's heavy I/O load. A faster 2-core will make these freezes shorter due to its higher clock speed.
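
    If you want to see the kind of freeze I mean, something like this rough Python sketch reproduces it (the paths and sizes are placeholders, not from any real benchmark): one thread writes to the disk as fast as it can while another times small synchronous writes, which is roughly what a desktop app does when it saves state.

```python
# Rough sketch of the freeze described above: one thread hammers the disk with
# large synced writes while another times tiny write+fsync operations.
# Paths and sizes are placeholders; point them at the disk you care about.
import os
import time
import threading

BIG_FILE = "/tmp/io_hog.bin"    # placeholder path
SMALL_FILE = "/tmp/small.bin"   # placeholder path

def hog_the_disk(seconds=15):
    block = b"\0" * (4 << 20)   # 4 MiB per write
    end = time.time() + seconds
    with open(BIG_FILE, "wb") as f:
        while time.time() < end:
            f.write(block)
            f.flush()
            os.fsync(f.fileno())    # force the data out to the disk

def time_small_writes(rounds=10):
    for _ in range(rounds):
        t0 = time.time()
        with open(SMALL_FILE, "wb") as f:
            f.write(b"hello")
            f.flush()
            os.fsync(f.fileno())    # this fsync has to queue behind the hog
        print("small write+fsync took %.3f s" % (time.time() - t0))
        time.sleep(1)

if __name__ == "__main__":
    threading.Thread(target=hog_the_disk, daemon=True).start()
    time_small_writes()
```

    The small writes take milliseconds on an idle disk; while the hog is running they can easily take a good fraction of a second, no matter how many cores you have.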



    • #3
      Originally posted by RealNC View Post
      I recommend a faster 2-core because there are freezing issues in Linux due to I/O blockage, meaning that even though a program would run on its own core, it would get slowed down when there's heavy I/O load. A faster 2-core will make these freezes shorter due to its higher clock speed.
      It seems like an I/O bug when copying large files, not really related to the number of cores.



      • #4
        Related to clock speed, which is why I recommend a fast 2-core rather than a slow 4-core.



        • #5
          I have an 8-core machine for running many simultaneous VMware images, and it runs without a hitch. My VMware images are single files of about 30 GB and I have never had any problems copying them.
          I use XFS with optimization tweaks on my disk, and it is VERY fast.

          pbzip2 can use all your cores to compress or decompress files; it is very impressive with 8 cores (a rough sketch of the idea is at the end of this post).

          The Adobe Flash plugin appears to be multithreaded, because it chews up several cores when I play hi-res YouTube videos at full screen at 1920x1200.

          If your number-crunching jobs use lots of memory, then memory speed will matter more than CPU clock speed. My 2.5 GHz machine stomps all over my 2.6 GHz machine because its RAM is twice as fast.
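
          Just to illustrate how that kind of parallel compression works in principle (this is not pbzip2's actual implementation, only a Python sketch with placeholder file names): split the input into chunks, compress each chunk on its own core, and concatenate the resulting bzip2 streams, which the normal bzip2 tools can decompress.

```python
# Illustrative sketch of pbzip2-style parallel compression: compress
# independent chunks on separate cores and concatenate the bzip2 streams.
# File names are placeholders.
import bz2
from multiprocessing import Pool

CHUNK_SIZE = 8 << 20  # 8 MiB per chunk

def compress_chunk(data):
    return bz2.compress(data, 9)

def parallel_bzip2(src, dst):
    with open(src, "rb") as fin, open(dst, "wb") as fout, Pool() as pool:
        chunks = iter(lambda: fin.read(CHUNK_SIZE), b"")
        for blob in pool.imap(compress_chunk, chunks):
            fout.write(blob)

if __name__ == "__main__":
    parallel_bzip2("vm-image.vmdk", "vm-image.vmdk.bz2")  # placeholder names
```

          Since every chunk is independent, the compression time drops roughly in proportion to the number of cores, which is why it looks so impressive on 8 of them.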



          • #6
            Originally posted by kraftman View Post
            It seems like an I/O bug when copying large files, not really related to the number of cores.
            I have run processes that saturate the drive I/O. You can see huge numbers in gkrellm. If you try to run another process that touches the same drive, it will bog down terribly. For example, I put my /home on its own drive. If I start a backup of a big subdirectory of my home directory and then launch a gnome program, the gnome program will take FOREVER to boot up because the OS will not give it the I/O bandwidth it needs to read its ~/.gnome-whatever file.

            This is a problem in Linux with scheduling I/O and many cores. One process gets all the bandwidth and others can't get a word in edgewise.
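
            One thing that sometimes takes the edge off (it does not fix the underlying scheduler behaviour, and the backup command here is only an example) is to run the backup in the idle I/O class, so interactive programs still get disk time. A minimal sketch, assuming the CFQ I/O scheduler, where ionice classes actually have an effect:

```python
# Hypothetical mitigation sketch: run the backup with "idle" I/O priority so it
# only gets disk time when nothing else is asking for it.
# The rsync command and paths are placeholders.
import subprocess

backup_cmd = ["rsync", "-a", "/home/me/bigdir/", "/mnt/backup/bigdir/"]

# ionice -c 3 puts the process in the idle I/O scheduling class (CFQ only).
subprocess.check_call(["ionice", "-c", "3"] + backup_cmd)
```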



            • #7
              @Mickeydi

              For desktop use, a 3 GHz dual-core is usually enough (I'm using an E8400). If you get an Intel quad at 2.5 GHz, it is usually no problem to overclock it to 3 GHz too; that's what I did with my Q9300. Once you are used to 3 GHz you never want anything slower again, but the extra cores are mainly useful for compiling and for apps that scale well with more cores, like 7-Zip. If I had to buy a new CPU today I would go for a Q9550 (2.83 GHz) and overclock it a bit. Overclocking requires a good board, so go for a P45, or wait until next month for the first i5 systems. I guess those will rock too.
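
              To put a number on "scales well with more cores", here is a toy Python sketch (the workload is just made-up busy work) that times the same total amount of CPU-bound work with one worker and with one worker per core. A parallel compile or 7-Zip behaves like the second case; a single-threaded app behaves like the first no matter how many cores you buy.

```python
# Toy scaling test: time a fixed amount of CPU-bound work with 1 worker and
# with one worker per core. The workload itself is meaningless busy work.
import time
from multiprocessing import Pool, cpu_count

def burn(n):
    total = 0
    for i in range(n):
        total += i * i
    return total

def timed(workers, jobs=8, n=2_000_000):
    start = time.time()
    with Pool(workers) as pool:
        pool.map(burn, [n] * jobs)
    return time.time() - start

if __name__ == "__main__":
    print("1 worker : %.2f s" % timed(1))
    print("%d workers: %.2f s" % (cpu_count(), timed(cpu_count())))
```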



              • #8
                Originally posted by frantaylor View Post
                I have run processes that saturate the drive I/O. You can see huge numbers in gkrellm. If you try to run another process that touches the same drive, it will bog down terribly. For example, I put my /home on its own drive. If I start a backup of a big subdirectory of my home directory and then launch a gnome program, the gnome program will take FOREVER to boot up because the OS will not give it the I/O bandwidth it needs to read its ~/.gnome-whatever file.

                This is a problem in Linux with scheduling I/O and many cores. One process gets all the bandwidth and others can't get a word in edgewise.
                It seems it's the bug I was talking about :/ It's probably not related to having many cores. It's a very well-known and strange bug:



                Thankfully not everyone is affected, but sadly I am... There's a workaround, but I haven't tried it. If you're interested, let me know. You see, it's probably not a design problem, but a bug.
                Last edited by kraftman; 05 August 2009, 08:52 AM.



                • #9
                  A slower 4-core will get you further. If anything, you will have more cache (true of Phenom IIs, where the cache is shared, but not of Core 2 Quads), and clock speeds aren't much lower. The only advantage of dual-cores is that they are cheaper at the real low end, but apart from that...

                  Also, all current Core 2s and Phenom IIs overclock really well, so you could do that if you wanted to match the 3.0 GHz Core 2 Duos.



                  • #10
                    Originally posted by kraftman View Post
                    It seems it's the bug I was talking about :/ It's probably not related to having many cores. It's a very well-known and strange bug:



                    Thankfully not everyone is affected, but sadly I am... There's a workaround, but I haven't tried it. If you're interested, let me know. You see, it's probably not a design problem, but a bug.
                    Did you see the status of that bug report? RESOLVED INSUFFICIENT_DATA

                    Did you read the comments in the bug report?

                    "this bug has long past the point where it is useful.
                    There are far too many people posting with different issues.
                    There is too much noise to filter through to find a single bug.
                    There aren't any interested kernel developers following the bug."

                    It is not even a bug report; it is just a random flame fest.
