Multi-Core, Multi-OS Scaling Performance


  • #11
    I do wonder what kind of numbers an AMD Phenom II X6 would give here.


    • #12
      Well, it could have been interesting to see how Windows 7/Server 2008 would have fit in here.

      In fact I've been displeased with F14 performance on some workloads with default settings.


      • #13
        Originally posted by mirv View Post
        I do wonder what kind of numbers an AMD Phenom II X6 would give here.
        I did some testing a while back with that on my 1090T. Results looked a little weird given the clock speeds: up to three cores in use wound up at 3.6 GHz, and 4+ cores in use wound up at 3.2 GHz.
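(For anyone wanting to reproduce this: turbo behaviour can be observed by pinning busy loops to a known number of cores and reading the reported clocks. A rough sketch, assuming a Linux box with `taskset` available; the core IDs are examples, not a recommendation for any particular topology:)

```shell
# Pin a busy loop to each of cores 0-2 so exactly three cores are loaded
# (core IDs are illustrative; adjust for your machine).
pids=""
for cpu in 0 1 2; do
    taskset -c "$cpu" sh -c 'while :; do :; done' &
    pids="$pids $!"
done

# Give the frequency governor a moment to react, then read the clocks.
# On a 1090T the loaded cores should settle near 3.6 GHz; load a fourth
# core and they drop back toward the 3.2 GHz base clock.
sleep 2
grep 'cpu MHz' /proc/cpuinfo

# Stop the busy loops.
kill $pids
```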


        • #14
          Originally posted by deanjo View Post
          I did some testing a while back with that on my 1090T. Results looked a little weird given the clock speeds: up to three cores in use wound up at 3.6 GHz, and 4+ cores in use wound up at 3.2 GHz.
          I can't recall the marketing name for that feature. The chip keeps within its TDP by up-clocking when only a limited number of cores is active. I am not sure how smart Linux is at keeping that in mind when scheduling. However, it does mean that if you need pure grunt for a while, it makes sense to offline the other cores. (You can do this via a sysfs entry.)
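(The sysfs entry in question is the per-CPU `online` file. A minimal sketch; the writes need root, the core number is an example, and cpu0 typically cannot be offlined:)

```shell
# Cores currently schedulable vs. offlined:
cat /sys/devices/system/cpu/online
cat /sys/devices/system/cpu/offline

# As root, take core 5 out of service so the remaining cores can hold
# higher turbo clocks:
#   echo 0 > /sys/devices/system/cpu/cpu5/online
# ...and bring it back once the single-threaded burst is done:
#   echo 1 > /sys/devices/system/cpu/cpu5/online
```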


          • #15
            Originally posted by mtippett View Post
            I can't recall the marketing name for that feature.
            Ummm it's called "Turbo CORE".


            • #17
              Originally posted by mtippett View Post
              The chip keeps within its TDP by up-clocking when only a limited number of cores is active.
              Not so much active as in their lowest p-state.

              I am not sure how smart Linux is at keeping that in mind when scheduling. However, it does mean that if you need pure grunt for a while, it makes sense to offline the other cores. (You can do this via a sysfs entry.)
              It works very well already in Linux. IIRC the patches were pushed through in 2.6.34 (some distros backported them to earlier releases as well). You really don't have to do any fooling around with settings. It "just works" more or less out of the box on the newer kernels.
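(Whether the kernel is actually managing those P-state transitions can be checked from the cpufreq sysfs tree. A quick sketch; the driver and governor names in the comments are examples, and the directory is often absent inside VMs:)

```shell
# Inspect the frequency-scaling driver and governor for core 0.
cpufreq=/sys/devices/system/cpu/cpu0/cpufreq
if [ -d "$cpufreq" ]; then
    # e.g. "powernow-k8" on a Phenom II, with the "ondemand" governor
    cat "$cpufreq/scaling_driver"
    cat "$cpufreq/scaling_governor"
else
    echo "cpufreq not exposed (VM, or no scaling driver loaded)"
fi
```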


              • #18
                Originally posted by ChrisXY View Post
                When I saw the title I immediately knew there would be no information about what CPU scheduler is used in these benchmarks. I just have to assume it is CFQ in all Linuxes? Why don't you benchmark other schedulers too?
                So true. Without info about the schedulers, the benchmarks carry a lot less meaning.

                Originally posted by ChrisXY View Post
                In other benchmarks it was shown that the filesystem actually can have a rather big impact on compiling. Why not also try it within a ramdisk (maybe with some filesystem formatted all benchmarked operating systems support)?
                Or at least use some SSD (revo2 or something) for very high-speed IOPS. My impression is that many benchmarks were greatly impaired by an I/O bottleneck; the scaling is not natural for some of them.
                It would be VERY interesting to repeat the benchmarks with either a ramdisk or an SSD.
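(The ramdisk variant is cheap to try: tmpfs keeps the tree entirely in RAM, taking disk I/O out of the picture. A sketch; the mount needs root, and the size and paths are examples:)

```shell
# Mount a 2 GiB tmpfs ramdisk (contents live in page cache, no disk I/O).
mkdir -p /mnt/ramdisk 2>/dev/null || echo "mkdir needs root"
mount -t tmpfs -o size=2g tmpfs /mnt/ramdisk 2>/dev/null || echo "mount needs root"

# Rerun the compile benchmark from inside it, e.g.:
#   cp -a ~/src/project /mnt/ramdisk/ && cd /mnt/ramdisk/project
#   time make -j"$(nproc)"

# Tear it down afterwards.
umount /mnt/ramdisk 2>/dev/null || true
```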


                • #19
                  Originally posted by adrian_sev View Post
                  So true. Without info about the schedulers, the benchmarks carry a lot less meaning.
                  I don't see people asking for the scheduler information for PC-BSD or OpenIndiana. The scheduler decisions made by Ubuntu and Red Hat (CentOS, Fedora) are no different from the scheduler decisions made for other OSes. Only those who regularly hack and play with schedulers really care. I have never heard anyone say distro-X with CFQ's scalability sucks; it's invariably either the scheduler or the distribution that is the primary point of interest.

                  If the schedulers were given a head-to-head, then people would say "well, the distribution or compiler choices make the benchmarks pointless". Of course we all know what happens when compilers are compared...
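(One side note: CFQ is the kernel's block-I/O scheduler; the CPU scheduler in these kernels is CFS. The I/O scheduler each disk is using is visible in sysfs, so it is cheap to report alongside benchmark results:)

```shell
# The bracketed entry is the active I/O scheduler for each block device,
# e.g. "noop deadline [cfq]".
for f in /sys/block/*/queue/scheduler; do
    [ -e "$f" ] || continue
    printf '%s: ' "${f%/queue/scheduler}"
    cat "$f"
done
```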


                  • #20
                    Originally posted by mtippett View Post
                    I don't see people asking for the scheduler information for PC-BSD or OpenIndiana. The scheduler decisions made by Ubuntu and Red Hat (CentOS, Fedora) are no different from the scheduler decisions made for other OSes. Only those who regularly hack and play with schedulers really care. I have never heard anyone say distro-X with CFQ's scalability sucks; it's invariably either the scheduler or the distribution that is the primary point of interest.

                    If the schedulers were given a head-to-head, then people would say "well, the distribution or compiler choices make the benchmarks pointless". Of course we all know what happens when compilers are compared...
                    It would be explanatory information, though - much like how the filesystem is often relevant when there are large gaps in database performance. (The different kernel versions here are of course a confounding effect - but if the two Linuxes are also using different schedulers, that might be part of the explanation.)
