Linux 2.6.38 Kernel Multi-Core Scaling

  • Smorg
    replied
IDK, the way Intel likes to never lower prices on its higher-end consumer-grade chips (e.g. Gulftown), and given the relatively low cost of entry-level dual-socket boards, it might be a very logical upgrade path to grab yourself a second low-end i7 chip and go for a NUMA Xeon setup. I'll take a 16-thread NUMA configuration over a $1000 12-thread single-socket Gulftown.



  • mtippett
    replied
    Originally posted by HokTar View Post
Yes, but this scenario is typical for small-scale Linux-based clusters, which are commonly used for engineering/scientific calculations, so it is of interest to many of us.
But I do get your point, and unfortunately I can't donate a system like that, so the situation is unlikely to change. Maybe you could ask Tyan or Supermicro, for the reasons I mentioned.
I'm expecting that it will come. Although I doubt the scalability testing will be done by the vendors themselves, having results from those systems is fully expected.



  • mtippett
    replied
    Originally posted by jakubo View Post
Comments on the graphs are rare these days on phoronix.com.
Nothing to explain why the big kernel lock patch and the "patch that does wonders", as it was proclaimed, hardly make a change?
It depends on where the contention lies. For heavy CPU-only loads, the BKL won't immediately yield any difference. When you start getting into heavy multi-threaded IO loads, the waiting within the kernel becomes critical. That IO load can be graphics, disk or network.

    Different benchmarks will have different sensitivity.
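The distinction above can be sketched in a few lines (this is my illustration, not anything from the thread): a pure CPU-bound job almost never enters the kernel, so kernel-lock changes like the BKL removal barely affect it, and it scales close to linearly with worker processes. The workload and process counts below are arbitrary.

```python
# Illustrative sketch (not from the thread): a CPU-only workload rarely
# enters the kernel, so kernel-lock changes like the BKL removal have
# little effect on it; it should scale with worker processes.
import time
from multiprocessing import Pool

def cpu_only(n):
    """Pure computation: no IO, almost no kernel involvement."""
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == "__main__":
    work = [300_000] * 8
    t0 = time.perf_counter()
    serial = [cpu_only(n) for n in work]
    t_serial = time.perf_counter() - t0

    with Pool(4) as pool:  # 4 worker processes
        t0 = time.perf_counter()
        parallel = pool.map(cpu_only, work)
        t_parallel = time.perf_counter() - t0

    assert serial == parallel
    # An IO-heavy equivalent (many small reads/writes per worker) would
    # instead spend its time waiting inside the kernel, which is where
    # lock contention shows up in benchmarks.
    print(f"serial: {t_serial:.2f}s  4 processes: {t_parallel:.2f}s")
```

An IO-bound variant of `cpu_only` would be the interesting counterpart to benchmark across kernel versions, since that is where the removed locking sits.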



  • jakubo
    replied
Comments on the graphs are rare these days on phoronix.com.
Nothing to explain why the big kernel lock patch and the "patch that does wonders", as it was proclaimed, hardly make a change?



  • HokTar
    replied
    Originally posted by mtippett View Post
The 48 core systems will be 4x12 cores. That's a slightly different (and expensive) scenario.
Yes, but this scenario is typical for small-scale Linux-based clusters, which are commonly used for engineering/scientific calculations, so it is of interest to many of us.
But I do get your point, and unfortunately I can't donate a system like that, so the situation is unlikely to change. Maybe you could ask Tyan or Supermicro, for the reasons I mentioned.



  • mtippett
    replied
The 48 core systems will be 4x12 cores. That's a slightly different (and expensive) scenario.

This system represents a single-package, high-core-count part, which is arguably going to be the typical mid-to-high-end system that people will be getting for the next year or so.

What is interesting is that Ubuntu did get a reasonable gain going from 6 real cores to 12 threads (6 of them HT). The PC-BSD and OpenIndiana systems would typically collapse when HT was turned on. With the 6 HT "cores" enabled, you get the benefit of about 1-2 _real_ cores.
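To see how many of a box's advertised CPUs are HT siblings rather than real cores, you can parse the kernel's sysfs topology. A minimal sketch, assuming the standard Linux `thread_siblings_list` format (e.g. "0,6" means CPUs 0 and 6 share a physical core); the `physical_core_count` helper is my own, not an existing API:

```python
# Hedged sketch: count physical cores by parsing Linux's
# thread_siblings_list entries, where HT siblings of one physical core
# report the same CPU set (e.g. "0,6" on a 6-core/12-thread Gulftown).
import glob

def physical_core_count(sibling_lists):
    """Each entry is the text of one
    /sys/devices/system/cpu/cpuN/topology/thread_siblings_list file."""
    cores = set()
    for entry in sibling_lists:
        # Normalize "0,6" / "0-1" style lists into a canonical tuple,
        # so all siblings of one core collapse to a single set member.
        cpus = set()
        for part in entry.strip().split(","):
            if "-" in part:
                lo, hi = map(int, part.split("-"))
                cpus.update(range(lo, hi + 1))
            else:
                cpus.add(int(part))
        cores.add(tuple(sorted(cpus)))
    return len(cores)

if __name__ == "__main__":
    paths = sorted(glob.glob(
        "/sys/devices/system/cpu/cpu*/topology/thread_siblings_list"))
    lists = [open(p).read() for p in paths]
    if lists:
        print(f"{len(lists)} logical CPUs, "
              f"{physical_core_count(lists)} physical cores")
```

On a Gulftown-style part this should report 12 logical CPUs but 6 physical cores, which is the distinction behind the "1-2 real cores of benefit" observation above.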



  • devius
    replied
    Shouldn't this be tested on something with a really big number of cores/processors to be able to see any differences? Something like 48 cores or more? 6 cores isn't all that much, even if they have HT.

