Originally posted by kebabbert
Therefore I don't agree with you when you say that Linux scales well, but you (and the kernel hackers) only have experience of machines up to 4 CPUs. How can you or the Linux people claim that without knowing? How many Linux kernel hackers have experience of big iron?
Like I said, I can't flat-out claim that Linux scales well. I would bet that Linux would not be completely embarrassed on a medium-to-large machine, but I could be wrong about that. Certainly some workloads will expose weaknesses. Linux's scalability is definitely not mature or polished!
If I had to set up a big-iron server (> 16 cores) for something, I would try Linux with the workload I wanted to run, but I'd also try OpenSolaris (especially in light of finding out that I might not hate its package manager). On an 8-CPU machine, I'd just use Linux, and be pretty confident that it wasn't going to do badly compared to anything else.
Originally posted by kebabbert
As I said, I don't agree with people saying "Linux scales well" if there are different Linux kernels for large clusters and for normal desktop PCs. Then you could also switch between FreeBSD kernels for one specific task, and to Linux kernels for doing another task. As you do now, when switching between different Linux kernels for different tasks. That is clearly not "scalability", but rather "flexibility". Otherwise, what would Linux people call Solaris' ability to run the very same binaries on everything from laptops to big iron? True Solaris scalability vs. false Linux scalability?
Regarding Linux server vs. non-server: there is only one version of Solaris.
Solaris still does have a 32-bit kernel for x86, right? (BTW, on OpenSolaris, is /bin/ls a 64-bit binary? It's 32-bit on Solaris 10, and it seemed like there were 64-bit versions only of the things that needed to be 64-bit to talk to the kernel, plus a very few libraries so you could compile 64-bit binaries.)
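You can check this kind of thing yourself, by the way. A minimal sketch that reads the ELF class byte directly with `od`, so it works without the `file` utility (on Solaris, `file /bin/ls` or `isainfo -b` would tell you similar things; the function name `elfclass` is just my own):

```shell
# Report whether a binary is a 32- or 64-bit ELF.
# Byte at offset 4 of an ELF file is EI_CLASS: 01 = 32-bit, 02 = 64-bit.
elfclass() {
    case "$(od -An -tx1 -j4 -N1 "$1" | tr -d ' ')" in
        01) echo "32-bit" ;;
        02) echo "64-bit" ;;
        *)  echo "not ELF?" ;;
    esac
}

elfclass /bin/ls
```

On a stock 64-bit Linux install this prints "64-bit" for /bin/ls; on Solaris 10 x86 the same check against its 32-bit userland binaries would print "32-bit".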
And it can function in both roles. True scalability again. But if you want, you can change the Solaris scheduler on the fly, during run time. Is that possible with Linux, or do you have to use a special esoteric Linux version to allow that? Or do you have to recompile the kernel?
A more recent scheduler-selection effort is from Jan 2008: http://www.ibm.com/developerworks/li...cfs/index.html With that code, you could build in multiple CPU schedulers, but you can only select one at boot time, so you need to reboot to switch. That makes tuning take longer, but is fine once you have your server set up.
You can change I/O schedulers at runtime, though, per disk (cfq, deadline, anticipatory, or noop).
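For example, the active I/O scheduler for each disk shows up (in brackets) in sysfs, and writing another name to the same file switches it immediately. A sketch that just lists what's there (the `show_iosched` helper is my own name, and the `sda` in the comment is an example device):

```shell
# List the I/O scheduler choices for every block device; the name in
# [brackets] is the one currently active for that disk.
show_iosched() {
    found=0
    for f in /sys/block/*/queue/scheduler; do
        [ -e "$f" ] || continue
        found=1
        printf '%s: %s\n' "$f" "$(cat "$f")"
    done
    [ "$found" -eq 1 ] || echo "no block devices with scheduler files"
}

show_iosched
# To switch one disk at runtime (as root), write the scheduler name:
#   echo deadline > /sys/block/sda/queue/scheduler
```

No reboot, no recompile, and each disk can use a different scheduler, which is handy when one spindle holds a database and another holds logs.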
I still don't agree that having a single binary is so important. On smaller systems, you can afford to turn on more debugging stuff that adds tiny amounts of overhead in critical sections, and that's what Ubuntu's kernels do (even the -server ones). You could probably improve scalability a little by disabling some of the statistics-gathering and the code that produces better debugging output when there is a problem. Since there will be more contention on big iron, leaving debug checks out of critical sections helps more there.
Is this single-binary stuff about reliability and warranties, i.e. you get support only if you're using the distro kernel? I wonder how e.g. Canonical's or Red Hat's support contracts work, and whether you could ask them to build you a kernel compiled for your big-iron server if you wanted to change some of the things that are only compile-time configurable.
But anyway, Linux as a source base can scale _way_ down to embedded systems. Linux can leave out, e.g., the ability to swap, so you can really strip down the kernel. The Linux philosophy has never revolved around a single universal binary (although that hasn't stopped enterprise distros like RHEL from acting that way about the kernel binary they ship).
That said, almost all of Linux's tunables these days are run-time, not compile-time. The remaining compile-time options aren't integer tuning knobs, but rather whole chunks of code to leave in or out.
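Those run-time tunables live under /proc/sys, and sysctl(8) is just a front-end for the same files. A small sketch of reading one (the `get_tunable` helper is my own; `vm.swappiness` is just a well-known example knob):

```shell
# Read a Linux runtime tunable by its dotted sysctl name,
# e.g. vm.swappiness -> /proc/sys/vm/swappiness
get_tunable() {
    path="/proc/sys/$(echo "$1" | tr . /)"
    if [ -r "$path" ]; then
        cat "$path"
    else
        echo "unavailable"
    fi
}

get_tunable vm.swappiness
# To change it at runtime (as root), either form works:
#   sysctl -w vm.swappiness=10
#   echo 10 > /proc/sys/vm/swappiness
```

No reboot and no rebuild for any of these, which is exactly the point: the things you'd actually tune on a big machine are adjustable on a running kernel.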
Some people think that the GPL is quite an egocentric license.