> Solaris install DVD is the same, no matter which machine. THAT is scalability. It is not something we have to agree on, or disagree on. It is a fact. Solaris is scalable, Linux is not. Otherwise, I could equivalently say "C64 is scalable"; I just have to modify it. That is simply plain stupid to say. It is nothing to agree upon or not, it is stupid to say so.

You don't need to use different versions, so stop saying that you do. A GNU/Linux distro like Ubuntu for AMD64 will scale quite well with its one universal kernel. Compiling your own custom kernel might help more at the extreme ends of the scalability range, e.g. on really big iron or on slow single-core desktops, but we already agree that no matter what you do, Linux probably isn't ready for really big iron the way Solaris is.

You can get more scalability by building different binaries from the same code base; I don't see that as "modifying it". To get more scalability from the C64 "operating system"(?) you would need major rewrites, and first-writes of major features it doesn't have at all. Maybe you just picked an example that's too extreme, because it looks like a straw man to me.
I can agree with you to this extent, though: Ubuntu GNU/Linux, as a distro of compiled binaries, scales across the range of machines it targets: not-too-ancient desktops up through servers with more than 16 cores, at least. To go beyond that range, it helps to start customizing the distribution's source and rebuilding parts of it. Specifically, you can maybe gain some performance by editing the configuration for the Linux kernel and rebuilding that package.
Even unmodified, Ubuntu will run on large machines (they compile the kernel with NR_CPUS=64, so cores beyond that go unused; Linux itself claims it can be compiled for up to 512 cores). Maybe some bottlenecks will be worse than with a custom kernel that leaves out options you don't need, and so on. I don't know how e.g. RHEL or SuSE configure their kernels, since I just use Ubuntu and sometimes Debian.
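To make the NR_CPUS point concrete, here is a sketch of how you could check the compile-time CPU cap on a Debian-style system. The `/boot/config-$(uname -r)` path is the standard Ubuntu/Debian location for the running kernel's build config; the actual value you see depends on how your distro built its kernel.

```shell
# CONFIG_NR_CPUS is the compile-time cap on usable CPUs: a kernel built
# with CONFIG_NR_CPUS=64 simply ignores cores 65 and up, even though the
# same source could be rebuilt with a higher limit.
# Ubuntu/Debian install the running kernel's build config under /boot:
config="/boot/config-$(uname -r)"
if [ -r "$config" ]; then
    grep '^CONFIG_NR_CPUS=' "$config"
else
    echo "no build config at $config"
fi
# To raise the cap you would change CONFIG_NR_CPUS ("Processor type and
# features" -> "Maximum number of CPUs" in make menuconfig) and rebuild
# the kernel package.
```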
If you want to talk about the scalability of a specific distro, without allowing customized kernels, then that's one thing. But Linux itself doesn't have an official binary, so its native form is the source. The scalability of Linux is not just the range of machines a hypothetical distro could serve well with a single kernel binary; it's the range of machines that kernels built from the same source code can handle. That's how I see it, anyway. Obviously you've seen my previous statement of this definition of scalability and rejected it, so that's one place where we disagree about word definitions more than about what Linux is actually like.
> But certainly you havent studied much math.

Not a lot of formal math, no. I have an undergraduate B.Sc., combined honours in physics and computer science, and I was always more interested in the understanding-how-the-world-works part of physics than in the mathematical formalism. So yeah, I guess I didn't know how much of an analogy you were intending with the word "theorem". Theorem = proven hypothesis, right? My mental models of how computers behave aren't usually formally proven.
> so you dont understand what I am talking about or why I emphasize that all the time.

I think I'm getting closer, but I still don't know what sort of a theorem your one all-encompassing theorem that models openSolaris behaviour would be.
> "But I couldnt think of a polite way of saying that". If you want to get sticky, we can.

Yeah, sorry, I was feeling snarky. I think we just have different ways of thinking about computers. I still don't understand how you use your way of understanding things in practice, which is why I gave some examples of how I use mine. I take back the "polite way of saying that" comment, because there's no reason for me to assume your way doesn't work well for you.