Originally posted by kraftman
I am claiming that until recently, Linux was 250 times slower at a certain task on 64-CPU machines than it is now. I am implying that this may indicate there are other problems in Linux, regarding 64-CPU machines, that are still not fixed. I suspect most Linux devs have access to dual-CPU machines at most. If they don't have access to 64-CPU machines costing millions of USD, then how can they improve the 64-CPU code? They cannot. They have no hands-on experience with 64-CPU machines. Sun engineers have had that experience for many years. I really find it doubtful that a bunch of spare-time coders could produce well-scaling code without access to Big Iron. That would be a miracle.
Originally posted by kraftman
As Solaris scales well vertically, which is difficult to do, it can of course be used for large clusters as well. But it is easier to take a simple kernel, rip everything out, and use it just for clusters. The Linux kernel used for clusters is modified; it is a non-standard Linux kernel. Whereas the Solaris kernel for huge Big Iron is the same kernel all the way down to Eee PC laptops. It is exactly the same DVD. THAT is scalability. You don't use standard Linux kernels for clusters. You modify them.
A cluster does just one thing: it calculates. A cluster is no replacement for Big Iron, where many users log in at the same time and do all kinds of work, and where many different processes interact.
I could argue that MS-DOS is as scalable as Linux (I just have to modify it first), or that the C64 is as scalable as Linux. But that would not be a true statement. MS-DOS is not scalable, even if I modify it.
Linux scales well across a network, which is easy. This is horizontal scaling. Vertical scaling on one Big Iron machine is difficult to do. Linux is not as good as Solaris at this.
Originally posted by kraftman
I myself have only looked at parts of the Solaris code, and it didn't tell me much; I have no idea about the Solaris code quality. But I DO know that there are lots of testimonies from companies that try Linux for small loads and then, when the load increases, switch to Solaris because Linux doesn't cut it anymore. For instance, this link about a die-hard Linux company forced to switch to a Unix:
"The problems we encountered were because Linux doesn't scale all that well," Rand said.
Or this one:
"As a small company with 15 employees and contractors, Real Time Matrix was a die-hard Linux shop. But the company's computing processing needs quickly surpassed its size."
What I am trying to say is that, based on articles, testimonies, etc., Solaris doesn't have the problems that Linux has. Therefore I draw the conclusion that the Solaris code is better. At least Solaris doesn't kill processes at random or suck at file serving, as Linux does. It also has stable APIs and ABIs, which means that Sun designed the Solaris kernel well from the beginning. Linux breaks everything all the time, which means it is not well designed. Why not design the Linux APIs and ABIs well instead of trying different approaches all the time?
Originally posted by kraftman
Linux crumbles, and they switch to Solaris:
"Yes. Same exact hardware. We reinstalled Linux twice even to make sure there wasn't something wrong with the install. I've had lots of other people chime in reporting very similar problems."
Originally posted by kraftman
Ok. Fine.
Originally posted by kraftman
Look, it is one thing to use Linux on a desktop for personal use. It is another thing to run Linux under heavy load on a machine with many CPUs and many concurrent users. I work at a large company where we use Solaris for some of our big systems. We are now switching some systems to Linux, but that is because of politics, not because Solaris doesn't cut it anymore.