"The World's Most Highly-Assured OS" Kernel Open-Sourced
Originally posted by jayrulez: Please read my previous reply.
Common wisdom ties the number of bugs in code to the total number of lines of code - which in my view is utter bullshit, as it fails to take into account the complexity of the code (e.g. forcing drivers to use complex IPC to move data around instead of simply yanking data out of a PCI device directly into a user-space buffer).
Keep in mind that I currently maintain ~0.5M LOC of out-of-tree kernel modules, so I may know what I'm talking about.
In the end, the argument between monolithic and micro-kernels started when I was writing DOS code for a living, and will continue when I retire.
Whether or not micro-kernels have a huge theoretical advantage is more or less irrelevant when most of the big-iron Linux servers I've worked with have uptimes measured in years.
Nobu,
Your comment is childish. Please list the manufacturer and model of all 1+M$ servers you used, and feel free to add why you think they sucked.
Last edited by gilboa; 04 August 2014, 03:00 AM.
oVirt-HV1: Intel S2600C0, 2xE5-2658V2, 128GB, 8x2TB, 4x480GB SSD, GTX1080 (to-VM), Dell U3219Q, U2415, U2412M.
oVirt-HV2: Intel S2400GP2, 2xE5-2448L, 120GB, 8x2TB, 4x480GB SSD, GTX730 (to-VM).
oVirt-HV3: Gigabyte B85M-HD3, E3-1245V3, 32GB, 4x1TB, 2x480GB SSD, GTX980 (to-VM).
Devel-2: Asus H110M-K, i5-6500, 16GB, 3x1TB + 128GB-SSD, F33.
Originally posted by log0: LOL. It only shows that servers are not seen as life- or safety-critical systems. Ever seen Linux running medical, nuclear, space or military devices? Think of SIL 3 and 4. Those are dominated by microkernels.
Originally posted by gilboa: Nobu, your comment is childish. Please list the manufacturer and model of all 1+M$ servers you used, and feel free to add why you think they sucked.
And it wasn't childish for you to say they just spend lots of money because it's burning a hole in their pockets? Or was there some sarcasm in there that I just failed to see?
Either way, I can't see how what I said has anything to do with Microsoft.
...or is that supposed to be $1M+? I don't even know anymore.
Originally posted by Nobu: And it wasn't childish for you to say they just spend lots of money because it's burning a hole in their pockets? Or was there some sarcasm in there that I just failed to see?
Either way, I can't see how what I said has anything to do with Microsoft.
...or is that supposed to be $1M+? I don't even know anymore.
For the sake of being nice (for God knows what reason) let me reiterate my point.
Please take the time to read it *slowly*.
0. Of course I was being sarcastic!
1. 1+M$ means 1 million USD or above.
2. Both Unix and Linux are used in big-iron servers.
3. Big iron means very expensive servers that are built to survive anything short of a nuclear attack.
4. Such servers include RAS (reliability, availability and serviceability) features such as on-line replacement of CPUs (!), memory, disks, expansion cards and power supplies, plus memory mirroring, complex RAID schemes, etc.
5. 5-9s means 99.999% uptime, or roughly 5.26 minutes of downtime *per* year.
Now, my point is that given the fact that monolithic kernels such as Linux, Solaris and AIX are capable of maintaining 99.999% uptime in *complex* environments, it is no longer possible to claim that monolithic kernels are unstable by design.
Quite the opposite, micro-kernels are currently limited to the fairly simple embedded market and have yet to prove themselves in complex deployments.
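The five-nines arithmetic is easy to sanity-check. A minimal sketch in plain Python (the function name is mine, nothing here comes from the thread):

```python
# Allowed downtime per year at a given availability level.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 (ignoring leap years)

def downtime_minutes(availability: float) -> float:
    """Minutes of downtime per year at the given availability (0..1)."""
    return (1.0 - availability) * MINUTES_PER_YEAR

print(downtime_minutes(0.99999))  # five nines -> ~5.26 minutes/year
print(downtime_minutes(0.999))    # three nines -> ~525.6 minutes/year
```

Each extra nine cuts the allowed downtime by a factor of ten, which is why five-nines hardware needs the hot-swap RAS features listed above.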
- Gilboa
Last edited by gilboa; 04 August 2014, 08:21 AM.
Yes, I'm on drugs...
I don't know if you noticed, but sarcasm doesn't translate well into text. And 1+M$ means about as much to me as a monkey holding up a dollar and a plus sign. Anyway, it's obvious that I know nothing about the subject, so I'll be nice and leave.
Originally posted by TheBlackCat: You mean like U.S. Navy vessels (including nuclear submarines), the Japanese bullet train system, air traffic control systems, road traffic management, docking at the International Space Station, and essentially all the world's major stock markets? Because all of these use Linux.
Microkernels have to context switch a lot, but seL4's IPC, at least, is much faster than what Linux manages. That still doesn't mean it's faster than Linux when Linux can address devices directly in kernel space (obviously), but the cost is now, and has been for a while, small enough to be worth it for more general applications.
The goal is to build a verifiable stack (including drivers), which they are pretty far from completing (they've actually synthesized some drivers already, but mostly for things like disk controllers and NICs). One of the devs has a pretty nice blog which covers all of this.
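To get a rough feel for what a user/kernel crossing costs, here is an illustrative sketch of syscall overhead on a commodity Linux box (this measures a Linux syscall, not seL4 IPC; `time_per_call` is a made-up helper, and Python interpreter overhead dominates, so treat any numbers as orders of magnitude at best):

```python
import os
import time

def time_per_call(fn, n=200_000):
    """Average wall-clock seconds per call of fn over n calls."""
    start = time.perf_counter()
    for _ in range(n):
        fn()
    return (time.perf_counter() - start) / n

def plain():
    # Stays entirely in user space: no kernel crossing.
    return 42

syscall_cost = time_per_call(os.getpid)  # one kernel crossing per call
python_cost = time_per_call(plain)       # baseline: pure user-space call

print(f"os.getpid(): {syscall_cost * 1e9:.0f} ns/call")
print(f"plain call:  {python_cost * 1e9:.0f} ns/call")
```

The gap between the two lines is (roughly) the price of entering the kernel; a microkernel pays something like it on every IPC hop, which is why seL4's heavily optimized IPC path matters so much.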
Originally posted by gilboa: 1. 1+M$ means 1 million USD or above.
Originally posted by gilboa: Now, my point is that given the fact that monolithic kernels such as Linux, Solaris and AIX are capable of maintaining 99.999% uptime in *complex* environments, it is no longer possible to claim that monolithic kernels are unstable by design.
The point here is not uptime, the point is correctness.
The ECU in your car may very well have an uptime of just a few hours (the duration of a trip), but during those hours you mostly want its software to be correct in its execution, so that the right electrical signals are sent out at the right time (not too early nor too late) - otherwise it may very well make the difference between successfully passing another car or avoiding an obstacle, and a crash...
OTOH, running for months by itself doesn't tell you anything about whether the running kernel has hidden vulnerabilities waiting to be exploited, nor whether the system is already compromised and part of a botnet (actually, if malware wants you in a botnet, it would rather your uptime be longer than shorter...).
With a normal, general-purpose (jack-of-all-trades) kernel, security means that lurking vulnerabilities (which are there anyway) are hopefully unknown to the majority of people (including malware writers), eventually found, and, once found, a fix is written and deployed ASAP, leaving a hopefully limited exploitation window - with a kernel formally verified at the code level, you are assured beforehand against the presence of vulnerabilities in the kernel.
This assumes the mathematical model your kernel is verified against covers security protocols and exploitable code paths - in fact, devising a complete formal model is a very complex part of the design process, which is why this is mostly applied to microkernels: with larger general-purpose kernels it becomes unwieldy, the complexity growing exponentially.
But nobody ever said monolithic kernels are unstable by design; they are inherently more complex and less resilient by design, in addition to being less suitable for this kind of development.
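To make "assured beforehand" concrete, here is a toy sketch of code-level verification in Lean 4 (purely illustrative: seL4's actual proofs are done in Isabelle/HOL and are vastly larger, and `clamp` is an invented example). Instead of testing some inputs, we prove for *all* inputs that the function can never return an out-of-range value:

```lean
-- A toy component: clamp x into the range [lo, hi].
def clamp (lo hi x : Nat) : Nat := max lo (min x hi)

-- A machine-checked guarantee for every possible input, not just test cases.
-- `omega` discharges the resulting linear-arithmetic goal over min/max.
theorem clamp_le_hi (lo hi x : Nat) (h : lo ≤ hi) : clamp lo hi x ≤ hi := by
  unfold clamp
  omega
```

If a bug were introduced into `clamp`, the proof would simply fail to check at compile time. That is the kind of guarantee the formal-verification argument above refers to, scaled up (at enormous effort) to an entire kernel in seL4's case.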