"The World's Most Highly-Assured OS" Kernel Open-Sourced
Originally posted by kaprikawn: Thanks, wow, that's quite something. A quick Google search shows the Linux kernel was over 15 million lines of code in 2011. I suppose it's a lot easier to keep code secure when there are so few moving parts.
For limited use cases I'm sure this would be good to use. Though with so few lines of code, I'm assuming there are no hardware drivers. Imagine getting Wi-Fi up and running on this!
Originally posted by TheBlackCat: Do you have a source for that?
www.windriver.com/customers/customer-success/
www.qnx.com/solutions/industries/defense.html#customers
Originally posted by liam: I've mostly learned about this over time, so I don't have any particular link in mind, but here's what I found just by going to the companies' websites.
www.windriver.com/customers/customer-success/
www.qnx.com/solutions/industries/defense.html#customers
Let me be more specific with my questions:
1. How familiar are you with the specific examples I mentioned?
2. Do you have specific knowledge that those examples involve Linux running on top of another real-time kernel, or are you just assuming that based on your impression of how systems you are familiar with are normally built?
Originally posted by liam: Linux is in places like these but not, AFAICT, running on the metal. They use something like QNX/Wind River/seL4 and run Linux as a process (yes, a bit like KVM).
Care to post links that prove your point (actual deployments)?
Let's start with stock exchanges and continue from there.
Originally posted by Luke_Wolf: You're the first person I've ever seen format it like that, and it's wrong; it should be formatted $1M+. In English the currency symbol always goes in front, and the only symbol that can occur between a number and its units is a closed range (e.g. $1-2M, or $1 to 2M).
Originally posted by silix: ...but your point is orthogonal to the point at hand...
The point here is not uptime; the point is correctness.
The ECU in your car may very well have an uptime of just a few hours (the duration of a trip), but during those hours you mostly want its software to be correct in its execution, so that the correct electrical signals are sent out at the right time (not too early nor too late); otherwise it may very well make the difference between successfully passing another car or avoiding an obstacle, and a crash...
OTOH, running for months by itself doesn't tell you anything about whether the running kernel has hidden vulnerabilities waiting to be exploited, nor whether the system is already compromised and part of a botnet (actually, if malware wants you to be part of a botnet, it would rather your uptime be longer than shorter...).
With a normal, general-purpose (or jack-of-all-trades) kernel, security means that lurking vulnerabilities (which are there anyway) are hopefully unknown to the majority of people (including malware writers), eventually found, and, once found, a fix is written and deployed ASAP, leaving a possibly limited exploitation window. With a kernel formally verified at the code level, you are assured beforehand against the presence of vulnerabilities in the kernel.
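To make the contrast concrete, here is a toy sketch of what "verified at the code level" looks like. It is my own illustration, not anything from seL4 (whose actual proofs are machine-checked in Isabelle/HOL against the kernel's C implementation); the ACSL contract below, checkable with a tool such as Frama-C, states a property for *every* valid input, not just the inputs a test suite happened to exercise:

```c
#include <stdint.h>

/* Toy illustration only: seL4's real proofs are done in Isabelle/HOL.
 * Here an ACSL contract (verifiable with Frama-C's WP plugin) states
 * what max_of must do for ALL valid inputs, not just for the tests
 * somebody thought to run. */

/*@ requires 0 < len;
    requires \valid_read(buf + (0 .. len - 1));
    ensures \forall integer i; 0 <= i < len ==> \result >= buf[i];
    ensures \exists integer i; 0 <= i < len && \result == buf[i];
*/
int32_t max_of(const int32_t *buf, uint32_t len)
{
    int32_t best = buf[0];
    for (uint32_t i = 1; i < len; i++)
        if (buf[i] > best)
            best = buf[i];
    return best;
}
```

Once a verifier discharges those `ensures` clauses, no input can make `max_of` return a wrong answer; that is the sense in which you are "assured beforehand", instead of waiting for a CVE and a patch.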
A mistake in my car's ECU may cause the engine to shut down. (Most countries in the world require a physical link to the steering wheel and brake systems that cannot be disabled by any kind of computing system; hence the lack of aircraft-like "fly-by-wire" systems in 99.99% of cars today.)
A mistake in the NY stock exchange's computing system (which, AFAIK, runs RHEL) may cost hundreds of billions of USD.
2. It may well be that L4 and QNX are far more secure than the Linux kernel, but the fact that only 1-5% of known vulnerabilities are kernel-related makes the point rather moot. Far worse, embedded systems (such as ECUs) are *notoriously* easy to crack. When you look at the mainframe running a stock exchange or a bank, or at an ISP's QoS/DPI server, the amount of active critical code outside the kernel (e.g. DB, processing, web, etc.) far outweighs the amount of *active* code paths within the kernel.
This is assuming the mathematical model your kernel is verified against includes security protocols and exploitable code-execution paths. In fact, envisioning a complete formal model is a very complex part of the design process; that's why this is mostly applied to microkernels. With larger, general-purpose kernels it becomes unwieldy, the complexity growing exponentially.
But nobody ever said monolithic kernels are unstable by design; they are inherently more complex and less resilient by design, in addition to being less suitable for this kind of development.
Maybe, but that's beside the scope of the topic here...
oVirt-HV1: Intel S2600C0, 2xE5-2658V2, 128GB, 8x2TB, 4x480GB SSD, GTX1080 (to-VM), Dell U3219Q, U2415, U2412M.
oVirt-HV2: Intel S2400GP2, 2xE5-2448L, 120GB, 8x2TB, 4x480GB SSD, GTX730 (to-VM).
oVirt-HV3: Gigabyte B85M-HD3, E3-1245V3, 32GB, 4x1TB, 2x480GB SSD, GTX980 (to-VM).
Devel-2: Asus H110M-K, i5-6500, 16GB, 3x1TB + 128GB-SSD, F33.
I don't even have anything to say. Why do you guys care so much? If it's nothing special, why waste your breath? It appears to be a microkernel that is relatively well designed and verified to work exactly as it should. Is it an OS? No. I'm not sure exactly what the point of it is, but I'm sure there is one, and the more things that are open-sourced, the better. Perhaps some other kernels can take note of some of the things they've done. Either way, there's no point in fussing about it; move on to a subject that actually matters.
Originally posted by jimbohale: move on to a subject that actually matters to me.
Originally posted by TheBlackCat: Neither of those links deals with the specific examples I listed. In fact, neither of them deals with anything remotely similar to any of the examples I listed. Almost all, if not all, of the examples in those links are function-specific embedded systems and embedded user interfaces. But the examples I listed are generally much larger, more complex systems doing a wider variety of tasks and requiring a wider variety of interfaces. I see no indication from either of your links that real-time systems are commonly used in the latter case; quite the contrary, there is a notable absence of such systems on either website.
Let me be more specific with my questions:
1. How familiar are you with the specific examples I mentioned?
2. Do you have specific knowledge that those examples involve Linux running on top of another real-time kernel, or are you just assuming that based on your impression of how the systems you are familiar with are normally built?
VxWorks (Wind River) controls Curiosity. QNX runs nuclear power plants.
Dammit! Phoronix ate my reply!
Briefly, yes, microkernels are best for simple systems (for some value of simple), but ones where the software can't fail; that is, where hardware failure rates are higher than software failure rates. This isn't about the existence of some server staying up for twelve years, but about a particular device closing switch A in response to event B within time C, every time (with hardware failure rates being the sole limiter). Linux, running on its own, can't give those guarantees (to the best of my knowledge, it's far too complex to analyze using current methods). A big-iron server going down is inconvenient, possibly catastrophic, for an org, but people aren't going to die as a direct result of it going down. Besides, such servers have other means of achieving high reliability: redundant hardware and failover. If you don't have room for that, and the service is sufficiently simple, and it simply can't fail, then moving to a microkernel makes sense.
There's way more to this, but I'm really pissed off that my response was eaten. Look up safety-critical Linux (there's a link on the OSADL website), and Safety Integrity Levels.
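The "close switch A on event B within time C" framing above can be sketched in a few lines. Everything here is hypothetical (`DEADLINE_US`, `trace_t`, and `deadline_met` are invented names): the handler logic is trivial; the hard part, which a verified real-time kernel is designed to bound, is the worst-case latency between the two timestamps.

```c
#include <stdint.h>

/* Hypothetical sketch (all names invented): the guarantee is not that
 * this code is clever, but that the gap between t_event_us (event B
 * raised) and t_actuate_us (switch A driven) NEVER exceeds deadline C,
 * with hardware failure as the only remaining limiter. */

#define DEADLINE_US 500u  /* assumed deadline C, in microseconds */

typedef struct {
    uint64_t t_event_us;   /* timestamp when event B was raised  */
    uint64_t t_actuate_us; /* timestamp when switch A was driven */
} trace_t;

/* Returns 1 iff the response met its deadline, 0 otherwise. */
int deadline_met(const trace_t *t)
{
    return (t->t_actuate_us - t->t_event_us) <= DEADLINE_US;
}
```

On a general-purpose kernel you can measure this latency distribution, but you cannot prove its tail; that is why Linux "running on its own" can't give the guarantee.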
Originally posted by liam: First, you could provide links for your claims; specifically, that they are being run on the metal. I was able to find an article from 1999 talking about using a Linux program to help with docking procedures.
But here are some links:
Echelon Corp. unveiled a distributed control node designed for electrical grid optimization. The DCN 3000 communicates with grid devices via OSGP power-line networking, reports back to the utility head-end via Ethernet or 3G, and enables downloading of Linux-based smart grid apps. Echelon, which claims to have more than 100 million Echelon-powered devices installed worldwide, defines the DCN 3000 as a...
Originally posted by liam: A big-iron server going down is inconvenient, possibly catastrophic, for an org, but people aren't going to die as a direct result of it going down.
So let me turn this around: do you have any examples of large, complex systems like my examples running real-time kernels? You keep providing examples of small, special-purpose embedded systems and then pretending they are representative of all safety-critical systems. But there are lots of larger, more complex safety-critical systems.