
Thread: "The World's Most Highly-Assured OS" Kernel Open-Sourced

  1. #51
    Join Date
    Nov 2012
    Posts
    614


    Quote Originally Posted by silix View Post
    but nobody ever said monolithic kernels are unstable by design; they are inherently more complex and less resilient by design, in addition to being less suitable for this kind of development
    maybe, but that's outside the scope of the topic here...
    I beg to differ, unless you're comparing apples to oranges, or Godzilla (Linux) to a chicken (the L4 kernel). In that case Linux as a whole is indeed more complex, but such a comparison is unfair.

  2. #52
    Join Date
    Jan 2014
    Location
    Far East below 32nd parallel.
    Posts
    13


    Quote Originally Posted by kaprikawn View Post
    Thanks, wow, that's quite something. A quick Google search shows the Linux kernel was over 15 million lines of code in 2011. I suppose it's a lot easier to keep code secure when there are so few moving parts.

    For limited use cases I'm sure this would be good to use. Though with so few lines of code I'm assuming there are no hardware drivers. Imagine getting Wi-Fi up and running using this!
    Which also means IT DOES VERY LITTLE! I.e., bare bones. Considering the GD guys are into defense, they do cut corners without people knowing, so I doubt they had a bunch of security hackers rigorously test the security. You can see it in many products of the military-industrial complex, where the security is physical: they attach a self-destruct explosive to blow the thing into a billion bits to prevent the enemy from looking into their secrets!

  3. #53
    Join Date
    Feb 2011
    Posts
    1,161


    Quote Originally Posted by liam View Post
    Linux is in places like these but not, AFAICT, running on the metal. They use something like QNX/Wind River/seL4 and run Linux as a process (yes, a bit like KVM).
    Do you have a source for that?

  4. #54
    Join Date
    Jan 2009
    Posts
    1,419


    Quote Originally Posted by TheBlackCat View Post
    Do you have a source for that?
    I've mostly learned about this over time, so I don't have any particular link in mind, but here's what I found just by going to the companies' websites.
    www.windriver.com/customers/customer-success/
    www.qnx.com/solutions/industries/defense.html#customers

  5. #55
    Join Date
    Feb 2011
    Posts
    1,161


    Quote Originally Posted by liam View Post
    I've mostly learned about this over time, so I don't have any particular link in mind, but here's what I found just by going to the companies' websites.
    www.windriver.com/customers/customer-success/
    www.qnx.com/solutions/industries/defense.html#customers
    Neither of those links deals with the specific examples I listed. In fact, neither of them deals with anything remotely similar to any of the examples I listed. Almost all, if not all, of the examples in those links are function-specific embedded systems and embedded user interfaces. But the examples I listed are generally much larger, more complex systems doing a wider variety of tasks and requiring a wider variety of interfaces. I see no indication from either of your links that real-time systems are commonly used in the latter case; quite the contrary, there is a notable absence of such systems on either website.

    Let me be more specific with my questions:
    1. How familiar are you with the specific examples I mentioned?
    2. Do you have specific knowledge that those examples involve Linux running on top of another real-time kernel, or are you just assuming that based on your impression of how systems you are familiar with are normally built?

  6. #56
    Join Date
    Oct 2006
    Location
    Israel
    Posts
    597


    Quote Originally Posted by liam View Post
    Linux is in places like these but not, AFAICT, running on the metal. They use something like QNX/Wind River/seL4 and run Linux as a process (yes, a bit like KVM).
    *None* of the big-iron systems I've ever seen used such a setup, from Linux *military-grade* systems up to mainframes running in banks, hospitals, and insurance companies.
    Care to post links that prove your point (actual deployments)?
    Let's start with stock exchanges and continue from there.


    Quote Originally Posted by Luke_Wolf View Post
    You're the first person I've ever seen format it like that, and it's wrong; it should be formatted $1M+. In English the currency symbol always goes in front, and the only symbol that can occur between a number and its units is a closed range (e.g. $1-2M, or $1 to 2M).
    My mistake.


    Quote Originally Posted by silix View Post
    but your point is orthogonal to the point at hand...
    the point here is not uptime, the point is correctness
    the ECU in your car may very well have an uptime of just a few hours (the duration of a trip), but during those you mostly want its software to be correct in its execution, so that the correct electrical signals are sent out at the right time (not too early nor too late) - otherwise it may very well make the difference between successfully passing another car or avoiding an obstacle, and a crash...
    OTOH, running for months by itself doesn't tell you anything about whether or not the running kernel has hidden vulnerabilities waiting to be exploited, nor whether or not the system is already compromised and part of a botnet (actually, if malware wants you to be part of a botnet, it would rather your uptime be longer than shorter...)
    with a normal, general-purpose (or jack-of-all-trades) kernel, security means that lurking vulnerabilities (which are there anyway) are possibly unknown to the majority of people (including malware writers), hopefully found, and, when found, a fix is written and deployed ASAP, leaving a possibly limited exploitation window - with a kernel formally verified at the code level, you are assured beforehand against the presence of vulnerabilities in the kernel
    1. I would imagine that the New York Stock Exchange is far more reliant on "correctness" than my car's ECU.
    A mistake in my car's ECU may cause the engine to shut down (most countries in the world require a physical link to the steering wheel and brake systems that cannot be disabled by any type of computing system - hence the lack of aircraft-like "fly-by-wire" systems in 99.99% of cars today).
    A mistake in the NY Stock Exchange computing system (which AFAIK runs RHEL) may cost hundreds of billions of USD.

    2. It may well be that L4 and QNX are far more secure than the Linux kernel, but the fact that only 1-5% of known vulnerabilities are kernel-related makes the point rather moot. Far worse, embedded systems (such as ECUs) are *notoriously* easy to crack. When you look at the mainframe running a stock exchange or bank, or at an ISP's QoS/DPI server, the amount of active critical code outside the kernel (e.g. DB, processing, web, etc.) far outweighs the amount of *active* code paths within the kernel.

    Quote Originally Posted by silix View Post
    this is assuming the mathematical model your kernel is tested against includes security protocols and exploitable code execution paths - in fact, envisioning a complete formal model is a very complex part of the design process; that's why this is mostly applied to microkernels - with larger, general-purpose ones it becomes unwieldy, complexity growing exponentially
    but nobody ever said monolithic kernels are unstable by design; they are inherently more complex and less resilient by design, in addition to being less suitable for this kind of development
    maybe, but that's outside the scope of the topic here...
    Again, this is nice in theory; I've yet to see concrete evidence that validates this assumption.
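    For context, here is roughly what "formally verified at the code level" means in practice - a minimal, illustrative sketch using ACSL contracts (the annotation language of the Frama-C toolchain; seL4's actual proofs are done in Isabelle/HOL, so both the tool choice and the toy function here are my own illustration, not seL4's method). A verifier proves the contract holds for *every* possible input, rather than testing a few:
    Code:
    /* max_of: the contract says the result is an upper bound of the
     * array and actually occurs in it.  A prover checks this holds
     * for all valid inputs, not just the ones a test suite tried. */
    /*@ requires len > 0;
      @ requires \valid_read(buf + (0 .. len-1));
      @ assigns \nothing;
      @ ensures \forall integer i; 0 <= i < len ==> \result >= buf[i];
      @ ensures \exists integer i; 0 <= i < len && \result == buf[i];
      @*/
    int max_of(const int *buf, int len)
    {
        int best = buf[0];
        int i = 1;
        /*@ loop invariant 1 <= i <= len;
          @ loop invariant \forall integer j; 0 <= j < i ==> best >= buf[j];
          @ loop invariant \exists integer j; 0 <= j < i && best == buf[j];
          @ loop assigns i, best;
          @ loop variant len - i;
          @*/
        for (; i < len; i++)
            if (buf[i] > best)
                best = buf[i];
        return best;
    }
    The catch, as the quote above notes, is that the guarantee covers only what the model states: if the spec says nothing about timing or information leaks, neither does the proof - which is exactly why writing a complete formal model is the hard part.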

  7. #57
    Join Date
    May 2014
    Posts
    101


    I don't even have anything to say. Why do you guys care so much? If it's nothing special, why waste your breath? It appears to be a microkernel that is relatively well designed and verified to work exactly as it should. Is it an OS? No. I'm not sure exactly what the point of it is, but I'm sure there is one, and the more things that are open-sourced, the better. Perhaps some other kernels can take note of some things they've done. Either way, there is no point in fussing about it; move on to a subject that actually matters.

  8. #58
    Join Date
    Oct 2006
    Location
    Israel
    Posts
    597


    Quote Originally Posted by jimbohale View Post
    move on to a subject that actually matters to me.
    You get the point.

  9. #59
    Join Date
    Jan 2009
    Posts
    1,419


    Quote Originally Posted by TheBlackCat View Post
    Neither of those links deals with the specific examples I listed. In fact, neither of them deals with anything remotely similar to any of the examples I listed. Almost all, if not all, of the examples in those links are function-specific embedded systems and embedded user interfaces. But the examples I listed are generally much larger, more complex systems doing a wider variety of tasks and requiring a wider variety of interfaces. I see no indication from either of your links that real-time systems are commonly used in the latter case; quite the contrary, there is a notable absence of such systems on either website.

    Let me be more specific with my questions:
    1. How familiar are you with the specific examples I mentioned?
    2. Do you have specific knowledge that those examples involve Linux running on top of another real-time kernel, or are you just assuming that based on your impression of how systems you are familiar with are normally built?
    First, you could provide links for your claims, specifically that they are being run on the metal. I was able to find an article from 1999 talking about using a Linux program to help with docking procedures.
    VxWorks (Wind River) controls Curiosity. QNX runs nuclear power plants.
    Dammit! Phoronix ate my reply!!!!!
    Briefly, yes, ukernels are best for simple systems (for some value of simple), but ones where the software can't fail - that is, where hardware failure rates are higher than software failure rates. This isn't about the existence of some server staying up for twelve years, but about a particular device closing switch A in response to event B within time C, EVERY TIME (with hardware failure rates being the sole limiter) - see the sketch below. Linux, running on its own, can't give those guarantees (it's way too complex to analyze using current methods, to the best of my knowledge). A big iron server going down is inconvenient, possibly catastrophic, for an org, but people aren't going to die as a direct result of it going down. Besides, such servers have other means of achieving high reliability: redundant hardware and failover. If you don't have room for that, and the service is sufficiently simple, and it simply can't fail, then moving to a ukernel makes sense.
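    To make that concrete, here's a rough sketch of the kind of guarantee in question - event B arrives, switch A must close within deadline C, every time. All the names and the deadline figure below are hypothetical stand-ins, not any real board's API; the point is that a runtime check like this can only *detect* a miss, while an RTOS (or verified kernel) with a known worst-case execution time rules misses out in advance:
    Code:
    #include <stdint.h>
    #include <time.h>

    #define DEADLINE_NS 500000ull           /* "time C": 500 us, made-up figure */

    /* Hypothetical device hooks - stand-ins for the real board support code. */
    extern void wait_for_event_b(void);     /* blocks until event B fires */
    extern void close_switch_a(void);       /* drives the actuator */
    extern void handle_deadline_miss(void); /* hypothetical fault handler */

    static uint64_t now_ns(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (uint64_t)ts.tv_sec * 1000000000ull + (uint64_t)ts.tv_nsec;
    }

    void control_loop(void)
    {
        for (;;) {
            wait_for_event_b();
            uint64_t t0 = now_ns();

            close_switch_a();

            /* On a kernel with a proven worst-case path this check can
             * never fire.  On a general-purpose kernel it can: a page
             * fault, lock contention, or an interrupt storm can push the
             * response past the deadline - and no amount of uptime
             * statistics proves it won't happen on the next event. */
            if (now_ns() - t0 > DEADLINE_NS)
                handle_deadline_miss();
        }
    }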

    There's way more to this, but I'm really pissed off that my response was eaten. Look up safety-critical Linux (that's a link on the OSADL website) and Safety Integrity Levels.

  10. #60
    Join Date
    Feb 2011
    Posts
    1,161


    Quote Originally Posted by liam View Post
    First, you could provide links for your claims. Specifically that they are being run on the metal. I was able to find an article from 1999 talking about using a Linux program to help with docking procedures.
    So in other words, you have no clue about the examples I listed, but you still stated definitively that they are not "being run on the metal".

    But here are some links:
    http://www.linuxjournal.com/article/7789
    http://www.efytimes.com/e1/fullnews.asp?edid=120870
    http://www.unixmen.com/15-weirdsurpr...-run-on-linux/
    http://linuxgizmos.com/linux-adds-fl...control-nodes/

    Quote Originally Posted by liam View Post
    A big iron server going down is inconvenient, possibly catastrophic, for an org, but people aren't going to die as a direct result of it going down.
    Yes, people most certainly can die if the system controlling the air traffic control radar dies. People most certainly can die if the computer controlling a nuclear submarine dies. People most certainly can die if the traffic control computer dies. People most certainly can die if the computer controlling the train switching system dies. People most certainly can die if the computers controlling the ISS docking procedure die. People most certainly can die if the power grid goes down.

    So let me turn this around: do you have any examples of large, complex systems like my examples running real-time kernels? You keep providing examples of small, special-purpose embedded systems and then pretending this is representative of all safety-critical systems. But there are lots of larger, more complex safety-critical systems.
