"The World's Most Highly-Assured OS" Kernel Open-Sourced


  • #51
    Originally posted by silix View Post
    but nobody ever said monolithic kernels are unstable by design; they are inherently more complex and less resilient by design, in addition to being less suitable for this kind of development
    maybe, but that's beside the scope of the topic here...
    I beg to differ, unless you're comparing apples to oranges, or Godzilla (Linux) to a chicken (L4). In that case Linux as a whole is indeed more complex, but such a comparison is unfair.



    • #52
      Originally posted by kaprikawn View Post
      Thanks, wow, that's quite something. A quick Google search shows the Linux kernel was over 15 million lines of code in 2011. I suppose it's a lot easier to keep code secure when there are so few moving parts.

      For limited use cases I'm sure this would be good to use. Though with so few lines of code I'm assuming there are no hardware drivers. Imagine getting Wi-Fi up and running using this!
      Which also means IT DOES VERY LITTLE! I.e., bare bones. Considering the GD guys are into defense, they do cut corners without people knowing, so I doubt they had a bunch of security hackers rigorously test the security. You can see it in many products of the military-industrial complex, where the security is physical: they attach a self-destruct explosive to blow the thing into a billion bits to prevent the enemy from looking into their secrets!



      • #53
        Originally posted by liam View Post
        Linux is in places like these but not, afaict, running on the metal. They use something like QNX/Wind River/seL4 and run Linux as a process (yes, a bit like KVM).
        Do you have a source for that?



        • #54
          Originally posted by TheBlackCat View Post
          Do you have a source for that?
          I've mostly learned about this over time, so I don't have any particular link in mind, but here's what I found by just going to the companies' websites.
          www.windriver.com/customers/customer-success/
          www.qnx.com/solutions/industries/defense.html#customers



          • #55
            Originally posted by liam View Post
            I've mostly learned about this over time, so I don't have any particular link in mind, but here's what I found by just going to the companies' websites.
            www.windriver.com/customers/customer-success/
            www.qnx.com/solutions/industries/defense.html#customers
            Neither of those links deals with the specific examples I listed. In fact, neither of them deals with anything remotely similar to any of the examples I listed. Almost all, if not all, of the examples in those links are function-specific embedded systems and embedded user interfaces. But the examples I listed are generally much larger, more complex systems doing a wider variety of tasks and requiring a wider variety of interfaces. I see no indication from either of your links that real-time systems are commonly used in the latter case; quite the contrary, there is a notable absence of such systems on either website.

            Let me be more specific with my questions:
            1. How familiar are you with the specific examples I mentioned?
            2. Do you have specific knowledge that those examples involve Linux running on top of another real-time kernel, or are you just assuming that based on your impression of how systems you are familiar with are normally built?



            • #56
              Originally posted by liam View Post
              Linux is in places like these but not, afaict, running on the metal. They use something like QNX/Wind River/seL4 and run Linux as a process (yes, a bit like KVM).
              *None* of the big-iron systems I've ever seen used such a setup, from Linux *military grade* systems up to mainframes running in banks, hospitals and insurance companies.
              Care to post links that prove your point (actual deployments)?
              Let's start with stock exchanges and continue from there.
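
              Incidentally, for any x86 box you can run code on, there's a quick heuristic for the "on the metal or not" question: CPUID leaf 1 exposes a hypervisor-present flag that KVM, Xen, VMware and friends set, and bare metal leaves clear. A minimal sketch (GCC/Clang builtins, x86 only; a heuristic rather than proof, since a hypervisor can hide the bit):

                  #include <stdio.h>
                  #include <cpuid.h> /* GCC/Clang x86 CPUID wrapper */

                  int main(void)
                  {
                      unsigned int eax, ebx, ecx, edx;
                      /* Leaf 1, ECX bit 31: set under KVM/Xen/VMware/Hyper-V,
                       * clear when running directly on the hardware. */
                      if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx)) {
                          fprintf(stderr, "CPUID not available\n");
                          return 1;
                      }
                      puts((ecx & (1u << 31)) ? "hypervisor bit set (virtualized)"
                                              : "hypervisor bit clear (likely bare metal)");
                      return 0;
                  }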


              Originally posted by Luke_Wolf View Post
              You're the first person I've ever seen format it like that, and it's wrong; it should be formatted $1M+. In English the currency symbol always goes in front, and the only symbol that can occur between a number and its units is a closed range (e.g. $1-2M, or $1 to 2M).
              My mistake.


              Originally posted by silix View Post
              but your point is orthogonal to the point at hand...
              the point here is not uptime, the point is correctness
              the ECU in your car may very well have an uptime of just a few hours (the duration of a trip), but during those you mostly want its sw to be correct in its execution, so that the correct electrical signals are sent out at the right time (not too early nor too late) - otherwise it may very well make the difference between successfully passing another car or avoiding an obstacle, and a crash...
              otoh, running for months by itself doesn't tell you anything about whether the running kernel has hidden vulnerabilities waiting to be exploited, nor whether or not the system is already compromised and part of a botnet (actually, if a malware wants you to be part of a botnet, it would rather your uptime be longer than shorter...)
              with a normal, general-purpose (or jack-of-all-trades) kernel, security means that lurking vulnerabilities (which are there anyway) are possibly unknown to the majority of people (including malware writers), hopefully found, and when found a fix is written and deployed ASAP, leaving a possibly limited exploitation window - with a kernel formally verified at the code level, you are assured beforehand against the presence of vulnerabilities in the kernel
              1. I would imagine that the New York Stock Exchange is far more reliant on "correctness" than my car's ECU.
              A mistake in my car's ECU may cause the engine to shut down (most countries in the world require a physical link to the steering wheel and brake systems that cannot be disabled by any type of computing system - hence the lack of aircraft-like "fly by wire" systems in 99.99% of cars today).
              A mistake in the NY Stock Exchange computing system (which AFAIK runs RHEL) may cost hundreds of billions of USD.

              2. It may well be that L4 and QNX are far more secure than the Linux kernel, but the fact that only 1-5% of known vulnerabilities are kernel-related makes the point rather moot. Far worse, embedded systems (such as ECUs) are *notoriously* easy to crack. When you look at the mainframe running a stock exchange or bank, or at an ISP's QoS DPI server, the amount of active critical code outside the kernel (e.g. DB, processing, web, etc.) far outweighs the amount of *active* code paths within the kernel.

              Originally posted by silix View Post
              this is assuming the mathematical model your kernel is tested against includes security protocols and exploitable code execution paths - in fact, envisioning a complete formal model is a very complex part of the design process; that's why this is mostly applied to microkernels - with larger, general-purpose kernels it becomes unwieldy, complexity growing exponentially
              but nobody ever said monolithic kernels are unstable by design; they are inherently more complex and less resilient by design, in addition to being less suitable for this kind of development
              maybe, but that's beside the scope of the topic here...
              Again, this is nice in theory, but I've yet to see concrete evidence that validates this assumption.
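
              To make the terminology concrete: "formally verified at the code level" means every function carries a machine-checked contract that is proven to hold for all inputs, not just the tested ones. Here's a toy sketch of such a contract, written in ACSL (the annotation language of the Frama-C verifier) - purely illustrative, and not seL4's actual method (seL4's proofs are done in Isabelle/HOL against an abstract specification); the function is made up for the example:

                  #include <stddef.h>

                  /*@ requires \valid_read(buf + (0 .. len - 1));
                      assigns \nothing;
                      ensures 0 <= \result <= len;
                      ensures \forall integer k; 0 <= k < \result ==> buf[k] != 0;
                  */
                  size_t count_leading_nonzero(const unsigned char *buf, size_t len)
                  {
                      size_t i = 0;
                      /*@ loop invariant 0 <= i <= len;
                          loop invariant \forall integer k; 0 <= k < i ==> buf[k] != 0;
                          loop assigns i;
                          loop variant len - i;
                      */
                      while (i < len && buf[i] != 0)
                          i++;
                      return i; /* bytes before the first zero byte */
                  }

              The difference from testing is the quantifier: the tool either proves the contract for every possible buf and len, or shows where it fails.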



              • #57
                I don't even have anything to say. Why do you guys care so much? If it's nothing special, why waste your breath? It appears to be a microkernel that is relatively well designed and verified to work exactly as it should. Is it an OS? No. I'm not sure exactly what the point of it is, but I'm sure there is one, and the more things that are open-sourced, the better. Perhaps some other kernels can take note of some things they've done. Either way, there is no point in fussing about it; move on to a subject that actually matters.



                • #58
                  Originally posted by jimbohale View Post
                  move on to a subject that actually matters to me.
                  You get the point.



                  • #59
                    Originally posted by TheBlackCat View Post
                    Neither of those links deals with the specific examples I listed. In fact, neither of them deals with anything remotely similar to any of the examples I listed. Almost all, if not all, of the examples in those links are function-specific embedded systems and embedded user interfaces. But the examples I listed are generally much larger, more complex systems doing a wider variety of tasks and requiring a wider variety of interfaces. I see no indication from either of your links that real-time systems are commonly used in the latter case; quite the contrary, there is a notable absence of such systems on either website.

                    Let me be more specific with my questions:
                    1. How familiar are you with the specific examples I mentioned?
                    2. Do you have specific knowledge that those examples involve Linux running on top of another real-time kernel, or are you just assuming that based on your impression of how systems you are familiar with are normally built?
                    First, you could provide links for your claims, specifically that they are being run on the metal. I was able to find an article from 1999 talking about using a Linux program to help with docking procedures.
                    VxWorks (Wind River) controls Curiosity. QNX runs nuclear power plants.
                    Dammit! Phoronix ate my reply!!!!!
                    Briefly, yes, ukernels are best for simple systems (for some value of simple), but ones where the software can't fail. That is, where hardware failure rates are higher than software failure rates. This isn't about the existence of some server staying up for twelve years, but about a particular device closing switch A in response to event B within time C EVERY TIME (with hardware failure rates being the sole limiter). Linux, running on its own, can't give those guarantees (it's way too complex to analyze using current methods, to the best of my knowledge). A big iron server going down is inconvenient, possibly catastrophic, for an org, but people aren't going to die as a direct result of it going down. Besides, such servers have other means of achieving high reliability, namely redundant hardware and failover. If you don't have room for that, and the service is sufficiently simple, and it simply can't fail, then moving to a ukernel makes sense.
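
                    To put "closing switch A in response to event B within time C" in code, here's a minimal sketch - the handler and deadline value are hypothetical, using the POSIX monotonic clock. On a hard real-time kernel, worst-case execution-time analysis can show the miss branch below is dead code; on stock Linux, a page fault or scheduling delay can blow the budget at any time:

                        #include <stdio.h>
                        #include <time.h>

                        #define DEADLINE_NS 500000LL /* hypothetical 500 us response budget */

                        /* Hypothetical actuator: "close switch A". */
                        static void close_switch_a(void) { /* drive the output */ }

                        static long long elapsed_ns(struct timespec a, struct timespec b)
                        {
                            return (long long)(b.tv_sec - a.tv_sec) * 1000000000LL
                                   + (b.tv_nsec - a.tv_nsec);
                        }

                        /* Called on "event B"; must finish within DEADLINE_NS, every time. */
                        void on_event_b(void)
                        {
                            struct timespec t0, t1;
                            clock_gettime(CLOCK_MONOTONIC, &t0);
                            close_switch_a();
                            clock_gettime(CLOCK_MONOTONIC, &t1);
                            if (elapsed_ns(t0, t1) > DEADLINE_NS)
                                fprintf(stderr, "deadline miss: %lld ns\n", elapsed_ns(t0, t1));
                        }

                        int main(void)
                        {
                            on_event_b(); /* demo invocation */
                            return 0;
                        }

                    The point isn't the runtime check itself - it's that on an RTOS the miss branch is provably unreachable.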

                    There's way more to this, but I'm really pissed off my response was eaten. Look up safety-critical Linux (that's a link on the OSADL website), and Safety Integrity Level(s).



                    • #60
                      Originally posted by liam View Post
                      First, you could provide links for your claims, specifically that they are being run on the metal. I was able to find an article from 1999 talking about using a Linux program to help with docking procedures.
                      So in other words, you have no clue about the examples I listed, but you still stated definitively that they are not "being run on the metal".

                      But here are some links:

                      Echelon Corp. unveiled a distributed control node designed for electrical grid optimization. The DCN 3000 communicates with grid devices via OSGP power-line networking, reports back to the utility head-end via Ethernet or 3G, and enables downloading of Linux-based smart grid apps. Echelon, which claims to have more than 100 million Echelon-powered devices installed worldwide, defines the DCN 3000 as a ...


                      Originally posted by liam View Post
                      A big iron server going down is inconvenient, possibly catastrophic, for an org, but people aren't going to die as a direct result of it going down.
                      Yes, people most certainly can die if the system controlling the air traffic control radar dies. People most certainly can die if the computer controlling a nuclear submarine dies. People most certainly can die if the traffic control computer dies. People most certainly can die if the computer controlling the train switching system dies. People most certainly can die if the computers controlling the ISS docking procedure die. People most certainly can die if the power grid goes down.

                      So let me turn this around: do you have any examples of large, complex systems like my examples running real-time kernels? You keep providing examples of small, special-purpose embedded systems and then pretending this is representative of all safety-critical systems. But there are lots of larger, more complex safety-critical systems.

