"The World's Most Highly-Assured OS" Kernel Open-Sourced

  • #16
    Those L4 kernels are pretty impressive. They're what we need for IoT, along with a userspace rewritten in a "safe" language (for some value of safe). The BIG job will be rewriting drivers (at least certain drivers for devices like network adaptors, including Bluetooth) so as to make the system as reliable as possible.
    The Genode framework is almost perfect for this sort of project, but its security isn't there yet.

    • #17
      Originally posted by liam View Post
      Those L4 kernels are pretty impressive. They're what we need for IoT, along with a userspace rewritten in a "safe" language (for some value of safe). The BIG job will be rewriting drivers (at least certain drivers for devices like network adaptors, including Bluetooth) so as to make the system as reliable as possible.
      The Genode framework is almost perfect for this sort of project, but its security isn't there yet.
      Sorry, but no. I think for IoT we need security-enhanced Linux plus lightweight x86 hardware. Imagine a <1 W TDP part with the power of the fastest Celerons - possible in a few years. I don't get this obsession with microkernels. GNU Hurd basically showed that microkernels fail hard and aren't usable in the real world. Besides, there's no driver code for these. Even BSD is better.

      • #18
        Originally posted by caligula View Post
        Sorry, but no. I think for IoT we need security-enhanced Linux plus lightweight x86 hardware. Imagine a <1 W TDP part with the power of the fastest Celerons - possible in a few years. I don't get this obsession with microkernels. GNU Hurd basically showed that microkernels fail hard and aren't usable in the real world. Besides, there's no driver code for these. Even BSD is better.
        You don't seem to know what you are talking about.

        • #19
          Originally posted by caligula View Post
          I don't get this obsession with microkernels. GNU Hurd basically showed that microkernels fail hard and aren't usable in the real world.
          It's that "theory vs practice" thing again. A proper micro-kernel is a thing of beauty from an architecture perspective - but keeping that purity of architecture means you can't make the compromises and shortcuts that are needed to achieve adequate performance. Monoliths have their own problems, of course, but by not enforcing strict interfaces between modules, they can avoid an awful lot of overhead...

          • #20
            Originally posted by Delgarde View Post
            It's that "theory vs practice" thing again.
            Not really - or rather, the kernel world isn't that black-or-white anymore.
            Plus, the OP referred to HURD as proof of failure for microkernels in general - but if anything, HURD only proved that Stallman and his minions weren't able to successfully develop a microkernel-based OS and make it mainstream (with the model they're still adhering to, tied to early-90s microkernel technology for want of knowing any better).
            Other microkernels, though, have built on the mistakes and inefficiencies of legacy Mach, distanced themselves from it considerably, and thrived, albeit in specific niches (real-time critical systems typically run on microkernels rather than Linux). I wouldn't exactly call that a failure...
            A proper micro-kernel is a thing of beauty from an architecture perspective -
            Maybe, if you look at the kernel alone and equate beauty with minimalism. From an engineering perspective things get rather less rosy when you consider that the code that would otherwise belong in the kernel ("drivers", resource-arbitration facilities and so on (*)) is still there, together with more non-trivial kernel- and user-side code needed to let each such component run in its own process, be restartable, etc.
            So the overall view of the OS core (what must be present at the very least to let a process run - which now includes much more than just the kernel) may even be less clean than before.

            (*) Resource sharing and arbitration (CPU cycles, memory, files or storage devices, I/O ports) need to be done anyway - but centralizing "drivers" and the rest of it in the kernel allows an efficient implementation and makes the kernel self-sufficient (load the kernel and you're ready to run non-core processes - not so with microkernels).
            That's why a pragmatic, rather than elegant, design puts it all beneath the kernel-to-userspace barrier.
            but keeping that purity of architecture means you can't make the compromises and shortcuts that are needed to achieve adequate performance.
            OTOH, it should be noted that there have been microkernels that achieved better performance than certain monolithic kernels. Fanboyism apart, it's really all a matter of algorithms, and of how much processing a system call does compared to the overhead of the client-server round trip.
            Consider that, over time, microkernels have tackled and sometimes very cleverly solved the IPC overhead problem - something Linux has yet to discover as a problem, much less solve (kdbus still copies across address spaces and marshals/demarshals, whereas other systems directly share argument stacks) - see the sketch at the end of this post.
            Monoliths have their own problems, of course, but by not enforcing strict interfaces between modules, they can avoid an awful lot of overhead...
            And, OTOH, don't assume that just because Linux isn't, no other monolithic/hybrid kernel is or can be internally compartmentalized behind well-defined APIs...
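            A sketch of such a round trip, in the style of seL4's C API (ENDPOINT_CAP, the label value and the one-word protocol are assumptions for illustration only - the point is that short messages travel in message registers, so a well-built fastpath never touches a shared heap buffer):
            ```cpp
            #include <sel4/sel4.h>

            /* Assumed for illustration: an endpoint capability shared by both sides. */
            #define ENDPOINT_CAP ((seL4_CPtr)0x10)

            /* Client: place one word in a message register and block until the
             * server replies. seL4_Call is a single trap doing send + receive. */
            seL4_Word client_add_one(seL4_Word x)
            {
                /* label 0, no capabilities transferred, 1 message register used */
                seL4_MessageInfo_t info = seL4_MessageInfo_new(0, 0, 0, 1);
                seL4_SetMR(0, x);                     /* argument goes in MR0 */
                info = seL4_Call(ENDPOINT_CAP, info); /* the round trip happens here */
                return seL4_GetMR(0);                 /* reply also arrives in MR0 */
            }

            /* Server: reply to the previous caller and wait for the next request
             * in one trap (seL4_ReplyRecv), halving the number of kernel entries. */
            void server_loop(void)
            {
                seL4_Word badge;
                seL4_MessageInfo_t info = seL4_Recv(ENDPOINT_CAP, &badge);
                for (;;) {
                    seL4_SetMR(0, seL4_GetMR(0) + 1); /* trivial "add one" service */
                    info = seL4_ReplyRecv(ENDPOINT_CAP, info, &badge);
                }
            }
            ```
            Compare that with a kdbus-style message, which gets copied out of one address space and into another (and marshaled/demarshaled) before the receiver even looks at it...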
            Last edited by silix; 07-30-2014, 10:42 AM.

            • #21
              Originally posted by Delgarde View Post
              It's that "theory vs practice" thing again. A proper micro-kernel is a thing of beauty from an architecture perspective - but keeping that purity of architecture means you can't make the compromises and shortcuts that are needed to achieve adequate performance. Monoliths have their own problems, of course, but by not enforcing strict interfaces between modules, they can avoid an awful lot of overhead...
              Read up on L4-based kernels. They are completely practical - that family might very well be the most-used kernel architecture in the world (a 2012 General Dynamics blog put the official deployment numbers at 1.5 billion).
              The IPC latency of L4 kernels can be remarkably LOW.

              • #22
                Originally posted by liam View Post
                Those L4 kernels are pretty impressive. They're what we need for IoT, along with a userspace rewritten in a "safe" language (for some value of safe). The BIG job will be rewriting drivers (at least certain drivers for devices like network adaptors, including Bluetooth) so as to make the system as reliable as possible.
                The Genode framework is almost perfect for this sort of project, but its security isn't there yet.
                If 20+ years of microkernel development taught us anything, it's that the concept simply doesn't work. Period.
                Whether or not microkernels have a theoretical advantage is completely irrelevant in the real world *, **.
                Monolithic/hybrid kernels such as Linux, Windows and OS X won.

                Oh, and you do know that IoT is yet-another-Internet-related buzzword that will be replaced by another one in a year or two, right?

                (a 2012 General Dynamics blog put the official deployment numbers at 1.5 billion).
                Given that Linux itself runs on ~1.5B devices (Android alone accounts for 1B+ of them), 1.5B toasters is not really impressive.

                - Gilboa

                * If the root file-system driver or a PCI controller driver goes up in flames due to a software bug (or hardware issue) and leaves the OS in an inconsistent state, your OS is toast.
                ** Far worse, "safe" 4th-generation languages tend to use exceptions as an error-messaging tool, making the code far less resilient to minor errors.
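                A small contrast sketch of what ** means in practice (illustrative C++ of my own, not any particular 4GL): a throwing parser aborts the whole batch on the first malformed record, while the error-code version skips it and keeps going:
                ```cpp
                #include <cstdio>
                #include <stdexcept>
                #include <vector>

                struct Record { int value; };

                // Exception style: a malformed record throws, unwinding the whole batch.
                Record parse_throwing(int raw) {
                    if (raw < 0) throw std::runtime_error("malformed record");
                    return Record{raw};
                }

                // Error-code style: the caller can skip the bad record and keep going.
                bool parse_checked(int raw, Record* out) {
                    if (raw < 0) return false;
                    *out = Record{raw};
                    return true;
                }

                int main() {
                    const std::vector<int> batch = {1, 2, -1, 4};

                    try {
                        for (int raw : batch) parse_throwing(raw);  // stops at -1; 4 is lost
                    } catch (const std::exception& e) {
                        std::printf("batch aborted: %s\n", e.what());
                    }

                    int ok = 0;
                    Record r;
                    for (int raw : batch)
                        if (parse_checked(raw, &r)) ++ok;           // -1 skipped, 4 still parsed
                    std::printf("recovered %d of %zu records\n", ok, batch.size());
                }
                ```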
                Last edited by gilboa; 07-31-2014, 04:08 AM.
                DEV: Intel S2600C0, 2xE52658V2, 32GB, 4x2TB + 2x3TB, GTX780, F21/x86_64, Dell U2711.
                SRV: Intel S5520SC, 2xX5680, 36GB, 4x2TB, GTX550, F21/x86_64, Dell U2412..
                BACK: Tyan Tempest i5400XT, 2xE5335, 8GB, 3x1.5TB, 9800GTX, F21/x86-64.
                LAP: ASUS N56VJ, i7-3630QM, 16GB, 1TB, 635M, F21/x86_64.

                • #23
                  Originally posted by gilboa View Post
                  If 20+ years of microkernel development taught us anything, it's that the concept simply doesn't work. Period.
                  Whether or not microkernels have a theoretical advantage is completely irrelevant in the real world *, **.
                  Monolithic/hybrid kernels such as Linux, Windows and OS X won.

                  Oh, and you do know that IoT is yet-another-Internet-related buzzword that will be replaced by another one in a year or two, right?

                  Given that Linux itself runs on ~1.5B devices (Android alone accounts for 1B+ of them), 1.5B toasters is not really impressive.

                  - Gilboa

                  * If the root file-system driver or a PCI controller driver goes up in flames due to a software bug (or hardware issue) and leaves the OS in an inconsistent state, your OS is toast.
                  ** Far worse, "safe" 4th-generation languages tend to use exceptions as an error-messaging tool, making the code far less resilient to minor errors.
                  Actually, I'm pretty sure that QNX and L4 show that the microkernel concept does work. Period.

                  The reason you basically can't use one on the desktop right now is the same reason the BSDs still make you think about your hardware before you buy it, whereas you can throw Linux on pretty much any commodity desktop hardware and expect it to work. Linux, for better or worse, is popular in such a way that most people don't even know there are alternatives, and as a result most of the time, money, and energy spent on kernel work goes to Linux, leaving everything else developer-starved by comparison.

                  • #24
                    Originally posted by liam View Post
                    Those L4 kernels are pretty impressive. They're what we need for IoT, along with a userspace rewritten in a "safe" language (for some value of safe). The BIG job will be rewriting drivers (at least certain drivers for devices like network adaptors, including Bluetooth) so as to make the system as reliable as possible.
                    The Genode framework is almost perfect for this sort of project, but its security isn't there yet.
                    Check out MOSA: https://github.com/mosa/MOSA-Project/wiki

                    • #25
                      Originally posted by Luke_Wolf View Post
                      Actually, I'm pretty sure that QNX and L4 show that the microkernel concept does work. Period.

                      The reason you basically can't use one on the desktop right now is the same reason the BSDs still make you think about your hardware before you buy it, whereas you can throw Linux on pretty much any commodity desktop hardware and expect it to work. Linux, for better or worse, is popular in such a way that most people don't even know there are alternatives, and as a result most of the time, money, and energy spent on kernel work goes to Linux, leaving everything else developer-starved by comparison.
                      *Shrug*
                      I work in a market that requires five-nines reliability. All the systems that surround me run Linux (or big-iron Unix). I'd imagine this is sufficient proof that monolithic kernels are just as flexible, reliable and secure as a theoretical microkernel-based server OS, without the added complexity and performance penalties.
                      Furthermore, the fact that Linux started as a hobby project and ended up the top general-purpose computing OS, while fighting an uphill battle against both Windows and Unix, is sufficient proof that if QNX or L4 had real *measurable* advantages over existing monolithic or hybrid kernels, both would have managed to break out of the embedded world and replace the existing players.
                      Last edited by gilboa; 07-31-2014, 11:12 AM.
                      DEV: Intel S2600C0, 2xE52658V2, 32GB, 4x2TB + 2x3TB, GTX780, F21/x86_64, Dell U2711.
                      SRV: Intel S5520SC, 2xX5680, 36GB, 4x2TB, GTX550, F21/x86_64, Dell U2412..
                      BACK: Tyan Tempest i5400XT, 2xE5335, 8GB, 3x1.5TB, 9800GTX, F21/x86-64.
                      LAP: ASUS N56VJ, i7-3630QM, 16GB, 1TB, 635M, F21/x86_64.

                      • #26
                        Originally posted by gilboa View Post
                        *Shrug*
                        I work in a market that requires five-nines reliability. All the systems that surround me run Linux (or big-iron Unix). I'd imagine this is sufficient proof that monolithic kernels are just as flexible, reliable and secure as a theoretical microkernel-based server OS, without the added complexity and performance penalties.
                        Furthermore, the fact that Linux started as a hobby project and ended up the top general-purpose computing OS, while fighting an uphill battle against both Windows and Unix, is sufficient proof that if QNX or L4 had real *measurable* advantages over existing monolithic or hybrid kernels, both would have managed to break out of the embedded world and replace the existing players.
                        Correlation is not causation. More monolithic kernel projects have failed to succeed in their target markets than microkernel ones. No one is pushing any microkernel in the desktop or server markets. Despite many pushing Linux as a desktop OS, it has yet to make real inroads on the desktop. Does that make Linux, or monolithic kernels, bad for the desktop? I don't think so. Correlation is not causation.

                        • #27
                          Originally posted by jayrulez View Post
                          Correlation is not causation. More monolithic kernel projects have failed to succeed in their target markets than microkernel ones. No one is pushing any microkernel in the desktop or server markets. Despite many pushing Linux as a desktop OS, it has yet to make real inroads on the desktop. Does that make Linux, or monolithic kernels, bad for the desktop? I don't think so. Correlation is not causation.
                          Basically this ^. Plus, all of the FOSS gen-2 microkernel OSes are in their infancy compared to the monolithic ones: L4Ka::Pistachio was released in 2001, Minix 3 in 2005, Genode in 2008, and HelenOS AFAICT started in 2009.

                          Additionally, it's actually really hard to replace a monolithic leader in the FOSS ecosystem. Unlike proprietary development, where you're facing a single company developing a solution, in FOSS you're up against a bunch of companies and an entrenched community if you want to take down the leader. A faster, more agile, better-designed project *may* win out in the really long run, but often it won't have the manpower to pull that off.

                          • #28
                            Originally posted by Luke_Wolf View Post
                            Basically this ^. Plus, all of the FOSS gen-2 microkernel OSes are in their infancy compared to the monolithic ones: L4Ka::Pistachio was released in 2001, Minix 3 in 2005, Genode in 2008, and HelenOS AFAICT started in 2009.

                            Additionally, it's actually really hard to replace a monolithic leader in the FOSS ecosystem. Unlike proprietary development, where you're facing a single company developing a solution, in FOSS you're up against a bunch of companies and an entrenched community if you want to take down the leader. A faster, more agile, better-designed project *may* win out in the really long run, but often it won't have the manpower to pull that off.
                            I'd like to point out an endeavor that Intel and the folks behind seL4 are working on, called Termite. It addresses my concern about driver bring-up by generating a generic driver, with a C interface, from the hardware description (written in Verilog and the like) (http://www.ertos.nicta.com.au/resear...ers/synthesis/).
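                            To picture what "a generic driver with a C interface" could mean - and this is a purely hypothetical sketch, not Termite's actual output format - the generated code might sit behind an interface shaped like this:
                            ```cpp
                            // Hypothetical sketch only - NOT Termite's real output. Just the shape a
                            // synthesized register-level NIC driver's C interface could take.
                            #include <cstdint>

                            extern "C" {

                            // Memory-mapped register block, as a device spec would describe it.
                            struct nic_regs {
                                volatile uint32_t ctrl;    // control/reset bits
                                volatile uint32_t status;  // link and interrupt status
                                volatile uint32_t tx_addr; // DMA address of the outgoing frame
                                volatile uint32_t tx_len;  // writing the length kicks off transmission
                            };

                            // OS-facing entry points the generated state machine would implement.
                            int  nic_init(struct nic_regs *regs);  // drive the device to its "ready" state
                            int  nic_send(struct nic_regs *regs, uint32_t dma_addr, uint32_t len);
                            void nic_irq(struct nic_regs *regs);   // advance the state machine on interrupt

                            } // extern "C"
                            ```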

                            • #29
                              Originally posted by Luke_Wolf View Post
                              Actually, I'm pretty sure that QNX and L4 show that the microkernel concept does work. Period.

                              The reason you basically can't use one on the desktop right now is the same reason the BSDs still make you think about your hardware before you buy it, whereas you can throw Linux on pretty much any commodity desktop hardware and expect it to work. Linux, for better or worse, is popular in such a way that most people don't even know there are alternatives, and as a result most of the time, money, and energy spent on kernel work goes to Linux, leaving everything else developer-starved by comparison.
                              You must not forget that Linux has kdbus, all sorts of virtualization tech, containers, namespaces, and lots of ways of restricting permissions and access in userspace. On top of that it can be small (an LTO-built kernel on a router comes in under 2 MB) or powerful (supporting practically all hardware, up to big enterprise mainframes and supercomputers). Most microkernels are jokes that you run on QEMU, not suitable for the real world. And when you need high throughput, they might fall over. Nobody can prove they work, say, on Facebook's servers or at Amazon.
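                              One concrete example of the userspace-restriction point (a minimal sketch; Linux-only, needs CAP_SYS_ADMIN): a single unshare() call gives a process its own UTS namespace, so even a hostname change stays invisible to the rest of the system:
                              ```cpp
                              // Minimal sketch: isolate the hostname via a UTS namespace (Linux-only).
                              #include <sched.h>   // unshare, CLONE_NEWUTS
                              #include <unistd.h>  // sethostname, gethostname
                              #include <cstdio>
                              #include <cstring>

                              int main() {
                                  // Detach from the parent's UTS namespace (needs CAP_SYS_ADMIN).
                                  if (unshare(CLONE_NEWUTS) != 0) {
                                      std::perror("unshare");
                                      return 1;
                                  }

                                  // Visible only inside the new namespace; the rest of the
                                  // system keeps the old hostname.
                                  const char newname[] = "sandboxed-host";
                                  if (sethostname(newname, std::strlen(newname)) != 0) {
                                      std::perror("sethostname");
                                      return 1;
                                  }

                                  char name[64] = {};
                                  gethostname(name, sizeof(name) - 1);
                                  std::printf("hostname in namespace: %s\n", name);
                                  return 0;
                              }
                              ```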

                              • #30
                                Originally posted by caligula View Post
                                You must not forget that Linux has kdbus, all sorts of virtualization tech, containers, namespaces, and lots of ways of restricting permissions and access in userspace. On top of that it can be small (an LTO-built kernel on a router comes in under 2 MB) or powerful (supporting practically all hardware, up to big enterprise mainframes and supercomputers). Most microkernels are jokes that you run on QEMU, not suitable for the real world. And when you need high throughput, they might fall over. Nobody can prove they work, say, on Facebook's servers or at Amazon.
                                There's that too; even Minix 3, the most advanced of the three primary FOSS microkernel projects (Minix, Genode, HelenOS), can't really be run on bare metal at this point. Personally I expect DragonFly BSD to finish turning into a microkernel before those three get themselves into shape to compete.
