Introducing The Library Operating System For Linux


• #11
I guess POSIX-compatibility isn't that trendy these days


• #12
Originally posted by duby229 View Post
I really like the idea of a monolithic kernel focused only on hardware support. Which brings us to the need for something like a "software kernel" or whatever you might want to call it. That's the part I'm not so sure of.
I liked your first troll bait, but when you post the same thing twice on the same comment page due to a low flame response, it makes you look pathetic.


• #13
Originally posted by nanonyme View Post
I guess POSIX-compatibility isn't that trendy these days
Would we really give that up?


• #14
Originally posted by gens View Post
there are plenty of documents on microkernels that show their good and bad sides
one that i remember is that it turns out to be slower in general (not by much, but still)

the most popular debate is the great Tanenbaum-Torvalds flame war of the nineties

just to say, multi-core in C is good
you are probably thinking of all those applications that were programmed to use multiple cores poorly
but that's another topic
That's the (moot) point. The debate came down to the performance loss and the inconvenience of using parallel and distributed algorithms (mostly in the scheduler). But if hardware keeps going multi-core / multi-processor, those algorithms will end up in the kernel anyhow.

But like I hinted earlier, hybrid architectures like the Parallella board and language-level portable concurrency are far more likely to happen. It won't be nearly as good as having the entire OS designed for, and taking advantage of, the parallelism, but it will be the path of least resistance to push the concurrency functionality away from the kernel and into the runtime.
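Not from the thread, but to make "concurrency in the runtime" concrete: a loose sketch in C (all names invented, build with -pthread) of a user-level task pool. The runtime multiplexes many small tasks onto a few kernel threads, so the kernel only ever schedules the pool itself.

Code:
/* Loose sketch of "concurrency in the runtime": a user-level pool
 * multiplexes many small tasks onto a few kernel threads, so the
 * kernel only ever schedules the WORKERS pool threads. */
#include <pthread.h>
#include <stdio.h>

#define TASKS   32
#define WORKERS  4

static void do_task(int id) { printf("task %d\n", id); }  /* stand-in work */

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int next_task = 0;              /* shared work-queue cursor */

static void *worker(void *arg)
{
    (void)arg;
    for (;;) {
        pthread_mutex_lock(&lock);
        int id = next_task < TASKS ? next_task++ : -1;
        pthread_mutex_unlock(&lock);
        if (id < 0)
            return NULL;               /* queue drained */
        do_task(id);                   /* dispatched by the runtime,
                                          not by the kernel */
    }
}

int main(void)
{
    pthread_t w[WORKERS];
    for (int i = 0; i < WORKERS; i++)
        pthread_create(&w[i], NULL, worker, NULL);
    for (int i = 0; i < WORKERS; i++)
        pthread_join(w[i], NULL);
    return 0;
}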


• #15
Originally posted by wizard69 View Post
Would we really give that up?
Well, it sounded to me like LibOS is all about replacing POSIX symbols with LibOS equivalents, so programs written against it will only be portable to LibOS. First comes socket handling, then other things.
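I haven't checked how LibOS actually wires this up, but mechanically, taking over a POSIX symbol can be as simple as interposition. This shim only logs and delegates to the real libc socket(); a library OS would instead serve the call from its own user-space network stack.

Code:
/* Illustrative shim, not LibOS code: capture the POSIX socket()
 * symbol and delegate to the real libc implementation.
 *
 * Build: cc -shared -fPIC -o libshim.so shim.c -ldl
 * Run:   LD_PRELOAD=./libshim.so some_program
 */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <stdio.h>

int socket(int domain, int type, int protocol)
{
    int (*real_socket)(int, int, int) =
        (int (*)(int, int, int))dlsym(RTLD_NEXT, "socket");
    fprintf(stderr, "socket() intercepted\n");   /* proof of capture */
    return real_socket(domain, type, protocol);
}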


• #16
Originally posted by duby229 View Post
I really like the idea of a monolithic kernel focused only on hardware support. Which brings us to the need for something like a "software kernel" or whatever you might want to call it. That's the part I'm not so sure of.
no, which brings us to a... microkernel
you do realize that the distinction between micro- and macro- (or monolithic) kernels is in terms of the privileged code footprint, the abstractions either one exposes, and the facilities either one implements, don't you?
minimal code for process scheduling, with low-level IPC and I/O ports available to userland -> uK; I/O and access control at the file level, with the kernel implementing everything below the file abstraction -> macrokernel
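To illustrate the "uK" side of that contrast, an interface-only sketch: sys_ipc_send/sys_ipc_recv/sys_yield are invented names, not real syscalls, and there is deliberately nothing behind them.

Code:
/* Interface-only sketch of a microkernel's privileged surface:
 * scheduling plus raw message passing, nothing else. Invented names. */
#include <stddef.h>
#include <stdint.h>

typedef uint32_t task_id;

int  sys_ipc_send(task_id to, const void *msg, size_t len);
int  sys_ipc_recv(task_id *from, void *buf, size_t cap);
void sys_yield(void);

/* Everything a macrokernel hides behind the file abstraction becomes
 * a userland server speaking messages, e.g. a toy disk driver: */
void disk_server(void)
{
    char req[256];
    task_id client;
    for (;;) {
        if (sys_ipc_recv(&client, req, sizeof req) < 0) {
            sys_yield();               /* nothing to do, let others run */
            continue;
        }
        /* ...drive the hardware via I/O ports mapped to this task... */
        sys_ipc_send(client, "done", 5);
    }
}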
Originally posted by c117152 View Post
That's the (moot) point. The debate came down to the performance loss and the inconvenience of using parallel and distributed algorithms (mostly in the scheduler). But if hardware keeps going multi-core / multi-processor, those algorithms will end up in the kernel anyhow.

But like I hinted earlier, hybrid architectures like the Parallella board and language-level portable concurrency are far more likely to happen. It won't be nearly as good as having the entire OS designed for, and taking advantage of, the parallelism, but it will be the path of least resistance to push the concurrency functionality away from the kernel and into the runtime.
the point of microkernels is reducing the size of the runtime privileged code - performance or scalability increases are not inherent in a microkernel architecture
unless all I/O buses and peripherals accept parallel, stateless command issuing and device context opening, concurrent operation will have to be serialized somewhere anyway - though admittedly, since microkernels are usually "leaner" designs (and, more importantly, often designed from a clean slate rather than having to reuse legacy codebases), it's easier for them to implement that serialization as low in the stack as possible, and to operate in a lockless manner themselves

but this is something you could in theory design in a macrokernel too, provided you design your I/O stack to be reentrant and lock-free from the beginning - retrofitting that as an afterthought is a major pita
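For what "reentrant and lock-free from the beginning" can look like, a minimal sketch (not from the thread) of the usual building block: a single-producer/single-consumer lock-free ring for I/O command submission, using C11 atomics. Compare the shape of io_uring's submission queue.

Code:
/* Minimal sketch: an SPSC lock-free ring, the usual building block of
 * a reentrant I/O submission path. C11 atomics; ints stand in for
 * real I/O commands. */
#include <stdatomic.h>
#include <stdio.h>

#define RING_SIZE 16                    /* power of two so masking works */

struct ring {
    _Atomic unsigned head;              /* consumer position */
    _Atomic unsigned tail;              /* producer position */
    int slots[RING_SIZE];               /* stand-in for I/O commands */
};

/* Producer side: fails instead of blocking when the ring is full. */
static int ring_push(struct ring *r, int cmd)
{
    unsigned tail = atomic_load_explicit(&r->tail, memory_order_relaxed);
    unsigned head = atomic_load_explicit(&r->head, memory_order_acquire);
    if (tail - head == RING_SIZE)
        return -1;                      /* full: caller retries, no lock */
    r->slots[tail & (RING_SIZE - 1)] = cmd;
    atomic_store_explicit(&r->tail, tail + 1, memory_order_release);
    return 0;
}

/* Consumer side: the device-facing end drains commands in order. */
static int ring_pop(struct ring *r, int *cmd)
{
    unsigned head = atomic_load_explicit(&r->head, memory_order_relaxed);
    unsigned tail = atomic_load_explicit(&r->tail, memory_order_acquire);
    if (head == tail)
        return -1;                      /* empty */
    *cmd = r->slots[head & (RING_SIZE - 1)];
    atomic_store_explicit(&r->head, head + 1, memory_order_release);
    return 0;
}

int main(void)
{
    struct ring r = {0};
    int cmd;
    ring_push(&r, 42);
    if (ring_pop(&r, &cmd) == 0)
        printf("dequeued command %d\n", cmd);
    return 0;
}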


• #17
Originally posted by duby229 View Post
Well not really. It can still be a monolithic kernel with a modular architecture.
Linux is a monolithic kernel with a modular architecture.

Read Linus's post on the issue. A microkernel replaces fast, simple function calls with communication between parallel processes. The arguments for and against this have been done to death.

Originally posted by Linus Torvalds
Any time you have "one overriding idea", and push your idea as a superior ideology, you're going to be wrong. Microkernels had one such ideology, there have been others. It's all BS. The fact is, reality is complicated, and not amenable to the "one large idea" model of problem solving. The only way that problems get solved in real life is with a lot of hard work on getting the details right. Not by some over-arching ideology that somehow magically makes things work.
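The replaced-function-call point is easy to demonstrate. A rough, self-contained sketch (not from the thread; pipes stand in for microkernel IPC, and the absolute numbers are machine-dependent and only illustrative):

Code:
/* Rough sketch: the same request served by a direct function call
 * vs. a message round trip to another process, with pipes standing
 * in for microkernel IPC. POSIX C. */
#include <stdio.h>
#include <sys/wait.h>
#include <time.h>
#include <unistd.h>

static int handle_request(int x) { return x + 1; }   /* the "subsystem" */

static long ns_elapsed(struct timespec a, struct timespec b)
{
    return (b.tv_sec - a.tv_sec) * 1000000000L + (b.tv_nsec - a.tv_nsec);
}

int main(void)
{
    int to_srv[2], to_cli[2];
    pipe(to_srv);
    pipe(to_cli);

    if (fork() == 0) {                    /* the "server" process */
        int x;
        close(to_srv[1]);
        close(to_cli[0]);
        while (read(to_srv[0], &x, sizeof x) == sizeof x) {
            x = handle_request(x);
            write(to_cli[1], &x, sizeof x);
        }
        _exit(0);
    }

    enum { N = 100000 };
    struct timespec t0, t1;
    int r = 0;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < N; i++)           /* monolithic: plain call */
        r = handle_request(r);
    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("function call : %ld ns/op\n", ns_elapsed(t0, t1) / N);

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < N; i++) {         /* microkernel: IPC round trip */
        write(to_srv[1], &r, sizeof r);
        read(to_cli[0], &r, sizeof r);
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("ipc round trip: %ld ns/op\n", ns_elapsed(t0, t1) / N);

    close(to_srv[1]);                     /* EOF lets the server exit */
    wait(NULL);
    return 0;
}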


• #18
Originally posted by chrisb View Post
Linux is a monolithic kernel with a modular architecture.

Read Linus's post on the issue. A microkernel replaces fast, simple function calls with communication between parallel processes. The arguments for and against this have been done to death.
Thanks for the link. This is the first time I've read that specific discussion. And he is right, of course: reality is a lot messier than ideology. I do still believe there is a lot of stuff currently in the kernel that shouldn't be there, the network stack being one of the biggest things. But overall I'm really happy with the state of the kernel. It's good stuff.


• #19
Originally posted by silix View Post
the point of microkernels is reducing the size of the runtime privileged code - performance or scalability increases are not inherent in a microkernel architecture
unless all I/O buses and peripherals accept parallel, stateless command issuing and device context opening, concurrent operation will have to be serialized somewhere anyway - though admittedly, since microkernels are usually "leaner" designs (and, more importantly, often designed from a clean slate rather than having to reuse legacy codebases), it's easier for them to implement that serialization as low in the stack as possible, and to operate in a lockless manner themselves

but this is something you could in theory design in a macrokernel too, provided you design your I/O stack to be reentrant and lock-free from the beginning - retrofitting that as an afterthought is a major pita
You're quoting ancient (mid-90s) textbook academic definitions that were stale even before they were put to paper. Just pick up any paper or even patent from QNX from the last decade and you'll find those "inherent" problems were resolved in both software and hardware, in products that are already on the market.

The funny thing is that current research is focused on retrofitting - as in, an afterthought - the current monolithic kernels (the Linux kernel, mostly) in exactly the manner you justifiably spoke against, simply because there's nothing else worthwhile left that hasn't been done already.


• #20
I've integrated network stacks as user-space services in safety-critical real-time systems, and there was a huge difference in performance between having the network stack in the kernel and having it in user space (mainly because more context switches and copies are involved). In safety-critical systems you want services as isolated as possible, but that's not the case for Linux.
For Linux, I see a user-space network stack as a nice playground for testing and debugging new hardware-independent functionality. But for performance reasons, it should live inside the kernel.
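A crude way to make the extra-context-switches claim visible (an illustrative sketch, not from the original post): a socketpair echo child stands in for the user-space network stack service, and getrusage() counts the voluntary context switches the extra hop causes.

Code:
/* Illustrative sketch: count the voluntary context switches added by
 * routing "packets" through a separate user-space process. */
#include <stdio.h>
#include <string.h>
#include <sys/resource.h>
#include <sys/socket.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int sv[2];
    socketpair(AF_UNIX, SOCK_SEQPACKET, 0, sv);  /* keeps msg boundaries */

    if (fork() == 0) {                 /* the "network stack" service */
        char buf[1500];                /* one extra copy per packet */
        ssize_t n;
        close(sv[1]);
        while ((n = read(sv[0], buf, sizeof buf)) > 0)
            write(sv[0], buf, n);      /* echo back to the app */
        _exit(0);
    }
    close(sv[0]);

    struct rusage a, b;
    getrusage(RUSAGE_SELF, &a);

    char pkt[1500];
    memset(pkt, 0, sizeof pkt);
    for (int i = 0; i < 10000; i++) {  /* each round trip crosses the
                                          process boundary twice */
        write(sv[1], pkt, sizeof pkt);
        read(sv[1], pkt, sizeof pkt);
    }

    getrusage(RUSAGE_SELF, &b);
    printf("voluntary context switches: %ld\n", b.ru_nvcsw - a.ru_nvcsw);

    close(sv[1]);                      /* EOF lets the service exit */
    wait(NULL);
    return 0;
}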
