MINIX 3.4 RC6 Released


  • #11
    First of all, Minix 3 is nothing like the older Minix. It is written to be self-healing and robust at the cost of some performance. There is no war between Linux and Minix; they are two different approaches. Linux is a monolithic kernel and Minix is a microkernel written to be as robust as possible. I would love for the Minix kernel to be part of Debian, for example. If you know nothing about Minix and think it is just yet another Unix-like OS, then by all means educate yourself a bit. Minix 3 is great!

    http://www.dirtcellar.net



    • #12
      Originally posted by Holograph View Post
      I do not understand the point of the post bashing Tanenbaum. WTF?
      Between QNX, NT and Minix, Unix people don't like being reminded they stuck with the most obsolete design and won thanks to the x86's promise of backwards compatibility.



      • #13
        Did they get a working X Server again?



        • #14
          Tanenbaum has said that he made a few assumptions that seemed logical at the time, which is what led to MINIX not really developing at the pace of Linux.

          1: Thought that GNU Hurd and other OSes would end up being used more.

          2: Thought that x86 would end up being outdated just like the others before it.

          3: Thought that Minix would stay as a microkernel for education, so he never focused on getting it anywhere much further than that.



          • #15
            Originally posted by c117152 View Post

            Between QNX, NT and Minix, Unix people don't like being reminded they stuck with the most obsolete design and won thanks to the x86's promise of backwards compatibility.
            There are a lot of things that are obsolete, if not downright abject, about Unix, but the kernel is not one of them. In fact I would argue that the Unix-style kernel proved to be the most successful design ever. Microkernels' promise was to make OSes more modular and more easily portable. In practice they proved anything but. They are notoriously much more difficult, rather than easier, to develop, which is why Linus deliberately steered clear of them. Linux is essentially a Unix-style kernel (with some microkernel-ish traits, but not in the sense of what Tanenbaum advocates) and is the most open and most versatile OS in existence. It can run on anything from watches to multimedia workstations to NUMA systems with thousands of CPUs to huge clusters. It also supports more CPUs and architectures than any other OS (except maybe NetBSD, which is a strict Unix and as far removed from a microkernel as it gets), and it offers better performance than pretty much anything out there.

            It is no accident that real microkernel OSes remain marginal and confined to niche applications. QNX is good for embedded realtime apps, but not as a general-purpose OS. Minix is an educational OS designed to be easy to get your hands on, not to run real-world applications. Hurd is forever stuck in development hell and is unlikely to ever produce anything usable. Other "microkernel" OSes - OSF/1, NeXT, Mac OS X - are really one single module sitting on top of the Mach kernel, which they essentially use as a HAL. Windows NT started as a (kind of) microkernel, but that changed; today's NT kernels are large, complex beasts just like Linux or Unix kernels.

            Whatever Linus' motivation for not using a microkernel was, history is here to prove that he made the right call, Tanenbaum's grief notwithstanding.



            • #16
              Originally posted by jacob View Post

              There are a lot of things that are obsolete, if not downright abject, about Unix, but the kernel is not one of them. In fact I would argue that the Unix-style kernel proved to be the most successful design ever. Microkernels' promise was to make OSes more modular and more easily portable. In practice they proved anything but. They are notoriously much more difficult, rather than easier, to develop, which is why Linus deliberately steered clear of them. Linux is essentially a Unix-style kernel (with some microkernel-ish traits, but not in the sense of what Tanenbaum advocates) and is the most open and most versatile OS in existence. It can run on anything from watches to multimedia workstations to NUMA systems with thousands of CPUs to huge clusters. It also supports more CPUs and architectures than any other OS (except maybe NetBSD, which is a strict Unix and as far removed from a microkernel as it gets), and it offers better performance than pretty much anything out there.

              It is no accident that real microkernel OSes remain marginal and confined to niche applications. QNX is good for embedded realtime apps, but not as a general-purpose OS. Minix is an educational OS designed to be easy to get your hands on, not to run real-world applications. Hurd is forever stuck in development hell and is unlikely to ever produce anything usable. Other "microkernel" OSes - OSF/1, NeXT, Mac OS X - are really one single module sitting on top of the Mach kernel, which they essentially use as a HAL. Windows NT started as a (kind of) microkernel, but that changed; today's NT kernels are large, complex beasts just like Linux or Unix kernels.

              Whatever Linus' motivation for not using a microkernel was, history is here to prove that he made the right call, Tanenbaum's grief notwithstanding.
              Not even close. It is true that message passing was a problem that had to be solved to keep microkernels from being slow, but they weren't especially difficult to develop. What basically happened is that Minix simply couldn't have won due to its license, Hurd was never going to go anywhere in the first place (much like OS/2), and Linux just happened to be in the right place at the right time to win the open source Unix-like kernel spot, with BSD being its main competitor not because it had a monolithic kernel but because it had a built-up fanbase from the 1980s.

              Beyond that, the reality is that microkernel development outside of QNX and L4 has primarily been focused on developing toys for academia such as HelenOS. QNX never seriously went after the desktop or the server, so its usage is largely invisible to most people, and L4 never even tried. On the other hand, Redox OS and most other hobby OSes are microkernels. If monolithic kernels were easier, you'd think it'd be the opposite.

              Instead the answer is simple: manpower. Let's say, for the sake of argument, that the coefficient for the speed of development of a microkernel is half that of a monolithic kernel. There are roughly 1,500 active Linux kernel developers at any one time ( https://lwn.net/Articles/654633/ ), so to match their speed in this comparison you would need 750 developers working on a microkernel. Instead there are 29 for Redox ( https://github.com/redox-os/redox ), 52 for MINIX ( https://github.com/Stichting-MINIX-R...undation/minix ), and anything else has far fewer. That works out to roughly 3.87% and 6.93% of Linux's pace respectively, even if we assume it requires half the effort (the quick calculation below spells it out). Is it really any wonder, therefore, that they can't catch up to or surpass Linux? It's the same situation as with Calligra and LibreOffice: Calligra's architecture is just much better, full stop, but LibreOffice has vastly more manpower, to the point where it overcomes that fact. It's not Calligra's architecture's fault that it isn't succeeding; it's historical facts and popularity that define that.
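
              Taking those contributor counts at face value, a quick back-of-the-envelope check; the 0.5 effort coefficient is purely a hypothetical, as stated above:

              Code:
              #include <stdio.h>

              int main(void)
              {
                  /* Hypothetical coefficient from the post: a microkernel needs
                     half the development effort of a monolithic kernel. */
                  const double effort_ratio = 0.5;
                  const double linux_devs   = 1500.0;                     /* active Linux kernel developers (LWN) */
                  const double devs_needed  = linux_devs * effort_ratio;  /* 750 to match Linux's pace */

                  const double redox_devs = 29.0;  /* Redox contributors */
                  const double minix_devs = 52.0;  /* MINIX contributors */

                  printf("Redox: %.2f%% of Linux's pace\n", 100.0 * redox_devs / devs_needed);
                  printf("MINIX: %.2f%% of Linux's pace\n", 100.0 * minix_devs / devs_needed);
                  return 0;  /* prints 3.87% and 6.93% */
              }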

              tl;dr: It's not the architecture of microkernels that prevented them from getting anywhere; it's that anything that isn't Linux, BSD, or Solaris in open source space has either been an academic ivory tower project, a hobby OS, or a GNU Turd.



              • #17
                Originally posted by jacob View Post
                QNX is good for embedded realtime apps, but not as a general-purpose OS.
                QNX almost had its day. In a short period of time, they released QNX with Photon and a POSIX layer. You could port many Linux/Unix apps to it (I did several myself) relatively painlessly, and performance was great on mediocre hardware. They even had a great app development tool called Photon Application Builder, which was self-hosted inside QNX and really made it easy to create native apps right away.

                I repurposed an old system (old in 2001) for my brother to use, and QNX really made it shine. But it wasn't Free software, the desktop capability was just a small flash in the pan, and now it's just a fond memory of an interesting system.



                • #18
                  It's all about the hardware. What we call general purpose hardware has pointer registers, branch prediction and special instructions that are designed around shitty compilers, C memory structures and monolithic kernel message passing. If you count how many instructions it takes to do certain operations (about 12 instructions to open a thread, about the same to pass a message...) and try designing a new language and code around this, you'll end up with the same designs of the past, since the hardware is forcing them down your throat.

                  When you look at modern processors like baseband cellular and the like (not even purpose/application-driven ones), you'll find they're always running microkernels. That's because once you strip away all the compatibility and prediction crap (which the compiler should be doing, but which licensing issues force into the hardware), you end up with microkernels for the sake of both performance and stability.
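
                  For anyone unfamiliar with what "passing a message" actually involves, here is a minimal single-process sketch in the spirit of MINIX-style fixed-size messages and a sendrec()-style round trip. The names and the in-process dispatch are illustrative only; the real thing is a kernel trap plus message copies between separate address spaces:

                  Code:
                  #include <stdio.h>
                  #include <string.h>

                  /* Fixed-size message, in the spirit of microkernel IPC (field names are illustrative). */
                  struct message {
                      int  m_source;       /* sending endpoint */
                      int  m_type;         /* request or reply code */
                      char m_payload[56];  /* small inline payload; bulk data would be mapped or granted */
                  };

                  enum { DRIVER_EP = 1, USER_EP = 2, REQ_ECHO = 100, REPLY_OK = 200 };

                  /* A "driver server": handles one request message and builds a reply.
                     In a real microkernel this runs in its own address space, blocked in receive(). */
                  static void driver_handle(const struct message *req, struct message *reply)
                  {
                      reply->m_source = DRIVER_EP;
                      reply->m_type   = REPLY_OK;
                      snprintf(reply->m_payload, sizeof reply->m_payload, "echo: %s", req->m_payload);
                  }

                  /* Stand-in for a sendrec()-style kernel call: send a request to an endpoint and
                     block until the reply arrives. Here it is an ordinary function call; the real
                     thing is a trap into the kernel plus two small message copies. */
                  static void sendrec(int endpoint, struct message *msg)
                  {
                      struct message reply;
                      if (endpoint == DRIVER_EP) {
                          driver_handle(msg, &reply);
                          *msg = reply;  /* the reply comes back in the same message buffer */
                      }
                  }

                  int main(void)
                  {
                      struct message m = { .m_source = USER_EP, .m_type = REQ_ECHO };
                      strcpy(m.m_payload, "hello driver");

                      sendrec(DRIVER_EP, &m);  /* one round trip: user -> driver -> user */
                      printf("reply type %d: %s\n", m.m_type, m.m_payload);
                      return 0;
                  }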

