
Minix 3.2 Released, Uses LLVM/Clang, SMP, ELF


  • #46
    Originally posted by uid313
    I wonder if this guy ever regrets not releasing MINIX as open source before Linux?

    MINIX existed before Linux but was closed source. Then came Linux which was open source and it became big and famous.
    If MINIX had been open source from the start, then Linux would have never been written and Andrew S. Tanenbaum could be the rockstar that Linus Torvalds is today.
    He does regret some things. But open-source/closed-source has nothing to do with it.
    http://linuxfr.org/nodes/88229/comments/1291183



    • #47
      Originally posted by ninez
      The example given by RealNC is contrary to what you say ~ because the data remains in memory, the file system restarts, the data can be saved, and nothing is lost. So they aren't on par, because if a driver crashes in Linux, I get a kernel panic and my data is lost.
      I was thinking about a bug that corrupts your data. It's maybe a corner case, but the point is there's always some risk. In Linux you can get a kernel oops, but whether that will save your data, I don't know.
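      The restart behaviour being argued about can be sketched in miniature: a supervisor process in the spirit of MINIX's reincarnation server, which respawns a user-space service when it dies instead of the whole system going down. (This is an illustrative toy, not MINIX code; the crashing command below is just a hypothetical stand-in for a faulty driver.)

```python
import subprocess

def supervise(cmd, max_restarts=3):
    """Respawn `cmd` each time it dies, up to `max_restarts` times,
    loosely mimicking MINIX's reincarnation server. Returns the
    number of restarts performed."""
    restarts = 0
    while restarts < max_restarts:
        proc = subprocess.Popen(cmd)
        proc.wait()                      # block until the service exits/crashes
        if proc.returncode == 0:
            break                        # clean shutdown: stop supervising
        restarts += 1
        print(f"service died (exit {proc.returncode}), restart #{restarts}")
    return restarts

# A deliberately crashing "driver" stand-in:
supervise(["python3", "-c", "import sys; sys.exit(1)"], max_restarts=2)
```

      The key point of the design is isolation: the supervisor holds no state of the failing service, so a crash costs a restart, not the machine.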

      Actually, if you read back, I fully acknowledged that microkernels have additional overhead. What I disagree with is that the overhead is 100X what you find in a monolithic kernel.
      In this case it's about process creation, which was 140X slower in the benchmark. Other things were twice as slow, etc., so I'm not saying it's always hundreds of times slower.
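      For what it's worth, the kind of process-creation microbenchmark being cited is easy to reproduce. A minimal sketch (Unix-only; figures vary wildly by kernel and hardware, so treat any numbers as illustrative, not as the benchmark discussed above):

```python
import os, time

def fork_latency_us(iters=200):
    """Average fork + _exit + waitpid round-trip in microseconds.
    Process creation is exactly the operation where microkernel vs.
    monolithic comparisons tend to diverge the most."""
    start = time.perf_counter()
    for _ in range(iters):
        pid = os.fork()
        if pid == 0:
            os._exit(0)          # child quits immediately
        os.waitpid(pid, 0)       # parent reaps the child
    return (time.perf_counter() - start) / iters * 1e6

print(f"avg fork/exit/wait: {fork_latency_us():.1f} us")
```

      Running the same loop on both kernels is how you'd check whether the 140X figure holds on your own hardware.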

      I also have an idea for you: why don't you go and look for benchmarks yourself? Why don't you go to the websites of various companies who are using QNX Neutrino ~ and see if any of their use cases involve high performance? Or maybe you could actually go and read a whitepaper or two, and count how many times you see the words 'high-performance' associated with QNX...
      I'm just interested in HPC, enterprise, server and desktop usage, and it's hard to find any QNX benchmarks there. In other areas, like RT systems, it may well be good.

      Wrong, data safety is supposed to be BETTER with microkernels ~ that is what you are missing. And in some cases the performance loss is marginal at best (at least with QNX this is the case).
      I know it's better overall when it comes to data safety, but in the corner case I mentioned they're on par.



      • #48
        Let's keep in mind though that Linux is generally rock-stable. I may get a crash every couple of months, but consider that I'm running a bleeding edge installation (Gentoo, using latest testing (~arch) packages.) But even 5 crashes in a year (and that number is actually higher than what I really get) is acceptable for desktop use. And I don't think that the enterprise is running bleeding edge distros to begin with. On servers I administer, I run Debian stable. I can't remember when one of them last crashed. Actually I think they never crashed. Not a single time.

        But... If I had a machine where crashes have bigger risks than just losing the download progress of your porn, like, I don't know, running a nuclear reactor or whatever, then I'd prefer a microkernel. But for desktops or even workstations? Nah. Linux is stable enough.

        It's nice that Minix is there as an option, but it's doomed to obscurity on desktops. No one really needs it there. Even if it magically acquired all the features Linux or Windows has overnight, I still wouldn't use it; I already get annoyed enough when I lose 2 FPS in Skyrim.



        • #49
          Originally posted by RealNC
          Let's keep in mind though that Linux is generally rock-stable. I may get a crash every couple of months, but consider that I'm running a bleeding edge installation (Gentoo, using latest testing (~arch) packages.) But even 5 crashes in a year (and that number is actually higher than what I really get) is acceptable for desktop use. And I don't think that the enterprise is running bleeding edge distros to begin with. On servers I administer, I run Debian stable. I can't remember when one of them last crashed. Actually I think they never crashed. Not a single time.

          But... If I had a machine where crashes have bigger risks than just losing the download progress of your porn, like, I don't know, running a nuclear reactor or whatever, then I'd prefer a microkernel. But for desktops or even workstations? Nah. Linux is stable enough.

          It's nice that Minix is there as an option, but it's doomed to obscurity on desktops. No one really needs it there. Even if it magically acquired all the features Linux or Windows has overnight, I still wouldn't use it; I already get annoyed enough when I lose 2 FPS in Skyrim.
          At work, we run Red Hat SANs. "Up for 5 years" would be more accurate than "5 crashes a year"... Truth be told, in a production environment, stability is hardly an issue anymore. Hell, even Windows is stable enough for mission-critical software. I'd say 99.9% of the time, if there is any downtime, it is due to either physical maintenance or software problems (software as in on top of the OS, not the OS itself).



          • #50
            Microkernels and overhead

            I'd say to at least look at the "microkernel overhead talk" at:

            http://fosdem.org/2012/schedule/trac...nel_os_devroom

