Minix 3.2 Released, Uses LLVM/Clang, SMP, ELF

  • #31
    Originally posted by ninez View Post
    For some types of applications one actually might want to have a self-healing OS that never crashes, and performance may not be a huge deal. And also remember, depending on the design, the performance loss may actually be fairly marginal (like 5% or something).
    That's understandable. A system made for specific tasks can serve you very well. I was just thinking about desktop usage.

    Comment


    • #32
      Originally posted by kraftman View Post
      Nobody, because I'm not saying a microkernel will trash my data. It's obvious bugs happen everywhere. If there's a bug in the file system, a microkernel won't help you. Thus, I don't care whether I have to restart my box, which sometimes runs hundreds of times faster, or just restart my file system, when in both cases my data will be lost.
      Well, if the network driver crashes, then you lose unsaved data anyway, because the system will restart. Minix will not restart just because of the net driver crashing, thus saving the data and, maybe more importantly, preserving the system's uptime.

      Personally, I don't care. But there are valid claims being made, you can't argue with that. Also, some people really need a 99.99999% crash-free guarantee. Microkernels are for them. Me, I don't care. I prefer performance. The occasional crash (happens maybe 5 times a year, usually the GPU driver fucks up) is acceptable.

      Comment


      • #33
        Or to put it differently: would you rather have an ATM running Windows or Minix 3?

        Comment


        • #34
          Originally posted by kraftman View Post
          Ignoring his envy regarding Linux and the reality that has already proven the Linux model is better... I doubt it.
          Linux is great, you'll get no argument from me here. But I think you need to remember something: microkernels/self-healing OSes offer some features that the Linux kernel doesn't ~ with Linux, a buggy driver can take down your whole system, including causing data loss ~ the same isn't true of a microkernel... The truth is, for an ultra-reliable system, where it is 'mission critical' to have 99.999999% uptime ~ a well-designed microkernel / self-healing OS is probably a better model than Linux. If you care more about that extra 5-10% of performance, and can live with a driver being able to take down the whole system ~ then sure, Linux is the better choice.

          So my point here is that neither model is actually superior to the other. In reality, they both have strengths and weaknesses, and depending on the usage/application, one might have an advantage over the other.
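
          To make the driver-isolation point concrete, here is a minimal sketch in C of the "self-healing" idea, assuming a hypothetical driver binary at /usr/local/bin/fake-net-driver: a user-space supervisor restarts the driver process whenever it dies, instead of the whole system going down with it. This is only an illustration of the pattern, not Minix's actual reincarnation server (which restarts isolated server processes and re-wires their IPC endpoints).

          Code:
          /* Sketch: keep a (hypothetical) driver process alive by restarting it
           * whenever it exits or crashes. */
          #include <stdio.h>
          #include <unistd.h>
          #include <sys/types.h>
          #include <sys/wait.h>

          static void run_driver(void)
          {
              /* Stand-in for a driver/server; a real one would sit in an IPC
               * receive loop handling requests. */
              execl("/usr/local/bin/fake-net-driver", "fake-net-driver", (char *)NULL);
              _exit(1);                      /* exec failed */
          }

          int main(void)
          {
              for (;;) {
                  pid_t pid = fork();
                  if (pid < 0) {
                      perror("fork");
                      return 1;
                  }
                  if (pid == 0)
                      run_driver();          /* child: become the driver */

                  int status;
                  waitpid(pid, &status, 0);  /* parent: wait for the driver to die */
                  fprintf(stderr, "driver died (status %d), restarting\n", status);
                  sleep(1);                  /* back off, then restart */
              }
          }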


          Originally posted by kraftman View Post
          Nobody, because I'm not saying a microkernel will trash my data. It's obvious bugs happen everywhere. If there's a bug in the file system, a microkernel won't help you. Thus, I don't care whether I have to restart my box, which sometimes runs hundreds of times faster, or just restart my file system, when in both cases my data will be lost.
          Well, that is a bit of a silly and illogical argument. One could just as easily say a buggy driver in Linux will take down the whole system, and the monolithic (Linux) kernel won't help you either ~ but in this situation a microkernel *would* still keep working. And so what if there is a bug in the file-system? It still doesn't take down your whole system, and once the bug gets fixed it's no longer a problem... I don't see your issue here; the same could be said of Linux.

          You also state (actually assume) that your system is 'hundreds of times faster' than a microkernel ~ which isn't really true... that is why I pointed out the BlackBerry PlayBook, which I can easily compare to iOS and Android devices, and I can tell you right now ~ it wasn't 'hundreds of times slower' than iOS/Android ~ in fact, it didn't seem slower to me at all. It was very much comparable.

          Anyway, like I said, you could read up on this stuff, as it would probably make more sense to do that ~ rather than making nonsensical, uninformed statements about technology that you don't really know very much about.
          Last edited by ninez; 02 March 2012, 09:40 AM.

          Comment


          • #35
            Originally posted by ninez View Post
            Ideally, the file-system restarts after it crashes ~ without any significant impact on the rest of the system, ie: the system doesn't crash. The whole idea of a microkernel/self-healing OS is that all of its components are isolated from each other and use IPC to communicate. So if a driver / file-system / *insert component here* happens to crash - it won't take down the whole operating system, ie: it is self-healing.

            If you're interested in the subject, rather than asking in the Phoronix forums - just search around the web && watch a video or two on YouTube. There are lots of videos, whitepapers/research papers, wikis, etc.

            But here are a couple of quick links... Tanenbaum, discussing Minix 3:

            By Andrew Tanenbaum: MINIX started in 1987 and led to several offshoots, the best known being Linux. MINIX 3 is the third major version of MINIX and is now foc...


            and

            QNX is another Self-healing/Microkernel OS that is used in various industries.

            and..

            I am pretty sure that QNX is in much wider use than Minix (and probably ever will be), and has been for years (dating back to the late 80s). It is used for many industrial applications. Also (not that I am a huge fan of BlackBerry), as of 2012 all of BlackBerry's new smartphones will be using QNX, and currently the BlackBerry PlayBook does use QNX ---> actually, let me rephrase that: they are using a modified version of QNX called BBX (BlackBerry + QNX).

            cheerz
            You could have at least told them where to get the original, last full-featured, free-forever-for-end-users, non-commercial x86 QNX RTP 621 developer ISO releases (rather than the newer but less-featured updates for BlackBerry use, etc. - does that even have the Photon GUI as standard?) with real-time inter-process communication etc. It seems it's harder to find today, as the official mirror FTP sites are deleting their "non commercial" QNX directories: ftp://85.143.48.249/pub/os/qnx/ qnxpub621.ISO. If readers want to try out a real-time microkernel as used in deep-space missions, my advice is to get it now while it's still there.
            Last edited by popper; 03 March 2012, 12:49 AM.

            Comment


            • #36
              Originally posted by popper View Post
              You could have at least told them where to get the original, last free-forever-for-end-users, non-commercial QNX RTP 621 developer ISO releases (rather than the newer but less-featured updates for BlackBerry use, etc. - does that even have the Photon GUI as standard?). It seems it's harder to find today, as the FTP sites are deleting their "non commercial" QNX directories: ftp://85.143.48.249/pub/os/qnx/ qnxpub621.ISO. If readers want to try out a real-time microkernel as used in deep-space missions, my advice is to get it now while it's still there.
              You assume that I knew where to get the last non-commercial / developers' ISO (which is from 2004!). Obviously, I'm not keeping track of stuff like that (as it's sort of a waste of my time). But that's cool that you knew where to find it & posted it. Nice find.

              As far as QNX having fewer features than previous versions ~ do you have any facts/info to back up that claim? I tend to think the QNX of today (not specifically BBX) would be a little richer than a version from 8 years ago... Generally, I tend to see QNX used much the way Linux is being used these days: embedded systems (of many different shapes/sizes/forms/functions). As an example (right from their website):

              Who uses QNX?

              Customers rely on QNX to help build products that enhance their brand characteristics – innovative, high-quality, dependable. Global leaders like Cisco, Delphi, General Electric, Siemens, and Thales have discovered QNX Software Systems gives them the only software platform upon which to build reliable, scalable, and high-performance applications for markets such as telecommunications, automotive, medical instrumentation, automation, security, and more.
              cheerz
              Last edited by ninez; 03 March 2012, 01:00 AM.

              Comment


              • #37
                Originally posted by ninez View Post
                You assume that I knew where to get the last non-commercial / developers' ISO (which is from 2004!). Obviously, I'm not keeping track of stuff like that (as it's sort of a waste of my time). But that's cool that you knew where to find it & posted it. Nice find.

                As far as QNX having fewer features than previous versions ~ do you have any facts/info to back up that claim? I tend to think the QNX of today (not specifically BBX) would be a little richer than a version from 8 years ago... Generally, I tend to see QNX used much the way Linux is being used these days: embedded systems (of many different shapes/sizes/forms/functions). As an example (right from their website):



                cheerz
                LOL. No, I didn't assume anyone here knows about that pre-BB-purchase, fully free x86 developer RTP ISO unless you were around when Dan (QNX CEO) was involved with that Amiga developer release, when they nearly partnered commercially with his company.

                I didn't find it as such (Dan told me way back in the closed groups), and I wasn't keeping track; it and several older RTP developer URLs were in my browser bookmarks, so I thought people might like to actually try it and see that it's NOT slow, as they assume - far from it... One day Michael might even bother to bench it against Minix/Linux etc., now that he has a direct FTP that still carries it, even if that version is older - given it's one of a very few that is actually officially UNIX-certified, unlike some here.

                As far as QNX having fewer features than previous versions ~ do you have any facts/info to back up that claim?
                Perhaps I wasn't clear enough. I'm saying that that older 6.2 version was put together as a full, one-time, non-commercial x86 development platform, given that the Amiga owners were considering both x86 and PPC at the time and QNX covers both of those and ARM too. Their current BB-limited developer ISOs, by contrast, are BB-Java-centric, and these newer versions are 30-day trials (with the unstated, little-known fact that there is a free licence for non-commercial end users) with a limited BB Java developer focus, not a generic x86 focus with GCC etc. as the toolchain like back then. They are not less featured, as you put it, for a commercial developer paying his fees to get that existing feature, but definitely less featured as standard for the non-commercial user at home - hence the advice to get that old ISO now, while it's still there and you don't need to register or have a personal NC key mailed to you as you do with the newer BB versions today.
                Last edited by popper; 03 March 2012, 02:02 AM.

                Comment


                • #38
                  Originally posted by ninez View Post
                  Well, that is a bit of a silly and illogical argument. One could just as easily say a buggy driver in Linux will take down the whole system, and the monolithic (Linux) kernel won't help you either ~ but in this situation a microkernel *would* still keep working. And so what if there is a bug in the file-system? It still doesn't take down your whole system, and once the bug gets fixed it's no longer a problem... I don't see your issue here; the same could be said of Linux.
                  You're not following. If your filesystem crashes, and your data is lost, the OS no longer matters, so who cares if it keeps running?

                  With computing, the OS itself is not the goal. A running kernel is not the goal. Nerds may get hardons over technology like that, but at the end of the day the only reason that businesses and consumers run computer operating systems is to support some specific set of applications that operate on some specific data. When that data is lost -- or corrupted, or stolen, or what have you -- there is no longer any value in the OS. The OS is an implementation detail, a stepping stone, a way of achieving the real goal. It doesn't matter if Minix can recover from a crash that destroys the utility of having an OS in the first place; Linux's and modern Windows' relative stability, and the fact that they oh so very rarely actually crash or corrupt data, are very valuable. The only argument then that Minix has going for it is that the separation of servers means there's less overall code that directly affects the filesystem server, and hence less chance of crashing in the first place. With Linux, a bug in your GPU driver could corrupt memory used by the filesystem driver, leading to corruption of the filesystem, but this can't (easily) happen on Minix (and maybe Windows and OSX/Darwin too; they're both hybrid microkernels, but I don't know specifically which parts are fully separated and which aren't).
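
                  A toy illustration of that last point, using a hypothetical pair of "drivers" compiled into one address space: a stray out-of-bounds write in one can silently trash the other's state, which is exactly what separate per-server address spaces prevent. (This is deliberately buggy demonstration code; whether the overwrite actually lands in the neighbouring buffer depends on how the linker lays the two out.)

                  Code:
                  /* Two "drivers" sharing one address space, as in a monolithic kernel. */
                  #include <stdio.h>

                  static char fs_cache[16] = "important data";   /* "filesystem driver" state */
                  static char gpu_ring[16];                       /* "GPU driver" state */

                  static void buggy_gpu_driver(void)
                  {
                      /* Off-by-a-lot bug: runs past the end of gpu_ring. In a shared
                       * address space the stray writes can land in fs_cache; in a
                       * separate process the same bug could only fault that process. */
                      for (int i = 0; i < 32; i++)
                          gpu_ring[i] = 0;
                  }

                  int main(void)
                  {
                      buggy_gpu_driver();
                      printf("fs_cache now: \"%s\"\n", fs_cache);  /* may no longer say "important data" */
                      return 0;
                  }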

                  This is similar to the problem of modern Linux desktops being significantly less secure than Windows 7 and OS X (and even more so compared to Windows 8 and the next OS X release). Everybody and their brother in the Linux world keeps prattling on about the separation between root and users, but in the end, nobody who actually uses computers gives a crap about that. It does not matter in the slightest whether root gets compromised on my single-user desktop if my user account is already compromised. Everything of actual value to me as a person is stored in my user account; my data files, my personal information, all of that is owned by me and my user, not root.

                  Linux security is still all about entering the root password or using sudo for privileged operations while still running non-sandboxed browsers, non-sandboxed PDF/Office/image/email applications, and trusting that all software is pure because it must have come from the surely thoroughly reviewed central repository. Windows makes sandboxing processes much easier with much better controls (Linux is catching up here), Windows browsers led the pack in sandboxing and Microsoft has sandboxed other apps, and both Microsoft and Apple have moved to app store models that achieve security by simply not allowing apps to do bad things via strict sandboxing mechanisms, so that even a malicious software distribution can't do significant harm - rather than Linux's security-by-trusting tens of thousands of random hobbyist package maintainers, random hobbyist upstream developers, and random hobbyist software archive sysadmins to all be competent and honest.

                  Minix fails here too, as far as I know; all traditional UNIX-like OSes do. SELinux and the like try to address this and do a decent-ish job of it on the server, but the complex needs of desktop software have trouble fitting into the static, predefined security attributes that SELinux uses (compared to the user-controllable, flexible, process-separated models that are shown to work on real desktop OSes).
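
                  For what it's worth, here is a bare-bones sketch of the kind of process sandboxing being described, using Linux's seccomp strict mode (one small piece of the "Linux is catching up here" tooling). It is only an illustration, under the assumption that read/write on already-open descriptors is all the sandboxed code needs; real browser sandboxes layer seccomp-BPF filters, namespaces, and broker processes on top of this.

                  Code:
                  /* After prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT) the only syscalls this
                   * thread may make are read(), write(), _exit() and sigreturn(); anything
                   * else gets it killed with SIGKILL. */
                  #include <stdio.h>
                  #include <fcntl.h>
                  #include <unistd.h>
                  #include <sys/prctl.h>
                  #include <sys/syscall.h>
                  #include <linux/seccomp.h>

                  int main(void)
                  {
                      /* Open anything we legitimately need *before* locking down. */
                      int fd = open("/dev/null", O_WRONLY);

                      if (prctl(PR_SET_SECCOMP, SECCOMP_MODE_STRICT, 0, 0, 0) != 0) {
                          perror("prctl");          /* not sandboxed yet, safe to bail out */
                          return 1;
                      }

                      write(fd, "still allowed\n", 14);   /* write() is on the whitelist */

                      /* open("/etc/passwd", O_RDONLY);      would be killed: not allowed */

                      syscall(SYS_exit, 0);         /* raw exit(2); glibc's _exit() uses
                                                       exit_group(), which strict mode forbids */
                      return 0;                     /* not reached */
                  }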

                  Comment


                  • #39
                    Originally posted by elanthis View Post
                    You're not following. If your filesystem crashes, and your data is lost, the OS no longer matters, so who cares if it keeps running?
                    Because the data is still in the buffers (RAM). If the machine hangs, *then* you lose the data. With a microkernel, you reload the disk or FS driver, and flush the data again.

                    So it's exactly the opposite of what you suggest. You really do care that the OS does not crash when a driver crashes, so that you won't lose unsaved data.
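
                    A toy sketch of that retry idea, with a hypothetical flush_to_disk() standing in for an IPC request to a filesystem server that may have just crashed and been restarted: the unsaved bytes stay in a buffer in RAM, so once the server is back, the very same buffer is simply handed to it again.

                    Code:
                    #include <stdio.h>
                    #include <string.h>
                    #include <unistd.h>
                    #include <errno.h>

                    /* Hypothetical stand-in: send the buffer to the FS server; returns -1
                     * with errno set if the server died mid-request. */
                    static int flush_to_disk(const char *buf, size_t len)
                    {
                        (void)buf; (void)len;
                        return 0;                  /* pretend the write succeeded */
                    }

                    int main(void)
                    {
                        const char *unsaved = "document text the user has not saved yet\n";

                        /* The data lives in RAM; as long as the *system* stays up we can keep
                         * retrying after each driver/server restart until the flush succeeds. */
                        while (flush_to_disk(unsaved, strlen(unsaved)) != 0) {
                            fprintf(stderr, "FS server gone (%s), waiting for restart...\n",
                                    strerror(errno));
                            sleep(1);
                        }
                        puts("data safely flushed");
                        return 0;
                    }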

                    Comment


                    • #40
                      Originally posted by ninez View Post
                      I don't see your issue here; the same could be said of Linux.
                      That was the point. In such a case, where only my disk data matters, both systems are on a par when it comes to 'unsafeness'. A microkernel maybe has an advantage when I care about uptime (but if there's a CVE in the kernel I'll have to take it down anyway), or in the case RealNC described.

                      You also state (actually assume) that your system is 'hundreds of times faster' than a microkernel ~ which isn't really true... that is why I pointed out the BlackBerry PlayBook, which I can easily compare to iOS and Android devices, and I can tell you right now ~ it wasn't 'hundreds of times slower' than iOS/Android ~ in fact, it didn't seem slower to me at all. It was very much comparable.
                      For normal usage it's maybe not so noticeable (on mobiles...), but in some operations it will be much slower, as the benchmark shows (so it's you who is just assuming something, because you didn't even show any benchmark, and it's a fact that there's more overhead in the microkernel design). Could you point me to where microkernels are used when performance matters?

                      Anyway, like I said, you could read up on this stuff, as it would probably make more sense to do that ~ rather than making nonsensical, uninformed statements about technology that you don't really know very much about.
                      What exactly don't I know? If the bug is in the file system, the risk is the same as with monolithic kernels when it comes to data safety. Performance is also much worse with microkernels, so?

                      Comment
