The Linux Kernel Has Been Forcing Different Behavior For Processes Starting With "X"


  • Originally posted by xfcemint View Post
    Well, if we don't need microkernels for consumer-grade computers, can someone please explain to me why we need protected memory and preemptive multitasking? The consumers can't tell the difference, consumers don't keep statistics; it's just that the OS is slightly less reliable. I mean, Windows 3.0 is just fine for consumer-grade, isn't it?
    The difference there was quite a lot more dramatic, though. There's a much more varied field of user applications than you'll ever find in a given kernel and its drivers, so much more chaos to deal with. And you can't enforce any kind of permissions without protected memory: all processes are basically root, as the sketch below illustrates. That is a difference users will notice, and they will really be angry if it's missing, even if they don't know why they keep finding their computer inside a botnet.
    Maybe I'll end up being wrong about how appreciative users can be, but the only way to know is after the fact.
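    To make the "all processes are basically root" point concrete, here is a minimal POSIX C sketch (not from any kernel, purely an illustration): it's the MMU's page permissions, not application goodwill, that stop a stray write.

    ```c
    /* Minimal POSIX sketch: the MMU enforces page permissions, so a buggy or
     * malicious write into a read-only page is stopped by hardware, not by
     * trusting the process. Without this, any process could scribble anywhere. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void) {
        /* Ask the kernel for one page, initially readable and writable. */
        char *page = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (page == MAP_FAILED) { perror("mmap"); return 1; }

        strcpy(page, "private state");            /* fine: page is writable */

        /* Revoke write access, as the MMU does for pages a process doesn't own. */
        if (mprotect(page, 4096, PROT_READ) != 0) { perror("mprotect"); return 1; }

        printf("reads still work: %s\n", page);   /* fine: page is readable */
        page[0] = 'X';                            /* SIGSEGV: hardware says no */
        return 0;
    }
    ```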

    Originally posted by xfcemint View Post
    I like your argument, it's quite clever. I'm not sure that it can be countered, because we are entering into the realms of psychology, but I'll try.

    The consumers don't know almost anything. They buy/choose one of the thirty or so computer models that are displayed on the shelf. One model looks nicer, one has a longer battery life, one has a CPU running at 1.3×sqrt(7) GHz.

    The problem with your argument is that all such arguments cannot be disproved in the present. You could have said the same about the advent of home computers in the early 80s: Will the consumers care? They will all just buy a console! Who needs a computer at home, for goodness' sake? Nobody believed that home computers would be a huge hit on the market; that's why the IBM PC was designed by a small team of a dozen people and projected to sell a total of 250,000 units over five years, after which the market would, obviously, become saturated.

    The advantages of a good design are appreciated slowly, over time. The consumers won't even notice. Their computer just becomes more reliable, more secure, and more upgradable than ever, but they are so used to the world moving forward, and to forgetting all the shortcomings of the old OSes.

    So true, what you said: the consumers won't care. They won't even know. We live under capitalism; good-enough quickly wins over perfect slowly. None of that implies that OSes shouldn't be improved. Well, at least we should make a very, very good attempt at it, because we who know, we know what's better, not the consumers.
    Fair enough. But you'll need money to pull it off, so you'll need to convince not only technical people but business people. I can't really advise you on that because I'm not good at it, but the status quo has been this way too long to think it would be an easy task.

    The most likely path to a microkernel succeeding, IMO, would be trickle-down from servers and workstations. That's how the Windows NT kernel ended up on consumer-grade hardware, after all: it had been the workstation and server OS for several years before MS considered it a good idea to sell to consumers.
    My guess is that their reason for not doing so at first was a combination of consumer hardware being too weak (NT did come with more memory overhead compared to building on DOS) and some missing software compatibility. Because consumer hardware is nowadays just a weaker version of what runs on servers, a microkernel developed with high availability in mind for those machines may well end up running on consumer machines.
    But even then it's something that will probably take no less than a decade, due to migration costs. Only then, and once compatibility with current userspace is good enough, are computers likely to start shipping a proper microkernel.



    • Originally posted by mdedetrich View Post
      If you actually care about rock-solid security and stability, microkernels are what gets used, for obvious reasons. There are other techniques as well (i.e. formal verification), which is why seL4 (a microkernel with formal proofs) is alien-level tech. This level of security/reliability is overkill for most consumer and even business segments, but to claim microkernels are pointless or a gimmick is just stupid.
      This is only partly true. GNU Hurd is a good example of where the stability story of a microkernel can go wrong, with drivers stuck in crash loops: a driver crashes, the kernel restarts it, it crashes again, the kernel restarts it again, and the restart loops cascade around like dominoes. Sometimes a total system-stopping panic is the right thing; there is such a thing as attempting to have too much stability.
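      For what it's worth, the crash-loop problem has a well-known mitigation: cap restarts within a time window and escalate once the cap is hit. A hedged C sketch (invented names, not from Hurd or any real microkernel):

      ```c
      /* Sketch of a driver supervisor that breaks the restart loop: allow
       * restarts, but only so many within a time window, then escalate. */
      #include <stdbool.h>
      #include <time.h>

      #define MAX_CRASHES 3    /* restarts tolerated...       */
      #define WINDOW_SECS 60   /* ...within this many seconds */

      struct supervisor {
          time_t crash_times[MAX_CRASHES];  /* ring buffer of crash times */
          unsigned crash_count;             /* total crashes seen so far  */
      };  /* zero-initialize before use */

      /* Called when a driver process dies. Returns true to restart it, false
       * to escalate instead (disable the device, or panic, as argued above). */
      bool should_restart(struct supervisor *s) {
          time_t now = time(NULL);
          unsigned slot = s->crash_count % MAX_CRASHES;

          /* Once the ring is full, slot holds the oldest recorded crash.
           * MAX_CRASHES crashes inside one window means we are looping. */
          if (s->crash_count >= MAX_CRASHES &&
              now - s->crash_times[slot] < WINDOW_SECS)
              return false;

          s->crash_times[slot] = now;
          s->crash_count++;
          return true;
      }
      ```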

      Formal verification is something that at this stage can only be done on small monolithic kernels and small microkernels with small user spaces. We don't have the tools yet to perform full formal verification on something the size of the Linux kernel.

      Sorry to say, the obvious reasons people claim for using microkernels are not as solid as one might think. Full formal verification with formal proofs is more important than the microkernel vs. monolithic design choice. The limits on what can be formally verified do push some cases toward microkernels, to reduce the amount of code that needs to be verified.

      More kernel-space code means more code that needs to be formally verified and more processing time to complete that verification. With the current size of the Linux kernel and current methods, by the time you had the code base validated there would be known CVEs that had to be fixed, forcing you to start the process again. The Linux kernel developers' interest in Rust and BPF is partly an attempt to reduce that validation cost.

      There is the possibility, if formal verification tech advances far enough, that people in the future will look back and conclude that using microkernels for anything was a mistake.



      • Originally posted by xfcemint View Post
        Why microkernels are important for "consumer-grade" computers or home computers:
        - When you are building a house, the foundation is much more important than the roof or the facade. Same with OSes.
        - The advantages of microkernels are a consequence of additional modularization and compartmentalization; too long to list one by one.
        - The advantages of a good design are appreciated slowly in time. The benefits are reaped after a delay.
        - When consumers cannot tell a difference, then other factors decide the path of progress.
        - The experts should act responsibly. The benefits will be reaped after a delay.
        Nothing here is straightforward. The Linux kernel is not a pure monolithic design.

        Building a house is a good example here: not every house is exactly the same. One of the problems microkernels normally run into is designs that mandate only one type of IPC, and of course it's not possible to make an IPC mechanism that is good at everything. The house analogy is a really good one here as well: you can put down a perfect-quality foundation and then put a house on top that exceeds its specification.
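        As a concrete example of the "one IPC for everything" problem, here is a toy POSIX C sketch (invented names) of one IPC shape, the synchronous rendezvous call: great for request/reply, but a kernel offering only this shape forces streaming and bulk-transfer workloads through a blocking round trip per message.

        ```c
        /* Toy sketch of a synchronous rendezvous call over a socketpair.
         * Perfect for request/reply; a poor fit for streaming, where an
         * async queue or shared-memory ring would serve far better. */
        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>
        #include <sys/socket.h>

        /* Client side: send a request, then block until the reply arrives. */
        static int sync_call(int fd, const char *req, char *reply, size_t len) {
            if (write(fd, req, strlen(req) + 1) < 0) return -1;
            return (int)read(fd, reply, len);  /* sender is parked here */
        }

        int main(void) {
            int sv[2];
            if (socketpair(AF_UNIX, SOCK_SEQPACKET, 0, sv) != 0) return 1;

            if (fork() == 0) {                 /* toy "server" process */
                char buf[64];
                if (read(sv[1], buf, sizeof buf) > 0)
                    write(sv[1], "pong", 5);
                _exit(0);
            }

            char reply[64];
            if (sync_call(sv[0], "ping", reply, sizeof reply) > 0)
                printf("reply: %s\n", reply);  /* prints "reply: pong" */
            return 0;
        }
        ```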

        I have lived in houses that had more than one foundation slab, all poured at different times; that is kind of the Linux kernel of houses. One of the big problems with all the microkernel designs is that there is exactly one foundation design, and that is all you get; if it doesn't fit your use case, tough luck.

        A monolithic kernel design is most likely not right, and neither is a microkernel design. There has to be a suitable middle ground somewhere.



        • Originally posted by oiaohm View Post

          This is only partly true. GNU Hurd is a good example of where the stability story of a microkernel can go wrong, with drivers stuck in crash loops: a driver crashes, the kernel restarts it, it crashes again, the kernel restarts it again, and the restart loops cascade around like dominoes. Sometimes a total system-stopping panic is the right thing; there is such a thing as attempting to have too much stability.

          Formal verification is something that at this stage can only be done on small monolithic kernels and small microkernels with small user spaces. We don't have the tools yet to perform full formal verification on something the size of the Linux kernel.

          Sorry to say, the obvious reasons people claim for using microkernels are not as solid as one might think. Full formal verification with formal proofs is more important than the microkernel vs. monolithic design choice. The limits on what can be formally verified do push some cases toward microkernels, to reduce the amount of code that needs to be verified.

          More kernel-space code means more code that needs to be formally verified and more processing time to complete that verification. With the current size of the Linux kernel and current methods, by the time you had the code base validated there would be known CVEs that had to be fixed, forcing you to start the process again. The Linux kernel developers' interest in Rust and BPF is partly an attempt to reduce that validation cost.

          There is the possibility, if formal verification tech advances far enough, that people in the future will look back and conclude that using microkernels for anything was a mistake.
          The whole point of combining formal proofs with microkernels is that the characteristics of microkernels cover the drawbacks of formal proofs, i.e. the fact that it's impossible to formally prove every possible program. There will always be limits to what can be formally verified, which is where the other parts of microkernels come into play.

          There is a reason why seL4 is so successful in the space it was designed for. Even microkernels without formal verification are incredibly useful; see MINIX, which is used in Intel's ME, or QNX as mentioned earlier.

          So to be blunt, microkernels are as solid as people claim them to be, otherwise they wouldn't be the most used kernel type in the space they are designed for. There are ways to improve that solidness (i.e. formal proofs), but a microkernel even on its own is a lot more stable than a monolithic kernel, and cherry-picking random problems with microkernels that no one in the industry happens to use (i.e. GNU Hurd) doesn't invalidate that.



          • Originally posted by oiaohm View Post
            This is only partly true. GNU Hurd is a good example of where the stability story of a microkernel can go wrong, with drivers stuck in crash loops: a driver crashes, the kernel restarts it, it crashes again, the kernel restarts it again, and the restart loops cascade around like dominoes. Sometimes a total system-stopping panic is the right thing; there is such a thing as attempting to have too much stability.
            Having a shit implementation or a shit design doesn't mean the microkernel concept is flawed; it means that was a poor design or a poor implementation. Nobody in this whole discussion argued that microkernels are immune to human mistakes. Besides, microkernels don't need to go into those loops. They can panic if they need to, and they can panic better, without risking overwriting a different driver's memory only to find out (if lucky) after that other driver reads it back and gets garbage.

            Originally posted by oiaohm View Post
            ... formal verification defense ...
            The problem with formal verification is not so much the size of the codebase as the size of the development team and the fact that the code mutates constantly. You verified a function yesterday; well, today I modified a helper it used, and now I have to re-verify every user of that function. Then you need to coordinate thousands of devs to make sure they all verify their code, verify it yourself again when doing the review, then probably verify the integration of the multiple patches too, and so on.
            Formal verification for anything big is (at least with today's tech, and that of the not-too-distant future) a pipe dream. So even if you want to go with formal proofs (which is not a bad idea for safety-critical stuff), you need to stick to microkernels to stand a chance. The fact that they force clear and reduced interfaces will help you verify the other components anyway.
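            A toy Lean 4 sketch of that re-verification cascade (names invented; any proof assistant shows the same effect): the proof about `f` holds by unfolding `helper`, so editing `helper` silently invalidates it, and every proof like it.

            ```lean
            -- `helper` is used by `f`; the proof below unfolds both definitions.
            def helper (n : Nat) : Nat := n + 1

            def f (n : Nat) : Nat := helper (helper n)

            -- Holds definitionally today. Change `helper` to `n + 2` tomorrow
            -- and this proof, and every proof like it, must be redone.
            theorem f_adds_two (n : Nat) : f n = n + 2 := rfl
            ```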

            Originally posted by oiaohm View Post
            Linux kernel is not pure monolithic design.
            You keep saying this and you keep not answering what you mean by this.



            • Originally posted by mdedetrich View Post
              The whole point of combining formal proofs with microkernels is that the characteristics of microkernels cover the drawbacks of formal proofs, i.e. the fact that it's impossible to formally prove every possible program. There will always be limits to what can be formally verified, which is where the other parts of microkernels come into play.

              There is a reason why seL4 is so successful in the space it was designed for. Even microkernels without formal verification are incredibly useful; see MINIX, which is used in Intel's ME, or QNX as mentioned earlier.

              So to be blunt, microkernels are as solid as people claim them to be, otherwise they wouldn't be the most used kernel type in the space they are designed for. There are ways to improve that solidness (i.e. formal proofs), but a microkernel even on its own is a lot more stable than a monolithic kernel, and cherry-picking random problems with microkernels that no one in the industry happens to use (i.e. GNU Hurd) doesn't invalidate that.

              The hardware debugger is needed to troubleshoot drivers (and sometimes, user applications) on embedded systems, because on these systems, the drivers and user applications are linked into the same memory space as the OS kernel. If a driver or application crashes, it may crash the kernel as well, bringing down the entire system. Because software debuggers depend on the system running, they are of little use when the OS kernel has crashed. Thus, a hardware debugger is required.
              This is a different problem. Historically, lots of microkernels have had an equivalent of the Linux kernel's /dev/kmem, which has been disabled for ages. Some of the reason QNX has lost so much market share to Linux solutions is that the security advantage of the microkernel is not really there: you have QNX user-space drivers with full memory access to the microkernel's ring 0. Not all microkernels are in fact more stable than monolithic kernels. QNX is one example that is no more stable than a monolithic kernel, because drivers and kernel space do in fact have shared memory access to each other, even though the driver code runs as a user process.

              Being a microkernel does not equal being sanely designed. Formal verification is absolutely important. seL4 is continuously formally verified, and MINIX is intermittently formally verified (normally when some university decides to do it as a class project). Then there are microkernels like Hurd that have not taken off in the marketplace due to attempting to be too stable, and microkernels like QNX, which used to have a lot of market share, that are no more secure or stable than a monolithic kernel.

              Basically, a microkernel design does not promise much beyond the drivers and the kernel being separate parts. It's the formal verification of the design that promises security and stability.



              • Originally posted by oiaohm View Post

                https://www.qnx.com/developers/docs/...bug/about.html


                This is a different problem. Historically, lots of microkernels have had an equivalent of the Linux kernel's /dev/kmem, which has been disabled for ages. Some of the reason QNX has lost so much market share to Linux solutions is that the security advantage of the microkernel is not really there: you have QNX user-space drivers with full memory access to the microkernel's ring 0. Not all microkernels are in fact more stable than monolithic kernels. QNX is one example that is no more stable than a monolithic kernel, because drivers and kernel space do in fact have shared memory access to each other, even though the driver code runs as a user process.
                The reason why QNX is "failing" (if it even is failing; I see no actual evidence of this) has very little to do with the technical side and much more to do with business, i.e. BlackBerry. It's also a commercial product (unlike Linux), and furthermore it's actually being used in cars right now, which you can verify with a Google search, e.g. https://www.blackberry.com/us/en/com...llion-vehicles

                I don't know what bullshit you are getting this from.
                Last edited by mdedetrich; 10 November 2022, 05:30 PM.



                • Originally posted by sinepgib View Post
                  You keep saying this and you keep not answering what you mean by this.
                  The Linux kernel is a mixture of different designs once you look closer. In a pure monolithic kernel you don't have drivers as modules; the Linux kernel is a modular kernel, because you have kernel modules. If it ended there, you could say it's still monolithic, because you can choose to build without modules. But it does not end there.

                  Then you have user-space helpers and user-space drivers (FUSE/uio). These are microkernel ways of doing drivers, not the monolithic way of doing things at all (a sketch of the uio pattern follows at the end of this post).

                  Now we are seeing BPF for HID drivers. These are bytecode drivers, with a compiler to native code in the kernel: a page taken straight out of the managed-OS design playbook of writing drivers in a platform-neutral bytecode. Note that this first started turning up in 2018, with BPF being used for IR decoding.

                  The Linux kernel is a mix of all the major OS kernel design concepts. Being such a mix, it could over time evolve further in the microkernel or managed-OS direction. Also, being a mix, the Linux kernel does not fit cleanly into one formal design box.
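                  To illustrate the uio point mentioned above: the kernel side only exposes the device's registers and interrupts, and the actual driver logic is an ordinary process. A minimal C sketch (the device node and register meaning are assumptions for illustration):

                  ```c
                  /* Minimal sketch of the uio (Userspace I/O) pattern: the kernel
                   * exports registers and interrupts; the driver proper is this
                   * ordinary user process. */
                  #include <fcntl.h>
                  #include <stdint.h>
                  #include <stdio.h>
                  #include <unistd.h>
                  #include <sys/mman.h>

                  int main(void) {
                      int fd = open("/dev/uio0", O_RDWR);  /* exported by the uio core */
                      if (fd < 0) { perror("open /dev/uio0"); return 1; }

                      /* Map the device's first memory region into our address space. */
                      volatile uint32_t *regs = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                                                     MAP_SHARED, fd, 0);
                      if (regs == MAP_FAILED) { perror("mmap"); return 1; }

                      /* A blocking read on a uio fd waits for the next interrupt and
                       * returns the kernel's interrupt counter. */
                      uint32_t irq_count;
                      if (read(fd, &irq_count, sizeof irq_count) == sizeof irq_count)
                          printf("irq #%u, first reg = 0x%08x\n",
                                 irq_count, (unsigned)regs[0]);
                      return 0;
                  }
                  ```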





                  • Originally posted by mdedetrich View Post
                    The reason why QNX is "failing" (if it even is failing; I see no actual evidence of this) has very little to do with the technical side and much more to do with business, i.e. BlackBerry. It's also a commercial product (unlike Linux), and furthermore it's actually being used in cars right now, which you can verify with a Google search, e.g. https://www.blackberry.com/us/en/com...llion-vehicles

                    I don't know what bullshit you are getting this from.
                    VIA’s “Mobile360 M800 Video Telematics System” for ADAS and fleet management runs Linux on an AI-boosted dual-core Cortex-A7 SoC. The IP67-protected system is accompanied by driver-facing and front-facing cameras. VIA Technologies has lately been focusing on its Mobile360 in-vehicle systems for ADAS (Advanced Driver Assistance Systems). Its new VIA Mobile360 M800 Video Telematics System enables fleet operators to add “cutting-edge…



                    Also, you need to consider that there are roughly 1.4 billion cars on the road. Yes, 175 million is a lot, but that is less than 50 percent of the cars modern enough to have the tech QNX is providing. So where are the other 50+ percent with that tech getting it from? Interesting, right? QNX was used in phones and lost most of that market to Linux. Now in the automotive market it has automotive Linux eating into this as well.

                    QNX is not successfully selling itself on being a microkernel; it's more that they have a functional solution. Yes, some of the companies they nicely list in that press briefing are also producing cars that use automotive Linux instead of the QNX-based solution.

                    Linux in the automotive space is a commercial solution with commercial support vendors; it is BlackBerry's biggest competitor in that space, and BlackBerry is losing customers. Things are not going well for BlackBerry. QNX's microkernel design is not helping them; in fact it is most likely a big reason why they are losing market share. If the QNX solution were like MINIX or seL4, with a properly solid design, they would have a selling point from the microkernel bit.



                    • Originally posted by oiaohm View Post

                      https://linuxgizmos.com/linux-driven...ks-out-and-in/


                      Also, you need to consider that there are roughly 1.4 billion cars on the road. Yes, 175 million is a lot, but that is less than 50 percent of the cars modern enough to have the tech QNX is providing. So where are the other 50+ percent with that tech getting it from? Interesting, right? QNX was used in phones and lost most of that market to Linux. Now in the automotive market it has automotive Linux eating into this as well.
                      Yes, and the vast majority of those cars either don't have electronics or, if they do, it's just car audio. Not everyone drives a Tesla.

                      Originally posted by oiaohm View Post
                      QNX is not successfully selling itself on being a microkernel; it's more that they have a functional solution. Yes, some of the companies they nicely list in that press briefing are also producing cars that use automotive Linux instead of the QNX-based solution.

                      Linux in the automotive space is a commercial solution with commercial support vendors; it is BlackBerry's biggest competitor in that space, and BlackBerry is losing customers. Things are not going well for BlackBerry. QNX's microkernel design is not helping them; in fact it is most likely a big reason why they are losing market share. If the QNX solution were like MINIX or seL4, with a properly solid design, they would have a selling point from the microkernel bit.
                      Or so you say. You keep on saying "not successful" without providing any evidence. Stop spreading FUD.
                      Last edited by mdedetrich; 10 November 2022, 06:16 PM.

