
The Linux Kernel Has Been Forcing Different Behavior For Processes Starting With "X"


  • Originally posted by oiaohm View Post
    Linux kernel is a mixture of different things in design once you look closer. In a pure monolithic kernel you don't have drivers as modules. The Linux kernel is a modular kernel, because you have kernel modules. If it ended here you could say it is still monolithic, because you can choose to build without modules. But it does not end here.

    Then you have user space helpers and user space drivers (fuse/uio). These are microkernel ways of doing drivers, and not the monolithic way of doing things at all.

    Now we are seeing BPF for HID drivers. These are bytecode drivers, with a compiler to native code in the kernel. This takes a page out of the managed-OS design playbook of writing drivers in platform-neutral bytecode. Please note this first started turning up in 2018, with BPF being used for IR decoding.

    The Linux kernel is a mix of all the major OS kernel design concepts. Being such a mix, the Linux kernel could over time evolve further in the microkernel or managed-OS direction. Also, being a mix, the Linux kernel does not cleanly fit in one formal design box.
    AFAIK the distinction between a monolithic kernel and a microkernel is not whether there are dynamic modules but whether those run isolated from the kernel or not. Right now, the only point you mention where Linux is not strictly monolithic is FUSE and uio, and even then that's used rather little and only for ease of implementation of custom drivers, rather than being the recommended way for most hardware and filesystems (to the point that Linus calls those use cases toys). Everything else is just code running in kernel mode.
    BPF is pretty much just a constrained kernel module that can be verified not to crash your system.
    I guess you could call it hybrid on a technicality, but I'd rather call that a stretch, considering almost all of the system runs with no isolation whatsoever.



    • Originally posted by sinepgib View Post

      AFAIK the distinction between a monolithic kernel and a microkernel is not whether there are dynamic modules but whether those run isolated from the kernel or not
      It's not even just that; even if you ignore whether modules are isolated from the kernel, the other critical trait of microkernels is that they form a stable interface between the kernel and its modules (i.e. a protocol, stable ABI or messaging). This is what allows you to easily downgrade or upgrade modules regardless of the microkernel version.

      Linux is the complete opposite of this. Drivers are either in-tree, which means they are completely unstable, and module loading solutions like DKMS only work if the kernel internals haven't changed; or they are in userspace, which isn't even treated seriously. Moreover, Linux kernel developers (including Linus himself) are adamant about keeping things this way; they have stated many times that they don't want any stable ABI because they want to strongly incentivise putting drivers into the tree, specifically because the kernel devs don't want to maintain any kind of ABI and want to be able to refactor the in-tree code as much as they want. This mindset is as monolithic as you can get, and unless this changes, calling Linux anything but a monolithic kernel is just being deceptive.

      The only thing the Linux kernel has a guarantee on is userspace syscalls, but user vs kernel space isn't a distinction that defines microkernels; it's somewhat orthogonal. Microkernels were created before processors even had the concept of different rings. Having different ring levels is what allows you to enforce a specific type of isolation at the CPU level, but they are not necessary for microkernels.
      Last edited by mdedetrich; 11 November 2022, 06:09 AM.



      • Originally posted by sinepgib View Post
        AFAIK the distinction between a monolithic kernel and a microkernel is not whether there are dynamic modules but whether those run isolated from the kernel or not. Right now, the only point you mention where Linux is not strictly monolithic is FUSE and uio, and even then that's used rather little and only for ease of implementation of custom drivers, rather than being the recommended way for most hardware and filesystems (to the point that Linus calls those use cases toys). Everything else is just code running in kernel mode.
        BPF is pretty much just a constrained kernel module that can be verified not to crash your system.
        I guess you could call it hybrid on a technicality, but I'd rather call that a stretch, considering almost all of the system runs with no isolation whatsoever.
        Usermode helpers are not a monolithic or modular kernel feature either.

        BPF being a constrained kernel module is correct.
        https://en.wikipedia.org/wiki/Singul...erating_system)
        Singularity, a research OS classed as a microkernel, happens to use the same protection method for its drivers: they run in ring 0/kernel mode with no protection other than being validated.

        MMU-less microkernel systems also don't have isolation of drivers from the kernel; heck, they don't have applications isolated from the kernel either.

        Not all microkernels out there in fact run dynamic modules/drivers isolated from the kernel.


        If the hardware provides multiple rings or CPU modes, the microkernel may be the only software executing at the most privileged level, which is generally referred to as supervisor or kernel mode. Traditional operating system functions, such as device drivers, protocol stacks and file systems, are typically removed from the microkernel itself and are instead run in user space.
        sinepgib this part of the Wikipedia article is very right. Do note "may be the only" and "typically removed". Microkernel OSes exist that don't run all drivers in userspace: they select particular ones to run in kernel mode for performance reasons, or may simply be on hardware that does not support multiple rings or CPU modes for isolation, or may use some other method, as Singularity does, of validating driver code before allowing it to run in ring 0. That validation happens to be the same as what BPF does.

        There is a big thing here: all microkernel OSes will have an API/ABI defined for drivers, and this will include IPC.



        • Originally posted by mdedetrich View Post
          It's not even just that; even if you ignore whether modules are isolated from the kernel, the other critical trait of microkernels is that they form a stable interface between the kernel and its modules (i.e. a protocol, stable ABI or messaging). This is what allows you to easily downgrade or upgrade modules regardless of the microkernel version.
          The reality, when you get into what a microkernel is, is that the isolation bit is optional. Having a defined interface between kernel and modules is not optional. That interface being stable across microkernel versions is optional. There are historic microkernels where every new major release changed the defined interface; of course, due to the amount of work that caused people, they did not become popular. Yes, Linus's rule against breaking user space has been breached by some historic RTOS microkernels; people very quickly got sick of reinventing their userspace stacks with every major release of those RTOS microkernels.

          Originally posted by mdedetrich View Post
          Linux is the complete opposite of this. Drivers are either in-tree, which means they are completely unstable, and module loading solutions like DKMS only work if the kernel internals haven't changed; or they are in userspace, which isn't even treated seriously. Moreover, Linux kernel developers (including Linus himself) are adamant about keeping things this way; they have stated many times that they don't want any stable ABI because they want to strongly incentivise putting drivers into the tree, specifically because the kernel devs don't want to maintain any kind of ABI and want to be able to refactor the in-tree code as much as they want. This mindset is as monolithic as you can get, and unless this changes, calling Linux anything but a monolithic kernel is just being deceptive.
          The unified Unix driver project failed because those making drivers complained that a stable ABI introduced too much overhead; they wanted to use the unstable ABI instead. The Red Hat "Kernel Application Binary Interface" is the remains of the old unified Unix driver project. The DKMS issue is partly that distributions will not agree to make kernels compatible with each other, because if they did, benchmark differences between Linux distributions would shrink. The unified Unix driver project of old would have seen the same binary kernel-mode driver working with Linux, FreeBSD and Solaris, with possibly more in the future if hardware vendors had come on board. Linus did in fact mainline the unified Unix driver thing in the past, then removed it due to lack of adoption after 8 years (yes, 1998 to 2006).

          This is drawing a long bow. Microkernels exist whose makers don't maintain a stable ABI for anything.

          The stable-api-nonsense document in the Linux kernel basically contains all the same reasons given by those who make microkernels without a stable ABI. This is not a monolithic/microkernel difference. This is a development/performance focus difference.

          This comes as kind of a horrible surprise to a lot of people: a large amount of the mindset mdedetrich thinks is monolithic is in fact a performance-focused mindset that also infects different RTOS microkernels out there. Yes, a "performance at any price" mindset, which includes not having a stable ABI promise in at least some areas, and requiring you to mainline parts if you want them to work. With some of those microkernel RTOSes you will be mainlining alterations to their C libraries and other userspace ring 3 stuff if you want to be able to use those alterations in the future. These microkernels don't have a stable kernel ABI; instead you have to hope there is a stable library ABI for what you want to do. This is like Windows, where the syscall assignments between userspace and kernel space are unstable, so you have to use libraries to interface with everything.

          Lots of problems come from the "performance at any price" mindset. Remember it's not just the Linux kernel developers with this problem; the Linux distribution maintainers have this mindset as well. Getting cooperation at any level to have a fully stable driver ABI does not seem to happen, be it from hardware vendors, distribution maintainers or kernel developers. Linux and other open source operating systems have the same problem, including some open source microkernels, when you have the same "performance at any price" mindset. How to be free of it once you have a few thousand developers from a few thousand different companies working on the same project seems to be an impossible ask.

          Originally posted by mdedetrich View Post
          The only thing the Linux kernel has a guarantee on is userspace syscalls, but user vs kernel space isn't a distinction that defines microkernels; it's somewhat orthogonal. Microkernels were created before processors even had the concept of different rings. Having different ring levels is what allows you to enforce a specific type of isolation at the CPU level, but they are not necessary for microkernels.
          Yes, the Linux kernel having a guarantee on userspace is way better than some of those microkernel RTOS solutions, which promise nothing in ABI stability.

          There are many things people presume are microkernel features, like isolation and a stable ABI across versions for drivers/userspace, that are in fact optional features of a microkernel.

          The kernel space/userspace divide is partly a protected-memory thing. You still find people messing around with the old "Kernel Mode Linux" patches. Yes, the horrible patches where what would be userspace Linux programs run in ring 0.



          • Originally posted by xfcemint View Post

            That is quite interesting.

            Well, from my point of view you can still kind-of refactor a microkernel+services, like this: You rip out half of the services and replace them with refactored ones. As long as the interface to all other services and programs doesn't change, it works.
            Oh definitely, in the modern day the argument that you can't refactor/work on code unless it's all in a single monolith is largely absurd. There is definitely extra effort in having to define an interface (making a well defined interface that needs to last isn't easy, and it's an art/skill in itself), but it's a skill that people today are very good at, precisely because of the general move away from monolithic designs. You can even take that argument further: the act of creating an interface forces you to think about the problem space and reveals potential issues, rather than just iteratively hacking away on things while missing the forest for the trees.

            This on top of the fact that Linus has a hatred of microkernels (and especially GNU Hurd); a lot of the arguments being put forward are, technically speaking, largely baseless and only exist for cultural and historical reasons, and increasingly that style of thought is being seen as an ancient relic. While there are arguments that if you were making a general purpose kernel nowadays a microkernel would be too extreme, you would really be scraping the bottom of the barrel to argue that a monolithic/Linux-like kernel is good design. I mean, the modern Linux desktop is having to deal with these design limitations: if you buy a brand new graphics card you have to run the latest Linux kernel version (which depending on your distribution/circumstances may not even be ideal or even possible), purely because Linux refuses to have a stable graphics ABI. Otherwise you would be able to install a graphics driver just like any package (regardless of whether the driver is open source or not) on any Linux kernel version as long as it's somewhat modern (remember that Windows has had the same stable graphics ABI since Vista days).

            This is why the only Linux desktop devices that are successful with end users in the non-server space are, to a certain degree, locked-down devices (Steam Deck/Android) and not general purpose PCs.
            Last edited by mdedetrich; 11 November 2022, 12:07 PM.



            • Originally posted by xfcemint View Post
              Microkernels... I can't wait for the day when I'll have one on my desktop. I'll open a champagne and celebrate all night. I'll finally have the entire software on my computer under my control. I'll be able to choose from a wide variety of services, where each one is individually replaceable, and no bad apples can cause damage to the stability, security or reliability of my computer.
              Hmmm, none of that is necessarily true though. Interfaces between userspace services may break (specifically protocols), both the microkernel and the services may be closed source and thus not under your control, etc. Tight coupling of userspace is not only possible, but one of the main criticisms of systemd.



              • Originally posted by mdedetrich View Post
                This on top of the fact that Linus has a hatred of microkernels (and especially GNU Hurd); a lot of the arguments being put forward are, technically speaking, largely baseless and only exist for cultural and historical reasons, and increasingly that style of thought is being seen as an ancient relic. While there are arguments that if you were making a general purpose kernel nowadays a microkernel would be too extreme, you would really be scraping the bottom of the barrel to argue that a monolithic/Linux-like kernel is good design. I mean, the modern Linux desktop is having to deal with these design limitations: if you buy a brand new graphics card you have to run the latest Linux kernel version (which depending on your distribution/circumstances may not even be ideal or even possible), purely because Linux refuses to have a stable graphics ABI. Otherwise you would be able to install a graphics driver just like any package (regardless of whether the driver is open source or not) on any Linux kernel version as long as it's somewhat modern (remember that Windows has had the same stable graphics ABI since Vista days).
                No, you are guilty of falsehoods as well. The Vista graphics ABI is not the same as the Windows 11 graphics ABI; there have been quite a few changes in between.

                Linux matches one of the key features that make a Vista driver appear to work on Windows 11, but there are 3 key features in total that make it work:
                1) Windows kernel modules have version details (the MODVERSIONS feature).
                2) Windows, instead of failing as MODVERSIONS does with Linux, can apply an abstraction layer to the driver's calls; this does result in older drivers having lower performance than newer drivers.
                3) Microsoft is able to define which compiler(s) developers are allowed to use to make drivers, and this is a factor.

                Notice these points have nothing to do with being a microkernel.

                You started off by saying it's extra effort to have a defined interface; you have missed what that in fact requires.

                In the "Binary Kernel Interface" section of stable-api-nonsense:
                Depending on the version of the C compiler you use, different kernel
                data structures will contain different alignment of structures, and
                possibly include different functions in different ways (putting
                functions inline or not.) The individual function organization
                isn't that important, but the different data structure padding is
                very important.
                This is the first point, and it is no error that it is. The problem of multiple compiler versions building the OS leading to crashes and instability, documented here for Linux, also raises its horrible head with microkernels that use shared memory between driver parts, like QNX. Yes, some of the cases of Microsoft updates resulting in some users' systems not booting have also been traced back to this same issue, even within Microsoft's restricted list of allowed compilers.

                The MODVERSIONS feature Linux already has, so we can class that as even with Windows.
                As for the abstraction layer solution: if you read on through stable-api-nonsense, notice the bit about new drivers using old USB interfaces that don't work right. This happens under Windows, and happens with all classes of drivers, so the abstraction layer would need to be done better. But the compiler point is an absolute killer: without solving the compiler bit you will have instability.

                Microsoft's Singularity research project with a managed OS was an attempt by Microsoft to fix this problem before they started doing driver certification (where they can reject drivers built with the wrong compiler). Basically bytecode abstraction. One thing about BPF bytecode and managed-OS bytecode drivers is that this route is a solution to compiler mismatch between the kernel and the drivers. It does come at a price: in current managed-OS and BPF designs the driver has a compile cost at init time.

                mdedetrich, the next option is to get the distributions building Linux kernels to use a restricted list of compilers, so that the abstraction layer does not need to be ultra complex. This is one of the herding-cats problems. Distributions want more performance than their competitors in benchmarks, so they will want to use non-approved compilers. Being open source, where the distributions build the drivers and kernel themselves, upstream kernel.org developers cannot control these actions in any way. Microsoft has control, so they can pull this off.

                mdedetrich, you might say stuff it, just put the code in userspace fully abstracted, since Linux kernel userspace code is meant to be compiler neutral. FUSE and CUSE and BUSE and uio and others have all been provided over the years. You get constant complaints about their performance overhead vs in-kernel code.


                Yes, when you start solving the performance problems of fuse/cuse/buse..., other problems start turning up. Yes, the QNX problem is now on Linux because of io_uring. Very quickly you can end up back at "the kernel used compiler X, the userspace application used compiler Y, and the system dies". Or "we cannot do X because not all compilers support it". Like how Linux kernel syscalls pass no 128-bit values between kernel space and user space, not because the hardware does not support 128-bit operations but because LLVM and GCC implemented them differently. Yes, BPF is able to use 128-bit operations, because its generated native code matches whatever the compiler that built the kernel did.

                A stable driver ABI has many problems. These problems exist whether your OS is a microkernel or a monolith. When these problems don't appear to exist, you have normally not looked closely enough to see the mitigations; like you missed Microsoft restricting the compilers used to make drivers. stable-api-nonsense is not written in a monolithic-unique way: the problems it details apply to all OSes, and where there is an appearance of a stable API/ABI for drivers, there have to be mitigations for those problems. The issue with Linux is that many of the mitigation options, like completely restricting compiler versions, are not open to Linux kernel developers. The same applies to many different open source microkernels.



                • Originally posted by xfcemint View Post
                  It won't be as good and as fast as a microkernel from scratch, but it would be an improvement in the right direction.
                  Not right away, but once that's established and has ample driver support you can start stripping stuff to gradually make it a microkernel proper. I think that should make it at least as fast as a microkernel from scratch at some point.



                  • Originally posted by xfcemint View Post
                    Another thing, perhaps related to what oiaohm said (he is sometimes right). Linux can move towards the microkernel "philosophy". It just needs to define an ABI for userspace drivers; add some additional, more microkernel-alike syscalls; add a resource-counting mechanism for userspace drivers; add a privilege system (capabilities) for userspace drivers; add a versioning mechanism (like APT) that can figure out which services are mutually compatible.

                    What do you get: a Linux-compatible microkernel-alike OS. It won't be as good and as fast as a microkernel from scratch, but it would be an improvement in the right direction. The first step: a decision has to be made by Linus and his close associates to start moving towards a microkernel philosophy.

                    You need to read this one, then read the QNX page I quoted. Yes, the io_uring problem that turned up is the same problem that affects QNX. The microkernel philosophy is not without major hazards. As you attempt to remove the cost of context switching to gain performance in a microkernel, you start moving structures around, and you end up depending on the compilers you are using behaving the same. Yes, the driver and the kernel might be at two different ring levels, but sharing memory space always brings a fixed set of issues.

                    This is the problem: a microkernel is not a silver bullet that magically fixes lots of these problems. The majority of what is listed in the Linux kernel's stable-api-nonsense documentation is universal; it does not care whether your OS is a microkernel or a monolithic design. A stable ABI/API is its own unique problem that, surprise surprise, is 99.99% independent of the choice of OS kernel design. There are really only minor differences in the solution requirements between a microkernel and a monolithic OS:

                    1) Locking/limiting compiler choice by some means, or the complete thing becomes too complex.
                    2) Embedded versioning.
                    3) Abstraction to allow old versions of the interface on new ones; this has a performance cost, and if not done right it will cause developers making new drivers not to move from a deprecated, possibly security-flawed API/ABI to the new one.
                    4) Trust, if you want performance, that stuff does not break the rules and make the system unstable.

                    Linux kernel developers really only have one option for 1, because the Linux kernel is open source and anyone taking the source code can use whatever compiler they like; that one option is BPF.
                    For 2, embedded versioning does exist; some kernels just decide to be built without it.
                    The Linux kernel is missing 3 for binary and source kernel modules, but BPF is getting it these days.
                    Linux kernel users want stability/security, so they are not that trusting, so you don't have 4.

                    None of those 4 points has anything to do with an OS being microkernel or monolithic, and these are the major points. This is the problem: people hold up microkernels as a silver bullet for not having a stable ABI, and the reality is it's not a silver bullet at all. A stable ABI sounds simple, but it is hellishly complex to do.



                    • Originally posted by oiaohm View Post

                      https://www.graplsecurity.com/post/i...e-linux-kernel
                      You need to read this one, then read the QNX page I quoted. Yes, the io_uring problem that turned up is the same problem that affects QNX. The microkernel philosophy is not without major hazards. As you attempt to remove the cost of context switching to gain performance in a microkernel, you start moving structures around, and you end up depending on the compilers you are using behaving the same. Yes, the driver and the kernel might be at two different ring levels, but sharing memory space always brings a fixed set of issues.

                      This is the problem: a microkernel is not a silver bullet that magically fixes lots of these problems. The majority of what is listed in the Linux kernel's stable-api-nonsense documentation is universal; it does not care whether your OS is a microkernel or a monolithic design. A stable ABI/API is its own unique problem that, surprise surprise, is 99.99% independent of the choice of OS kernel design. There are really only minor differences in the solution requirements between a microkernel and a monolithic OS:

                      1) Locking/limiting compiler choice by some means, or the complete thing becomes too complex.
                      2) Embedded versioning.
                      3) Abstraction to allow old versions of the interface on new ones; this has a performance cost, and if not done right it will cause developers making new drivers not to move from a deprecated, possibly security-flawed API/ABI to the new one.
                      4) Trust, if you want performance, that stuff does not break the rules and make the system unstable.

                      Linux kernel developers really only have one option for 1, because the Linux kernel is open source and anyone taking the source code can use whatever compiler they like; that one option is BPF.
                      For 2, embedded versioning does exist; some kernels just decide to be built without it.
                      The Linux kernel is missing 3 for binary and source kernel modules, but BPF is getting it these days.
                      Linux kernel users want stability/security, so they are not that trusting, so you don't have 4.

                      None of those 4 points has anything to do with an OS being microkernel or monolithic, and these are the major points. This is the problem: people hold up microkernels as a silver bullet for not having a stable ABI, and the reality is it's not a silver bullet at all. A stable ABI sounds simple, but it is hellishly complex to do.
                      Ah yes, we are in a world where coordinating with hundreds if not thousands of developers is easier than some ABI drama, which exists for Windows likewise (XP -> Vista -> 8 -> 10). The whole mess with NVIDIA and AMD graphics drivers was evidence enough for me that Linux needs to get closer to a microkernel. Maybe not completely, but set some rules on what goes into the kernel: for example the VESA framebuffer, USB keyboard/mouse support, the very basic essentials to get a basic working terminal. Anything too complex and not based on standards should be yeeted out. Also, these drivers need their own ring.
                      Last edited by cj.wijtmans; 11 November 2022, 08:28 PM.

