The Linux Kernel Has Been Forcing Different Behavior For Processes Starting With "X"


  • Originally posted by sinepgib View Post
    Note this is a bit of a false dichotomy. You can very well have your big monorepo with as many drivers as you want bundled within the tree, all the while keeping them running as separate processes talking via some efficient IPC.
    Besides, porting efforts, once the driver is done for a platform, should be the same or less for a microkernel, as it allows for fewer ad-hoc solutions. If your IPC is platform neutral then your driver will be as well (except for the hardware and bus it uses of course).
    No, it's not a false dichotomy. You have to remember that MINIX and many other open source microkernels are older than Linux. In theory you could have a big monolithic repo of drivers paired with a microkernel, but this has never happened.

    Microkernels have all suffered the same fate, and it has restricted their uptake. What reason does a hardware vendor have to release their driver source code if binary drivers keep working, compared to a world where binary drivers break? That's a serious question.

    The monolithic nature of Linux, where drivers have to be mainlined, is what has driven Linux's massive platform support. Yes, it would be nice at times if the Linux kernel had a stable ABI for drivers. But we have to remember Linux is what it is today because it does not.

    Not having a stable driver ABI has downsides, but it has the upside that vendors end up upstreaming to reduce the cost of dealing with driver breakage caused by upstream changes.

    So for a microkernel to grow the way the Linux kernel has, it would have to have an unstable driver ABI and gain market share. The intentionally unstable ABI would force vendors either to upstream their drivers or to stay off the current version and suffer the security nightmares and legal risks of running an out-of-date kernel.

    Even being a slightly insecure design happens to put more pressure on vendors to upstream drivers.

    Remember, sinepgib, there is a long history of OS kernels that were great in theory but never grew a repo of drivers covering a large number of platforms. The Linux kernel is an outlier, very strange in the amount of platform support it has. The interesting point is that what is unique about Linux, versus all the others that never grew, is the lack of a stable kernel driver ABI.

    sinepgib, this is not a false dichotomy no matter how many times people try that line. The Linux kernel has developed its huge platform support on a core fact: not having a stable driver ABI, which forces code to be mainlined.

    This is why it is harder than it first seems. There is an upside to the Linux kernel's monolithic nature, and you don't want to throw the baby out with the bathwater. The baby here is the massive platform support Linux has achieved, which I am sure we don't want to lose.



    • Originally posted by xfcemint View Post
      Second, you have to consider the very definition of an OS, which is: no one knows what an OS is. The issue is in boundaries. Where does an OS end, and where do applications begin? Where do you draw the line? In a monolithic kernel, you are forced to draw the line somewhere, and then the issue of stable/unstable ABIs suddenly appears as a consequence (surprise surprise).
      To add to this, it's pretty much the (somewhat accepted) systemd argument: it's not an application you're expected to mix and match, but just a userspace portion of the OS. Of course, that's a forever open debate, but it means it's not just something you came up with.

      Originally posted by xfcemint View Post
      ​In a microkernel, a hardware vendor doesn't have to release the source code for a driver. That has some benefits and some drawbacks. But your conclusion, that this is what makes Linux successful, is far-fetched. I think it is OK to give hardware manufacturers a choice of open-source vs. closed source. I certainly wouldn't want to force them to open-source the schematics of their hardware, so I don't see why the drivers can't also be closed source.
      Considering the main argument for having in-tree drivers tends to be that boundary of debugging responsibility, the fact that drivers become just another userspace application makes that boundary much clearer: if IPC fails, maybe the kernel is faulty, but for everything else you complain to the driver maker, so the issue kernel maintainers seem to have pretty much vanishes. They don't need to inspect any driver source code to find the bug, because the bug is no longer (suspected to be) their responsibility.

      Originally posted by xfcemint View Post
      ​As to the "fate" of other microkernels and OS-es, I argue that the current "sample of microkernels" is not representative of what we are discussing, so no conclusions can be made.
      Yeah. While MINIX3 intends to be a general purpose OS, in practice it's still just a research ground for academic theses, and that makes it a bit incompatible with being general purpose, so nobody other than hobbyists and academics will really bother implementing hardware support. Its failure is not because it is a microkernel but because it tries to be two incompatible things.



      • Originally posted by sinepgib View Post
        Possibly a stupid question, but would mitigations for side-channel attacks change the performance tradeoff between monolithic and microkernels?
        I think either it makes it much worse (if still necessary, more syscalls will increase the performance hit), which in turn would obsolete many of the claims that "it's not that bad anymore" compared to monolithic, or maybe makes them unnecessary due to reduced sharing and might actually end up performing better than the monolithic kernel with mitigations enabled?
        I would guess that hasn't been measured but that it would have been analyzed theoretically in some paper, but I have no idea how to look for it.
        If anything, the gap would widen due to all of the process boundary transitions done by microkernels.
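
        Neither poster benchmarked this, but a rough way to put a number on that per-crossing cost is a tiny micro-benchmark like the sketch below (my own illustration): it times nothing but the user/kernel round trip via a trivial raw syscall, so running it with and without mitigations (for example, booting with mitigations=off) gives a feel for what every extra IPC hop would pay.

        #define _DEFAULT_SOURCE
        #include <stdio.h>
        #include <time.h>
        #include <unistd.h>
        #include <sys/syscall.h>

        int main(void)
        {
            const long iterations = 1000000;   /* enough to average out noise */
            struct timespec start, end;

            clock_gettime(CLOCK_MONOTONIC, &start);
            for (long i = 0; i < iterations; i++)
                syscall(SYS_getpid);           /* raw syscall: every iteration really enters the kernel */
            clock_gettime(CLOCK_MONOTONIC, &end);

            double ns = (end.tv_sec - start.tv_sec) * 1e9 + (end.tv_nsec - start.tv_nsec);
            printf("%.1f ns per user/kernel round trip\n", ns / iterations);
            return 0;
        }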



        • Originally posted by xfcemint View Post
          memory copies -> shared memory
          This sounds like doing away with virtual memory protection. Now things sound like a monolithic kernel.

          Originally posted by xfcemint View Post
          context switches -> clustered OS calls
          Like preadv? Or perhaps the more complex syscalls that you have with a monolithic kernel design. Just implementing them would be a departure from a microkernel design, since you often need to have additional information to be able to keep executing and that information is only known to the “server”.
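
          For reference, preadv really is the shape of a "clustered" call: one kernel entry fills several buffers that would otherwise each need their own read(). A minimal sketch (the file path and buffer sizes are just illustrative):

          #define _DEFAULT_SOURCE
          #include <stdio.h>
          #include <fcntl.h>
          #include <unistd.h>
          #include <sys/uio.h>

          int main(void)
          {
              char header[64], body[1024], footer[64];
              struct iovec iov[3] = {
                  { .iov_base = header, .iov_len = sizeof(header) },
                  { .iov_base = body,   .iov_len = sizeof(body)   },
                  { .iov_base = footer, .iov_len = sizeof(footer) },
              };

              int fd = open("/etc/os-release", O_RDONLY);   /* any readable file works for the demo */
              if (fd < 0)
                  return 1;

              /* One crossing into the kernel instead of three read()+lseek() round trips. */
              ssize_t n = preadv(fd, iov, 3, 0);
              printf("filled 3 buffers with %zd bytes in a single syscall\n", n);
              close(fd);
              return 0;
          }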

          Originally posted by xfcemint View Post
          context switches -> automatic or semi-automatic option to integrate services into kernel address space (that's not the same as a hybrid kernel, not even close to it. Why? The difference is that the user has a CHOICE.)
          As far as I know, microkernel designs do not typically support having services optionally inside the kernel. The entire point is to get code outside of the kernel so that the critical code needed for the system to operate is isolated and can be proven bug-free via formal methods, for security and reliability purposes. If you start moving services into a microkernel, you get a monolithic kernel with microkernel heritage, like XNU.

          If you want to do this dynamically at runtime, you need to stop giving processes overlapping virtual address spaces. You would need to keep track of all of the allocations so that virtual memory allocations do not overlap, which becomes a potential serialization point. There are tricks you can do, but they are likely to worsen memory fragmentation inside your unified address space.

          Imagine a hostile process that remains benign while it tries to get promoted into the kernel address space. Then, upon being promoted, it has complete control over your system and can do whatever it wants.

          This feels very much like MSDOS, which is an example of how not to design an OS. MSDOS would certainly allow software to run faster on it than on any modern OS, at the price of system stability and security. Also, the amount of work needed to realize the higher performance is huge.

          Microkernels are meant to be reliable and secure, not fast.

          Originally posted by xfcemint View Post
          microkernel support = tagged/colored caches, only a small part of the cache is flushed on a context switch (or nothing is flushed when repeatedly switching between a small number of contexts).
          This sounds like a speculative execution side channel vulnerability.
          Last edited by ryao; 14 November 2022, 08:51 PM.



          • Originally posted by ryao View Post
            This sounds like doing away with virtual memory protection. Now things sound like a monolithic kernel.
            Sure, because sharing selected pages between userspace processes has equal chances of corrupting everybody's memory as a salad of drivers running on ring 0.



            • Originally posted by sinepgib View Post
              Sure, because sharing selected pages between userspace processes has equal chances of corrupting everybody's memory as a salad of drivers running on ring 0.
              QNX, which has all drivers running as userspace processes, shows that driver failures caused by shared memory between drivers create the same failures as if the driver were ring 0 with full memory access. It is like having a Linux kernel with a totally unrestricted /dev/mem exposed to userspace. So there are real-world examples that once you start using shared memory to gain performance with a microkernel, it is common to completely undermine the theoretical advantages of a microkernel. QNX even covers this in its driver documentation.

              Something to remember: there was a time with microkernels when you would have the kernel at ring 0, userspace at ring 3, and drivers at rings 1 and 2. Of course that kind of design was not great for performance.



              • Originally posted by xfcemint View Post
                First, I have never said that ABIs of a microkernel are stable, or that they should be stable. I have said that the expected or likely outcome is that ABIs/protocols in a microkernel gravitate in time, they just fall into place. Those protocols start as unstable, but end up being mostly stable.

                The reality here, looking at the history of microkernels, is that the more performance you chase, the fuzzier the boundary becomes and the more those problems start appearing.

                Originally posted by xfcemint View Post
                Second, you have to consider the very definition of an OS, which is: no one knows what an OS is. The issue is in boundaries. Where does an OS end, and where do applications begin? Where do you draw the line? In a monolithic kernel, you are forced to draw the line somewhere, and then the issue of stable/unstable ABIs suddenly appears as a consequence (surprise surprise).
                This is correct: you are forced to draw the line somewhere.

                Originally posted by xfcemint View Post
                The way Linux tries to solve the problem of unstable ABIs is very elegant: shift the blame around, so that no one knows who is responsible. Each developer can imagine the boundary of an OS as being somewhere else: privileged/unprivileged code, syscalls, userspace libraries, virtual machines.


                This is a bold-faced lie, xfcemint. The Linux kernel has, over time, strictly documented where the stable ABI is. That even includes documentation that a new kernel must be able to use old firmware files, because those firmware files are classed as userspace; this rule forced Intel to recode one of their drivers.

                Originally posted by xfcemint View Post
                Microkernel solves this problem also very elegantly: the boundary between the OS and applications is fuzzy. No one has to worry where the boundary is, since there is no boundary. It can be at any self-sufficient set of microkernel services. But how do you achieve that? The answer is simple: you extend the package management (like APT/dpkg) so that it covers both the applications and the OS services. You get total flexibility, but what about compatibility? Well, the "gravitation" should mostly keep the compatibility issue in check, or certainly much better than the current situation.
                No boundary is a path to security hell.

                Originally posted by xfcemint View Post
                ​​In a microkernel, a hardware vendor doesn't have to release the source code for a driver. That has some benefits and some drawbacks. But your conclusion, that this is what makes Linux successful, is far-fetched. I think it is OK to give hardware manufacturers a choice of open-source vs. closed source. I certainly wouldn't want to force them to open-source the schematics of their hardware, so I don't see why the drivers can't also be closed source.
                What you just quoted is the logic of FreeBSD, MINIX, and the other OSes that have not grown. The Linux kernel has not got to where it is by being nice. The GPL license has been used to legally crush smaller companies into releasing drivers as open source. When I say crush, I seriously mean crush. What are you going to do if your market is the USA and all your product gets blocked at customs because you used Linux without releasing the source code? That is one way the Linux kernel has grown.

                Another is the unstable driver ABI giving companies massive workloads if they don't upstream drivers. The nasty nature of the Linux environment is key to its growth.

                My idea of what makes Linux successful is not far-fetched. This is more a case of you not seeing how the Linux kernel developed into what it is. Remember, you said you think it is OK to give hardware manufacturers a choice between open-source and closed-source drivers; every OS that has given that choice has ended up with limited architecture support. Even Microsoft runs into the problem that their ARM-powered devices don't support lots of the add-on devices their x86/x86_64 devices do, because vendors have not released new binary drivers.

                The key to why Linux is so popular is the amount of hardware that, by using the Linux kernel, you get drivers for on every platform the Linux kernel supports, including a brand new platform you have just created. This has allowed Linux to be very hostile to closed-source and out-of-mainline drivers. Being anti-closed-source and anti-out-of-mainline increases hardware support, which makes the Linux kernel a more tempting item to use, and that temptation is what gets companies over a barrel into releasing their drivers as open source.

                Just take a close look at the companies providing developers to Linux these days: you will find plenty who said they would never release any source code under GPLv2, yet here they are doing exactly that for the Linux kernel, because the benefit of the mainline monoculture's massive volume of drivers, which Linux's nasty behaviour created, outweighs their corporate hatred of GPLv2 and of open-sourcing anything.



                • Originally posted by xfcemint View Post
                  Technically yes, but in reality, no. Besides, if there was a USEFUL stable Linux ABI, then the desktop applications would be shipping in a binary form, not as source code. If you are correct, then why are there almost no binaries for Linux on desktop?
                  This is the nature of the beast. Flatpak, Steam and so on have shown it is possible to have binary applications on the desktop. The fact that the Linux kernel strictly defines a stable userspace interface is how Flatpak and Steam are able to work in the first place.

                  Again, you are not doing your homework. Define USEFUL: the reality is that the stable Linux kernel userspace ABI is useful to Flatpak, Steam and so on.


                  Originally posted by xfcemint View Post
                  You can also have a standardization committee defining the ABIs when using a microkernel. You are missing the point: when there is a microkernel, quasi-stable ABIs appear on their own, without any need for external coercion. That is the advantage. Therefore, what you are saying here is completely irrelevant.
                  Even with a microkernel, the providers of userspace libraries and so on can have distributions doing exactly the same things as have happened with Linux, resulting in no binary compatibility. The reality is that if you follow the Linux distributions coming into existence, you will find this aligns with Linux growing in market share and into more architectures.

                  The reality here is that quasi-stable ABIs don't appear on their own, even with microkernels; effort is required.

                  Originally posted by xfcemint View Post
                  ​So you say. I see no compelling reason that would confirm this statement of yours. Quite the opposite: compared to a microkernel, a monolithic kernel is a breeding ground for security bugs. Or, at least, this is a widely accepted opinion based on sound analyses of the problem.
                  To be correct, you missed something very important and have generalized something very incorrectly. You need to look closer at those sound analyses: all of them say a "verified microkernel" has some advantages over a monolithic kernel, not that microkernels in general have an advantage over monolithic ones. Unverified microkernel designs have turned out to be as bad as monolithic ones. The point is that something the size of a microkernel is what we can verify with current technology.

                  Even with something like seL4 ( https://sel4.systems/Info/FAQ/proof.pml ), the formal proof that the kernel is correct does not apply to a massive number of architectures.

                  xfcemint, it is an incorrect statement that microkernels are automatically better than monolithic kernels. A monolithic kernel has a verification problem due to its size, but microkernels used for secure deployments also run into the verification nightmare: a microkernel with non-verified userspace drivers that use direct memory access for speed very quickly turns out to be just as insecure as the monolithic Linux kernel.

                  A verified microkernel can simply move the security faults into the non-verified userspace drivers, which happen to have far too much memory access, and they have that access out of performance needs.

                  The platform support Linux has is a true double-edged sword here as well.

                  As for the widely accepted opinion: when people make these claims, in a lot of cases they have only skimmed the material and missed something important, namely that verified microkernels are a very restricted subset of microkernels and that our current-day verification systems only support a handful of architectures. For the problem space the Linux kernel exists in, there is no existing solution.



                  • Originally posted by oiaohm View Post
                    To be correct, you missed something very important and have generalized something very incorrectly. You need to look closer at those sound analyses: all of them say a "verified microkernel" has some advantages over a monolithic kernel, not that microkernels in general have an advantage over monolithic ones. Unverified microkernel designs have turned out to be as bad as monolithic ones. The point is that something the size of a microkernel is what we can verify with current technology.
                    I don't know, this sounds like academics defining usefulness on theoretical grounds, just like they tend to consider big-O complexity the only predictor of performance, which is often wrong in the real world. Isolation between processes has been proven empirically to be a major improvement to security and stability, and all* microkernels provide that, formally verified or not.

                    Originally posted by oiaohm View Post
                    xfcemint, it is an incorrect statement that microkernels are automatically better than monolithic kernels. A monolithic kernel has a verification problem due to its size, but microkernels used for secure deployments also run into the verification nightmare: a microkernel with non-verified userspace drivers that use direct memory access for speed very quickly turns out to be just as insecure as the monolithic Linux kernel.
                    You're missing that this shared memory is between user processes, with prior explicit authorization from both processes. It's certainly not the same as a monolithic kernel and in fact carries a reduced chance of kernel corruption, as now there are no reads or writes to memory from userspace in kernel context. Now overflows can't corrupt the kernel, and they can't corrupt the other process either. An overflow simply crashes the writing or reading process, depending on who accesses out of bounds. The splash damage is reduced by an arguably very big margin.
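
                    To make that opt-in sharing concrete, here is a minimal sketch (all names invented for illustration, error handling omitted): the producer creates and maps a named POSIX shared memory object, the consumer deliberately opens the same name and maps it read-only, and neither side can see anything of the other beyond that one region.

                    #define _DEFAULT_SOURCE
                    #include <fcntl.h>
                    #include <stdio.h>
                    #include <string.h>
                    #include <sys/mman.h>
                    #include <sys/wait.h>
                    #include <unistd.h>

                    #define SHM_NAME "/demo_buffer"   /* hypothetical name; link with -lrt on older glibc */
                    #define SHM_SIZE 4096

                    int main(void)
                    {
                        /* Producer: explicitly creates the shared object and maps it writable. */
                        int fd = shm_open(SHM_NAME, O_CREAT | O_RDWR, 0600);
                        ftruncate(fd, SHM_SIZE);
                        char *tx = mmap(NULL, SHM_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
                        strcpy(tx, "hello from the producer");

                        if (fork() == 0) {
                            /* Consumer: opts in by opening the same name and maps it read-only,
                             * so a stray write here would fault the consumer alone. */
                            int cfd = shm_open(SHM_NAME, O_RDONLY, 0);
                            const char *rx = mmap(NULL, SHM_SIZE, PROT_READ, MAP_SHARED, cfd, 0);
                            printf("consumer sees: %s\n", rx);
                            _exit(0);
                        }

                        wait(NULL);
                        shm_unlink(SHM_NAME);          /* tear the shared object down */
                        return 0;
                    }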

                    Originally posted by oiaohm View Post
                    For the problem space the Linux kernel exists in, there is no existing solution.
                    On that part we agree. The only viable way forward to a mass-use microkernel is to gradually extract portions of Linux (at first optionally) to userspace until it can be shrunk. You won't get a replacement written from scratch anytime soon, especially due to the lack of commercial incentives, which brings a chicken-and-egg problem.

                    Originally posted by xfcemint View Post
                    Technically yes, but in reality, no. Besides, if there was a USEFUL stable Linux ABI, then the desktop applications would be shipping in a binary form, not as source code. If you are correct, then why are there almost no binaries for Linux on desktop?
                    AFAICT that's more a matter of there being no stable userspace ABI (nor API, really) than the kernel being at issue. The syscalls (which are the main boundary with userspace programs) are actually rather stable. The thing is, that's worthless if your libc and especially your GUI toolkits break compatibility at every release.
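
                    As a small illustration of where that stability actually lives (my sketch, not part of the post): the program below talks to the kernel through the raw syscall boundary, with no libc convenience wrapper or toolkit in between; write() has kept its semantics essentially forever, while the userspace layers above it are the ones that keep breaking.

                    #define _DEFAULT_SOURCE
                    #include <string.h>
                    #include <unistd.h>
                    #include <sys/syscall.h>

                    int main(void)
                    {
                        const char msg[] = "hello across the stable user/kernel boundary\n";

                        /* Just the kernel's own contract, which "we do not break userspace"
                         * is actually about; libc and GUI toolkits sit above this line. */
                        syscall(SYS_write, STDOUT_FILENO, msg, sizeof(msg) - 1);
                        return 0;
                    }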



                    • Originally posted by xfcemint View Post
                      The key questions are (regarding a stable userspace ABI):
                      1. What is a "useful ABI"? Which applications do we want to work as binaries out-of-the-box (games/OpenGL? terminal/shell? browser/network? desktop windows?)
                      2. What is the current situation? Also, will the situation improve in the future, or will it get worse?
                      3. Do microkernels offer any expected advantages here, or not?
                      1. As much as possible, I would say. Games/OpenGL and proprietary drivers are probably the most important, in the sense that those are the ones for which there's a very slim (or no) chance of finding an equivalent replacement. Games because what matters is the content: you can't replace Fallout with Xonotic regardless of how great the engine is, because the content is IP you can't simply take and copy; and nouveau isn't really an appropriate replacement for the nvidia driver at a functional level due to signed firmware and whatnot. For much of the other software you may get a free implementation that could be as good as the proprietary one (even if only theoretically).
                      2. The current situation is a mess. Whether they will improve or get worse depends on the decisions taken later on, which leads to:
                      3. Microkernels are likely to improve things on the driver compatibility side, especially since you won't be able to use them unless you comply with a protocol for that kind of driver, but not really in any other case: it's still a lack of stability that brings the problems for current userspace programs, and that can and probably will still happen whether the userspace libraries target a monolithic kernel or a microkernel. The net positive is that drivers are likely to improve while userspace libraries are likely to remain as problematic as they are today, but not worse.

                      The bottom line is that for userspace applications our current problem is not kernel design but the management of userspace libraries. GTK is free to break every release whether you have a microkernel, a monolithic kernel, or even a unikernel below it, and the same is true for the libc implementation.

                      Originally posted by xfcemint View Post
                      I think that, even without debating those fine points, there are sufficient other advantages of microkernels to make them very desirable.
                      I certainly agree.

