The Linux Kernel Has Been Forcing Different Behavior For Processes Starting With "X"


  • oiaohm
    replied
    Originally posted by xfcemint View Post
    His argument relies on an endless stream of specific issues (what I can't figure out is how he knows about them all). Every human-designed system will have specific issues. Every system that has ever been designed by humans has failed, or it is expected to fail.
    There is a reason why I know these specific issues: lots of time working with OSes in embedded usage.

    The catch is that there is a repeating set of issues that keeps turning up.

    To be precise, there is a direct conflict:

    1) Core drivers need physical memory access.
    2) Non-driver applications never need physical memory access; virtual memory, with whatever security controls you like, will do.

    Do note that sinepgib said /dev/mem is a really bad idea. Having all drivers in userspace means you have to re-implement this really bad idea of /dev/mem, hopefully better.

    QNX, Samsung in 2012 with Linux, and many others end up exposing to userspace physical memory access that can basically reach anything.
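    To make that concrete, here is a minimal sketch of what a /dev/mem-based userspace driver boils down to. This is an illustration, not anyone's real driver: it assumes /dev/mem is enabled and the process is privileged, and the physical address 0xfebf0000 is a made-up example of a device's MMIO region.

    /* Hypothetical userspace "driver" built on /dev/mem. */
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/dev/mem", O_RDWR | O_SYNC);
        if (fd < 0) { perror("open /dev/mem"); return 1; }

        /* Nothing below ties this mapping to one device: the same call can
         * just as easily map kernel memory or another process's pages. */
        volatile uint32_t *regs = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                                       MAP_SHARED, fd, 0xfebf0000);
        if (regs == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

        printf("device register 0: 0x%08x\n", regs[0]);

        munmap((void *)regs, 4096);
        close(fd);
        return 0;
    }

    Once a process can run that open()/mmap() pair on arbitrary offsets, the kernel/userspace separation for that process is gone. That is the whole problem in two calls.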

    The monolithic split puts drivers in ring 0 and userspace applications in ring 3. In a correctly set up monolithic kernel you don't end up randomly giving a userspace application direct physical memory access, as that is restricted to code in kernel space (ring 0).

    Historic microkernels had the core kernel in ring 0, drivers in ring 1, services in ring 2 and userspace in ring 3. Context switching between all those rings was highly expensive, but it did reduce the memory assignment problem.

    Windows NT was meant to be a microkernel, but where are its drivers and driver-related services these days? In ring 0.

    Something you missed: I have given a list of specific issues from different solutions that are in fact a single problem that keeps turning up with userspace drivers. There are practical problems you have to get over when you write drivers.

    Remember a driver needs to communicate with both the hardware and userspace. Let's say the userspace driver has raw physical memory access, because it needs that to work. That access is mapped into its address space, and the driver is also going to be sharing memory with applications in userspace. How close are you to screwing up at this point?
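    Here is a hedged sketch of exactly that situation: a hypothetical userspace driver that holds both a raw physical mapping (via /dev/mem) and a POSIX shared memory buffer used to exchange data with a client application. The name "/drv-ring" and the device address are invented for the example.

    #include <fcntl.h>
    #include <stdint.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define DEV_PHYS  0xfebf0000UL   /* made-up device MMIO address */
    #define RING_SIZE 4096

    int main(void)
    {
        int memfd = open("/dev/mem", O_RDWR | O_SYNC);
        int shmfd = shm_open("/drv-ring", O_CREAT | O_RDWR, 0660);
        if (memfd < 0 || shmfd < 0 || ftruncate(shmfd, RING_SIZE) < 0)
            return 1;

        volatile uint8_t *mmio = mmap(NULL, RING_SIZE, PROT_READ | PROT_WRITE,
                                      MAP_SHARED, memfd, DEV_PHYS);
        uint8_t *ring = mmap(NULL, RING_SIZE, PROT_READ | PROT_WRITE,
                             MAP_SHARED, shmfd, 0);
        if (mmio == MAP_FAILED || ring == MAP_FAILED)
            return 1;

        /* From here on both regions are just pointers. A length or offset
         * taken from the (untrusted) client and applied to the wrong mapping
         * silently turns a protocol bug into arbitrary physical memory
         * access -- that is the "how close are you to screwing up" point. */
        memcpy(ring, (const void *)mmio, 64);   /* hand device data to the client */

        return 0;
    }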

    The monolithic ring split between "drivers/kernel services" and "userspace applications" happens to make sense on security grounds. The microkernel split of kernel/userspace makes sense for a simpler kernel, but it then bundles drivers and general applications together.

    The reality here is that neither microkernel nor monolithic kernel is 100 percent right.

    The historic secure microkernel design:
    Ring 0 : Kernel
    Ring 1 : Drivers
    Ring 2 : Servers
    Ring 3 : Userspace programs

    Yes, in this historic design ring 3 userspace programs would only interface with servers; ring 2 servers would interface with drivers, userspace and the kernel; and drivers would interface with hardware, the kernel and the servers. The servers were a barrier that kept driver-developer-created issues from leaking into userspace applications.

    Remember every ring change is like a context switch. That overhead is the reason we don't have operating systems designed like this.

    A monolithic kernel takes rings 0, 1 and 2 of the historic secure microkernel design and fuses them all into ring 0. A modern performant microkernel takes rings 1, 2 and 3 and fuses them all into ring 3. Either way you have instantly degraded the security of the historic secure microkernel design.

    xfcemint, get it yet? Microkernels are not all the same thing.

    The 286 and later x86 processors were designed with 4 rings to suit being used for a secure microkernel. The repeating issues with microkernels are caused by solving the microkernel performance problem. Why? Because every time a microkernel developer solves the performance problem they undermine what would have been the natural security of the microkernel design.

    Yes, having drivers and servers for hardware in ring 0, as a monolithic kernel does, has its dangers. The problem is that putting drivers and servers all in userspace and then having to provide them with the access they need to work also has its dangers, and these dangers are just as bad as the monolithic kernel problem, if not worse. This is why, over and over again, real-world monolithic and microkernel examples have ended up just as security flawed as each other.

    There are a lot of papers saying a microkernel can solve X, Y and Z problems, but they brush over the fact that for the microkernel to perform you have undermined security in another way. Those papers also nicely ignore that the split line a monolithic kernel has happens to have a valid security reason to be there.

    Making a microkernel that is high performing and high security at the same time, judging by the microkernels developed so far, seems like an impossible task.
    Last edited by oiaohm; 16 November 2022, 07:08 AM.



  • oiaohm
    replied
    Originally posted by sinepgib View Post
    /dev/mem pretty much shits over everything. I'm assuming it's out of the question that such device node shouldn't even exist, whether we talk of a microkernel or a monolithic one. It's the dumbest hack that was ever invented and AFAIK distros tend to disable it nowadays. But because of that I refuse to even take into account that scenario in any hypothetical microkernel design, as its mere existence just makes the system a whole unikernel in disguise.

    http://forum.xda-developers.com/showthread.php?t=2057818

    This is from 2012: here is Samsung reimplementing /dev/mem under another name on Linux because it had been disabled. Their userspace driver developers did not want to go through the process of authorizing memory access correctly.

    Please note we don't need "hypothetical microkernels" to see the problem of implementing a /dev/mem-like feature. QNX implements a /dev/mem equivalent for drivers, and it is in the documentation. A lot of the systems people hold up as reasons why you should use a microkernel are, by your definition sinepgib, just a unikernel in disguise.

    Originally posted by sinepgib View Post
    Not in general, but the discussion is not whether we just use any mechanism, but what mechanisms exist. You can share memory and you can make the OS check for the right authorization before mapping just any page.

    Also remember that the Linux kernel in ring 0 is itself doing a lot of memory checks on what can access what: https://www.kernel.org/doc/html/late...rotection.html
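    As a small probe of one of those checks, assuming a kernel built with CONFIG_STRICT_DEVMEM: mapping ordinary RAM through /dev/mem should be refused (typically with EPERM), while real device MMIO ranges may still be allowed. The 1 MiB offset is just a convenient address that is normally system RAM.

    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/dev/mem", O_RDONLY);
        if (fd < 0) { perror("open /dev/mem"); return 1; }

        /* Offset 0x100000 (1 MiB) is normally system RAM, not device memory. */
        void *p = mmap(NULL, 4096, PROT_READ, MAP_SHARED, fd, 0x100000);
        if (p == MAP_FAILED)
            printf("mmap of RAM rejected: %s\n", strerror(errno));
        else
            printf("mapped RAM through /dev/mem (STRICT_DEVMEM likely off)\n");

        close(fd);
        return 0;
    }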

    One of the realities here is that checking authorization to access X memory can be performed in ring 0. That itself opens up another question.

    Originally posted by sinepgib View Post
    You can verify that insane design too, so I'd say we should separate the concept of design (specification) and its verification. If your specs are shit no formal verification will fix them, they will just prove it's shit just as designed.
    I will agree with this: you can verify an insane design. But to perform verification you have to have a design specification. With something like seL4, a formally verified microkernel, the design has had to go through formal verification as well, and that forbids a lot of stupidity. A formally verified design is not going to allow unrestricted /dev/mem, because that is a bypass of the security framework.

    There are thousands of Windows drivers where, if you look closely, you find the developer has in fact implemented /dev/mem yet again. This is a wheel that keeps on being reinvented.

    This is one of the problems with closed source drivers: no proper peer review and no proper form of verification that they are not doing something completely stupid.

    sinepgib, you want to take the ideal version of a microkernel. The big catch here is that there are tons of examples of microkernels with the non-ideal outcome, where some form of /dev/mem results in the security of the microkernel not really existing, and yes, these are some of the most used microkernels. Then you have multiple examples of different vendors reimplementing /dev/mem in their non-mainlined Linux drivers, and multiple examples of different developers implementing a /dev/mem equivalent under Windows as well.

    The Linux kernel's monolithic core model is not exactly the safest option. But many of the systems people call microkernels are really no better.

    There is also another question: is the microkernel kernel-space/userspace split even the right model? Think about virtualization and NUMA. Intel developers did experiment with Linux running a hypervisor above the Linux kernel's ring 0 to provide extra protective primitives to the core Linux kernel; of course this change would not alter the Linux kernel's unstable driver ABI.

    Remember, with the speculative execution faults, core assignment becomes important. So using NUMA and hypervisor restrictions around drivers, so that drivers run at ring 0 instead of userspace ring 3, is another option. Of course the driver ABI in this case would not have to be the Linux kernel's stable ABI to userspace. In theory this could provide all the protections the microkernel kernel/userspace split gives around drivers, and more. This would have drivers wrapped in one unique set of protections and userspace code wrapped in a different set of protections.

    sinepgib, here is the big question that could completely nuke the common microkernel idea. Given that QNX and others end up implementing a /dev/mem equivalent for drivers to allow direct hardware memory access with low performance overhead: should drivers exist in their own unique area, with their own unique ABI/API and their own unique protections, different from userspace applications?

    The reality here is that a microkernel may not be the correct fit.



  • sinepgib
    replied
    Originally posted by oiaohm View Post
    These UMS microkernel drivers mandated full /dev/mem access.
    /dev/mem pretty much shits over everything. I'm assuming it's out of the question that such device node shouldn't even exist, whether we talk of a microkernel or a monolithic one. It's the dumbest hack that was ever invented and AFAIK distros tend to disable it nowadays. But because of that I refuse to even take into account that scenario in any hypothetical microkernel design, as its mere existence just makes the system a whole unikernel in disguise.

    Originally posted by oiaohm View Post
    Sharing memory between processes does not need prior explicit authorization from both processes; this is true under monolithic kernels and microkernels, it is just a question of authorization. Think of debugging: in lots of OSes only the debugging process needs authorization to access another process's memory. What is authorized becomes very important.
    Not in general, but the discussion is not whether we just use any mechanism, but what mechanisms exist. You can share memory and you can make the OS check for the right authorization before mapping just any page.
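    One concrete shape that mechanism can take is plain POSIX shared memory, where the kernel applies the usual file permission check at shm_open() time, so a page is only ever shared with processes the owner has allowed. This is only a sketch of one possible mechanism; the name "/svc-buffer" is illustrative.

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        /* Mode 0600: only the owning user may open and map this object. */
        int fd = shm_open("/svc-buffer", O_CREAT | O_EXCL | O_RDWR, 0600);
        if (fd < 0) { perror("shm_open"); return 1; }
        if (ftruncate(fd, 4096) < 0) { perror("ftruncate"); return 1; }

        char *buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (buf == MAP_FAILED) { perror("mmap"); return 1; }

        /* A peer that fails the permission check gets EACCES from shm_open()
         * and never sees the mapping -- unlike the /dev/mem case. */
        buf[0] = 1;
        shm_unlink("/svc-buffer");
        return 0;
    }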

    Originally posted by oiaohm View Post
    This is why it is absolutely critical to have a verified microkernel design, not just a random microkernel design. Verification checks that the authorization design is sane and functional. The X11 server UMS drivers are an example of authorization that is neither sane nor functional.
    You can verify that insane design too, so I'd say we should separate the concept of design (specification) and its verification. If your specs are shit no formal verification will fix them, they will just prove it's shit just as designed.



  • oiaohm
    replied
    Originally posted by sinepgib View Post
    I don't know, this sounds just like academics defined the usefulness in theoretical grounds, just like they tend to consider big-O complexity as the only predictor of performance, which is often wrong in the real world. Isolation between processes has been proved empirically to be a major improvement to security and stability, and all* microkernels provide that, formally verified or not.
    There is a lot of presumption here that turns out not to be backed up by real-world microkernels.

    Originally posted by sinepgib View Post
    You're missing that this shared memory is between user processes with prior explicit authorization from both processes. It's certainly not the same as a monolithic kernel and carries in fact a reduced chance of kernel corruption, as now there are no reads or writes to memory from userspace in kernel context. Now overflows can't corrupt the kernel, and they can't corrupt the other process either. An overflow simply crashes the writing or reading process, depending who accesses out of bounds. The splash damage is reduced by an arguably very big margin.

    Let's take one of the worst real-world examples, one that was massively widespread: X11 server user mode setting (UMS) drivers. Yes, these are microkernel-style drivers; UMS was in fact first designed for a microkernel Unix, not monolithic Linux. Now what is the fatal problem here? These UMS microkernel drivers mandated full /dev/mem access under Linux and every other platform they were used on, because their design mandated full physical memory access for the userspace drivers. Think about it: you have just authorized a userspace process to have full system-wide memory access, so there is no separation between kernel and userspace, or userspace to userspace, any more. People think the X11 server running as root was the worst problem; the worst problem was that the UMS drivers had full run of the complete OS memory.

    Sharing memory between processes does not need prior explicit authorization from both processes; this is true under monolithic kernels and microkernels, it is just a question of authorization. Think of debugging: in lots of OSes only the debugging process needs authorization to access another process's memory. What is authorized becomes very important.
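    As a concrete illustration of that one-sided authorization, here is a rough ptrace sketch for Linux. The kernel (ptrace_may_access, YAMA, and so on) decides whether the attach is allowed; the target process itself never consents. The PID and address are placeholders, not anything real.

    #include <stdio.h>
    #include <sys/ptrace.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    int main(void)
    {
        pid_t target = 1234;            /* hypothetical target process */
        unsigned long addr = 0x400000;  /* hypothetical address in the target */

        if (ptrace(PTRACE_ATTACH, target, NULL, NULL) == -1) {
            perror("ptrace attach");    /* kernel said no: not authorized */
            return 1;
        }
        waitpid(target, NULL, 0);       /* wait for the target to stop */

        long word = ptrace(PTRACE_PEEKDATA, target, (void *)addr, NULL);
        printf("word at %#lx: %#lx\n", addr, (unsigned long)word);

        ptrace(PTRACE_DETACH, target, NULL, NULL);
        return 0;
    }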

    This is why it is absolutely critical to have a verified microkernel design, not just a random microkernel design. Verification checks that the authorization design is sane and functional. The X11 server UMS drivers are an example of authorization that is neither sane nor functional.

    Originally posted by sinepgib View Post
    On that part we agree. The only viable way forward to a mass-use microkernel is to gradually extract portions of Linux (at first optionally) to userspace until it can be shrunk. You won't get a replacement written from scratch anytime soon, especially due to the lack of commercial incentives, which brings a chicken-and-egg problem.
    io_uring and other parts also need to be developed and the security issues worked through, because the end result needs to be a verified design. There are lots of highly insecure microkernel designs out there, some of which were ultra popular. The end result of a verified design might not even be a microkernel; a managed OS with a verification process, and so on, may be what succeeds.



  • sinepgib
    replied
    Originally posted by xfcemint View Post
    The key questions are (regarding a stable userspace ABI):
    1. What is a "useful ABI"? Which applications do we want to work as binaries out-of-the-box (games/OpenGL? terminal/shell? browser/network? desktop windows?)
    2. What is the current situation? Also, will the situation improve in the future, or will it get worse?
    3. Do microkernels offer any expected advantages here, or not?
    1. We expect as much as possible, I would say. Games/OpenGL and proprietary drivers are probably the most important, in the sense that those are the ones with a very slim (or no) chance of finding an equivalent replacement. Games because what matters is the content: you can't replace Fallout with Xonotic regardless of whether the engine is great or not, because the content has IP you can't simply take and copy; and nouveau isn't really an appropriate replacement for nvidia drivers at a functional level, due to signed firmware and whatnot. For much of the other software you may get a free implementation that could be as good as the proprietary one (even if only theoretically).
    2. The current situation is a mess. Whether they will improve or get worse depends on the decisions taken later on, which leads to:
    3. Microkernels are likely to improve things on the driver compatibility side, especially since you won't be able to use them unless you comply with a protocol for that kind of driver, but not really in any other case: it's still a lack of stability that brings the problems for current userspace programs, and that's something that can and probably will still happen whether the userspace libraries target a monolithic kernel or a microkernel. The net positive is that drivers are likely to improve while userspace libraries are likely to remain as problematic as today, but not worse.

    The bottom line is that for userspace applications our current problem is not in kernel design, but in the management of userspace libraries. GTK is free to break every release whether you have a microkernel, a monolithic kernel or even a unikernel below it, and the same is true for the libc implementation.

    Originally posted by xfcemint View Post
    I think that, even without debating those fine points, there are sufficient other advantages of microkernels to make them very desirable.
    I certainly agree.



  • sinepgib
    replied
    Originally posted by oiaohm View Post
    To be correct, you missed something very important and have generalized something very incorrectly. You need to look closer at those sound analyses: all of them say a "verified microkernel" has some advantages over monolithic, not that microkernels in general have an advantage over monolithic. Unverified microkernel designs have turned out to be as bad as monolithic ones. The issue we have is that something the size of a microkernel is what we can verify with current technology.
    I don't know, this sounds just like academics defined the usefulness in theoretical grounds, just like they tend to consider big-O complexity as the only predictor of performance, which is often wrong in the real world. Isolation between processes has been proved empirically to be a major improvement to security and stability, and all* microkernels provide that, formally verified or not.

    Originally posted by oiaohm View Post
    xfcemint, it is an incorrect statement that microkernels are instantly better than monolithic. Monolithic has an issue with verification due to its size. But microkernels used securely also run into the verification nightmare: a microkernel with non-verified userspace drivers using direct memory solutions for speed very quickly turns out to be just as insecure as using the monolithic Linux kernel.
    You're missing that this shared memory is between user processes with prior explicit authorization from both processes. It's certainly not the same as a monolithic kernel and carries in fact a reduced chance of kernel corruption, as now there are no reads or writes to memory from userspace in kernel context. Now overflows can't corrupt the kernel, and they can't corrupt the other process either. An overflow simply crashes the writing or reading process, depending who accesses out of bounds. The splash damage is reduced by an arguably very big margin.

    Originally posted by oiaohm View Post
    The problem space the Linux kernel exists in has no existing solution.
    On that part we agree. The only viable way forward to a mass-use microkernel is to gradually extract portions of Linux (at first optionally) to userspace until it can be shrunk. You won't get a replacement written from scratch anytime soon, especially due to the lack of commercial incentives, which brings a chicken-and-egg problem.

    Originally posted by xfcemint View Post
    Technically yes, but in reality, no. Besides, if there were a USEFUL stable Linux ABI, then desktop applications would be shipping in binary form, not as source code. If you are correct, then why are there almost no binaries for Linux on the desktop?
    AFAICT that's more a matter of there being no stable userspace ABI (nor API, really) than the kernel being at issue. The syscalls (which are the main edge with userspace programs) are actually rather stable. The thing is, that's worthless if your libc and especially your GUI toolkits break compatibility at every release.
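    A tiny illustration of that stable edge: going through the raw syscall interface rather than the libc write() wrapper. Nothing exotic here, just SYS_write, but it is exactly the layer whose numbers and argument layout the kernel promises not to break.

    #include <sys/syscall.h>
    #include <unistd.h>

    int main(void)
    {
        const char msg[] = "hello from the raw kernel ABI\n";
        /* SYS_write has kept the same number and semantics for decades. */
        syscall(SYS_write, 1, msg, sizeof(msg) - 1);
        return 0;
    }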



  • oiaohm
    replied
    Originally posted by xfcemint View Post
    Technically yes, but in reality, no. Besides, if there were a USEFUL stable Linux ABI, then desktop applications would be shipping in binary form, not as source code. If you are correct, then why are there almost no binaries for Linux on the desktop?
    This is the nature of the beast. Flatpak, Steam and so on have shown it is possible to have binary applications on the desktop. The Linux kernel strictly defining a stable interface is how Flatpak and Steam are able to work in the first place.

    Again, you are not doing your homework. Define USEFUL: the reality is the stable Linux kernel userspace ABI is useful to Flatpak, Steam....


    Originally posted by xfcemint View Post
    You can also have a standardization committee defining the ABIs when using a microkernel. You are missing the point: when there is a microkernel, quasi-stable ABIs appear on their own, without any need for external coercion. That is the advantage. Therefore, what you are saying here is completely irrelevant.
    Even with a microkernel, the providers of userspace libraries and so on can have distributions do exactly the same things as have happened with Linux, resulting in no binary compatibility. If you follow the Linux distributions coming into existence, you will find this aligns with Linux growing in market share and into more architectures.

    The reality here is that quasi-stable ABIs don't appear on their own; even with microkernels, effort is required.

    Originally posted by xfcemint View Post
    So you say. I see no compelling reason that would confirm this statement of yours. Quite the opposite: compared to a microkernel, a monolithic kernel is a breeding ground for security bugs. Or, at least, this is a widely accepted opinion based on sound analyses of the problem.
    To be correct, you missed something very important and have generalized something very incorrectly. You need to look closer at those sound analyses: all of them say a "verified microkernel" has some advantages over monolithic, not that microkernels in general have an advantage over monolithic. Unverified microkernel designs have turned out to be as bad as monolithic ones. The issue we have is that something the size of a microkernel is what we can verify with current technology.

    Even with something like seL4 (https://sel4.systems/Info/FAQ/proof.pml), the formal proof that verifies the kernel is right does not apply to a massive number of architectures.

    xfcemint, it is an incorrect statement that microkernels are instantly better than monolithic. Monolithic has an issue with verification due to its size. But microkernels used securely also run into the verification nightmare: a microkernel with non-verified userspace drivers using direct memory solutions for speed very quickly turns out to be just as insecure as using the monolithic Linux kernel.

    A verified microkernel can simply move the security faults into the non-verified userspace drivers, which happen to have way too much memory access, and they have that excessive memory access out of performance needs.

    The platform support Linux has is a true double-edged sword here as well.

    "Widely accepted opinion": when a person makes these claims, in a lot of cases they have only roughly read the material and missed something important. Namely, that verified microkernels are a very restricted subset of microkernels, and that our current-day verification systems only support a handful of architectures. The problem space the Linux kernel exists in has no existing solution.



  • oiaohm
    replied
    Originally posted by xfcemint View Post
    First, I have never said that ABIs of a microkernel are stable, or that they should be stable. I have said that the expected or likely outcome is that ABIs/protocols in a microkernel gravitate in time, they just fall into place. Those protocols start as unstable, but end up being mostly stable.

    The reality here, in the history of microkernels, is that the more performance you get, the fuzzier the boundary becomes, and the problems start appearing.

    Originally posted by xfcemint View Post
    Second, you have to consider the very definition of an OS, which is: no one knows what an OS is. The issue is in boundaries. Where does an OS end, and where do applications begin? Where do you draw the line? In a monolithic kernel, you are forced to draw the line somewhere, and then the issue of stable/unstable ABIs suddenly appears as a consequence (surprise surprise).
    This is correct: you are forced to draw the line somewhere.

    Originally posted by xfcemint View Post
    The way Linux tries to solve the problem of unstable ABIs is very elegant: shift the blame around, so that no one knows who is responsible. Each developer can imagine the boundary of an OS as being somewhere else: privileged/unprivileged code, syscalls, userspace libraries, virtual machines.


    This is a bold-faced lie, xfcemint. The Linux kernel has strictly documented over time where the stable ABI is. Yes, that includes documentation that a new kernel must be able to use old firmware files, because those firmware files are classed as userspace; this rule caused Intel to have to recode one of their drivers.

    Originally posted by xfcemint View Post
    Microkernel solves this problem also very elegantly: the boundary between the OS and applications is fuzzy. No one has to worry where the boundary is, since there is no boundary. It can be at any self-sufficient set of microkernel services. But how do you achieve that? The answer is simple: you extend the package management (like APT/dpkg) so that it covers both the applications and the OS services. You get total flexibility, but what about compatibility? Well, the "gravitation" should mostly keep the compatibility issue in check, or certainly do much better than the current situation.
    No boundary is a path to security hell.

    Originally posted by xfcemint View Post
    In a microkernel, a hardware vendor doesn't have to release the source code for a driver. That has some benefits and some drawbacks. But your conclusion, that this is what makes Linux successful, is far-fetched. I think it is OK to give hardware manufacturers a choice of open-source vs. closed source. I certainly wouldn't want to force them to open-source the schematics of their hardware, so I don't see why the drivers can't also be closed source.
    What you just described is the logic of FreeBSD, MINIX and other OSes that have not grown. The Linux kernel has not got to where it is by being nice. The GPL license has been used to legally crush smaller companies into releasing drivers as open source, and when I say crush I seriously mean crush. What are you going to do if your market is the USA and all your product gets blocked at customs because you have used Linux without releasing the source code? This is one way the Linux kernel has grown.

    Another is the unstable driver ABI causing companies massive workloads if they don't upstream their drivers. The Linux environment's nasty nature is key to its growth.

    My idea of what makes Linux successful is not far-fetched. This is more a case of you not seeing how the Linux kernel developed into what it is. Remember, you said you think it is OK to give hardware manufacturers a choice between open-source and closed-source drivers; all the OSes that have given that choice have ended up with limited architecture support. Even Microsoft runs into the problem that their ARM-powered devices don't support lots of the add-on devices their x86/x86_64 platforms do, because the vendor has not released a new binary driver.

    The key to why Linux is so popular is the amount of hardware that, by using the Linux kernel, you get drivers for on every platform the Linux kernel supports, including a new platform you have just created. This has allowed Linux to be very anti-closed-source-driver and anti-non-mainline-driver. Being anti-closed-source and anti-non-mainline increases hardware support, making the Linux kernel a more tempting item to use. That temptation is what gets companies over a barrel into releasing their drivers as open source.

    Just take a close look at the companies providing developers to Linux these days: you will find lots who said they would never release any source code under GPLv2, yet here they do for the Linux kernel, because the benefits of the mainline mono-culture and the mass volume of drivers that Linux's nasty actions have created outweigh their company's personal hatred of GPLv2 and of open sourcing things.



  • oiaohm
    replied
    Originally posted by sinepgib View Post
    Sure, because sharing selected pages between userspace processes has equal chances of corrupting everybody's memory as a salad of drivers running on ring 0.
    QNX, which has all drivers running as userspace processes, does see driver failures that, due to shared memory between drivers, create all the same failures as if the driver were in ring 0 with full memory access. Yes, it is like having a Linux kernel with totally unrestricted /dev/mem exposed to userspace. So there are real-world examples showing that once you start using shared memory to get performance with a microkernel, it is common to completely undermine all the theoretical advantages of a microkernel. Yes, QNX puts this stuff in its driver documentation.

    Something to remember here: there was a time with microkernels when you would have the kernel at ring 0, userspace at ring 3, and drivers at rings 1 and 2. Of course this kind of design was not great on performance.



  • sinepgib
    replied
    Originally posted by ryao View Post
    This sounds like doing away with virtual memory protection. Now things sound like a monolithic kernel.
    Sure, because sharing selected pages between userspace processes has equal chances of corrupting everybody's memory as a salad of drivers running on ring 0.

