The Linux Kernel Has Been Forcing Different Behavior For Processes Starting With "X"


  • ryao
    replied
    Originally posted by xfcemint View Post
    memory copies -> shared memory
    This sounds like doing away with virtual memory protection. Now things sound like a monolithic kernel.

    Originally posted by xfcemint View Post
    context switches -> clustered OS calls
    Like preadv? Or perhaps the more complex syscalls that you have with a monolithic kernel design. Just implementing them would be a departure from a microkernel design, since you often need to have additional information to be able to keep executing and that information is only known to the “server”.
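    For reference, a minimal sketch of the kind of clustering preadv already gives you on Linux: one syscall fills several buffers, so the transition cost is paid once. The file path and buffer sizes below are purely illustrative.

    /* Minimal sketch: preadv() batches several reads into one syscall,
     * filling multiple buffers in a single kernel entry.
     * The file path and buffer sizes are only illustrative. */
    #define _DEFAULT_SOURCE
    #include <stdio.h>
    #include <stdlib.h>
    #include <fcntl.h>
    #include <sys/uio.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/etc/hostname", O_RDONLY);
        if (fd < 0) {
            perror("open");
            return EXIT_FAILURE;
        }

        char header[16], body[64];
        struct iovec iov[2] = {
            { .iov_base = header, .iov_len = sizeof(header) },
            { .iov_base = body,   .iov_len = sizeof(body)   },
        };

        /* One kernel entry services both buffers. */
        ssize_t n = preadv(fd, iov, 2, 0);
        if (n < 0)
            perror("preadv");
        else
            printf("read %zd bytes across 2 buffers in one syscall\n", n);

        close(fd);
        return 0;
    }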

    Originally posted by xfcemint View Post
    context switches -> automatic or semi-automatic option to integrate services into kernel address space (that's not the same as a hybrid kernel, not even close to it. Why? The difference is that the user has a CHOICE.)
    As far as I know, microkernel designs do not typically support having services be optionally inside the kernel. The entire point is to get code outside of the kernel so that the critical code needed for the system to operate is isolated and can be proven to be bug-free via formal methods, for security and reliability purposes. If you start moving services into a microkernel, you get a monolithic kernel with microkernel heritage, like XNU.

    If you want to do this dynamically at runtime, you need to stop giving processes overlapping virtual address spaces. You would need to keep track of all of the allocations so that virtual memory allocations would not overlap. This would be a potential serialization point. There are tricks you can do, but they are likely to worsen memory fragmentation inside your unified address space, on top of the implications that this would have for memory fragmentation issues.

    Imagine having a hostile process that remains benign while it tries to get promoted to the kernel address space. Then upon being promoted, it will then have complete control over your system and can do whatever it wants.

    This feels very much like MSDOS, which is an example of how not to design an OS. MSDOS would certainly allow software to run faster on it than on any modern OS, at the price of system stability and security. Also, the amount of work needed to realize the higher performance is huge.

    Microkernels are meant to be reliable and secure, not fast.

    Originally posted by xfcemint View Post
    microkernel support = tagged/colored caches, only a small part of the cache is flushed on a context switch (or nothing is flushed when repeatedly switching between a small number of contexts).
    This sounds like a speculative execution side channel vulnerability.
    Last edited by ryao; 14 November 2022, 08:51 PM.



  • ryao
    replied
    Originally posted by sinepgib View Post
    Possibly a stupid question, but would mitigations for side-channel attacks change the performance tradeoff between monolithic and microkernels?
    I think either it makes it much worse (if still necessary, more syscalls will increase the performance hit), which in turn would obsolete many of the claims that "it's not that bad anymore" compared to monolithic, or maybe makes them unnecessary due to reduced sharing and might actually end up performing better than the monolithic kernel with mitigations enabled?
    I would guess that hasn't been measured but that it would have been analyzed theoretically in some paper, but I have no idea how to look for it.
    If anything, the gap would widen due to all of the process boundary transitions done by microkernels.
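    To put a rough number on that transition cost, a small sketch like the one below times a trivial syscall in a loop. getpid is just an arbitrary cheap syscall here, and the absolute figures depend heavily on the CPU and on which mitigations are enabled.

    /* Rough sketch: time a cheap syscall in a loop to estimate per-transition
     * cost. getpid() is only an arbitrary example; results vary with CPU and
     * with which side-channel mitigations are enabled. */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    int main(void)
    {
        const long iterations = 1000000;
        struct timespec start, end;

        clock_gettime(CLOCK_MONOTONIC, &start);
        for (long i = 0; i < iterations; i++)
            syscall(SYS_getpid);   /* bypass glibc's cached getpid() */
        clock_gettime(CLOCK_MONOTONIC, &end);

        double ns = (end.tv_sec - start.tv_sec) * 1e9 +
                    (end.tv_nsec - start.tv_nsec);
        printf("~%.0f ns per syscall round trip\n", ns / iterations);
        return 0;
    }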



  • sinepgib
    replied
    Originally posted by xfcemint View Post
    Second, you have to consider the very definition of an OS, which is: no one knows what an OS is. The issue is in boundaries. Where does an OS end, and where do applications begin? Where do you draw the line? In a monolithic kernel, you are forced to draw the line somewhere, and then the issue of stable/unstable ABIs suddenly appears as a consequence (surprise surprise).
    To add to this, it's pretty much the (somewhat accepted) systemd argument: it's not an application you're expected to mix and match, but a userspace portion of the OS. Of course, that's a forever-open debate, but it means this isn't just something you came up with.

    Originally posted by xfcemint View Post
    ​In a microkernel, a hardware vendor doesn't have to release the source code for a driver. That has some benefits and some drawbacks. But your conclusion, that this is what makes Linux successful, is far-fetched. I think it is OK to give hardware manufacturers a choice of open-source vs. closed source. I certainly wouldn't want to force them to open-source the schematics of their hardware, so I don't see why the drivers can't also be closed source.
    Considering that the main argument for having in-tree drivers tends to be that boundary of debugging responsibility, the fact that drivers become just another userspace application makes the boundary much clearer: if IPC fails, maybe the kernel is faulty, but for everything else you complain to the driver maker, so the issue kernel maintainers seem to have pretty much vanishes. They don't need to inspect any driver source code to find the bug, because the bug is no longer (suspected to be) their responsibility.

    Originally posted by xfcemint View Post
    ​As to the "fate" of other microkernels and OS-es, I argue that the current "sample of microkernels" is not representative of what we are discussing, so no conclusions can be made.
    Yeah. While MINIX3 intends to be a general purpose OS, in practice it's still just a research ground for academic theses, and that makes it a bit incompatible with being general purpose, so nobody other than hobbyists and academics will really bother implementing hardware support. Its failure is not because it is a microkernel but because it tries to be two incompatible things.



  • oiaohm
    replied
    Originally posted by sinepgib View Post
    Note this is a bit of a false dichotomy. You can very well have your big monorepo with as many drivers as you want bundled within the tree, all the while keeping them running as separate processes talking via some efficient IPC.
    Besides, porting efforts, once the driver is done for a platform, should be the same or less for a microkernel, as it allows for fewer ad-hoc solutions. If your IPC is platform neutral then your driver will be as well (except for the hardware and bus it uses of course).
    No, it's not a false dichotomy. You have to remember that MINIX and many other open source microkernels are older than Linux. In theory you could have a big monolithic repo with drivers and a microkernel, but this has never happened.

    Microkernels have all suffered the same fate, and it restricts their uptake. What reason does a hardware vendor have to release driver source code if their binary drivers keep working, compared to when their binary drivers break? That's a serious question.

    The monolithic nature of Linux, where drivers have to be mainlined, is what has driven Linux's massive platform support. Yes, it would be nice at times if the Linux kernel had a stable ABI for drivers. But we have to remember Linux is what it is today because it does not.

    Yes, not having a stable ABI for drivers has downsides, but it has an upside: vendors have ended up upstreaming to reduce the cost of dealing with driver breakage caused by upstream changes.

    So for a microkernel to grow the way the Linux kernel has, it would have to have an unstable driver ABI and gain market share. The intentionally unstable ABI is what forces vendors to upstream drivers, or else stay off the current version and suffer the security nightmares and legal risks of using an out-of-date kernel.

    Note that this slightly insecure design also happens to put more pressure on vendors to upstream drivers.

    Remember, sinepgib, there is a long history of OS kernels that have been great in theory but have never grown a repo of drivers covering a large number of platforms. The Linux kernel is an outlier, very strange in the amount of platform support it has. The interesting point is that what is unique about Linux, compared with all the others that never grew, is the lack of a stable kernel driver ABI.

    sinepgib, this is not a false dichotomy, no matter how many times people try that line. The Linux kernel has developed its huge platform support on a core fact: not having a stable driver ABI, which forces code to be mainlined.

    This is why this is harder than it first seems. There is an upside to the Linux kernel's monolithic nature, and you don't want to throw the baby out with the bathwater. The baby here is the massive platform support Linux has built, which I am sure we don't want to lose.



  • sinepgib
    replied
    Originally posted by oiaohm View Post
    There are reasons why mainlining everything with Linux has been a very good thing for Linux. A big one is large platform support: when spinning up a new platform you don't need to remake all the drivers from scratch, because they were mainlined. This is why reducing the scope of Linux does not work: if you reduce the scope of Linux, you have removed what makes Linux popular to use, so it's not Linux any more.
    Note this is a bit of a false dichotomy. You can very well have your big monorepo with as many drivers as you want bundled within the tree, all the while keeping them running as separate processes talking via some efficient IPC.
    Besides, porting efforts, once the driver is done for a platform, should be the same or less for a microkernel, as it allows for fewer ad-hoc solutions. If your IPC is platform neutral then your driver will be as well (except for the hardware and bus it uses of course).



  • oiaohm
    replied
    Originally posted by xfcemint View Post
    But, if Linux goes microkernel route, then it doesn't really need corporate funding, since the scope of the Linux project gets reduced. Linus can then apply for EU funds or something similar. Wouldn't it be better to remove corporate influence from an important piece of software such as an OS kernel?

    EDIT: Also, what I have been saying so far is actually a route through a hybrid kernel (for compatibility reasons), and the kernel would remain a hybrid for at least a decade. Therefore, in the near future it would be business as usual.
    https://wiki.minix3.org/doku.php?id=...rerequirements vs https://en.wikipedia.org/wiki/List_o..._architectures

    Reducing the scope would result in reduced platform support. Look at the hell Microsoft has had getting Windows onto ARM, or the fact that MINIX and others with reduced scope end up with reduced platform support. Mainlining drivers the way Linux does means those drivers are available on other platforms as a starting point.

    Corporate influence is required to have drivers for hardware. EU funds are normally not going to cover writing drivers to support hardware. Hardware vendors are corporate, like it or not.

    Originally posted by xfcemint View Post
    The biggest problem with arguments for microkernels is that, if those are right, then it automatically implies that Linus & company are wrong. The venerated heroes-of-yesterday are now suddenly as ignorant and as stubborn as ordinary people, and that is the hardest thing to chew through.
    Yes, that is correct. The problem here is that there are known issues with microkernels. Research into managed OSes happened because there were valid reasons for the managed OS design as well.

    Monolithic, Microkernel and Managed OS all have their strengths and weaknesses.

    There are reasons why mainlining everything with Linux has been a very good thing for Linux. A big one is large platform support: when spinning up a new platform you don't need to remake all the drivers from scratch, because they were mainlined. This is why reducing the scope of Linux does not work: if you reduce the scope of Linux, you have removed what makes Linux popular to use, so it's not Linux any more.

    This is what makes Linux a hard problem: the huge scope of Linux is what any solution has to work with, not undermine.



  • cj.wijtmans
    replied
    Originally posted by xfcemint View Post

    When I think of it, the most obvious targets for a microkernel are hypervisors and "consumer desktop" = home computers/laptops. The home computer market is a target because security, stability and reliability are important there, and also because some (possible/potential) performance penalty there would matter none.

    It is a bit sad to hear that Linus is such a big opponent of microkernels. It shouldn't be surprising, especially not after that famous debate of his. Like other people, after he has chosen a side, it is stubbornness to the end in ever-rising amounts. The sad thing is that by doing this he has actually turned himself into the exact same problem that he once helped to solve/defeat.

    It also makes me wonder what other OSS institutions besides Linus are doing regarding microkernels. I mean, anyone can just fork the Linux kernel and add microkernel facilities, although that requires a huge amount of work. Has Linus managed to convince everyone to his side? Is there a lack of funding? Is there a lack of interest? Is there a lack of supporters in the academic community?

    Also, I was re-reading this fascinating thread a few times. Look at all the ignorance that exists out there. There certainly won't be much support for microkernels from the side of "advanced users": sysadmins, programmers and such. They are all happy to simply recite the common wisdom and glorify their heroes.

    The biggest problem with arguments for microkernels is that, if those are right, then it automatically implies that Linus & company are wrong. The venerated heroes-of-yesterday are now suddenly as ignorant and as stubborn as ordinary people, and that is the hardest thing to chew through.
    I think if Linux were to go the microkernel route, the corporate funding would not make much sense anymore. That would be an issue.



  • oiaohm
    replied
    Originally posted by mdedetrich View Post
    Of course there are changes, the point is that the API is forwards and backwards compatible during that timeframe. Loading newer drivers on older versions of Windows just means that the newer functions don't get used, and loading older drivers on newer Windows versions means the newer Windows version detects that the driver doesn't support certain functions and doesn't enable those features.

    If you have a Windows machine you can try this out yourself, Microsoft is very serious when it comes to backwards and forwards compatibility.
    No, I have tried it, and that's the problem. You need to go and try it out properly this time: build some sample drivers with the WDKs and see how it really behaves. I have Vista-era drivers, for hardware whose maker no longer exists, that don't work on Windows 7. Microsoft is very serious about backwards and forwards compatibility, this is true, but they don't get it to work all the time, and even with a restricted set of compilers they still get done in by it.

    By the way, it's not true that you have both backwards and forwards compatibility. If you build a driver for Windows 11, version 22H2 and you attempt to load it on Windows 11, version 22H1, it doesn't work at all. This is true with all Windows driver development kits. In reality you cannot load drivers built for newer Windows on older Windows at all, because the Windows driver loader forbids it.

    Windows only has backwards compatibility with drivers, as in newer Windows being able to load older drivers, some of the time.

    Those providing drivers ship multiple versions of their drivers, because that is required if they want to use newer Windows features. Basically, you just described something that has not been true since Windows Vista.

    Before Windows Vista, with XP and earlier, you did have that behaviour: you could load a driver and Windows would attempt to no-op out the unsupported functions. The result was unstable drivers. Yes, XP drivers on Windows 2000 used to play up badly at times. Microsoft learnt long ago that what you described was a huge mistake.



  • oiaohm
    replied
    Originally posted by xfcemint View Post
    Why do you think that a managed OS cannot work together with a microkernel? This "managed OS" would also be related to clustered OS calls, as the easiest way to do clustered calls would be via a byte code interpreter. If I were to quickly judge the managed OS idea, I would say that it is too complicated, too risky, for a too small (potential) gain compared to a microkernel. It essentially boils down to integrating an entire Javascript JIT inside an OS.

    A managed OS can have a microkernel, but a managed OS is more like a hybrid OS. You would not use a JavaScript JIT; no managed OS ever has. With Linux, BPF is what would become the language for the managed-OS part. Microsoft used a restricted CLR (as in .NET), and so have others. Other groups have used Java.


    Managed OSes that are microkernels mostly use a language-based system for protection, so they don't rely on your hardware ring levels. All the managed bytecode drivers, once converted to native code, run in ring 0 with the microkernel. Linux BPF does bits of this the way you would in a managed OS, just only for small segments.

    Clustered OS calls would be like io_uring, which is being looked at for doing syscalls with Linux. The managed OS equivalent is a bytecode that can perform logical compare operations and so on.
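    As a rough illustration of what such clustered calls look like with io_uring (this sketch assumes liburing; the file path and buffer sizes are illustrative), several reads are queued and then handed to the kernel in a single submission:

    /* Sketch using liburing: queue several reads and submit them with a single
     * call into the kernel, instead of one syscall per read.
     * File path and buffer size are illustrative only. Build with -luring. */
    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <liburing.h>

    int main(void)
    {
        struct io_uring ring;
        if (io_uring_queue_init(8, &ring, 0) < 0)
            return 1;

        int fd = open("/etc/hostname", O_RDONLY);
        if (fd < 0)
            return 1;

        static char buf[4][64];
        for (int i = 0; i < 4; i++) {
            struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
            io_uring_prep_read(sqe, fd, buf[i], sizeof(buf[i]), i * 64);
        }

        /* All four reads enter the kernel in one submission. */
        io_uring_submit(&ring);

        for (int i = 0; i < 4; i++) {
            struct io_uring_cqe *cqe;
            io_uring_wait_cqe(&ring, &cqe);
            io_uring_cqe_seen(&ring, cqe);
        }

        close(fd);
        io_uring_queue_exit(&ring);
        return 0;
    }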


    There is something interesting about BPF and most of the best managed OSes: the driver bytecode language is intentionally designed not to be Turing-complete. This provides a very interesting security difference. Event-responsive driver designs, with a fixed amount of processing before the code must stop, are a common feature of managed OS driver bytecode designs. A managed OS does not trust driver developers not to attempt infinite loops, buffer overflows or other horrible things; instead it prevents those things through the bytecode design or the bytecode verifier.
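    For a feel of how this looks with Linux BPF, here is a rough sketch of a tracepoint program in restricted C. The attach point and map layout are only illustrative; the point is that the in-kernel verifier rejects any variant with unbounded loops or unchecked pointer use before it is ever run.

    /* Sketch of a BPF program in restricted C: the in-kernel verifier only
     * accepts it because every loop is bounded and every pointer is checked,
     * which is the "not Turing-complete" property described above.
     * Built with clang -target bpf and loaded via libbpf; the attach point
     * and map layout are illustrative. */
    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    struct {
        __uint(type, BPF_MAP_TYPE_ARRAY);
        __uint(max_entries, 1);
        __type(key, __u32);
        __type(value, __u64);
    } counter SEC(".maps");

    SEC("tracepoint/syscalls/sys_enter_openat")
    int count_openat(void *ctx)
    {
        __u32 key = 0;
        __u64 *val = bpf_map_lookup_elem(&counter, &key);

        /* The verifier forces this NULL check before the pointer is used. */
        if (val)
            __sync_fetch_and_add(val, 1);

        return 0;
    }

    char LICENSE[] SEC("license") = "GPL";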

    Originally posted by xfcemint View Post
    Highest performance is overrated, and it always was. ARM failed with highest performance, only to succeed later with low power and simplicity. How do you get highest performance? Who knows, you deal with that issue later. The issue here is that the house is falling appart because the foundations cannot withstand the weight, so we should not be discussing whether the new house can ever be as cosy as the old one.
    The highest performance might be overrated, but too low a performance is also not acceptable. As for the weight of massive hardware support Linux carries, no microkernel in history has ever been able to support it. Massive hardware support is also something that increases the number of compilers the Linux kernel is built with.

    Linux is a very unique problem space. Most likely the solution to the Linux kernel problem space will not be a pure microkernel; the Linux kernel will most likely end up a mixture of concepts because of the problem space it exists in. Yes, the compiler nightmare is part of the Linux problem space, caused by the massive hardware support.

    Originally posted by xfcemint View Post
    About compilers for Linux... there exist align directives. Some additional directives can be added that mean slightly different things on different architectures/platforms, in order to maintain high performance. Historically, the type int did not have a fixed size, but programs can be made compatible even if int is 36 bits on one platform and 16 bits on another.
    There is a hard reality here. Even Microsoft, with the compilers they ship in their driver development kits, has not managed to keep align directives doing the same thing all the time, and they were directly attempting to do this. Developer errors in compilers are a true curse: the more compilers you have to deal with, the more compiler bugs you hit, and the harder it is to have a functional ABI.

    The hard reality is that compilers are a high-precision tool, not a high-accuracy tool, when it comes to alignments. Please note that with GCC and LLVM, if you change the optimization level of the same compiler, your alignments can at times come out different even if you used directives. Of course, every run at the same optimization level generates the same alignments. So high precision on alignments is true, but closer inspection shows very poor accuracy with alignments.
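    One way to at least catch that kind of drift is to state the layout assumptions explicitly and let the build fail when a compiler or option changes them. A minimal sketch; the struct is purely illustrative:

    /* Sketch: make layout assumptions explicit so a compiler or flag change
     * that alters them breaks the build instead of the ABI.
     * The struct is purely illustrative. */
    #include <stdalign.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    struct wire_msg {
        uint32_t type;
        uint32_t len;
        alignas(8) uint64_t payload;   /* explicit alignment request */
    };

    /* Fail the build if any compiler/option combination lays this out differently. */
    _Static_assert(offsetof(struct wire_msg, payload) == 8, "payload offset drifted");
    _Static_assert(sizeof(struct wire_msg) == 16, "struct size drifted");
    _Static_assert(alignof(struct wire_msg) == 8, "struct alignment drifted");

    int main(void)
    {
        printf("wire_msg: size=%zu align=%zu payload@%zu\n",
               sizeof(struct wire_msg), alignof(struct wire_msg),
               offsetof(struct wire_msg, payload));
        return 0;
    }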



  • mdedetrich
    replied
    Originally posted by oiaohm View Post

    No, you are guilty of falsehoods as well. The Vista graphics ABI is not the same as the Windows 11 graphics ABI. There have been quite a few changes in between.

    This feature in Linux matches up with one of the key reasons why a Vista driver on Windows 11 appears to work, but there are three key reasons in total why it works:
    1) Windows kernel modules have version details, like the Linux MODVERSIONS feature.
    2) Instead of failing the way MODVERSIONS does on Linux, Windows can apply an abstraction layer to the driver's calls; this does result in older drivers having lower performance than newer drivers.
    3) Microsoft is able to define which compiler(s) developers are allowed to use to build drivers, and this is a factor.

    Notice these points have nothing to do with being a microkernel.

    You started off by saying it's only extra effort to have a defined interface; you have missed what that in fact requires.
    https://www.kernel.org/doc/Documenta...i-nonsense.rst
    In the section "Binary Kernel Interface"


    This is the first point, and it is no error that it is. The problem of multiple compiler versions building the OS, leading to the crashes and instability documented for Linux, even raises its horrible head with microkernels that use shared memory between driver parts, like QNX. Yes, some of the cases where Microsoft updates leave some users' systems unable to boot have also been traced back to this same issue, even within Microsoft's restricted list of allowed compilers.

    The MODVERSIONS feature Linux already has, so we can class that as even with Windows.
    As for the abstraction-layer solution: if you read on through stable-api-nonsense, notice the bit about new drivers using old USB interfaces that don't work right; this happens under Windows and happens with all classes of drivers. So the abstraction layer would need to be done better. But the compiler bit is an absolute killer: without solving the compiler issue you will have instability.

    The Microsoft Singularity research project, a managed OS, was Microsoft's attempt to fix this problem before they started doing driver certification (where they can reject drivers built with the wrong compiler). Basically, bytecode abstraction. One thing about BPF bytecode and managed-OS bytecode drivers is that this route is a solution to compiler mismatch between the kernel and the drivers. It does come at a price: in current managed OS and BPF designs, the driver pays a compile cost at init time.

    mdedetrich, the next option is to get the distributions building Linux kernels to use a restricted list of compilers, so that the abstraction layer does not need to be ultra complex. This is one of those herding-cats problems. Distributions want more performance than their competitors in benchmarks, so they will want to use non-approved compilers. Being open source, where the distributions build the drivers and kernel themselves, upstream kernel.org developers cannot control these actions in any way. Microsoft has that control, so they can pull this off.

    mdedetrich, you might say stuff it, just have the code in user space, fully abstracted, since the Linux kernel/userspace boundary is meant to be compiler neutral. FUSE and CUSE and BUSE and UIO and others have all been provided over the years. You get constant complaints about the performance overhead vs in-kernel code.

    https://www.graplsecurity.com/post/i...e-linux-kernel
    Yes, when you start solving the performance problems of fuse/cuse/buse..., other problems start turning up. The QNX problem is now on Linux because of io_uring. Very quickly you can end up back at "the kernel used compiler X, the user-space application used compiler Y, and the system dies". Or "we cannot do X because not all compilers support it". For example, Linux kernel syscalls pass no 128-bit values between kernel space and user space, not because the hardware does not support 128-bit values, but because LLVM and GCC implemented them differently. BPF is able to use 128-bit operations because its 128-bit native code matches whatever the compiler that built the kernel did.

    A stable driver ABI has many problems. These problems exist whether your OS is a microkernel or monolithic. When these problems don't appear to exist, you have normally not looked closely enough to see the mitigations; for example, you missed Microsoft restricting the compilers used to make drivers. stable-api-nonsense is not written in a monolithic-only way: the problems it details apply to all OSes, and if there is the appearance of a stable API/ABI for drivers, there have to be mitigations for those problems. The issue with Linux is that many of those mitigation options, like completely restricting compiler versions, are not open to Linux kernel developers. The same applies to many different open source microkernels.
    Of course there are changes, the point is that the API is forwards and backwards compatible during that timeframe. Loading newer drivers on older versions of Windows just means that the newer functions don't get used, and loading older drivers on newer Windows versions means the newer Windows version detects that the driver doesn't support certain functions and doesn't enable those features.

    If you have a Windows machine you can try this out yourself, Microsoft is very serious when it comes to backwards and forwards compatibility.
    Last edited by mdedetrich; 12 November 2022, 06:05 PM.

