The Linux Kernel Has Been Forcing Different Behavior For Processes Starting With "X"


  • Rabiator
    replied
    Originally posted by xfcemint View Post
    I was citing sinepgib, to the point that I was almost repeating his sentences word-for-word. My point is: what would sinepgib's argument have looked like around 1991, had he been arguing that operating systems should have preemptive multitasking and protected memory, compared to what I'm arguing now: that an OS needs to have a microkernel? But I switched our roles: I'm arguing AGAINST preemptive multitasking and protected memory (which is, of course, ridiculous).

    Therefore, a sentence of mine like "There are not many user-level applications. There are just a few important ones: "Word", a spreadsheet processor, and a BASIC programming language" must be understood in the context of the year 1991 and the short-sightedness of the wisdom of that time, seen from today's perspective.
    Then I missed the context of the sarcasm. In my defense, users who only care about Office, a browser and e-mail are not too far off from your sentence, and I believe there are quite a few of those out there.




  • Rabiator
    replied
    Originally posted by xfcemint View Post
    You don't need to enforce permissions on personal consumer-grade computers. By definition, only one user is using them, and he knows not to install suspicious software. A valuable item like a personal computer should be kept in a locked room anyway. Users don't want to use a permission system; it is too complicated for them, they just want the computer to work.

    Eventually, all important bugs in all important applications will be found and corrected. Users won't be able to notice the difference between a system featuring protected memory and preemptive multitasking and one without.

    Users won't appreciate protected memory and preemptive multitasking. Those two features are too complicated for users to appreciate or see the benefits of. Users won't know why their computer is crashing so often; they will blame it all on low-quality applications.
    The sort of user who knows and cares about installing only trustworthy software is also the sort that might appreciate a permission system. We already have the distinction between "ordinary" user and root on Linux, and I think people here know and appreciate it.
    Even a much more fine-grained permission system might make sense. I could imagine a sort of "sandbox light" where you restrict an application to a sub-directory of the home directory, without going all the way to setting up a virtual machine. Perhaps that already exists and I'm just not aware of it.
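
    Something in that direction can already be assembled from kernel namespace primitives. Below is a rough C sketch of the idea (my own illustration, not an existing tool): confine a program to a per-application directory using a private mount namespace plus chroot(). A real "sandbox light" such as Firejail, bubblewrap or Flatpak's sandbox does considerably more (user namespaces, seccomp filters, dropping capabilities), and this toy version needs root to run.

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <sys/mount.h>
    #include <unistd.h>

    /* Toy "sandbox light": give the child its own mount namespace and
     * chroot() it into a per-application directory before exec'ing it.
     * Note: <program> must exist inside <sandbox-dir> after the chroot. */
    int main(int argc, char **argv)
    {
        if (argc < 3) {
            fprintf(stderr, "usage: %s <sandbox-dir> <program> [args...]\n", argv[0]);
            return 1;
        }
        if (unshare(CLONE_NEWNS) != 0) {            /* private mount namespace */
            perror("unshare");
            return 1;
        }
        /* Keep mount changes from propagating back to the rest of the system. */
        if (mount(NULL, "/", NULL, MS_REC | MS_PRIVATE, NULL) != 0) {
            perror("mount");
            return 1;
        }
        if (chroot(argv[1]) != 0 || chdir("/") != 0) {   /* jail into the app dir */
            perror("chroot");
            return 1;
        }
        execvp(argv[2], &argv[2]);
        perror("execvp");
        return 1;
    }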

    Fixing all important bugs in all important applications is very optimistic, because most developers tend to add new features before hunting down the last of the old bugs, and new features will probably bring new bugs of their own. And then there are the business people, whose priorities lean even more towards new features to sell.

    On the OS side, even most non-expert users will eventually notice that the same applications crash more often on some OSes than on others. Take Windows 3.1 vs. Windows 9x vs. the NT series for example.

    Originally posted by xfcemint View Post
    Implementing protected memory and preemptive multitasking is costly, so you'll need to convince the business people, too. No other successful home computer has protected memory and preemptive multitasking; just look at the IBM PC, Macintosh, Atari ST and Amiga 500 (note: I think the A500 is preemptive but not protected). So the business people won't believe you, because it has been like this for too long.

    The most likely path to protected memory and preemptive multitasking succeeding, IMO, would be trickle-down from servers and workstations. That's how the Linux kernel ended up on consumer-grade hardware, after all. It had been a workstation and server OS for several years before consumer-grade home computers got serious attention from the Linux community.
    I think you are right about the trickle-down. In the UNIX world, multi-user systems were a big thing, and you certainly did not want one clumsy or malicious user to bring down the system for all users. Hence limited permissions and preemptive multitasking. I guess the business people were eventually convinced by too many cases of system unavailability.

    In the PC world, a desire for these things might come from wanting to run several applications in parallel, without one bad app pulling down the whole system. At least I remember the huge difference between Windows 9x and Windows 2000 in this respect. Some trickle-down from the server world would be an obvious approach on the technology side.

    Originally posted by xfcemint View Post
    My guess is that their reason for not going after it at first was a combination of consumer hardware being too slow back then (it did come with more context-switching overhead compared to building upon a monolithic kernel) and probably some missing software compatibility. Because consumer hardware is nowadays just a weaker version of what runs on servers, protected memory and preemptive multitasking developed with multi-user time-sharing machines in mind may end up running on a consumer machine.

    But even then it's something that will probably take no less than a decade due to migration costs. Only then, and once compatibility with the current userspace is good enough, are computers likely to start shipping proper protected memory and preemptive multitasking.
    Right on most counts, although I think small memory sizes were more relevant than processor speed. My first IBM-compatible machine was an 80386X with 4 MB RAM, which was a generous amount at the time; a lot of PCs were still sold with one MB. But when I got my hands on a copy of OS/2, it needed most of that memory for itself. Unix was also said to need at least 4 MB.

    The migration in the Windows world was indeed a slow and expensive affair. Mostly slow because software was not always upgraded by its makers, but lingered until it died from lack of user interest. Also, Microsoft bent over backwards to accommodate programming habits from the Windows 3.x era, such as dumping configuration data into the installation directory. Of course, that reduced the pressure on developers to fix their shit.
    Reportedly, MS even had dedicated code in Windows 95 to avoid breaking SimCity. It was SimCity that had a bug in memory management, but MS changed the memory manager in Win95 to accommodate that.
    "Not breaking Userland" was almost as important to Microsoft as to Linus Torvalds. Linux just had it easier because it came from the UNIX world where having to stick to one's home directory was already well established.
    Last edited by Rabiator; 12 November 2022, 10:19 AM.



  • oiaohm
    replied
    Originally posted by cj.wijtmans View Post
    Ah yes, we are in a world where coordinating with hundreds if not thousands of developers is easier than some ABI drama, which exists for Windows likewise (XP -> Vista -> 8 -> 10). The whole mess with the NVIDIA and AMD graphics drivers was evidence enough for me that Linux needs to get closer to a microkernel. Maybe not completely, but set some rules for what goes into the kernel: for example the VESA framebuffer, USB keyboard/mouse support, the very basic essentials to get a basic working terminal. Anything too complex and not based on standards should be yeeted out. Also, these drivers need their own ring.
    https://kernel-recipes.org/en/2022/talks/hid-bpf/ For the USB keyboard/mouse case, what is being looked at there is a managed solution: using eBPF to do the quirk-driver work.
    Most USB keyboards and mice are very standard; there are just a lot of devices that are slightly off specification. A lot of USB HID hardware is this way. On Windows you end up with thousands of USB HID drivers that are 99% the same, with just some minor alteration to deal with a particular vendor quirk.
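
    To picture why, here is a hypothetical C sketch (vendor IDs, names and the report layout are invented for illustration; this is not real kernel code) of how thousands of near-identical HID drivers collapse into one generic driver plus a quirk table: the shared report handling stays the same and only a tiny per-vendor fixup differs, which is also roughly the kind of work HID-BPF lets you ship as a small program instead of a whole driver.

    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>

    /* One generic HID driver plus a table of per-vendor fixups,
     * instead of thousands of copies of nearly the same driver. */
    struct hid_quirk {
        uint16_t vendor_id;
        uint16_t product_id;
        void (*fixup)(uint8_t *report, size_t len);   /* patch the raw report */
    };

    /* Example quirk: a device that reports its X and Y bytes swapped. */
    static void swap_xy(uint8_t *report, size_t len)
    {
        if (len >= 3) {
            uint8_t t = report[1];
            report[1] = report[2];
            report[2] = t;
        }
    }

    static const struct hid_quirk quirks[] = {
        { 0x1234, 0xabcd, swap_xy },                  /* made-up vendor/product */
    };

    static void handle_report(uint16_t vid, uint16_t pid, uint8_t *report, size_t len)
    {
        /* Apply any registered quirk, then run the one shared, spec-compliant parser. */
        for (size_t i = 0; i < sizeof(quirks) / sizeof(quirks[0]); i++)
            if (quirks[i].vendor_id == vid && quirks[i].product_id == pid)
                quirks[i].fixup(report, len);
        printf("buttons=%d x=%d y=%d\n", report[0], report[1], report[2]);
    }

    int main(void)
    {
        uint8_t report[3] = { 1, 10, 20 };            /* pretend raw mouse report */
        handle_report(0x1234, 0xabcd, report, sizeof(report));
        return 0;
    }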

    The microkernel idea might make sense in some areas. Managed-OS methods make more sense in others. A managed OS is one where drivers are bytecode that the kernel's built-in JIT/AOT compiler turns into native code. With a managed OS you pay the start-up overhead of the JIT/AOT, but then you have no context switch and in many cases can avoid IPC overhead entirely; for items like keyboards and mice, where latency can be an issue, a managed-OS solution may be better. Running drivers in their own ring has its own set of problems.

    The Linux kernel is developing some managed-OS features. The first appearance of a managed-OS solution replacing individual kernel drivers in Linux was eBPF for IR remotes, and IR remotes can absolutely be quirky.

    The reality, cj.wijtmans, is that there is more than one way to solve this problem. You suggested solving it with the microkernel ideal, but you did not cover how to do that at high performance, or how to be sure it copes with the thousands of different compilers Linux distributions are going to use. The number of compilers used by Linux distributions to build core parts, and the interaction issues this causes, is a very large problem. The "you will mainline everything" stance that the upstream Linux kernel developers want to mandate exists to reduce the number of different compilers interacting with each other.



  • oiaohm
    replied
    Originally posted by xfcemint View Post
    Amazing, I am simply speechless. That changes everything.
    I guess that was meant as a smart response. The Vista graphics ABI is not the same as the Windows 11 graphics ABI; there have been quite a few changes in between.


    The big thing here is that Microsoft locks down which compiler you use to make kernel drivers.
    https://learn.microsoft.com/en-us/wi...-wdk-downloads
    Windows 11, version 22H2: Windows Driver Kit (WDK) download
    Windows 11, version 21H2: Windows 11, version 21H2 WDK
    Windows Server 2022: WDK for Windows Server 2022
    Windows 10, versions 22H2 / 21H2 / 21H1 / 20H2 / 2004: WDK for Windows 10, version 2004
    Windows 10, versions 1909 / 1903: WDK for Windows 10, version 1903
    Windows 10, version 1809 and Windows Server 2019: WDK for Windows 10, version 1809
    Windows 10, version 1803: WDK for Windows 10, version 1803
    Windows 10, version 1709: WDK for Windows 10, version 1709
    Windows 10, version 1703: WDK for Windows 10, version 1703
    Windows 10, versions 1607 / 1511 / 1507 and Windows Server 2016: WDK for Windows 10, version 1607
    Windows 8.1 Update: WDK 8.1 Update (English only), WDK 8.1 Update Test Pack, WDK 8.1 Samples
    Windows 8: WDK 8 (English only), WDK 8 redistributable components, WDK 8 Samples
    Windows 7: WDK 7.1.0

    Yes, from Windows 7 to Windows 11 there are 13 different driver development kits. A driver made with the Windows 11 22H2 driver development kit will not work on a Windows 11 21H2 system. And not all Windows 7 drivers, as in ones made with WDK 7.1.0, will work with Windows 8, let alone going forward to Windows 11.

    Yes, a Windows Vista driver might load on Windows 11, but only if you are lucky. It has a better chance than with the Linux kernel's CONFIG_MODVERSIONS, but it is the same basic thing: check the version information in the driver and attempt to link it up correctly. That is what goes on inside Windows, with a little extra abstraction for known cases where it is not going to work.
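
    For anyone who has not looked at what that versioning amounts to, here is a small self-contained C sketch of the idea (everything below is invented for illustration; the real MODVERSIONS machinery hashes exported type signatures at build time): the driver carries a checksum per imported interface, and the loader either refuses on a mismatch, as Linux does, or can route the call through a compatibility layer, as described above for Windows.

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Toy checksum of an interface's type signature (FNV-1a), standing in
     * for the CRCs a MODVERSIONS-style build stamps into each module. */
    static uint32_t sig_crc(const char *signature)
    {
        uint32_t h = 2166136261u;
        while (*signature) {
            h ^= (uint8_t)*signature++;
            h *= 16777619u;
        }
        return h;
    }

    struct module_import {
        const char *symbol;
        uint32_t    expected_crc;     /* baked into the driver when it was built */
    };

    /* What the running kernel was actually built with (made-up interface). */
    static uint32_t kernel_crc_for(const char *symbol)
    {
        if (strcmp(symbol, "submit_io") == 0)
            return sig_crc("int submit_io(struct io_req *, unsigned int flags)");
        return 0;
    }

    static int try_load(const struct module_import *imports, size_t n)
    {
        for (size_t i = 0; i < n; i++) {
            if (imports[i].expected_crc != kernel_crc_for(imports[i].symbol)) {
                /* Linux: refuse to load. Windows, as described above, may instead
                 * route the call through a slower compatibility layer. */
                fprintf(stderr, "version mismatch on %s, refusing to load\n",
                        imports[i].symbol);
                return -1;
            }
        }
        return 0;
    }

    int main(void)
    {
        /* A driver built against an older signature of submit_io(). */
        struct module_import old_driver[] = {
            { "submit_io", sig_crc("int submit_io(struct io_req *)") },
        };
        return try_load(old_driver, 1) == 0 ? 0 : 1;
    }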

    So from Windows 7 to 11 you have 13 different compilers. That is 13 different sets of compiler quirks to deal with, and Microsoft still has failures, because this is a big enough problem space to trip you up. Now look at Linux: there are over 600 Linux distributions in active development, a large number of them building their own kernels. Worse, each kernel version they release could have been built with a different compiler version and carry the distribution's own unique patches, altering the offsets of things. So you have a many-thousands-wide problem of compiler quirks, with upstream API/ABI changes on top.

    xfcemint, I guess I was not detailed enough. The reality is that if a driver from a prior version of Windows works on the current version of Windows, there is a lot of luck involved. Anyone claiming it works is cherry-picking the times it is successful and ignoring all the times it is not. The big reason it does not work under Windows all the time is the differences between the driver development kit compilers. Now look at Linux distributions and start counting the compilers used; it's like, darn, I am stuffed. Take Debian: it changes the compiler version used almost as often as it makes a new kernel. Of course Ubuntu does not use the same compilers as Debian in its builds, and then Red Hat does not use the same compilers as everyone else... Starting to see the level of doomed yet?

    Now, the fact that everyone using the Linux kernel builds it with different compilers with different quirks means that even when attempting to make a high-performance microkernel you are screwed, because those compiler quirks are going to get you with alterations to memory alignment and other things as you attempt to use shared memory to improve performance.

    There is a downside to the freedom of open source for an OS kernel, be it microkernel or monolithic: the lack of ability to control the compiler used to build it once enough independent parties get involved.

    The reality is that people claim Windows works for driver compatibility without looking at how Microsoft Windows is doing it and the failures it suffers from. Linux is not the only thing with problems in this department. The Linux problem space is worse, and the ones who expand that problem space to hell are not the upstream kernel developers alone; you need to include the downstream distribution makers, who do not cooperate with each other.



  • cj.wijtmans
    replied
    Originally posted by oiaohm View Post

    https://www.graplsecurity.com/post/i...e-linux-kernel
    You need to read this one, then read the QNX page I quoted. This io_uring problem that turned up is the same problem that affects QNX. The microkernel philosophy is not without its major hazards. As you attempt to remove the cost of context switching to gain performance in a microkernel, you start moving structures around, and that ends up depending on the compilers you are using behaving the same. The driver and the kernel might be at two different ring levels, but sharing memory space always comes with a fixed set of issues.

    This is the problem: a microkernel is not a silver bullet that magically fixes these problems. The majority of what is listed in the Linux kernel's stable-api-nonsense documentation is universal and does not care whether your OS is a microkernel or monolithic design. A stable ABI/API is its own unique problem which, surprise surprise, is 99.99% independent of the choice of OS kernel design. There are really only minor differences in what solving the stable ABI/API problem requires between a microkernel and a monolithic OS.

    1) Locking or limiting compiler choice by some means, or the whole thing becomes too complex.
    2) Embedded versioning.
    3) Abstraction to allow old versions of the interface on top of new ones. This has a performance cost and, if not done right, will keep developers of new drivers from moving off a deprecated, possibly security-flawed API/ABI onto the new one.
    4) Trust, if you want performance, that what is loaded will not break the rules and make the system unstable.

    The Linux kernel developers really only have one option for doing 1, because the Linux kernel is open source and anyone taking that source code can use whatever compiler they like; that one option is BPF.
    For 2, embedded versioning does exist; it's just that some kernels are built without it.
    The Linux kernel is missing 3 for binary and source kernel modules, but BPF is getting it these days.
    Linux kernel users want stability and security, so they are not that trusting, so you don't have 4.

    None of those 4 points has anything to do with the OS being a microkernel or monolithic, and these are the major points. This is the problem: people hold up the microkernel as a silver bullet for not having a stable ABI, and the reality is that it is not a silver bullet at all. A stable ABI sounds simple, but it is hellishly complex to do.
    Ah yes, we are in a world where coordinating with hundreds if not thousands of developers is easier than some ABI drama, which exists for Windows likewise (XP -> Vista -> 8 -> 10). The whole mess with the NVIDIA and AMD graphics drivers was evidence enough for me that Linux needs to get closer to a microkernel. Maybe not completely, but set some rules for what goes into the kernel: for example the VESA framebuffer, USB keyboard/mouse support, the very basic essentials to get a basic working terminal. Anything too complex and not based on standards should be yeeted out. Also, these drivers need their own ring.
    Last edited by cj.wijtmans; 11 November 2022, 08:28 PM.



  • oiaohm
    replied
    Originally posted by xfcemint View Post
    Another thing, perhaps related to what oiaohm said (he is sometimes right): Linux can move towards the microkernel "philosophy". It just needs to define an ABI for userspace drivers; add some additional, more microkernel-like syscalls; add a resource-accounting mechanism for userspace drivers; add a privilege system (capabilities) for userspace drivers; and add a versioning mechanism (like APT's) that can figure out which services are mutually compatible.

    What do you get? A Linux-compatible, microkernel-like OS. It won't be as good or as fast as a microkernel built from scratch, but it would be an improvement in the right direction. The first step: a decision has to be made by Linus and his close associates to start moving towards a microkernel philosophy.

    You need to read this one, then read the QNX page I quoted. This io_uring problem that turned up is the same problem that affects QNX. The microkernel philosophy is not without its major hazards. As you attempt to remove the cost of context switching to gain performance in a microkernel, you start moving structures around, and that ends up depending on the compilers you are using behaving the same. The driver and the kernel might be at two different ring levels, but sharing memory space always comes with a fixed set of issues.

    This is the problem: a microkernel is not a silver bullet that magically fixes these problems. The majority of what is listed in the Linux kernel's stable-api-nonsense documentation is universal and does not care whether your OS is a microkernel or monolithic design. A stable ABI/API is its own unique problem which, surprise surprise, is 99.99% independent of the choice of OS kernel design. There are really only minor differences in what solving the stable ABI/API problem requires between a microkernel and a monolithic OS.

    1) Locking or limiting compiler choice by some means, or the whole thing becomes too complex.
    2) Embedded versioning.
    3) Abstraction to allow old versions of the interface on top of new ones. This has a performance cost and, if not done right, will keep developers of new drivers from moving off a deprecated, possibly security-flawed API/ABI onto the new one (see the sketch at the end of this post).
    4) Trust, if you want performance, that what is loaded will not break the rules and make the system unstable.

    The Linux kernel developers really only have one option for doing 1, because the Linux kernel is open source and anyone taking that source code can use whatever compiler they like; that one option is BPF.
    For 2, embedded versioning does exist; it's just that some kernels are built without it.
    The Linux kernel is missing 3 for binary and source kernel modules, but BPF is getting it these days.
    Linux kernel users want stability and security, so they are not that trusting, so you don't have 4.

    None of those 4 points has anything to do with the OS being a microkernel or monolithic, and these are the major points. This is the problem: people hold up the microkernel as a silver bullet for not having a stable ABI, and the reality is that it is not a silver bullet at all. A stable ABI sounds simple, but it is hellishly complex to do.
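
    To make point 3 concrete, here is a minimal C sketch of what such an abstraction costs (all names and structures are invented for illustration): a driver written against a deprecated v1 interface keeps working because every call is translated through a shim into the current v2 structure, and that extra copying is exactly the performance cost, and the temptation to stay on the old interface, that point 3 warns about.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Current (v2) interface: gained a flags field and wider offsets/lengths. */
    struct io_req_v2 {
        uint64_t offset;
        uint64_t length;
        uint32_t flags;
    };

    static int submit_io_v2(const struct io_req_v2 *req)
    {
        printf("v2 submit: off=%llu len=%llu flags=%u\n",
               (unsigned long long)req->offset,
               (unsigned long long)req->length, req->flags);
        return 0;
    }

    /* Deprecated (v1) interface old drivers were built against. */
    struct io_req_v1 {
        uint32_t offset;
        uint32_t length;
    };

    /* Compatibility shim: translate every v1 call into a v2 call. The copy
     * and the defaulted fields are the price of keeping the old interface
     * alive on top of the new one. */
    static int submit_io_v1(const struct io_req_v1 *req)
    {
        struct io_req_v2 wide;
        memset(&wide, 0, sizeof(wide));
        wide.offset = req->offset;
        wide.length = req->length;
        wide.flags  = 0;              /* old drivers cannot express new options */
        return submit_io_v2(&wide);
    }

    int main(void)
    {
        /* An "old driver" still calling the deprecated entry point. */
        struct io_req_v1 legacy = { 4096, 512 };
        return submit_io_v1(&legacy);
    }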



  • sinepgib
    replied
    Originally posted by xfcemint View Post
    It won't be as good or as fast as a microkernel built from scratch, but it would be an improvement in the right direction.
    Not right away, but once that's established and has ample driver support you can start stripping stuff to gradually make it a microkernel proper. I think that should make it at least as fast as a microkernel from scratch at some point.



  • oiaohm
    replied
    Originally posted by mdedetrich View Post
    On top of the fact that Linus has a hatred of microkernels (and especially GNU Hurd), a lot of the arguments being put forward are, technically speaking, largely baseless and only exist for cultural and historical reasons, and increasingly that style of thought is seen as an ancient relic. While there are arguments that, if you were making a general-purpose kernel nowadays, microkernels would be too extreme, you would really be scraping the bottom of the barrel to argue that a monolithic/Linux-like kernel is good design. The modern Linux desktop is having to deal with these design limitations: if you buy a brand-new graphics card you have to run the latest Linux kernel version (which, depending on your distribution and circumstances, may not even be ideal or even possible), purely because Linux refuses to have a stable graphics ABI. Otherwise you would be able to install a graphics driver just like any package (regardless of whether the driver is open source or not) on any Linux kernel version as long as it is somewhat modern (remember that Windows has had the same stable graphics ABI since Vista days).
    No, you are guilty of falsehoods as well. The Vista graphics ABI is not the same as the Windows 11 graphics ABI; there have been quite a few changes in between.

    Linux already has a feature matching one of the key reasons a Vista driver appears to work on Windows 11, but there are 3 key features in total that make it work:
    1) Windows kernel modules carry version details (the counterpart of Linux's MODVERSIONS feature).
    2) Windows, instead of failing the way MODVERSIONS does on Linux, can apply an abstraction layer to the driver's calls; this does mean older drivers perform worse than newer drivers.
    3) Microsoft is able to define which compiler(s) developers are allowed to use to make drivers, and this is a factor.

    Notice that these points have nothing to do with being a microkernel.

    You start off by saying it's extra effort to have a defined interface; you have missed what that in fact requires.

    From the section "Binary Kernel Interface":
    Depending on the version of the C compiler you use, different kernel
    data structures will contain different alignment of structures, and
    possibly include different functions in different ways (putting
    functions inline or not.) The individual function organization
    isn't that important, but the different data structure padding is
    very important.
    This is the first point, and it is no error that it is. The problem of building an OS with multiple compiler versions leading to crashes and instability, documented here for Linux, raises its horrible head even with microkernels that use shared memory between driver parts, like QNX. Some of the cases where Microsoft updates left users' systems unable to boot have also been traced back to this same issue, even within Microsoft's restricted list of allowed compilers.
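
    A tiny C example of the padding problem being described (the #pragma below just stands in for what differing compiler versions, flags or ABIs can do to the same source): the two builds disagree on where the fields live, so code from one build reading a structure produced by the other reads garbage.

    #include <stdio.h>
    #include <stddef.h>

    struct event_natural {        /* "kernel" built with default alignment */
        char  type;
        long  timestamp;          /* the compiler inserts padding before this */
        short flags;
    };

    #pragma pack(push, 1)
    struct event_packed {         /* "driver" built with a packed layout */
        char  type;
        long  timestamp;          /* no padding: every offset shifts */
        short flags;
    };
    #pragma pack(pop)

    int main(void)
    {
        printf("offsetof(timestamp): %zu vs %zu\n",
               offsetof(struct event_natural, timestamp),
               offsetof(struct event_packed, timestamp));  /* e.g. 8 vs 1 on x86-64 Linux */
        printf("sizeof(event):       %zu vs %zu\n",
               sizeof(struct event_natural), sizeof(struct event_packed));
        return 0;
    }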

    The MODVERSIONS feature Linux already has, so we can class that as even with Windows.
    As for the abstraction-layer solution: if you read on through stable-api-nonsense, notice the bit about new drivers using old USB interfaces that don't work right. This happens under Windows too, and with all classes of drivers, so the abstraction layer would need to be done better. But the compiler part is an absolute killer: without solving the compiler problem you will have instability.

    Microsoft's Singularity OS research project, a managed OS, was Microsoft's attempt to fix this problem before they started doing driver certification (where they can reject drivers built with the wrong compiler). Basically, bytecode abstraction. One thing about BPF bytecode and managed-OS bytecode drivers is that this route is a solution to compiler mismatch between the kernel and the drivers. It does come at a price: in current managed-OS and BPF designs the driver pays a compile cost at init time.

    mdedetrich, the next option is to get the distributions building Linux kernels to use a restricted list of compilers, so that the abstraction layer does not need to be ultra complex. This is one of those herding-cats problems. Distributions want more performance than their competitors in benchmarks, so they will want to use non-approved compilers. Being open source, where the distributions build the drivers and kernel themselves, the upstream kernel.org developers cannot control these actions in any way. Microsoft has that control, so they can pull this off.

    mdedetrich, you might say stuff it, just have the code in user space, fully abstracted, since the Linux kernel-to-userspace interface is designed to be compiler-neutral. FUSE, CUSE, BUSE, UIO and others have all been provided over the years. You get constant complaints about performance overhead versus in-kernel code.


    Yes, when you start solving the performance problems of FUSE/CUSE/BUSE..., other problems start turning up. The QNX problem is now on Linux because of io_uring. Very quickly you can end up back at "the kernel used compiler X, the userspace application used compiler Y, and the system dies", or "we cannot do X because not all compilers support it". For example, Linux syscalls pass no 128-bit values between kernel space and user space, not because the hardware does not support 128-bit operations but because LLVM and GCC implemented them differently. BPF is able to use 128-bit operations because its generated native code matches whatever the compiler that built the kernel did.

    A stable driver ABI has many problems. These problems exist whether your OS is a microkernel or monolithic. When they don't appear to exist, you have normally not looked closely enough to see the mitigations, just as you missed Microsoft restricting the compilers used to make drivers. stable-api-nonsense is not written in a monolithic-only way; the problems it details apply to all OSes, and wherever there is the appearance of a stable driver API/ABI, there have to be mitigations for those problems. The issue with Linux is that many mitigation options, like completely restricting compiler versions, are not open to the Linux kernel developers. The same applies to many different open-source microkernels.



  • sinepgib
    replied
    Originally posted by xfcemint View Post
    Microkernels... I can't wait for the day when I'll have one on my desktop. I'll open a champagne and celebrate all night. I'll finally have all the software on my computer under my control. I'll be able to choose from a wide variety of services, where each one is individually replaceable, and no bad apples can damage the stability, security or reliability of my computer.
    Hmmm, none of that is necessarily true though. Interfaces between userspace services may break (specifically protocols), both the microkernel and the services may be closed source and thus not under your control, etc. Tight coupling of userspace is not only possible, it is one of the main criticisms of systemd.



  • mdedetrich
    replied
    Originally posted by xfcemint View Post

    That is quite interesting.

    Well, from my point of view you can still kind of refactor a microkernel plus services, like this: you rip out half of the services and replace them with refactored ones. As long as the interface to all the other services and programs doesn't change, it works.
    Oh definitely. In the modern day, the argument that you can't refactor or work on code unless it's all in a single monolith is largely absurd. There is definitely extra effort in having to define an interface (making a well-defined interface that needs to last isn't easy, and it's an art/skill in itself), but it's a skill people today are especially good at, precisely because of the general move away from monolithic designs. You can take that argument further: the act of creating an interface actually forces you to think about the problem space and reveals potential issues, rather than just iteratively hacking away at things while missing the forest for the trees.

    On top of the fact that Linus has a hatred of microkernels (and especially GNU Hurd), a lot of the arguments being put forward are, technically speaking, largely baseless and only exist for cultural and historical reasons, and increasingly that style of thought is seen as an ancient relic. While there are arguments that, if you were making a general-purpose kernel nowadays, microkernels would be too extreme, you would really be scraping the bottom of the barrel to argue that a monolithic/Linux-like kernel is good design. The modern Linux desktop is having to deal with these design limitations: if you buy a brand-new graphics card you have to run the latest Linux kernel version (which, depending on your distribution and circumstances, may not even be ideal or even possible), purely because Linux refuses to have a stable graphics ABI. Otherwise you would be able to install a graphics driver just like any package (regardless of whether the driver is open source or not) on any Linux kernel version as long as it is somewhat modern (remember that Windows has had the same stable graphics ABI since Vista days).

    This is why the only Linux devices that are successful with end users in the non-server space are, to a certain degree, locked-down devices (Steam Deck/Android) and not general-purpose PCs.
    Last edited by mdedetrich; 11 November 2022, 12:07 PM.

