The Linux Kernel Has Been Forcing Different Behavior For Processes Starting With "X"


  • Originally posted by xfcemint View Post
    Amazing, I am simply speechless. That changes everything.
    I guess that was meant to be a smart response.
    The Vista graphics ABI is not the same as the Windows 11 graphics ABI. There have been quite a few changes in the middle.


    There are quite a few changes in the middle, and the big thing here is that Microsoft locks down the compiler you use to make kernel drivers.
    https://learn.microsoft.com/en-us/wi...-wdk-downloads
    Windows 11, version 22H2: Windows Driver Kit (WDK)
    Windows 11, version 21H2: Windows 11, version 21H2 WDK
    Windows Server 2022: WDK for Windows Server 2022
    Windows 10, versions 22H2 / 21H2 / 21H1 / 20H2 / 2004: WDK for Windows 10, version 2004
    Windows 10, versions 1909 / 1903: WDK for Windows 10, version 1903
    Windows 10, version 1809 and Windows Server 2019: WDK for Windows 10, version 1809
    Windows 10, version 1803: WDK for Windows 10, version 1803
    Windows 10, version 1709: WDK for Windows 10, version 1709
    Windows 10, version 1703: WDK for Windows 10, version 1703
    Windows 10, versions 1607 / 1511 / 1507 and Windows Server 2016: WDK for Windows 10, version 1607
    Windows 8.1 Update: WDK 8.1 Update (English only) - temporarily unavailable; WDK 8.1 Update Test Pack (English only) - temporarily unavailable; WDK 8.1 Samples
    Windows 8: WDK 8 (English only); WDK 8 redistributable components (English only); WDK 8 Samples
    Windows 7: WDK 7.1.0

    Yes, from Windows 7 to Windows 11 there are 13 different driver development kits. A driver built with the Windows 11 22H2 driver development kit will not work on a Windows 11 21H2 system. And not every Windows 7 driver, as in one made with WDK 7.1.0, will work with Windows 8, let alone going forward to Windows 11.

    Yes, a Windows Vista driver might load on Windows 11, but only if you are lucky. Yes, it has a better chance than a Linux kernel module under CONFIG_MODVERSIONS, but it is the same basic thing: check the version information in the driver and attempt to link it up correctly. That is what is going on inside Windows, with a little extra abstraction for known cases where it is not going to work.
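
    To make concrete what that version check amounts to, here is a rough userspace sketch of the idea behind symbol versioning (CONFIG_MODVERSIONS style): each exported symbol carries a checksum of its interface, and the loader refuses to link a module whose recorded checksum differs. The symbol names and CRC values below are made up for the example; this is not the actual kernel implementation.

    Code:
    /* Userspace illustration of the MODVERSIONS idea: refuse to "load" a
     * module whose recorded symbol CRCs disagree with the running kernel's. */
    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    struct symbol_version {
        const char *name;
        uint32_t    crc;   /* checksum of the symbol's type signature */
    };

    /* CRCs the running "kernel" exports (values invented for the example). */
    static const struct symbol_version kernel_syms[] = {
        { "usb_register_driver", 0x1a2b3c4d },
        { "kmalloc",             0x51e0f00d },
    };

    /* CRCs the "module" was built against. */
    static const struct symbol_version module_syms[] = {
        { "usb_register_driver", 0x1a2b3c4d },
        { "kmalloc",             0xdeadbeef },  /* built against a different layout */
    };

    static int version_ok(const struct symbol_version *mod_sym)
    {
        for (size_t i = 0; i < sizeof(kernel_syms) / sizeof(kernel_syms[0]); i++)
            if (strcmp(kernel_syms[i].name, mod_sym->name) == 0)
                return kernel_syms[i].crc == mod_sym->crc;
        return 0; /* symbol not exported at all */
    }

    int main(void)
    {
        for (size_t i = 0; i < sizeof(module_syms) / sizeof(module_syms[0]); i++)
            if (!version_ok(&module_syms[i])) {
                printf("disagrees about version of symbol %s -- refusing to load\n",
                       module_syms[i].name);
                return 1;
            }
        printf("module version check passed\n");
        return 0;
    }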

    So from Windows 7 to 11 you have 13 different compilers. That is 13 different sets of compiler quirks to deal with, and Microsoft still has failures, because this is a big enough problem space to stuff you over. Now look at Linux: there are over 600 Linux distributions in active development, a large number of them building their own kernels. Worse, each kernel version they release could have been built with a different compiler version and could carry the distribution's own unique patches, altering the offsets of things. So you now have a many-thousands-wide problem of compiler quirks on top of upstream API/ABI changes.

    xfcemint I guess I was not detailed enough. The reality is that if a driver from the prior version of Windows works on the current version of Windows, there is a lot of luck involved. Anyone claiming it works is cherry-picking the times it was successful and ignoring all the times it was not. The big reason it does not work under Windows all the time is the differences between the driver development kit compilers. Now look at Linux distributions and start counting the compilers used, and it is like: darn, I am stuffed. Take Debian: it changes the compiler version used almost as often as it makes a new kernel, and of course Ubuntu does not use the same compilers as Debian in its builds, and then Red Hat does not use the same compilers as everyone else... Starting to see the level of doomed yet?

    Now the fact that everyone using the Linux kernel is building it with different compilers with different quirks means that even attempting to make a high-performance microkernel, you are screwed, because those compiler quirks are going to get you with alterations to memory alignment and other things as soon as you attempt to use shared memory to improve performance.
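
    For a feel of how a compiler-level disagreement breaks shared memory, here is a minimal sketch where two sides of an interface do not agree on struct layout and so would read each other's fields at the wrong offsets. The mismatch is forced with #pragma pack to stand in for differing compiler defaults, flags or quirks; the message struct is invented for the example.

    Code:
    /* Two layouts of the "same" message, as two differently-built components
     * might see it. If one writes and the other reads via shared memory,
     * there is no error -- just silent corruption. */
    #include <stdio.h>
    #include <stddef.h>
    #include <stdint.h>

    struct msg_natural {           /* side A: natural alignment */
        uint8_t  type;
        uint64_t payload;          /* typically lands at offset 8 */
    };

    #pragma pack(push, 1)
    struct msg_packed {            /* side B: packed layout */
        uint8_t  type;
        uint64_t payload;          /* lands at offset 1 */
    };
    #pragma pack(pop)

    int main(void)
    {
        printf("side A: sizeof=%zu offsetof(payload)=%zu\n",
               sizeof(struct msg_natural), offsetof(struct msg_natural, payload));
        printf("side B: sizeof=%zu offsetof(payload)=%zu\n",
               sizeof(struct msg_packed), offsetof(struct msg_packed, payload));
        return 0;
    }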

    There is a downside to the freedom of open source for an OS kernel, be it microkernel or monolithic: the lack of ability to control the compiler that is being used to build it once enough independent parties get involved.

    The reality is that people claim Windows works for driver compatibility without looking at how Microsoft Windows is doing it and the failures it is suffering from. Linux is not the only thing with problems in this department. The Linux problem space is worse, and the ones who expand the problem space to hell are not the upstream kernel developers alone; you need to include the downstream distribution makers who do not cooperate with each other.



    • Originally posted by cj.wijtmans View Post
      Ah yes, we are in a world where coordinating with hundreds if not thousands of developers is easier than some ABI drama, which exists for Windows likewise (XP->Vista->8->10). The whole mess with the NVIDIA and AMD graphics drivers was evidence enough for me that Linux needs to get closer to a microkernel. Maybe not completely. But set some rules about what goes into the kernel. For example the VESA framebuffer, USB keyboard/mouse support, the very basic essentials to get a basic working terminal. Anything too complex and not based on standards should be yeeted out. Also these drivers need their own ring.
      https://kernel-recipes.org/en/2022/talks/hid-bpf/ For the USB keyboard/mouse case, what is being looked at here is a managed solution: using eBPF to do the quirks-driver work.
      USB keyboards and mice are mostly very standard; there are just a lot of devices that are slightly off specification. A lot of the USB HID stuff is this way. On Windows you do end up with thousands of drivers for USB HID that are 99% the same, with just some minor alteration to deal with a particular vendor's quirk.
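
      To illustrate that "99% identical driver" point, here is a toy sketch where everything about a mouse report is standard except one vendor quirk that has to be patched. The report layout and the quirk are invented for the example; a real fix would live in a HID driver or, per the talk linked above, in a small eBPF program attached to the device.

      Code:
      /* Toy quirk fix-up: the generic handling is shared, only one line is
       * vendor specific. */
      #include <stdio.h>
      #include <stdint.h>

      struct mouse_report {      /* generic 3-byte boot-protocol style report */
          uint8_t buttons;
          int8_t  dx;
          int8_t  dy;
      };

      /* Hypothetical vendor quirk: this device reports dy inverted. */
      static void fixup_vendor_quirk(struct mouse_report *r)
      {
          r->dy = (int8_t)-r->dy;
      }

      int main(void)
      {
          struct mouse_report r = { .buttons = 0x1, .dx = 5, .dy = -3 };
          fixup_vendor_quirk(&r);
          printf("buttons=%d dx=%d dy=%d\n", r.buttons, r.dx, r.dy);
          return 0;
      }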

      The microkernel idea might make sense in some areas. Managed-OS methods make more sense in others. A managed OS is an OS where drivers are bytecode and the kernel has a built-in JIT/AOT compiler to turn them into native code. Remember, with a managed OS you have the start-up overhead of the JIT/AOT, but then you don't have the context switch and in many cases can totally avoid IPC overheads. For items like keyboard and mouse, where latency can be an issue, the managed-OS solution may be better. Running drivers in their own ring has its own set of problems.

      The Linux kernel is developing some managed-OS features. The first appearance of a managed-OS solution instead of individual kernel drivers in Linux was eBPF for IR remotes. IR remotes absolutely can be quirky.

      The reality is there is more than one way to solve this problem, cj.wijtmans. Also, you suggested solving this with the microkernel ideal, but you did not cover how to do it at high performance and how to be sure it copes with the thousands of different compilers Linux distributions are going to use. Yes, the number of compilers used by Linux distributions to build core parts, and the interaction issues this causes, is a very large problem. The "you will mainline everything" that the upstream Linux kernel developers want to mandate is there to reduce the number of different compilers interacting with each other.



      • Originally posted by xfcemint View Post
        You don't need to enforce permissions on personal consumer-grade computers. By definition, only one user is using them, and he knows not to install suspicious software. A valuable item like a personal computer should be kept in a room behind a key, anyway. Users don't want to use a permission system, it is too complicated for them, they just want the computer to work.

        Eventually, all important bugs in all important applications will be found and corrected. The users won't be able to notice the difference between a system featuring protected memory and preemptive multitasking, and one without.

        Users won't appreciate protected memory and preemptive multitasking. Those two items are too complicated for users to appreciate or realize the benefits of. The users won't know why their computer is crashing so often, because they will blame it all on low-quality applications.
        The sort of user who knows and cares about installing only trustworthy software is also the sort that might appreciate a permission system. We already have the distinction between "ordinary" user and root on Linux, and I think people here know and appreciate it.
        Even a much more fine-grained permission system might make sense. I could imagine a sort of "sandbox light" where you restrict an application to its sub-directory in the home directory, without going all the way to setting up a virtual machine. Perhaps that already exists and I'm just not aware of it.

        Fixing all important bugs in all important applications is very optimistic, because most developers have a tendency to add new stuff before hunting down the very last of the old bugs, and new features will probably come with new bugs. And then there are the business people, whose priorities lean even more towards new features to sell.

        On the OS side, even most non-expert users will eventually notice that the same applications crash more often on some OSes than on others. Take Windows 3.1 vs. Windows 9x vs. the NT series for example.

        Originally posted by xfcemint View Post
        Implementing protected memory and preemptive multitasking is costly, so you'll need to convince the business people, too. All other successful home computers don't have protected memory and preemptive multitasking, just look at the IBM PC, Macintosh, Atari ST and Amiga 500 (note: I think the A500 is preemptive but not protected). So the business people won't believe you, because it has been like this for too long.

        The most likely path to protected memory and preemptive multitasking succeeding, IMO, would be trickle down from servers and workstations. That's how the Linux kernel ended up in consumer-grade hardware after all. It had been the workstation and server OS for several years before the consumer-grade home computers got some serious attention from the Linux community.
        I think you are right about the trickle down. In the UNIX world, multi-user systems were a big thing and you certainly did not want one clumsy or malicious user to bring down the system for all users. Hence, limited permissions and preemptive multitasking. I guess the business people were eventually convinced by too many cases of system unavailability.

        In the PC world, a desire for these things might come from wanting to run several applications in parallel, without one bad app pulling down the whole system. At least I remember the huge difference between Windows 9x and Windows 2000 in this respect. Some trickle down from the server world would be an obvious approach on the technology side.

        Originally posted by xfcemint View Post
        My guess is their reason not to go at it at first was a combination of consumer hardware being too slow back then (it did come with more context-switching overhead compared to building upon a monolithic kernel) and probably some missing compatibility with software. Because consumer hardware is nowadays just a weaker version of what runs on servers, protected memory and preemptive multitasking developed with multi-user time-sharing machines in mind may end up running on a consumer machine.

        But even then it's something that will probably take no less than a decade due to migration costs. Only then, and when compatibility with the current userspace is good enough, is it likely that computers will start shipping a proper protected mode and preemptive multitasking.
        Right on most counts, although I think small memory sizes were more relevant than processor speed. My first IBM-compatible machine was an 80386X with 4 MB RAM, which was a generous amount of RAM at the time. A lot of PCs were still sold with one MB. But when I got my hands on a copy of OS/2, it still needed most of the memory for itself. Unix was also said to need at least 4 MB.

        The migration in the Windows world was indeed a slow and expensive affair. Mostly slow because software was not always upgraded by its makers, but lingered until it died from lack of user interest. Also, Microsoft was bending over a lot to accommodate programming habits from the Windows 3.x era, such as dumping configuration data into the installation directory. Of course, that reduced the pressure on developers to fix their shit.
        Reportedly, MS even had dedicated code in Windows 95 to avoid breaking SimCity. It was SimCity that had a bug in memory management, but MS changed the memory manager in Win95 to accommodate that.
        "Not breaking Userland" was almost as important to Microsoft as to Linus Torvalds. Linux just had it easier because it came from the UNIX world where having to stick to one's home directory was already well established.
        Last edited by Rabiator; 12 November 2022, 10:19 AM.



        • Originally posted by xfcemint View Post
          I was citing sinepgib, to the point that I was just repeating his sentences almost word for word. My point is: what would sinepgib's argument look like if the year were somewhere around 1991 and he was arguing that operating systems should have preemptive multitasking and protected memory, in comparison to what I'm arguing now: that an OS needs to have a microkernel. But I switched our roles: I'm arguing AGAINST preemptive multitasking and protected memory (which is, of course, ridiculous).

          Therefore, a sentence of mine like "There are not many user-level applications. There are just a few important ones: "Word", a spreadsheet processor, and a BASIC programming language" must be understood in the context of the year 1991 and the short-sightedness of the wisdom of the time from today's perspective.
          Then I missed the context of the sarcasm. In my defense, users only caring about Office, browser and e-mail is not too far off from your sentence, and I believe there are quite a few of those out there.




          • Originally posted by oiaohm View Post

            No, you are guilty of falsehoods as well. The Vista graphics ABI is not the same as the Windows 11 graphics ABI. There have been quite a few changes in the middle.

            This feature in Linux matches up with one of the key reasons why a Vista driver on Windows 11 appears to work, but there are three key reasons in total why it works:
            1) Windows kernel modules carry version details, the equivalent of the MODVERSIONS feature.
            2) Windows, instead of failing the way MODVERSIONS does on Linux, can apply an abstraction layer to the driver's calls; this does result in older drivers having lower performance than newer drivers.
            3) Microsoft is able to define which compiler(s) developers are allowed to use to make drivers, and this is a factor.

            Notice these points have nothing to do with being a microkernel.

            You start off by saying it is extra effort to have a defined interface; you have missed what that in fact requires.
            https://www.kernel.org/doc/Documenta...i-nonsense.rst
            In the section "Binary Kernel Interface"


            This is the first point, and it is no accident that it comes first. The problem of having multiple compiler versions building the OS, leading to crashes and instability, is documented for Linux and raises its horrible head even with microkernels that use shared memory between driver parts, like QNX. Yes, some of the cases of Microsoft updates resulting in users' systems not booting have also been traced back to this same issue, despite Microsoft's restricted list of allowed compilers.

            The MODVERSIONS feature Linux already has, so we can class that as even with Windows.
            As for the abstraction-layer solution: if you read on through stable-api-nonsense, notice the bit about new drivers using old USB interfaces that don't work right; this happens under Windows too, and with all classes of drivers. So the abstraction layer would need to be done better. But the compiler bit is an absolute killer. Without solving the compiler bit you will have instability.

            Microsoft's Singularity OS research project, a managed OS, was Microsoft's attempt to fix this problem before they started doing driver certification (where they can reject drivers built with the wrong compiler). Basically bytecode abstraction. One thing about BPF bytecode and managed-OS bytecode drivers is that this route is a solution to compiler mismatch between the kernel and the drivers. It does come with a price: in current managed-OS and BPF designs the driver pays a compile cost at init time.

            mdedetrich the next option is to get the distributions building Linux kernels to use a restricted list of compilers, so that the abstraction layer does not need to be ultra complex. This is one of the herding-cats problems. Distributions want more performance than their competitors in benchmarks, so they will want to use non-approved compilers. Being open source, where the distributions build the drivers and kernel themselves, the upstream kernel.org developers cannot control these actions in any way. Microsoft has control, so they can pull this off.

            mdedetrich you might say stuff it, just have the code in user space fully abstracted, since the Linux kernel's userspace interface is meant to be compiler neutral. FUSE and CUSE and BUSE and UIO and others have all been provided over the years. You get constant complaints about their performance overhead versus in-kernel code.

            https://www.graplsecurity.com/post/i...e-linux-kernel
            Yes, when you start solving the performance problems of FUSE/CUSE/BUSE and the rest, other problems start turning up. Yes, the QNX problem is now on Linux because of io_uring. And very quickly you can end up back at: hey, the kernel was built with compiler X, the userspace application with compiler Y, and the system dies. Or: we cannot do X because not all compilers support it. Like the fact that Linux kernel syscalls pass no 128-bit values between kernel space and user space, not because the hardware does not support 128-bit operations but because LLVM and GCC implemented them differently. Yes, BPF is able to use 128-bit operations because its native code matches whatever the compiler that built the kernel did.

            A stable driver ABI has many problems. These problems exist whether your OS is a microkernel or a monolithic one. When these problems don't appear to exist, you have normally not looked closely enough to see the mitigations, like how you missed Microsoft restricting the compilers used to make drivers. stable-api-nonsense is not written in a monolithic-only way; the problems it details apply to all OSes, and if there is an appearance of a stable API/ABI for drivers, there have to be mitigations for those problems. The issue with Linux is that many of the mitigation options are not open to the Linux kernel developers, like restricting compiler versions completely. The same applies to many different open source microkernels.
            Of course there are changes, the point is that the API is forwards and backwards compatible during that timeframe. Loading newer drivers on older versions of Windows just means that the newer functions don't get used, and loading older drivers on newer Windows versions means the newer Windows version detects that the driver doesn't support certain functions and doesn't enable those features.

            If you have a Windows machine you can try this out yourself; Microsoft is very serious when it comes to backwards and forwards compatibility.
            Last edited by mdedetrich; 12 November 2022, 06:05 PM.
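
            A minimal sketch of the kind of feature detection described above: the driver advertises which interface version it was built for, and the host only calls what the driver actually provides. The structure and names are invented for illustration; this is not the real Windows (or Linux) driver interface.

            Code:
            /* Versioned ops table: older drivers simply lack the newer entry
             * points, so the host skips those features instead of crashing. */
            #include <stdio.h>

            #define DRIVER_IFACE_V1  1   /* basic read/write */
            #define DRIVER_IFACE_V2  2   /* adds power management */

            struct driver_ops {
                int version;                        /* interface the driver targets */
                int (*read)(void *buf, int len);
                int (*write)(const void *buf, int len);
                int (*suspend)(void);               /* only present from v2 on */
            };

            static int old_read(void *buf, int len)        { (void)buf; return len; }
            static int old_write(const void *buf, int len) { (void)buf; return len; }

            /* An "old" driver built against interface v1: no suspend entry point. */
            static const struct driver_ops old_driver = {
                .version = DRIVER_IFACE_V1,
                .read    = old_read,
                .write   = old_write,
                .suspend = NULL,
            };

            /* Host side: enable newer features only if the driver advertises them. */
            static void host_suspend_device(const struct driver_ops *drv)
            {
                if (drv->version >= DRIVER_IFACE_V2 && drv->suspend)
                    drv->suspend();
                else
                    printf("driver predates suspend support, feature disabled\n");
            }

            int main(void)
            {
                host_suspend_device(&old_driver);
                return 0;
            }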



            • Originally posted by xfcemint View Post
              Why do you think that a managed OS cannot work together with a microkernel? This "managed OS" would also be related to clustered OS calls, as the easiest way to do clustered calls would be via a byte code interpreter. If I were to quickly judge the managed OS idea, I would say that it is too complicated, too risky, for a too small (potential) gain compared to a microkernel. It essentially boils down to integrating an entire Javascript JIT inside an OS.

              A managed OS can have a microkernel, but a managed OS is more like a hybrid OS. You would not use a JavaScript JIT; no managed OS ever has. BPF is what would become the managed-OS language in the Linux case. Microsoft used a restricted CLR, as in .NET, and so have others. Other groups again have done Java.


              Managed OSes that are microkernels mostly use a language-based system for the protection, so they are not using your hardware ring levels. All the managed bytecode drivers, once converted to native code, run in ring 0 with the microkernel. Of course, Linux BPF does bits of this the way you would in a managed OS, just only for small segments.

              Clustered OS calls would be like io_uring being looked at for doing syscalls with Linux. The managed-OS stuff is a bytecode that can perform logical compare operations and so on.


              There is something interesting about BPF and most of the best managed OSes: the bytecode language is intentionally designed not to be Turing-complete for the driver bytecode. This provides a very interesting security difference. Yes, event-responsive driver designs with a fixed budget of operations before the code must stop are a common feature of managed-OS driver bytecode designs. Managed OSes do not trust the driver developers not to attempt to code infinite loops or buffer overflows or other horrible things; instead, those things are prevented by the bytecode design or the bytecode verifier.
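
              To make the bounded-bytecode point concrete, here is a toy interpreter whose "driver programs" cannot loop (the program counter only moves forward) and are rejected if they exceed a fixed instruction budget, so a buggy or hostile driver cannot hang the host. The instruction set is invented for the example and is nothing like real BPF, which additionally runs a static verifier before loading.

              Code:
              /* Toy bounded interpreter: forward-only execution, fixed budget. */
              #include <stdio.h>
              #include <stdint.h>
              #include <stddef.h>

              enum op { OP_LOAD_IMM, OP_ADD, OP_STORE_OUT, OP_END };

              struct insn { enum op op; int32_t arg; };

              #define MAX_INSNS 64   /* hard budget: execution always terminates */

              static int run_driver_prog(const struct insn *prog, size_t len, int32_t *out)
              {
                  int32_t acc = 0;
                  if (len > MAX_INSNS)
                      return -1;                       /* reject over-budget programs */
                  for (size_t pc = 0; pc < len; pc++)  /* pc only moves forward */
                      switch (prog[pc].op) {
                      case OP_LOAD_IMM:  acc = prog[pc].arg;  break;
                      case OP_ADD:       acc += prog[pc].arg; break;
                      case OP_STORE_OUT: *out = acc;          break;
                      case OP_END:       return 0;
                      }
                  return 0;
              }

              int main(void)
              {
                  const struct insn prog[] = {
                      { OP_LOAD_IMM, 40 }, { OP_ADD, 2 }, { OP_STORE_OUT, 0 }, { OP_END, 0 },
                  };
                  int32_t result = 0;
                  run_driver_prog(prog, sizeof(prog) / sizeof(prog[0]), &result);
                  printf("driver program produced %d\n", (int)result);
                  return 0;
              }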

              Originally posted by xfcemint View Post
              Highest performance is overrated, and it always was. ARM failed with highest performance, only to succeed later with low power and simplicity. How do you get highest performance? Who knows, you deal with that issue later. The issue here is that the house is falling apart because the foundations cannot withstand the weight, so we should not be discussing whether the new house can ever be as cosy as the old one.
              Highest performance might be overrated, but too low performance is also not acceptable. The weight of the massive hardware support Linux has is something no microkernel in history has ever been able to carry. Massive hardware support is another thing that increases the number of compilers the Linux kernel is built with.

              Linux is a very unique problem space. Most likely the solution to the Linux kernel problem space will not be a pure microkernel. The Linux kernel will most likely end up a mixture of concepts because of the problem space it exists in. Yes, the compiler nightmare is part of the Linux problem space caused by the massive hardware support.

              Originally posted by xfcemint View Post
              About compilers for Linux... there exist align directives. Some additional directives can be added that mean slightly different things on different architectures/platforms, in order to maintain high performance. Historically, the type int did not have a fixed size, but programs can be made compatible even if int is 36 bits on one platform and 16 bits on another.
              There is a hard reality here. Even Microsoft, with the compilers they ship with their driver development kits, has not managed to keep align directives doing the same things all the time, and yes, they were directly attempting to do this. Developer errors in compilers are a true curse. The more compilers, the more compiler bugs you have to deal with, and the harder it is to have a functional ABI.

              The hard reality is that compilers are a high-precision tool, not a high-accuracy tool, when it comes to alignments. Please note that with GCC and LLVM, if you change the optimization level, the same compiler can at times produce different alignments even when you used directives. Of course, every run at the same optimization level generates the same alignments. So high precision on alignments is true; closer inspection shows very poor accuracy with alignments.
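
              One way to catch that kind of layout drift at build time, rather than debugging corruption later, is to pin the ABI-critical offsets with static assertions. This is a minimal sketch; the struct and its expected values are invented for the example and assume a typical 64-bit ABI.

              Code:
              /* Build fails here if a different compiler, flag set or
               * optimization level lays the structure out differently. */
              #include <stdio.h>
              #include <stdint.h>
              #include <stddef.h>

              struct shm_msg {            /* something that crosses an ABI boundary */
                  uint32_t type;
                  uint32_t flags;
                  uint64_t payload;
              };

              _Static_assert(offsetof(struct shm_msg, payload) == 8, "payload offset drifted");
              _Static_assert(sizeof(struct shm_msg) == 16,           "struct size drifted");

              int main(void)
              {
                  printf("shm_msg layout verified: size=%zu align=%zu\n",
                         sizeof(struct shm_msg), (size_t)_Alignof(struct shm_msg));
                  return 0;
              }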



              • Originally posted by mdedetrich View Post
                Of course there are changes, the point is that the API is forwards and backwards compatible during that timeframe. Loading newer drivers on older versions of Windows just means that the newer functions don't get used, and loading older drivers on newer Windows versions means the newer Windows version detects that the driver doesn't support certain functions and doesn't enable those features.

                If you have a Windows machine you can try this out yourself; Microsoft is very serious when it comes to backwards and forwards compatibility.
                No, I have tried it, and that is the problem. You need to go and try it out properly this time: build some sample drivers with the WDKs and see how it really behaves. I have drivers for Vista-era items whose maker is no more that don't work on Windows 7. Microsoft is very serious about backwards and forwards compatibility, this is true, but they don't get it to work all the time, and they have a restricted set of compilers and still get done in by it.

                By the way, it is not true that you have backwards and forwards compatibility. If you build a driver for Windows 11, version 22H2 and you attempt to load it on Windows 11, version 21H2, it does not work at all. This is true with all Windows driver development kits. In reality you cannot load drivers built for newer Windows on older Windows at all, because the Windows driver loader forbids it.

                Windows only has backwards compatibility with drivers, as in newer Windows being able to load older drivers some of the time.

                Those providing drivers are providing multiple versions of them because that is required if they want to use newer features with Windows. Basically, you just wrote what has not been true since Windows Vista.

                Before Windows Vista, with XP and earlier, you did have what you describe: load the driver, and it would attempt to nop out the unsupported functions. The result was unstable drivers. Yes, XP drivers on Windows 2000 used to play up badly at times. Microsoft learnt long ago that what you described was a huge mistake.



                • Originally posted by xfcemint View Post

                  When I think of it, the most obvious targets for a microkernel are hypervisors and "consumer desktop" = home computers/laptops. The home computer market is a target because security, stability and reliability are important there, and also because some (possible/potential) performance penalty there would matter none.

                  It is a bit sad to hear that Linus is such a big opponent of microkernels. It shouldn't be surprising, especially not after that famous debate of his. Like other people, after he has chosen a side, it is stubbornness to the end in ever rising amounts. The sad thing is that by doing this he has actually turned himself into the exact same problem that he once helped to solve/defeat.

                  It also makes me wonder what other OSS institutions besides Linus are doing regarding microkernels. I mean, anyone can just fork the Linux kernel and add microkernel facilities, although that requires a big amount of work. Has Linus managed to convince everyone to his side? Is there a lack of funding? Is there a lack of interest? Is there a lack of supporters in the academic community?

                  Also, I was re-reading this fascinating thread a few times. Look at all the ignorance that exists out there. There certainly won't be much support for microkernels from the side of "advanced users": sysadmins, programmers and such. They are all happy to simply recite the common wisdom and glorify their heroes.

                  The biggest problem with arguments for microkernels is that, if those are right, then it automatically implies that Linus & company are wrong. The venerated heroes-of-yesterday are now suddenly as ignorant and as stubborn as ordinary people, and that is the hardest thing to chew through.
                  I think that if Linux were to go a more microkernel route, the corporate funding would not make much sense anymore. That would be an issue.



                  • Originally posted by xfcemint View Post
                    But, if Linux goes the microkernel route, then it doesn't really need corporate funding, since the scope of the Linux project gets reduced. Linus can then apply for EU funds or something similar. Wouldn't it be better to remove corporate influence from an important piece of software such as an OS kernel?

                    EDIT: Also, what I have been saying so far is actually a route through a hybrid kernel (for compatibility reasons), and the kernel would remain a hybrid for at least a decade. Therefore, in the near future it would be business as usual.
                    https://wiki.minix3.org/doku.php?id=...rerequirements vs https://en.wikipedia.org/wiki/List_o..._architectures

                    Reducing the scope would result in reduced platform support. Look at the hell Microsoft has had getting Windows onto ARM, or at the fact that Minix and others with reduced scope end up with reduced platform support. Mainlining the drivers the way Linux does means those drivers are available on other platforms as a starting point.

                    Corporate influence is required to have drivers for hardware. EU funds are normally not going to cover writing drivers to support hardware. Hardware vendors are corporate, like it or not.

                    The biggest problem with arguments for microkernels is that, if those are right, then it automatically implies that Linus & company are wrong. The venerated heroes-of-yesterday are now suddenly as ignorant and as stubborn as ordinary people, and that is the hardest thing to chew through.
                    Yes, that is correct. The problem here is that there are known issues with microkernels. Research into managed OSes happened because there were valid reasons for the managed-OS design as well.

                    Monolithic, Microkernel and Managed OS all have their strengths and weaknesses.

                    There are reasons why mainlining everything with Linux has been a very good thing for Linux. A big one is large platform support: when spinning up a new platform you don't need to remake all the drivers from scratch, because they were mainlined. This is why reducing the scope of Linux does not work: reduce the scope of Linux and you have removed what makes Linux popular to use, so it is not Linux any more.

                    This is what makes Linux a hard problem: the huge scope of Linux is what the solution has to work with and not undermine.



                    • Originally posted by oiaohm View Post
                      There are reasons why mainlining everything with Linux has been a very good thing for Linux. A big one is large platform support: when spinning up a new platform you don't need to remake all the drivers from scratch, because they were mainlined. This is why reducing the scope of Linux does not work: reduce the scope of Linux and you have removed what makes Linux popular to use, so it is not Linux any more.
                      Note this is a bit of a false dichotomy. You can very well have your big monorepo with as many drivers as you want bundled within the tree, all the while keeping them running as separate processes talking via some efficient IPC.
                      Besides, porting efforts, once the driver is done for a platform, should be the same or less for a microkernel, as it allows for fewer ad-hoc solutions. If your IPC is platform neutral then your driver will be as well (except for the hardware and bus it uses of course).
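
                      As a small illustration of that shape, here is a sketch of a "driver" running as a separate process and answering requests over IPC, using a plain socketpair and fork. The message format is invented for the example; a real microkernel would use a much more efficient IPC primitive, but the porting argument above is about this structure, not the transport.

                      Code:
                      /* Parent = "kernel side" sending a request; child = "driver"
                       * process servicing it over a socketpair. */
                      #include <stdio.h>
                      #include <stdint.h>
                      #include <unistd.h>
                      #include <sys/types.h>
                      #include <sys/socket.h>
                      #include <sys/wait.h>

                      struct request  { uint32_t opcode; uint32_t arg; };
                      struct response { uint32_t status; uint32_t value; };

                      int main(void)
                      {
                          int sv[2];
                          if (socketpair(AF_UNIX, SOCK_SEQPACKET, 0, sv) != 0) {
                              perror("socketpair");
                              return 1;
                          }

                          pid_t pid = fork();
                          if (pid == 0) {                /* child: the "driver" */
                              close(sv[0]);
                              struct request req;
                              if (read(sv[1], &req, sizeof(req)) == (ssize_t)sizeof(req)) {
                                  struct response resp = { .status = 0, .value = req.arg * 2 };
                                  if (write(sv[1], &resp, sizeof(resp)) < 0)
                                      perror("driver write");
                              }
                              close(sv[1]);
                              return 0;
                          }

                          close(sv[1]);                  /* parent: issue one request */
                          struct request req = { .opcode = 1, .arg = 21 };
                          if (write(sv[0], &req, sizeof(req)) < 0)
                              perror("write");

                          struct response resp;
                          if (read(sv[0], &resp, sizeof(resp)) == (ssize_t)sizeof(resp))
                              printf("driver replied: status=%u value=%u\n",
                                     resp.status, resp.value);

                          close(sv[0]);
                          waitpid(pid, NULL, 0);
                          return 0;
                      }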

