The Linux Kernel Has Been Forcing Different Behavior For Processes Starting With "X"


  • Originally posted by sinepgib View Post
    I don't know, this sounds just like academics defining usefulness on theoretical grounds, just as they tend to consider big-O complexity the only predictor of performance, which is often wrong in the real world. Isolation between processes has been proven empirically to be a major improvement to security and stability, and all* microkernels provide that, formally verified or not.
    There are a lot of presumptions here that turn out not to be backed up by real-world microkernels.

    Originally posted by sinepgib View Post
    You're missing that this shared memory is between user processes with prior explicit authorization from both processes. It's certainly not the same as a monolithic kernel and carries in fact a reduced chance of kernel corruption, as now there are no reads or writes to memory from userspace in kernel context. Now overflows can't corrupt the kernel, and they can't corrupt the other process either. An overflow simply crashes the writing or reading process, depending who accesses out of bounds. The splash damage is reduced by an arguably very big margin.

    Let's take one of the worst real-world examples, one that was massively deployed: the X11 server's user mode setting (UMS) drivers. Yes, these are microkernel-style drivers; in fact UMS was first designed for a microkernel Unix, not monolithic Linux. Now, what is the fatal problem here? These UMS microkernel drivers mandated full /dev/mem access under Linux and every other platform they were used on, because their design required the userspace drivers to have full physical memory access. Think about it: you have just authorized a userspace process to have full system-wide memory access, so there is no separation between kernel and userspace, or between one userspace process and another, any more. Yes, people think the X11 server running as root was the worst problem; the worst problem was that the UMS drivers had the full run of the complete OS's memory.

    Shared memory between processes does not always need prior explicit authorization from both processes. This is true under monolithic kernels and microkernels alike; it is just a question of authorization. Think of debugging: in many OSes only the debugging process needs authorization to access another process's memory. What is authorized becomes very important.
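    To make the debugging example concrete, here is a minimal sketch, purely illustrative, of one-sided authorization with Linux ptrace(2). Only the tracer's credentials are checked by the kernel (plus Yama's ptrace_scope policy); the traced process is never asked for consent. The PID comes from argv and the peeked address is a placeholder:

    #include <errno.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/ptrace.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    int main(int argc, char **argv)
    {
        if (argc < 2) { fprintf(stderr, "usage: %s <pid>\n", argv[0]); return 1; }
        pid_t pid = (pid_t)atoi(argv[1]);

        /* Only the tracer asks for access; the tracee is never consulted.
           The kernel checks the tracer's credentials, not the target's wishes. */
        if (ptrace(PTRACE_ATTACH, pid, NULL, NULL) == -1) { perror("attach"); return 1; }
        waitpid(pid, NULL, 0);              /* wait for the tracee to stop */

        errno = 0;                          /* PEEKDATA returns -1 for both errors and data */
        long word = ptrace(PTRACE_PEEKDATA, pid, (void *)0x400000, NULL); /* placeholder address */
        if (errno) perror("peek");
        else       printf("word at 0x400000: 0x%lx\n", word);

        ptrace(PTRACE_DETACH, pid, NULL, NULL);
        return 0;
    }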

    This is why it's absolutely critical to have a verified microkernel design, not just a random microkernel design. Verification checks that the authorization design is sane and functional. The X11 server's UMS drivers are an example of authorization that is neither sane nor functional.

    Originally posted by sinepgib View Post
    That part we agree on. The only viable way forward to a mass-use microkernel is to gradually extract portions of Linux (at first optionally) to userspace until it can be shrunk. You won't get a replacement written from scratch anytime soon. Especially due to the lack of commercial incentives, which brings a chicken-and-egg problem.
    io_uring and other parts also need to be developed and their security issues worked through, because the end result needs to be a verified design. There are lots of highly insecure microkernel designs out there, some of which were ultra popular. The end result of a verified design might not be a microkernel at all; a managed OS put through a verification process, and so on, may be what succeeds.



    • Originally posted by oiaohm View Post
      These UMS microkernel drivers mandated full /dev/mem access.
      /dev/mem pretty much shits over everything. I'm assuming it goes without question that such a device node shouldn't even exist, whether we talk of a microkernel or a monolithic one. It's the dumbest hack that was ever invented and AFAIK distros tend to disable it nowadays. But because of that I refuse to even take into account that scenario in any hypothetical microkernel design, as its mere existence just makes the system a whole unikernel in disguise.

      Originally posted by oiaohm View Post
      Shared memory between processes does not always need prior explicit authorization from both processes. This is true under monolithic kernels and microkernels alike; it is just a question of authorization. Think of debugging: in many OSes only the debugging process needs authorization to access another process's memory. What is authorized becomes very important.
      Not in general, but the discussion is not whether we just use any mechanism, but what mechanisms exist. You can share memory and you can make the OS check for the right authorization before mapping just any page.
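      To make that concrete, here is a minimal sketch, illustrative only, of memory sharing where the OS checks authorization on both sides, using POSIX shared memory (the name "/demo_region" is made up; link with -lrt on older glibc):

      #include <fcntl.h>
      #include <stdio.h>
      #include <string.h>
      #include <sys/mman.h>
      #include <unistd.h>

      int main(void)
      {
          /* The producer creates a named object; mode 0600 is the authorization:
             the kernel checks every opener's credentials against it. */
          int fd = shm_open("/demo_region", O_CREAT | O_RDWR, 0600);
          if (fd < 0 || ftruncate(fd, 4096) < 0) { perror("shm_open/ftruncate"); return 1; }

          char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
          if (p == MAP_FAILED) { perror("mmap"); return 1; }
          strcpy(p, "hello");   /* writes land only inside this one 4 KiB mapping */

          /* A consumer must itself call shm_open("/demo_region", O_RDWR, 0) and mmap();
             nothing is shared until both sides have passed the kernel's checks.
             An overflow past p[4095] faults only the process that did it. */
          munmap(p, 4096);
          shm_unlink("/demo_region");
          return 0;
      }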

      Originally posted by oiaohm View Post
      This is why it's absolutely critical to have a verified microkernel design, not just a random microkernel design. Verification checks that the authorization design is sane and functional. The X11 server's UMS drivers are an example of authorization that is neither sane nor functional.
      You can verify that insane design too, so I'd say we should separate the concept of design (specification) from its verification. If your specs are shit, no formal verification will fix them; it will just prove they're shit exactly as designed.



      • Originally posted by sinepgib View Post
        /dev/mem pretty much shits over everything. I'm assuming it goes without question that such a device node shouldn't even exist, whether we talk of a microkernel or a monolithic one. It's the dumbest hack that was ever invented and AFAIK distros tend to disable it nowadays. But because of that I refuse to even take into account that scenario in any hypothetical microkernel design, as its mere existence just makes the system a whole unikernel in disguise.

        EDIT: For general discussion about this topic, please post in the following location (and not here): http://xdaforums.com/showthread.php?t=2057818 Now find a one-click root application at http://xdaforums.com/showthread.php?t=2130276.

        This is from 2012: yes, here is Samsung reimplementing /dev/mem under another name on Linux because /dev/mem had been disabled. Their userspace driver developers did not want to go through the process of authorizing memory access correctly.

        Please note we don't need "hypothetical microkernels" to see the problem of implementing a /dev/mem-like feature. QNX implements a /dev/mem equivalent for drivers, and it's in the documentation. A lot of the systems people hold up as reasons why you should use a microkernel are, by your definition, sinepgib, just unikernels in disguise.
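        For anyone who has not seen it, this is the entire "authorization" story with an unrestricted /dev/mem; a minimal sketch (the physical address is a placeholder, and CONFIG_STRICT_DEVMEM will normally block RAM access on modern distro kernels):

        #include <fcntl.h>
        #include <stdio.h>
        #include <sys/mman.h>
        #include <sys/types.h>
        #include <unistd.h>

        int main(void)
        {
            off_t phys = 0x10000000;        /* placeholder physical address */
            int fd = open("/dev/mem", O_RDWR | O_SYNC);
            if (fd < 0) { perror("open /dev/mem"); return 1; }

            /* One mmap() and this process reads and writes physical memory
               directly, bypassing every page-table protection that normally
               separates processes from each other and from the kernel. */
            volatile unsigned char *p =
                mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, phys);
            if (p == MAP_FAILED) { perror("mmap"); return 1; }

            printf("byte at phys 0x%lx: %02x\n", (long)phys, p[0]);
            munmap((void *)p, 4096);
            close(fd);
            return 0;
        }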

        Originally posted by sinepgib View Post
        Not in general, but the discussion is not whether we just use any mechanism, but what mechanisms exist. You can share memory and you can make the OS check for the right authorization before mapping just any page.
        Also remember the Linux kernel in ring 0 is itself doing a lot of memory checks on what can access what: https://www.kernel.org/doc/html/late...rotection.html

        One of the realities here is that checking authorization to access X memory can be performed in ring 0. This itself opens up another question.

        Originally posted by sinepgib View Post
        You can verify that insane design too, so I'd say we should separate the concept of design (specification) from its verification. If your specs are shit, no formal verification will fix them; it will just prove they're shit exactly as designed.
        I will agree with this: you can verify an insane design. But to perform verification you have to have a design specification. With something like seL4, a formally verified microkernel, the design itself had to go through formal verification, and that forbids a lot of stupidity. A formally verified design is not going to allow unrestricted /dev/mem, because that is a bypass of the security framework.

        There are thousands of Windows drivers where, if you look closely, you find the developer has in fact implemented /dev/mem again and again and again. This is a wheel that keeps on being reinvented.

        This is one of the problems with closed-source drivers: no proper peer review, and no proper form of verification that they are not doing something completely stupid.

        sinepgib, you are wanting to take the ideal version of a microkernel. The big catch here is that there are tons of examples of microkernels with the non-ideal outcome, where some form of /dev/mem results in the security of the microkernel not really existing, and yes, these are some of the most used microkernels. Then you have multiple examples of different vendors reimplementing /dev/mem in their non-mainlined Linux drivers. Then you have multiple examples of different developers implementing a /dev/mem equivalent under Windows as well.

        The Linux kernel's monolithic core model is not exactly the safest option. But there are many things people call microkernels that are really no better.

        Also, there is another question: is the microkernel's kernel-space/userspace split even the right model? Think about it: we have virtualization and NUMA. An Intel developer did experiment with Linux, running a hypervisor above the Linux kernel's ring 0 to provide extra protective primitives to the core Linux kernel, and of course this change would not alter the Linux kernel's unstable driver ABI.

        Remember, with the speculative execution faults, core assignment becomes important. So using NUMA and hypervisor restrictions around drivers at ring 0, so drivers run in ring 0 instead of userspace ring 3, is another option. Of course the driver ABI in this case would not have to be the Linux kernel's stable ABI to userspace. Yes, this could in theory provide all the protections the microkernel kernel/userspace split gives around drivers, and more. Remember, this would have drivers wrapped in one unique set of protections and userspace code wrapped in a different set of protections.

        sinepgib, here is the big question that could completely nuke the common microkernel idea, given that QNX and others end up implementing a /dev/mem equivalent for drivers to allow direct hardware memory access with low performance overhead: should drivers exist in their own unique area, with their own unique ABI/API and their own unique protections, different from userspace applications?

        The reality here is that a microkernel may not be the correct fit.



        • Originally posted by xfcemint View Post
            His argument relies on an endless stream of specific issues (what I can't figure out is how he knows about them all). Every human-designed system will have specific issues. Every system that has ever been designed by humans has failed, or is expected to fail.
          There is a reason why I know the specific issues: lots of time working with OSes in embedded usage.

          There is a catch: there is a repeating set of issues that keeps turning up.

          To be precise, there is a direct conflict:

          1) Core drivers need physical memory access.
          2) Non-driver applications never need physical memory access; virtual memory, to which you can apply any security control you like, will do.

          Do note that sinepgib said /dev/mem is a really bad idea. Having all drivers in userspace means you have to implement this really bad idea of /dev/mem, hopefully better.

          QNX, Samsung in 2012 with Linux, and many others ended up with physical memory access that can basically reach anything exposed to userspace.

          The monolithic split puts drivers in ring 0 and userspace applications in ring 3. In a correctly set up monolithic kernel you don't end up randomly giving userspace applications direct physical memory access, as that is restricted to code in kernel space/ring 0.

          Historic examples of microkernels had the core kernel in ring 0, drivers in ring 1, services in ring 2, and userspace in ring 3. Context switching between all those rings was highly expensive for performance. It did, however, reduce the memory assignment problem.

          Windows NT was meant to be a microkernel, but where are its drivers and driver-related services these days? In ring 0.

          Something you missed: I have given a list of specific issues from different solutions that are in fact a single problem that keeps turning up with userspace drivers. There are practical problems you have to get over when you write drivers.

          Remember, a driver needs to communicate with the hardware and with userspace. Let's say you have raw physical memory access in your userspace driver, because it needs that to work. That memory is mapped into userspace, and you are also going to need to share memory with applications in userspace. How close are you to screwing up at this point?

          The monolithic ring split between "driver/kernel services" and "userspace application" happens to make sense on security grounds. The microkernel split, being kernel/userspace, makes sense for a simpler kernel, but it then bundles drivers and general applications together.

          The reality here is that neither the microkernel nor the monolithic kernel is 100 percent right.

          The historic secure design of a microkernel:
          Ring 0 : Kernel
          Ring 1 : Drivers
          Ring 2 : Servers
          Ring 3 : Userspace programs

          Yes, in this historic design ring 3 userspace programs would only interface with servers. Ring 2 servers would interface with drivers, userspace, and the kernel. Drivers would interface with hardware, the kernel, and servers. The servers were the barrier stopping driver-developer-created issues from leaking to userspace applications.

          Remember, every ring change is like a context switch. Lots of overhead: that is the reason we don't have operating systems designed like this.

          A monolithic kernel takes rings 0, 1, and 2 of the historic secure microkernel design and fuses all of that into ring 0. A modern performant microkernel takes rings 1, 2, and 3 and fuses all of that into ring 3. Either way you have instantly degraded the security of the historic secure microkernel designs.

          xfcemint, get it yet? "Microkernel" is not all one thing.

          The 286 and later x86 processors were designed with 4 rings to suit being used for a secure microkernel. There are repeating issues with microkernels caused by solving the microkernel performance problem: every time a microkernel developer solves the performance problem, they undermine what would have been the natural security of the microkernel design.

          Yes, having drivers and servers for hardware in ring 0, as a monolithic kernel does, has dangers. The problem is that putting drivers and servers all in userspace, then having to provide them with the access they need to work, also has its dangers, and these dangers are just as bad as the monolithic kernel problem, if not worse. This is why, over and over again, monolithic and microkernels in real-world examples have ended up just as security flawed as each other.

          There are a lot of papers saying a microkernel can solve X, Y, and Z problems, but they brush over the fact that to make the microkernel perform you have undermined its security in another way. Those papers also nicely ignore that the monolithic kernel's split line happens to have a valid security reason to be where it is.

          Making a high-performing microkernel and a high-security microkernel at the same time seems, from the microkernels developed so far, like an impossible task.
          Last edited by oiaohm; 16 November 2022, 07:08 AM.



          • Originally posted by xfcemint View Post
            The way to solve this problem is to add a mechanism (by the system of capabilities, you should read about it) to the microkernel that tracks the permissions for segments of the physical address space. In this perspective, the "physical address space" becomes just another set of interfaces. The permissions to access a particular interface are then tracked by the capabilities sub-system, just like any other permissions for any other interfaces. In fact, the capabilities system allows for much broader functionality, since a capability system allows a permission-for-an-interface to be sent from one driver/service to another, duplicated, granted, revoked at will, subdivided, joined, et cetera.

            In short, the physical address space can be perceived as just another interface. Linux doesn't have a built-in capabilities system (yet).
            No, this is wrong. https://www.kernel.org/doc/html/v4.1...uio-howto.html Linux does have a built-in capabilities system, which has been built around /dev/mem and UIO for userspace drivers. Both have had their issues.
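            UIO is the sanctioned shape of this: a kernel-side stub driver grants access to specific device regions, and the userspace half is gated by ordinary file permissions on /dev/uioN instead of /dev/mem. A sketch of the userspace half, following the uio-howto conventions (memory region N is mapped at offset N * page size; a blocking read() returns the interrupt count):

            #include <fcntl.h>
            #include <stdint.h>
            #include <stdio.h>
            #include <sys/mman.h>
            #include <unistd.h>

            int main(void)
            {
                /* /dev/uio0 exists only because a kernel-side stub registered
                   this device; access is gated by normal file permissions. */
                int fd = open("/dev/uio0", O_RDWR);
                if (fd < 0) { perror("open /dev/uio0"); return 1; }

                /* Map memory region 0 of the device (offset 0 selects region 0). */
                volatile uint32_t *regs = mmap(NULL, getpagesize(),
                                               PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
                if (regs == MAP_FAILED) { perror("mmap"); return 1; }

                uint32_t irq_count;
                /* The kernel still mediates interrupts: read() blocks until one
                   arrives and returns the total interrupt count. */
                if (read(fd, &irq_count, sizeof(irq_count)) == sizeof(irq_count))
                    printf("interrupts so far: %u\n", irq_count);

                (void)regs[0];   /* device register access; layout is device-specific */
                return 0;
            }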

            Next, a question you did not ask yourself: what system is the Linux kernel generally using between userspace and kernel-mode services/drivers?



            The clues are in these functions. The answer is domains. The standard Linux model is that a Linux driver cannot access something from userspace until it is mapped into kernel space, userspace cannot access kernel-space data until it is mapped into userspace, and in standard mode a block of memory cannot be allocated to userspace and kernel space at the same time.
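            The function links above did not survive, but the canonical examples of this rule are copy_from_user()/copy_to_user(). A kernel-side sketch (the demo_write handler is hypothetical; real code would also bound len) of the domain crossing:

            #include <linux/fs.h>
            #include <linux/slab.h>
            #include <linux/uaccess.h>

            /* hypothetical write() handler for a character device */
            static ssize_t demo_write(struct file *f, const char __user *ubuf,
                                      size_t len, loff_t *off)
            {
                char *kbuf = kmalloc(len, GFP_KERNEL);

                if (!kbuf)
                    return -ENOMEM;
                /* The kernel must not dereference ubuf directly: data is explicitly
                   copied across the userspace/kernel domain boundary, and the copy
                   fails cleanly if the user pointer is bogus. */
                if (copy_from_user(kbuf, ubuf, len)) {
                    kfree(kbuf);
                    return -EFAULT;
                }
                /* ... act on kbuf entirely within the kernel domain ... */
                kfree(kbuf);
                return len;
            }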

            Please note the Linux kernel has a capabilities system on top of a domain system. Historic secure microkernels had 4 domains. Using domains has a performance cost.

            Yes, you just made another common argument mistake: presuming the Linux kernel does not already have something. Linux kernel developers have done a lot of work experimenting with userspace drivers, because, like you, many of them thought userspace drivers would be a good thing. Unlike you, they have actually done it, and found out it's not as ideal as it first appears.

            The general Linux kernel-space/userspace interface mostly operates on RCU. This is not suitable for device control, but it works quite well between the drivers and services in the kernel and userspace.
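            Since RCU keeps coming up: a minimal kernel-side sketch of the read-copy-update pattern (struct cfg and gp are my own, purely illustrative names). Readers run without blocking; an updater copies, publishes with rcu_assign_pointer(), and frees the old version only after a grace period:

            #include <linux/rcupdate.h>
            #include <linux/slab.h>

            struct cfg { int value; };
            static struct cfg __rcu *gp;     /* hypothetical RCU-protected global */

            static int read_value(void)
            {
                struct cfg *c;
                int v = -1;

                rcu_read_lock();                  /* read side: cheap, never blocks writers */
                c = rcu_dereference(gp);
                if (c)
                    v = c->value;
                rcu_read_unlock();
                return v;
            }

            static void update_value(int value)
            {
                struct cfg *newc = kmalloc(sizeof(*newc), GFP_KERNEL);
                struct cfg *old;

                if (!newc)
                    return;
                newc->value = value;
                old = rcu_dereference_protected(gp, 1);  /* caller serializes updaters */
                rcu_assign_pointer(gp, newc);            /* publish the new version */
                synchronize_rcu();                       /* wait out pre-existing readers */
                kfree(old);
            }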

            xfcemint, the historic microkernels declared 4 domains for clear reasons. The types of memory operations required change between each of those domains. Each domain can have its own domain-specific capabilities system for memory.

            Monolithic kernels, with drivers, services, and kernel core in a single domain, do have a problem: you have no means to give strictly individual capabilities to each of those parts, because they are in the same domain. Modern microkernels, with the kernel in ring 0 and drivers, services, and user applications now in ring 3, have the problem that you cannot make strictly individual capabilities between drivers, services, and user applications, because they are all in the same domain. It is 90% the same problem.

            xfcemint, basically what you are writing is like someone suggesting rearranging the deck chairs on the sinking Titanic as a solution to stop the Titanic from sinking.





            • Originally posted by xfcemint View Post
              It is possible that I could, perhaps, do some cybernetical analysis of the "situation" with userspace ABIs. That might end up useful, or it might not. If anyone is interested in reading such a thing, I can post the analysis here.
              Sounds like an interesting read, I'm in.



              • Originally posted by sinepgib View Post
                Sounds like an interesting read, I'm in.
                Really, when it comes to userspace API analysis, there is no single simple cybernetical analysis that works for all cases.

                Background: io_uring vs epoll. Nowadays there are many issues and projects focused on io_uring network performance, and the competitor is always epoll. #189 https://github.com/frevib/io_uring-echo-se...

                Linux kernel developers are doing cybernetical analysis all the time, in different areas, using all the different methods.

                IPC is a message passing mechanism, right? After all, it’s an acronym for inter-process communication? That’s historically true, but seL4 IPC has come a long way, and its IPC primitive …


                Yes, both io_uring and seL4 IPC have downsides, cases where they do not work well.

                Do note xfcemint has been getting the basics wrong. A common error is thinking capabilities can solve everything. There is a need for domains: areas where, even using capabilities, you cannot override the rules. Security breaks into two general types, Discretionary Access Control (DAC) and Mandatory Access Control (MAC). Capabilities in most operating systems are only DAC. Yes, Linux capabilities are only DAC.

                The ring 0 to ring 3 context switch for a syscall, which in the old microkernel designs would be a domain change, is really MAC. Remember, in ring 0 you can see one set of page tables, and when you switch to ring 3 you see a different set.

                MAC has a far higher performance cost than DAC in most cases.

                Really, this is the big problem with the idea that a microkernel is going to solve everything. The only way it can is if you have domains, made by rings or by MAC, around drivers, services, and userspace applications: MAC meaning strict rules that drivers are drivers, services are services, and userspace applications are userspace applications. The problem is this is going to hurt performance a lot.

                The Linux kernel and the Windows kernel splitting userspace applications from drivers, services, and the kernel core by a ring change is not a mistake from a security point of view.

                Think attack surface. Yes, the Linux kernel has a large attack surface, having all drivers, services, and the kernel core in ring 0. But without the stuff you call foolish, sinepgib, like /dev/mem, userspace applications cannot mess with raw device memory.

                Now let's look at a modern microkernel without solid MAC (as all the ones people quote are): you have drivers, services, and userspace applications in ring 3. Hang on: if permissions go wrong (and Sod's law says permissions will go wrong at some point), a userspace application can override all security controls due to complete memory access.

                Shock horror, right: the microkernel has a bigger attack surface than the monolithic kernel, because all of userspace could be a complete security-bypassing item.

                The bigger the attack surface, the more you have to audit to make a truly secure OS.

                Userspace drivers have upsides; improved security is not one of them. Userspace drivers mostly work out to be as harmful to OS security and stability as ring 0 drivers, because they need raw physical memory access to work. Remember, you have to allow for the Samsung example I gave with the Linux kernel, where the person making a userspace driver takes the lazy route of granting their driver everything.

                Core Linux kernel developers wanting drivers upstreamed and peer reviewed prevents a lot of stupidity. You see it a lot on the LKML: people making drivers get called out for doing foolish things that are not human error but are instead the lazy way, like adding a lock instead of using RCU, and so on. Then there are all the Windows drivers that basically implement /dev/mem and other lazy ways to solve problems.

                Never underestimate how unchecked human laziness can totally undermine security. This is part of why real-world microkernels have never really delivered what the concept seems to offer.



                • Originally posted by xfcemint View Post
                  Linux IPC should not be like seL4 IPC.
                  The Linux kernel itself implements many different IPC solutions. Each IPC has different advantages and disadvantages. So the Linux kernel gaining an IPC like seL4's, alongside its other options, should be expected at some point.

                  Originally posted by xfcemint View Post
                  I think that both MAC and DAC should be avoided if possible. Capabilities are much more than that. Some particular service can implement DAC or MAC on top of capabilities if it needs to, but the kernel certainly shouldn't do that.


                  This is the problem with "I think". Capabilities under Linux are implemented under the LSM framework. The LSM framework is not a capabilities solution, for many reasons.

                  Mandatory access control, as the USA government defined it in the original "Trusted Computer System Evaluation Criteria (TCSEC)", is a kernel feature.

                  The problem is that when MAC is in place, no userspace application should be allowed to start that can bypass the MAC. See the trouble here: if the service attempting to implement MAC is itself in the userspace application domain, it is no longer MAC, because it is a userspace application. Historic microkernels that used all 4 rings of the x86 processor of course do not have this problem, because userspace applications and services run in different rings/domains.

                  The next problem, and why you commonly cannot use capabilities for TCSEC-defined MAC, is that MAC rules must be invisible to the userspace application, so that an attacker cannot see what the set limits are until after they have tripped them. DAC does not have this requirement of being invisible to the application.

                  Next is why the LSM framework uses hooks instead of capabilities: hooks can run code at the approve-or-reject point, allowing more complex MAC designs than you can build with capabilities.


                  Yes, the Linux kernel supports a MAC system being written in BPF, so using a language-based security system here instead of a built-in kernel module.
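                  A minimal sketch of what that looks like, a hypothetical mandatory rule as a BPF LSM program (assumes a kernel with CONFIG_BPF_LSM and "bpf" in the active LSM list; built with clang -target bpf and loaded via libbpf; the rule itself is made up for illustration):

                  #include "vmlinux.h"
                  #include <bpf/bpf_helpers.h>
                  #include <bpf/bpf_tracing.h>

                  #define EPERM 1   /* avoid dragging uapi headers into the BPF object */

                  char LICENSE[] SEC("license") = "GPL";

                  SEC("lsm/file_open")
                  int BPF_PROG(deny_devmem, struct file *file)
                  {
                      /* kernel-internal dev_t: major in the high bits, minor in the low 20 */
                      dev_t dev = file->f_inode->i_rdev;

                      /* Hypothetical mandatory rule: refuse every open of char device 1:1
                         (/dev/mem). The policed application never sees the rule, only the
                         -EPERM after it trips it, which fits the MAC invisibility point. */
                      if ((dev >> 20) == 1 && (dev & 0xfffff) == 1)
                          return -EPERM;
                      return 0;
                  }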

                  xfcemint, you are right to say MAC should be implemented as a service. Remember how I pointed out the 4 domains of historic microkernels:

                  1) kernel
                  2) drivers
                  3) services
                  4) userspace applications

                  Core OS services contain information that userspace applications should never be able to see or modify. Drivers contain information that services and userspace applications should never be able to see or modify. The core kernel contains information that drivers, services, and userspace applications should never be able to see or modify. These are requirements when running in a secure mode, to minimize attack surface.

                  Monolithic kernels break this rule because kernel, drivers, and services are all in one area. Modern microkernels break this rule because drivers, services, and userspace applications are all in one area.

                  Like it or not, to make a correctly secure OS you need mandatory rules between kernel, drivers, services, and userspace applications, enforced by some means. The idea of a microkernel where everything else is a generic userspace application does not fly.

                  BPF applies mandatory rules via its verifier and the language's limitations.

                  The hard part is how to isolate kernel, drivers, services, and userspace applications from each other without absolutely killing performance. The 4-ring microkernels of old were OK in security, not perfect, because the x86 ring system at the hardware level was designed wrong (allowing ring-to-ring bypassing of security, another CPU design-fault issue), and they caused massive performance overhead, creating the idea that all microkernels had to be slow.

                  xfcemint, the 4-ring domains of historic microkernels were very well considered. How MAC has to work to trap attackers in the TCSEC was also very well considered. There has been a lot of study in these fields.

                  Microkernels are not magic bullets. In both monolithic and microkernel designs you run into the problem of parts that should be isolated from each other not being isolated, causing major security problems. The annoying part is that the inevitability of those major security problems is very well documented. Also very well documented is that when you do isolate those parts as you should, with current CPU designs you end up taking horrible performance hits.



                  • Originally posted by xfcemint View Post
                    Capabilities are a superior solution compared to everything else that you have mentioned here or around. The only half-valid argument is the question of performance (regarding capabilities), but it appears that the performance is not detrimental, it is expected to work out fine.

                    If the USA government has declared some idiotic rules, then that is their problem. They can choose either to change the rules or to not use a capabilities-based OS.

                    I'm not going to write 50 pages here explaining in detail how microkernel-alike IPC and capabilities should work, and I'm not going to write another 50 pages about how to correctly design the implementation.
                    I don't need 50 pages of garbage where you have made some fundamental mistake, resulting in something that cannot be implemented in the real world for something like Linux.

                    https://en.wikipedia.org/wiki/Capabi...erating_system

                    There are only 9 capability-based OSes that have ever been built out to any amount. seL4, Fuchsia, and Genode are the only 3 that remain in development. Genode can sit on top of either a microkernel or a monolithic kernel.

                    Notice something here: the platform support of these capability-based OSes has remained quite limited; there are interesting problems implementing a capability OS on different CPU designs. Yes, Genode is the most built out.

                    https://genode.org/documentation/gen..._security.html

                    Guess what: you cannot implement all the different MAC solutions the Linux kernel supports using Genode capabilities. There are reasons why the Linux kernel's LSM is the way it is.

                    xfcemint, you have never read the TCSEC, have you, to understand why MAC has to be kernel-level and why domains matter. Do note Genode attempts to create a userspace close to what you have been describing.

                    Like you claiming a superior solution, the developer who made Genode started off with your idea that the capability model had to be superior, and shock horror: the more Genode has developed, and the more work is put into making capabilities work, the more limiting and problematic they become.

                    The LSM hook system allows altering the fundamental rules. This is where capabilities fall apart: you end up writing a stack of fundamental rules you cannot change, because applications depend on those rules. With MAC, remember, a userspace application is not meant to know what the MAC rules are, and this is what LSM hooks also implement. So you can change the fundamental rules the MAC is enforcing, and since the userspace application never knew those rules, as long as it is not doing something the MAC forbids, from the application's point of view nothing has changed.

                    xfcemint, this is the reality of capability systems: a person spends ages writing a 50+ page document on how in theory it should work, goes and implements it, then some attack comes along that turns out to be the result of some human error, and there is no way to fix the issue without breaking applications, because the applications themselves depend on the capabilities keeping the security-flawed behavior.

                    It really does pay to sit down and read the TCSEC reasoning. The design theory there also considers how you are going to handle having made a mistake.



                    • Originally posted by xfcemint View Post
                      A service: a program providing an ABI. Note: this includes some kernel modules in a monolithic kernel.
                      There is not a monolithic kernel in existence where a service is in fact provided by kernel modules. Linux kernel services are part of an item called subsystems. Yes, these can have their own kthreads that show up in your process scheduler (the stuff with [] around the names), but mostly they do not. The ABI is provided by the core kernel in a monolithic kernel, and this is how LSM hooks can simply be placed across the ABI: LSM hooks don't need to be put in kernel modules individually.

                      Making the functionality work is called a driver, and that is what a kernel module is. Remember, I listed 4 parts over and over again:
                      1. kernel core.
                      2. driver
                      3. service
                      4. application.
                      You have just made one of the fundamental mistakes. The service level is where the stable ABI to userspace is defined in a microkernel. The driver also has an ABI to userspace. Yes, you use ioctl to talk to a driver, but that ioctl was defined to exist at the subsystem level. You use the /sys directory to talk to drivers; this again was defined at the service/subsystem level in Linux.

                      The service layer provides an ABI to drivers and an ABI to userspace. With a monolithic kernel, this stuff is not in the kernel modules/drivers. There is a reason why monolithic kernels end up using the name subsystem instead of service: subsystem boundaries don't line up with user-space daemons/services.
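                      A small userspace sketch of those two driver-facing ABIs, using the block subsystem as the example (BLKGETSIZE64 and /sys/block/sda/size are real interfaces defined by the block subsystem, not by the individual disk driver; /dev/sda is whatever disk you happen to have):

                      #include <fcntl.h>
                      #include <stdio.h>
                      #include <sys/ioctl.h>
                      #include <unistd.h>
                      #include <linux/fs.h>   /* BLKGETSIZE64, defined by the block subsystem */

                      int main(void)
                      {
                          /* ioctl path: the device node speaks the subsystem-defined ioctl ABI. */
                          unsigned long long bytes;
                          int fd = open("/dev/sda", O_RDONLY);
                          if (fd >= 0 && ioctl(fd, BLKGETSIZE64, &bytes) == 0)
                              printf("disk size: %llu bytes\n", bytes);
                          if (fd >= 0) close(fd);

                          /* sysfs path: the same information exposed as a file, again laid
                             out by the block subsystem for every disk driver underneath it. */
                          FILE *f = fopen("/sys/block/sda/size", "r");
                          if (f) {
                              unsigned long long sectors;
                              if (fscanf(f, "%llu", &sectors) == 1)
                                  printf("disk size: %llu sectors\n", sectors);
                              fclose(f);
                          }
                          return 0;
                      }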

                      Monolithic:
                      1) kernel core
                      2) drivers (can be modules in modular kernels)
                      3) subsystems (the monolithic equal of microkernel services; in kernel space and never a module)
                      4) user-space daemons/services
                      5) user-space applications

                      The microkernel model normally does not have an exactly defined split between user-space services like PostgreSQL and subsystem services like VFS in the ABI/API they provide.

                      Originally posted by xfcemint View Post
                      - point 2: I find it to be likely that the possibility of incentivizing creation of translator services has not been sufficiently considered and discussed.
                      Translator services are items like Wine. There are issues with these; the big one is developer time. But the point 4 material also applies here.


                      Originally posted by xfcemint View Post
                      - point 3: I find it to be likely that the possibility of incentivizing creation of translator libraries based only on stable ABIs has not been sufficiently considered and discussed.
                      You see this already with the current SDL wrapper, io_uring, OpenGL, and others. But these require lots of maintainer work.

                      Originally posted by xfcemint View Post
                      - point 4: I find it to be likely that the possibility of incentivizing the retention of old and superseded ABIs by Linux distributions has not been sufficiently considered and discussed.

                      Security issues are also very important for Linux. When a
                      security issue is found, it is fixed in a very short amount of time. A
                      number of times this has caused internal kernel interfaces to be
                      reworked to prevent the security problem from occurring. When this
                      happens, all drivers that use the interfaces were also fixed at the
                      same time, ensuring that the security problem was fixed and could not
                      come back at some future time accidentally. If the internal interfaces
                      were not allowed to change, fixing this kind of security problem and
                      insuring that it could not happen again would not be possible.
                      Old and superseded ABIs cannot always be kept. Sometimes they were simply bad, insecure designs, and keeping them around keeps the bad and insecure around.

                      Next, Linux distributions have limited maintainer time. The larger the ABI you are maintaining, the more it costs. If you go through the Ubuntu, Debian, and Fedora mailing lists, you will find this topic has been heavily debated many times.

                      Originally posted by xfcemint View Post
                      - point 5: I find it to be likely that, when a smaller vendor is competing with a larger vendor, the smaller one is likely to push for standardization.
                      The Linux kernel has been odd this way. Early on, when Linux was the smaller vendor, being the kernel core and changing all the time pushed larger vendors into doing things its way. These days the core kernel development of Linux is itself quite a big vendor.

                      Originally posted by xfcemint View Post
                      - point 6: I find it to be likely that, when a competition between two libraries or services of similar functionality exists, then a resulting force of a push for standardization is likely to be created.
                      This totally misses something.

                      Yes, this pushes standards, but under Linux we have this happen with libraries all the time.

                      TLS libraries are a good example: we have a standard, but we have 16 open-source implementations, the majority having unique APIs/ABIs.

                      Originally posted by xfcemint View Post
                      - point 7: I find it to be likely that, the possibility of incentivizing (smaller) vendors to create competing services and libraries has not been sufficiently considered and discussed.
                      We have tons of examples, with things like TLS libraries, sysvinit, and so on, where having smaller vendors do things turns out not to be a good thing. It's a resources issue: a smaller vendor making stuff without enough resources to actually maintain it just ends up causing security problems and crashes.

                      Originally posted by xfcemint View Post
                      - point 8: I find it to be likely that the possibility of incentivizing creation of smaller services and smaller libraries that can be used by multiple vendors (to simplify development of services) has not been sufficiently considered and discussed.
                      This has a serious downside: each implementation can have its own unique quirks and issues. With the Mesa3D userspace driver stuff, Valve has found very little quirk difference between drivers, because they all come from a single central codebase worked on by multiple vendors. Versus under Windows, where the Intel, AMD, and other drivers all come from different vendors' development areas, leading to OpenGL implementations that all have unique quirks.

                      Multiple vendors have historically proven to be more of a nightmare than a help.

                      Originally posted by xfcemint View Post
                      - point 9: I find it to be likely that the effects (on this problem) of introducing a microkernel-based subsystem (into the Linux kernel) have not been sufficiently considered and discussed.

                      I would say not correctly discussed. Most times a microkernel is discussed, the time is not spent understanding what the current Linux kernel provides in its current form.

                      LSM hooks, for example. These are not like capabilities: if a hook is not being used by the current setup, the overhead is exactly one jmp, which on an out-of-order CPU is normally no cost at all.

                      There are advantages in the current Linux kernel's monolithic design. Yes, these don't map across into a standard microkernel-based design.

                      Originally posted by xfcemint View Post
                      - point 10: I find it to be likely that the possibility of incentivizing creation of app developers' interest groups that can push for creation of translator services has not been sufficiently considered and discussed.
                      These interest groups have existed many times over. This again has been more considered than you think. A simple list of where this goes wrong:
                      1) Cost in developer time.
                      2) Performance cost.

                      Wine is able to push against this quite a lot because it lets people run applications on Linux that have no native Linux version.

                      Some of the game developer world is willing to invest in the likes of SDL1 wrapper library development. This is a case where legacy applications are important.

                      There are a lot of cases where it does not make sense, because you would not be using the year-old application anyway. Take a browser: who wants to be using a year-old, possibly security-flawed web browser on the internet?

                      Something you did not consider is how small the interest groups that create translator services/wrapper libraries are. This also means you did not consider why they are not growing.

