The Linux Kernel Has Been Forcing Different Behavior For Processes Starting With "X"


  • oiaohm
    replied
    Originally posted by xfcemint View Post
    I sound like a whacko to myself. This is completely whacko.

On the other hand, it is not a problem to give it a shot. I mean, it is simple. Then we can evaluate whether it works or not. It would be an attempt to put a theory into practice.

    The analysis, actually, suggests exactly that same thing in the very beginning (good further discussion). The only difference is that this time the "experts in solving similar problems" are more precisely identified as psychologists who must take double-bind into account when discussion is in progress.
No, the issue is not a pure double bind.

    Originally posted by xfcemint View Post
    Also, one more point: apparently, the vendors (i.e. the representatives of vendors) should not participate in the good further discussion, if the best results are to be obtained. The discussion should be between the hidden actor (or representatives), psychologists and app developers' representatives.
The reality is that there are not really hidden actors in the Linux world.

What you wrote here is a herding-cats problem.

The reality here is that Linux distributions have different ideas about what is correct; they are not going to sit down with each other and come to a 100 percent uniform agreement. Linux distributions are a true case of irreconcilable differences.

The problem here is like how different Christian denominations interpret the Bible differently and flatly refuse to agree. That maps closely onto Linux distribution behavior.

Then you have different application developer groups that have irreconcilable differences between them, so they will not sit down with each other to come up with a common agreement.

    The stable ABI is an obstacle to improving CPython. The limited API, and the API in general, is also a problem, but much harder to fix. Let’s keep the limited API, at least until we have a completely new C API in 2030 or whenever. I also think that we want to keep ABI stable within releases (no ABI changes after late beta/RC). The big problem with the stable ABI is that it prevents a number of otherwise quite reasonable changes. For example: Re-ordering fields in structs, or adding new fi...

Then you have the above: a stable ABI causes its own fair share of issues. A developer of an application or library is motivated to break a stable ABI by one of these reasons: fixing security issues, performance issues and so on. Of course Sod's law applies ("if something can go wrong, it will"), so stable ABI breakage is always going to happen naturally.

xfcemint the biggest problem here is that people need to accept reality. The Linux world is never going to have the 100 percent uniform stable ABI that application developers ask for; that is never going to happen while people can create their own new distributions customized to what they need. Windows, macOS and Android can kind of offer this because each of them is a single cathedral. Each Linux distribution is its own cathedral, and the worse nightmare is that every release of a distribution can be its own cathedral.

The reality is that when you make a deb package for Debian/Ubuntu..., you are basically agreeing to update it when the distribution updates, and accepting that you will not have backwards compatibility all the time. The same applies to RPM... There is a long list of distributions in this camp. This is their nature.

Freedesktop runtimes (what Flatpak uses), the Steam Runtime (Valve) and NixOS are all different forms of cathedrals that focus on long-term running of applications. There are security and memory-usage downsides to this.

A stable ABI is not free; it has costs, and application developers and end users have to agree to pay those costs.
Application developers need to stop expecting parties like Debian/Ubuntu... to behave like Windows or macOS with a stable ABI. Notice that with Ubuntu snap you end up bound to their store.

For end users, I would say there is a lack of a good front end with tooling to make resolving ABI issues simpler. People forget that it is really possible to override the dynamic loader of any dynamically linked Linux application, and that this can be combined with .desktop files. So Linux distributions could have a compatibility mode like Windows does; it could even be provided by a third party if someone were willing to put in the effort, because the technology to do it exists. Another interesting point: the 80000 packages in NixOS don't need to be installed in /usr, because NixOS works by altering the dynamic loader. Fixing ABI issues and allowing alternative libraries through the dynamic loader is a tested and functional solution under Linux, but it does not come by default with distributions like Debian/Ubuntu/Red Hat/SUSE...; instead you need to add something like NixOS or Guix to get it.
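As a sketch of that loader-override trick: the glibc dynamic loader can be executed directly and told where to find libraries, so a tiny launcher, which a .desktop file's Exec= line could point at, is enough for a crude per-application compatibility mode. Every path below is an assumption for illustration only:

```c
/* Sketch of a compatibility-mode launcher: run an app through an
 * alternate glibc dynamic loader with its own library path.
 * All paths are hypothetical; adjust to the runtime you ship. */
#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* ld.so supports being executed directly; --library-path
     * overrides where the app's shared libraries come from. */
    execl("/opt/compat-runtime/lib/ld-linux-x86-64.so.2",
          "ld-linux-x86-64.so.2",
          "--library-path", "/opt/compat-runtime/lib",
          "/opt/myapp/bin/myapp", (char *)NULL);
    perror("execl");   /* only reached if the exec itself failed */
    return 1;
}
```

This is essentially the mechanism NixOS and patchelf-based setups rely on, just done by hand.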

The horrible reality is that solutions already exist for 99% of the Linux user-space ABI mess, enough to have applications work across multiple distributions and multiple generations of Linux distributions. Now comes the classic statement "you can lead a horse to water but you cannot make it drink"; you run into this a lot with distributions and application developers.

xfcemint the driver issue is another problem of its own.

The userspace ABI issue for applications is purely a failure to accept what the Linux world is and to work in the problem space with what is in fact provided. Yes, the Linux world punishes very harshly those who look for a quick-fix solution without first accepting and understanding it: you are able to make something like a deb package for Debian without understanding that doing so means agreeing to be on a treadmill of updates, and then you run into nightmares of repeated breakage. In this Debian case, if you had read the Debian packaging policies, they warn you of this problem.

I would say this is a very steep learning curve, because the Linux world is truly different from the Windows and macOS worlds, and developers come in attempting to interact with it as if it were Android, Windows or macOS.

Yes, on Android, Windows and macOS you can presume as an application developer that you will naturally get forwards and backwards compatibility all of the time. But this has never been true of all operating systems. Most BSD-based systems don't even give you a stable ABI from kernel space.

The top one is "Assume = make an ASS out of U and ME". The biggest problem for those starting to make applications for Linux is that they assume Linux will behave like Windows, macOS or Android, so that they don't need to learn anything about the nature of what they are using. There is very large diversity in the natures of Linux distributions.



  • oiaohm
    replied
    Originally posted by xfcemint View Post
    Here is the corrected text of the analysis:

    App developers write applications, but they have to use APIs to make an application functional. App developers are expected to ship executables, which require stable ABIs. The app developers cannot express where their problem is, because the situation is too complex and proper terminology is lacking. The app developers are in a situation known as double-bind (see description on Wikipedia of double bind).

    [ I.e however "looney" it might sound, whoever wants to be an efficient regulator of this situation needs to first contact a psychologist that has expertise in double bind situations (and, the psychologist should know something about Linux, to make it easier for him to understand the situation). ]
This is not really the current situation.


Let's take glibc: under Linux it has almost 100% perfect backwards compatibility. Symbols are versioned so this can happen. But an application built with glibc does not have forwards compatibility. Forward compatibility is where you take an application built with a newer glibc and try to use it on an older glibc, only to find out it throws an error due to a missing symbol. Yes, this is exactly like the Windows driver ABI.
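To make the symbol-versioning point concrete, here is a minimal sketch of the classic workaround: pinning a call to an older glibc symbol version with the .symver directive, so a binary built on a new glibc still loads on an old one. The version tag is an assumption for x86-64 glibc; check the real ones with objdump -T libc.so.6.

```c
/* Sketch: bind memcpy to the old GLIBC_2.2.5 version instead of
 * the newer default (GLIBC_2.14 on x86-64), restoring forwards
 * compatibility for this one symbol. Version tags are assumed. */
#include <string.h>

__asm__(".symver memcpy, memcpy@GLIBC_2.2.5");

int main(void) {
    char dst[8];
    memcpy(dst, "hello", 6);  /* resolves to the pinned version */
    return 0;
}
```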

Now let's do a C application under Windows using Visual Studio: you are going to bundle a Microsoft-maintained runtime with your application, or the user will hit a missing symbol and have to install the runtime themselves. One of the features of the SxS system under Windows is to prevent these runtimes from overwriting each other. That is what allows forward compatibility.

Like it or not, the proper terminology exists and the functional problem is not that complex. But it requires effort to understand the problem.

    Learn a couple of ways to use multiple glibc on our machine with g++ and patchelf


NixOS and others use the technology above to have a form of SxS under Linux.

    Originally posted by xfcemint View Post
    Ok, so here is, in simple words, what has been going on:

There is another, hidden actor or actors. This hidden actor is the result of the sum of the most important people and organizations dealing with Linux. This hidden actor is the one who can actually "pull the strings". He is a regulator. But an efficient regulator (the concept of an "efficient regulator" is a part of cybernetics, so read about it there) needs to have a complete model of the system he is controlling. However, the current regulator (the hidden actor) doesn't know about the double bind, so he is an inefficient regulator. In order for this hidden actor to become an efficient regulator, the hidden actor must consult a psychologist. Together, the hidden actor, the psychologist, and the app developers can find efficient solutions to the problem of a useful userspace ABI. The cybernetical analysis that I have provided should be a starting point for the discussion between the affected actors.
Wrong presumption. We don't have a hidden actor on Linux for the userspace ABI, and that is the problem.

    Here’s the story, titled “Whose Job Is It, Anyway?”

    This is a story about four people named Everybody, Somebody, Anybody and Nobody. There was an important job to be done and Everybody was sure that Somebody would do it. Anybody could have done it, but Nobody did it. Somebody got angry about that, because it was Everybody’s job. Everybody thought Anybody could do it, but Nobody realized that Everybody wouldn’t do it. It ended up that Everybody blamed Somebody when Nobody did what Anybody could have.

    The story may be confusing but the message is clear: no one took responsibility so nothing got accomplished.


We have a story of Everybody, Somebody, Anybody and Nobody here when it comes to a stable userspace ABI of the kind application developers want.

A distribution's job is to make its collection of applications work with the latest versions of libraries, for the best performance and security. So it is not their job to put in the effort to make a stable ABI for third-party applications. Of course application developers want to blame distributions for this, while failing to see that distributions cannot fix their problem: developers who want to run on older releases need forward compatibility, and forward compatibility means the application developer has to ship a runtime, or the user has to be able to install one.

The majority of library makers in the Linux world do make backwards-compatible libraries; that is a fact confirmed by abi-laboratory. But application developers want to release a new application using new library features and expect it to work on old distributions that only have old versions of those libraries, and of course it breaks, because they did not include a runtime. Application developers under Windows are used to paying Microsoft (Windows and Visual Studio) to make these runtimes for them. Microsoft is the hidden actor there making it work.

Where is the Linux hidden actor to make it work? That's right: Nobody. Are application developers or end users willing to put up money to pay a party to take on this role? Mostly not.

xfcemint the much simpler problem is that you never asked who is responsible, so you never noticed that the current responsible party is Nobody. You also started from the presumption that a stable ABI does not exist on Linux. A userspace stable ABI absolutely exists; Linux distributions themselves would not be able to function if it did not.

The Valve runtime for running old Linux native binaries catches a lot of people out: it always ends up using versions of glibc far newer than the application binary, and this only works because there is a stable ABI. libcapsule is also used to shim the old library usages onto new libraries. The Steam runtime for developers is a restricted set of libraries, all with a track record of long-term ABI stability for backwards compatibility, and the Steam client installs newer runtimes on older distributions so that newer games can work. Valve took up the role Nobody was doing.

xfcemint the fact that Valve gets this to work with Steam shows clearly it has been a Nobody problem, or an application developer problem. Of course Valve wants to recover its costs, so some of how it works is bound to Steam.

Yes, part of the problem is the logic application developers take at times: pick the most popular distribution users are using, build against that, and get very badly burnt. Say they had built for NixOS instead. NixOS can be installed as a full bare-metal distribution or on top of almost all other Linux distributions, and it is built with SxS functionality in. Of course then you have Linux users upset that the extra libraries and so on cause higher memory and disk usage; Windows users are used to this.

Then you have Flatpak and snap, which are yet more solutions to the same problem. So we have the many-cathedrals problem. Application developers on Linux are in a city of cathedrals, but only a handful in fact suit their faith, and they are upset because they keep walking into cathedrals that don't follow their doctrine. You could say Valve, NixOS, Flatpak, snap... are cathedrals that give their members guided tours, with instructions on how to interact with the other faiths in the other cathedrals without doing anything too offensive, and that is what application developers need to be looking for.



  • oiaohm
    replied
    Originally posted by xfcemint View Post
Yes, that is the same as my interpretation. I reasoned: there already exists a huge toolset for dealing with unstable APIs in Linux distributions, so I don't see a reason not to use it for "services" (the term "services" is a weird category in the current situation; no one except the microkernel "loonies" can have such a perspective).
Service is a term you have to be very careful with.

It is not just microkernels that have that point of view at their core. It is also the Windows NT-onwards architecture with its executive services; those are basically the Windows NT equivalent of Linux kernel subsystems.

The historic term daemon fell out of favor; it used to be used for userspace services that are not driver related. Microkernel design never formally declared separate names for subsystem/executive services versus daemons, i.e. the userspace services that are not driver/core-kernel related and sit on top of the ABI/API provided by the kernel and the subsystem/executive services.

    Originally posted by xfcemint View Post
    The app developers are using one thing (API), but want another (stable ABI). They can't solve this problem themselves, they are stuck in the current situation. I'm sure that there is a contradiction there, but perhaps it isn't really "internal". Anyway, they would certainly prefer the issue to be resolved, which is important for "point 10".
Application developers are not this unified. Application developers have multiple different issues:
1) they want to show good performance.

Yes, that number 1 leads to many issues, like depending on quirks of particular implementations that are bugs, because the quirk gives higher performance, even when they know it is a bug that should be fixed. So not all application developers want a stable ABI/API; some of them want the latest and greatest API/ABI so they get the best performance.

This is an issue you have: you are thinking the will of application developers is unified. Valve's work on the Steam runtime is interesting because they are trying to come up with solutions that work for all the different groups of application developers.

    Originally posted by xfcemint View Post
I'm against rephrasing it. "Shim" in this context means something simple and small. On the other hand, "translator libraries" are named such to be consistent with "translator services". The only strange thing is that "translator libraries" are no different than any other libraries.
Shim is a Windows term, and when you look it up it is most likely not what you intended at all: a shim modifies the dynamic loader under Windows.

The closest thing on the Linux side is libcapsule, which ships with the version-independent Steam Runtime container and diagnostic tools (the steam-runtime-system-info diagnostic tool and the pressure-vessel container launcher).

    Neither is what I think you are going for.

The closest defined terms I can think of are "wrapper library" and "compatibility layer".

    Originally posted by xfcemint View Post
    I think that there must be a bazaar, it is unavoidable. You can't get a stable solution immediately, it has to go through the bazaar phase.
This is something you miss; it is partly covered in the book "The Cathedral and the Bazaar". Look at the Khronos Group, the Wayland protocols, the Linux kernel's userspace stable ABIs, the Windows stable ABIs... In every case you find that there is a cathedral.

There are examples on Linux where stable ABIs appeared in different areas with no bazaar phase before them. The one part that must be there is a cathedral to uniformly ratify what is valid and correct.

Remember, each Linux distribution is basically its own cathedral. The Linux world has the same problem as Christianity, with all the competing churches/cathedrals.

The Linux world is a bazaar filled with cathedrals, with bazaars inside those cathedrals that at times have another bazaar inside, then a cathedral inside that... Basically the old saying: turtles all the way down.

    Originally posted by xfcemint View Post
    My thought was: Incentivizing creation of new translator libraries that provide an API (stable or unstable), but translate (make calls, i.e. sit on top) only to stable ABIs. Translator libraries can't provide an ABI, because libraries can provide APIs only.

Translation libraries are normally wanted by parties like Valve, to run old applications, or by application developers who are no longer updating. The performance cost is never liked.

    Originally posted by xfcemint View Post
    Well, that's related to my idea that, in order to get a stable ABI, you first have to go through a bazaar phase. In the beginning, it looks like all competing libraries and services are creating a mess caused by mutual incompatibility, but in the end, the situation should actually stabilize. In order for the situation to stabilize, it must first mature. So, all the services and libraries must resolve the initial issues that exist because, at first, services and libraries get rushed out to cover some gaping holes (lack of required functionality) in ABIs/APIs.
No, this is your wishful thinking. We are 2+ decades in, and the reality is that ABIs don't naturally stabilize. Remember that GNOME with GTK formally agreed to a cycle of intentionally breaking backwards compatibility. The same appears with Qt and many other core-like libraries.

ABIs only stabilize if some form of cathedral body (yes, a central body) exists to make it happen, with test suites and so on to test conformance and to detect when conformance is broken. This rule applies to the Linux kernel for the userspace ABI, to Microsoft, to the Khronos Group... basically to any group that makes a stable ABI/API that application developers can depend on.

This is not about becoming mature. Selling new versions of its libraries is in the interest of the company behind Qt, and a large selling feature is performance, not backwards binary compatibility.

Remember, a party like Epic does not want to have to depend on Valve for a solution. This is another factor: when two companies are in tooth-and-nail competition with each other, they can feel they cannot trust the other party, so they prefer a solution from a third party who is not the competition. This is yet another driver of API/ABI fragmentation under Linux.

Linus Torvalds describes this problem as attempting to herd cats. Yes, application developers will say they want a stable ABI, but when you get closer they turn out to be insanely picky about what they want that to be.

Microsoft and Apple have an advantage here: they have a single cathedral each, and they can decide to make all application developers on their platforms slightly unhappy in order to get ABI stability. That is successfully herding cats, by having only a one-way door to drive them through. The Linux world is not this.



  • sinepgib
    replied
    Sorry for the delay xfcemint, it's been a busy week. I haven't read the other article yet.

    Here's my response to this one. Parts I didn't quote I pretty much agree with and am comfortable with how they are phrased.

    Originally posted by xfcemint View Post
    A translator service: a program providing an ABI, by converting the ABI into a mix of APIs and ABIs.
    The use of APIs here would be to convey that you could rebuild the translator whenever the lower layer breaks? Otherwise I'm not sure why APIs are relevant in this context.

    Originally posted by xfcemint View Post
    App developers write an application, but the application needs APIs. App developers prefer to ship executables, which require stable ABIs. There is an internal contradiction in app developers. Actors prefer to resolve their internal contradictions.
    I'm not sure what the contradiction is.

    Originally posted by xfcemint View Post
    Also note that shim libraries may exist that can convert an ABI into a shim API.
    Would rephrase to translator library for consistency with the glossary.

    Originally posted by xfcemint View Post
    Vendors' preferences: vendors are very diverse, and their goals are also diverse. Vendors generally prefer to provide an API to an ABI. Providing a stable API or a stable ABI usually inflicts extra cost and development time to them.

    The current situation is the result of the following forces:
    - Vendors provide APIs.
    - Some APIs become quasi-stable. ABIs might be too low-level to be of practical use, or they might not be well documented, or they might be unstable.
    - App developers will use APIs.
    - This incentivizes the ecosystem of Linux distributions ("the ecosystem") to provide tools for managing the situation of unstable APIs. Those tools have grown sufficiently sophisticated to solve practically all the problems arising from such a situation, except for the need of app developers to ship executables.
    - Vendors have no good incentives for providing stable ABIs. Some incentives might exist for providing stable APIs.
    I would add one item about the bazaar nature of open source incentivizing the rise of multiple incompatible solutions to the same problem. That doesn't help to stabilize an ABI in practice.

    Originally posted by xfcemint View Post
    - point 3: I find it to be likely that the possibility of incentivizing creation of translator libraries based only on stable ABIs has not been sufficiently considered and discussed.
    With "based only on stable ABIs" do we mean the ones they provide or the ones they sit on top? It'd make more sense to me for those to provide a stable ABI rather than require it, but I find how it's phrased ambiguous.

    Originally posted by xfcemint View Post
    - point 4: I find it to be likely that the possibility of incentivizing the retention of old and superseded ABIs by Linux distributions has not been sufficiently considered and discussed.
    This part is just discussing the point, but I agree we should consider it: I'd say point 3 is a much better approach, as it's a more "don't pay for what you don't use" way. I have this really old binary? Ok, I run my translator library/service. All my programs have been compiled from source? Ok, I can save a few kB of RAM.

    Originally posted by xfcemint View Post
    - point 5: I find it to be likely that, when a smaller vendor is competing with a larger vendor, the smaller one is likely to push for standardization.
    I agree, and I think it's actually why Azure and GCP seem to provide a better out-of-the-box experience for Kubernetes than AWS (or so I heard from devops). While all of them are giant companies, only one is the main player in cloud computing, and that one focuses on keeping their proprietary solutions on top and (presumably) actively tries to make migrating to standards a pain.

    Originally posted by xfcemint View Post
    - point 6: I find it to be likely that, when a competition between two libraries or services of similar functionality exists, then a resulting force of a push for standardization is likely to be created.
I have mixed impressions about this one. First, I'm not sure which actors you suggest push for this. Then, while I think sometimes the library vendors do push for standardization, it doesn't happen all the time. The one thing I've seen causing such efforts is stuff revolving around the X and Wayland protocols and their surrounding services. But in terms of library APIs (as opposed to bus protocols) I'm not sure there's been such a push.

    Originally posted by xfcemint View Post
    - point 7: I find it to be likely that, the possibility of incentivizing (smaller) vendors to create competing services and libraries has not been sufficiently considered and discussed.
    I think this merits discussion, but if the competition involves the API itself it may actually hinder the development of reliable, stable APIs and ABIs. If it concerns mostly the implementation of agreed-upon interfaces, it will probably be beneficial.

    Originally posted by xfcemint View Post
    - point 8: I find it to be likely that the possibility of incentivizing creation of smaller services and smaller libraries that can be used by multiple vendors (to simplify development of services) has not been sufficiently considered and discussed.
I could see some resistance from application developers here. There's already a bit of trouble with having myriads of tiny deps in the high level development world, especially now that supply chain is on the agenda. I think most developers will want a bigger layer to rely on. Unless you mean using those interchangeably to build such a layer in each vendor (e.g. for a distro).



  • oiaohm
    replied
    Originally posted by xfcemint View Post
    A service: a program providing an ABI. Note: this includes some kernel modules in a monolithic kernel.
There is not a monolithic kernel in existence where a service is in fact provided by kernel modules. Linux kernel services are part of an item called subsystems; these can have their own kthreads, which show up in your process list (the entries with [] around them), but mostly they do not. In a monolithic kernel the ABI is provided by the core kernel, and this is how LSM hooks can simply be placed across the ABI: LSM hooks don't need to be put into kernel modules individually.

Making the functionality work is called a driver, and that is what a kernel module is. Remember I listed 4 parts over and over again:
    1. kernel core.
    2. driver
    3. service
    4. application.
You have just made one of the fundamental mistakes. The service level is where the stable ABI to user space is defined in a microkernel. A driver also has an ABI to userspace: yes, you use ioctl to talk to a driver, but it was the subsystem that defined that ioctl as existing. You use the /sys directory to talk to drivers; this again was defined at the service/subsystem level with Linux.

The service layer provides an ABI to drivers and an ABI to userspace. With a monolithic kernel this stuff is not in the kernel modules/drivers. There is a reason why monolithic kernels end up using the name subsystem instead of service: subsystem limitations don't line up with userspace daemons/services.
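To illustrate the ioctl-to-driver ABI being defined at the subsystem level, here is a minimal userspace sketch using TIOCGWINSZ, a request the tty subsystem defines for every terminal driver rather than any single driver inventing it:

```c
/* Sketch: talking to a driver through the ioctl ABI that the tty
 * subsystem defines. TIOCGWINSZ asks the terminal driver for the
 * current window size. */
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void) {
    struct winsize ws;
    if (ioctl(STDOUT_FILENO, TIOCGWINSZ, &ws) == -1) {
        perror("ioctl(TIOCGWINSZ)");   /* e.g. stdout is not a tty */
        return 1;
    }
    printf("terminal: %u cols x %u rows\n",
           (unsigned)ws.ws_col, (unsigned)ws.ws_row);
    return 0;
}
```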

Monolithic:
1) kernel core
2) drivers (can be modules in modular kernels)
3) subsystems (the monolithic equivalent of microkernel services; in kernel space and never a module)
4) user-space daemons/services
5) user-space applications

The microkernel model normally does not have an exactly defined split, in the ABI/API provided, between userspace services like postgresql... and subsystem services like VFS...

    Originally posted by xfcemint View Post
    - point 2: I find it to be likely that the possibility of incentivizing creation of translator services has not been sufficiently considered and discussed.
Translator services are items like Wine. There are issues with this; the big one is developer time. But the point 4 stuff also applies.


    Originally posted by xfcemint View Post
    ​- point 3: I find it to be likely that the possibility of incentivizing creation of translator libraries based only on stable ABIs has not been sufficiently considered and discussed.
You see this with the current SDL wrapper; io_uring and OpenGL wrappers and others already exist. But these require lots of maintainer work.

    Originally posted by xfcemint View Post
    - point 4: I find it to be likely that the possibility of incentivizing the retention of old and superseded ABIs by Linux distributions has not been sufficiently considered and discussed.

Security issues are also very important for Linux. When a security issue is found, it is fixed in a very short amount of time. A number of times this has caused internal kernel interfaces to be reworked to prevent the security problem from occurring. When this happens, all drivers that use the interfaces were also fixed at the same time, ensuring that the security problem was fixed and could not come back at some future time accidentally. If the internal interfaces were not allowed to change, fixing this kind of security problem and insuring that it could not happen again would not be possible.
Old and superseded ABIs cannot always be kept. Sometimes they were simply bad, insecure designs, and keeping them around keeps the bad, insecure design around.

Next, Linux distributions have limited maintainer time. The larger the ABI you are maintaining, the more it costs. If you go through the Ubuntu, Debian and Fedora mailing lists you will find this topic has been heavily debated many times.

    Originally posted by xfcemint View Post
    ​- point 5: I find it to be likely that, when a smaller vendor is competing with a larger vendor, the smaller one is likely to push for standardization.
The Linux kernel was odd this way early on. When the Linux kernel was a smaller vendor, being the kernel core and changing all the time, it pushed larger vendors into doing stuff their own way. These days the core kernel development of Linux is itself quite a big vendor.

    Originally posted by xfcemint View Post
    - point 6: I find it to be likely that, when a competition between two libraries or services of similar functionality exists, then a resulting force of a push for standardization is likely to be created.
This totally misses something.

Yes, this produces standards, but under Linux we have this happen with libraries all the time.

TLS libraries are a good example: we have a standard, but we have 16 open source implementations, the majority with unique APIs/ABIs.

    Originally posted by xfcemint View Post
    - point 7: I find it to be likely that, the possibility of incentivizing (smaller) vendors to create competing services and libraries has not been sufficiently considered and discussed.
We have tons of examples, like TLS libraries, sysvinit and so on, where having smaller vendors do things turns out not to be a good thing. It is a resources issue: a smaller vendor making stuff without enough resources to in fact maintain it just ends up causing security problems and crashes.

    Originally posted by xfcemint View Post
    ​- point 8: I find it to be likely that the possibility of incentivizing creation of smaller services and smaller libraries that can be used by multiple vendors (to simplify development of services) has not been sufficiently considered and discussed.
This has a serious downside: each implementation can have its own unique quirks and issues. With the Mesa3D userspace driver stuff, Valve has found very little quirk difference between the drivers, because they all come from a single central codebase that multiple vendors work on. Versus under Windows, where the Intel, AMD and so on drivers all come from different vendors' development areas, leading to OpenGL implementations that all have unique quirks.

Multiple vendors have historically proven to be more of a nightmare than a help.

    Originally posted by xfcemint View Post
    ​​- point 9: I find it to be likely that the effects (on this problem) of introducing a microkernel-based subsystem (into the Linux kernel) have not been sufficiently considered and discussed.

I would say: correctly discussed. Most of the time a microkernel is discussed, the time is not spent understanding what the current Linux kernel provides in its current form.

LSM hooks for example. Why are these not like capabilities? If a hook is not being used by the current setup, the overhead is exactly one jmp call, which on an out-of-order CPU is normally no cost at all.

There are advantages in the current Linux kernel's monolithic design. These don't map across into a standard microkernel-based design.

    Originally posted by xfcemint View Post
    ​- point 10: I find it to be likely that the possibility of incentivizing creation of app developers' interest groups that can push for creation of translator services has not been sufficiently considered and discussed.
These interest groups have existed many times over. This again has been considered more than you think. A simple list of where this goes wrong:
1) cost in developer time
2) performance cost

Wine is able to push against this quite a lot because it allows applications that do not run natively on Linux to run on Linux.

Some of the game developer world is willing to invest in the likes of the SDL1 wrapper library development. That is a case where a legacy application is important.

There are a lot of cases where it does not make sense, because you would not want to be using the one-year-old application anyway. Take a browser: who wants to be using a one-year-old, possibly security-flawed web browser on the internet?

Something you did not consider is how small the interest groups to create translator services/wrapper libraries are. Which also means you did not consider why they are not growing.



  • oiaohm
    replied
    Originally posted by xfcemint View Post
Capabilities are a superior solution compared to everything else that you have mentioned here or around. The only half-valid argument is the question of performance (regarding capabilities), but it appears that the performance is not detrimental; it is expected to work out fine.

If the USA government has declared some idiotic rules, then that is their problem. They can choose either to change the rules or to not use a capabilities-based OS.

    I'm not going to write 50 pages here explaining in detail how microkernel-alike IPC and capabilities should work, and I'm not going to write another 50 pages about how to correctly design the implementation.
I don't need 50 pages of garbage built on some fundamental mistake, with the result being something that cannot be implemented in the real world for something like Linux.

    https://en.wikipedia.org/wiki/Capabi...erating_system

There are only 9 capability-based OSes that have ever been built out to any amount. seL4, Fuchsia and Genode are the only 3 that remain in development. Genode can sit on either a microkernel or a monolithic kernel.

Notice something here: the platform support of these capability-based OSes has remained quite limited; there are interesting problems in implementing a capability OS on different CPU designs. Genode is the most developed out.

    https://genode.org/documentation/gen..._security.html

Guess what: you cannot implement all the different MAC solutions the Linux kernel supports using Genode capabilities. There are reasons why the Linux kernel LSM is the way it is.

xfcemint you have never read the TCSEC, have you, to understand why MAC has to be kernel level and why there are domains. Do note that Genode attempts to create a userspace close to what you have been describing.

As for claiming a superior solution: the developer who made Genode started off with your idea that the capabilities model had to be superior, and shock horror, the more Genode has developed and the more work has been put into making capabilities work, the more limiting and problematic they have become.

The LSM hook system allows altering the fundamental rules. This is where capabilities fall apart: you end up writing a stack of fundamental rules you cannot change, because applications depend on those rules. With MAC, remember, the userspace application is not meant to know what the MAC rules are. This is what LSM hooks also implement: you can change the fundamental rules the MAC is enforcing, and since the userspace application never knew about those rules, as long as it is not doing something the MAC forbids, nothing has changed for the application.

xfcemint this is the reality of capability systems: a person spends ages writing a 50+ page document on how it should work in theory, goes and implements it, and then some attack comes along that turns out to have happened because of some human error, and there is no way to fix the issue without breaking applications, because the applications themselves depend on the capabilities keeping the security-flawed behavior.

It really does pay to sit down and read the TCSEC reasoning. The design theory there also considers how you are going to handle having made a mistake.



  • oiaohm
    replied
    Originally posted by xfcemint View Post
    Linux IPC should not be like sel4 IPC.
The Linux kernel itself implements many different IPC solutions. Each IPC has different advantages and disadvantages. So the Linux kernel growing an IPC like seL4's, alongside its other options, should be expected at some point.

    Originally posted by xfcemint View Post
    I think that both MAC and DAC should be avoided if possible. Capabilities are much more than that. Some particular service can implement DAC or MAC on top of capabilities if it needs to, but the kernel certainly shouldn't do that.


This is the problem with "I think". Capabilities under Linux are implemented under the LSM framework. The LSM framework is not a capabilities solution, for many reasons.

Mandatory access control is defined by the USA government, in the original "Trusted Computer System Evaluation Criteria (TCSEC)", as a kernel feature.

The problem is that when a MAC is in place, no userspace application should be allowed to start that can bypass the MAC. See the trouble here: if the service attempting to implement the MAC is itself in the userspace application domain, it is no longer a MAC, because it is a userspace application. Historic microkernels that used all 4 rings of the x86 processor of course did not have this problem, because userspace applications and services ran in different rings/domains.

The next problem, and why you cannot commonly use capabilities for TCSEC-defined MAC, is that MAC rules must be invisible to the userspace application, so that an attacker cannot see what the set limitations are until after they have tripped them. DAC does not have this requirement of being invisible to the application.

Next is why the LSM framework uses hooks instead of capabilities: hooks can run code at the approve-or-reject point, allowing more complex MAC designs than you can do with capabilities.
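A rough sketch of what "code at the approve or reject point" looks like with LSM hooks, based on the 5.x-era in-kernel registration API (which has changed across kernel versions). This is an illustrative fragment, not a complete buildable module, and all mylsm_* names are made up:

```c
/* Sketch of an LSM hook: arbitrary policy code runs at the
 * approve/reject point of file open. API shown is the 5.x-era
 * form and is simplified; not buildable as-is. */
#include <linux/lsm_hooks.h>
#include <linux/fs.h>

static int mylsm_file_open(struct file *file)
{
    /* Any MAC logic can run here, consulting rules the
     * application never sees. Return 0 to approve, or an
     * error such as -EPERM to reject the open. */
    return 0;
}

static struct security_hook_list mylsm_hooks[] = {
    LSM_HOOK_INIT(file_open, mylsm_file_open),
};

static int __init mylsm_init(void)
{
    security_add_hooks(mylsm_hooks, ARRAY_SIZE(mylsm_hooks), "mylsm");
    return 0;
}
```

When a hook is not registered, the call site reduces to roughly the single jump mentioned earlier.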


Yes, the Linux kernel supports a MAC system being written in BPF, so using a language-based system for security here instead of a built-in kernel module.

xfcemint you are right to say MAC should be implemented as a service. Remember how I pointed out the 4 domains of historic microkernels:

    1) kernel
    2) drivers
    3) Services
4) userspace applications

Core OS services contain information that userspace applications should never be able to see or modify. Drivers contain information that services and userspace applications should never be able to see or modify. The core kernel contains information that drivers, services and userspace applications should never be able to see or modify. These are requirements when running in a secure mode, to minimize attack surface.

A monolithic kernel breaks this rule because kernel, drivers and services are all in one area. Modern microkernels break this rule because drivers, services and userspace applications are all in one area.

Like it or not, to make a correctly secure OS you need mandatory rules between kernel, drivers, services and userspace applications, enforced by some means. The idea of a microkernel where everything else is a generic userspace application does not fly.

BPF applies mandatory rules through its verifier and the language's limitations.

The hard part is how to isolate kernel, drivers, services and userspace applications from each other without absolutely killing performance. The 4-ring microkernels of old were OK in security, not perfect, because the x86 ring system was designed wrong at the hardware level (allowing ring-to-ring bypassing of security, another CPU design fault), and they caused massive performance overhead, creating the idea that all microkernels had to be slow.

xfcemint the domains of the historic 4-ring microkernels were very well considered. How MAC has to work to trap attackers was also very well considered in the TCSEC. There has been a lot of study in these fields.

Microkernels are not magic bullets. With both monolithic and microkernel designs you run into the problem of parts that should be isolated from each other not being isolated, causing major security problems. The annoying part is that the inevitability of those security problems is very well documented. Also very well documented: when you do isolate those parts as you should, with current CPU designs you end up taking horrible performance hits.



  • oiaohm
    replied
    Originally posted by sinepgib View Post
Sounds like an interesting read, I'm in.
Really, when it comes to userspace API analysis, there is no simple cybernetical analysis that works for all cases.

Background: io_uring vs epoll. Nowadays there are many issues and projects focused on io_uring network performance, and the competitor is always epoll. #189 https://github.com/frevib/io_uring-echo-se...

Linux kernel developers are doing cybernetical analysis all the time, in different areas, using all the different methods.

    IPC is a message passing mechanism, right? After all, it’s an acronym for inter-process communication? That’s historically true, but seL4 IPC has come a long way, and its IPC primitive …


Yes, both io_uring and seL4 IPC have downsides, cases where they do not work well.

Do note xfcemint has been getting the basics wrong. A common error is thinking that capabilities can solve everything. There is a need for domains: areas where, even using capabilities, you cannot override the rules. Security breaks into 2 general types, Discretionary Access Control (DAC) and Mandatory Access Control (MAC). Capabilities in most operating systems are only DAC. Yes, Linux capabilities are only DAC.
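To make the "Linux capabilities are only DAC" point concrete: a process manages its own capability sets at its own discretion through libcap, as in this minimal sketch (build with gcc demo.c -lcap; dropping CAP_NET_RAW is just an example):

```c
/* Sketch: a process discretionarily dropping CAP_NET_RAW from its
 * own effective set via libcap. The process itself decides; no
 * external mandatory policy is involved, which is what makes
 * this DAC rather than MAC. */
#include <stdio.h>
#include <sys/capability.h>

int main(void) {
    cap_t caps = cap_get_proc();              /* current sets */
    cap_value_t drop[] = { CAP_NET_RAW };

    cap_set_flag(caps, CAP_EFFECTIVE, 1, drop, CAP_CLEAR);
    if (cap_set_proc(caps) == -1)
        perror("cap_set_proc");

    char *text = cap_to_text(caps, NULL);     /* human-readable */
    printf("capabilities now: %s\n", text);
    cap_free(text);
    cap_free(caps);
    return 0;
}
```

Contrast with a MAC such as SELinux, where the policy is imposed on the process from outside and is not the process's to change.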

The ring 0 to ring 3 context switch for a syscall, which in old microkernel designs would be a domain change, is really MAC. Remember, in ring 0 you can see one set of page tables, and when you switch to ring 3 you see a different set.

    MAC has a far higher performance cost than DAC in most cases.

Really, this is the big problem with the idea that a microkernel is going to solve everything. The only way it can is if you have domains, made by rings or by MAC, around drivers, services and userspace applications; MAC here meaning strict rules that drivers are drivers, services are services and userspace applications are userspace applications. The problem is that this is going to hurt performance a lot.

The Linux kernel and the Windows kernel splitting userspace applications from drivers, services and the kernel core by a ring change is not a mistake from a security point of view.

Think attack surface. Yes, the Linux kernel has a large attack surface, with all drivers, services and the kernel core in ring 0. But without the stuff you call foolish, sinepgib, like /dev/mem, userspace applications cannot go messing with raw device memory.

Now let's look at a modern microkernel without solid MAC (which is what all the ones people quote are): you have drivers, services and userspace applications in ring 3. Hang on: if permissions go wrong (Sod's law applies, permissions will go wrong at some point), a userspace application can override all security controls due to complete memory access.

Shock horror, right: the microkernel has a bigger attack surface than the monolithic kernel, because all of userspace could be a complete security-bypassing item.

The bigger the attack surface, the more you have to audit to make a truly secure OS.

Userspace drivers have upsides; improved security is not one of them. Userspace drivers mostly work out as harmful to OS security and stability as ring 0 drivers, because they need raw physical memory access to work. Remember the Samsung example I gave with the Linux kernel, where the person making a userspace driver took the lazy route of granting their driver access to everything.

Core Linux kernel developers wanting drivers upstreamed and peer reviewed prevents a lot of stupidity. You see it a lot on LKML: people making drivers get called out for doing foolish things that are not human error but are instead the lazy way, like adding a lock instead of using RCU, and so on. Then there are all the Windows drivers that basically implement /dev/mem and other similarly lazy ways to solve problems.

Never underestimate how human laziness, when not checked for, can totally undermine security. This is some of why real-world microkernels have never really delivered what the concept seems to offer.



  • sinepgib
    replied
    Originally posted by xfcemint View Post
It is possible that I could, perhaps, do some cybernetical analysis of the "situation" with userspace ABIs. That might end up useful, or it might not. If anyone is interested in reading such a thing, I can post the analysis here.
Sounds like an interesting read, I'm in.



  • oiaohm
    replied
    Originally posted by xfcemint View Post
The way to solve this problem is to add a mechanism (by the system of capabilities, you should read about it) to the microkernel that tracks the permissions for the segments of the physical address space. In this perspective, the "physical address space" becomes just another set of interfaces. The permissions to access a particular interface are then tracked by the capabilities sub-system, just like any other permissions for any other interfaces. In fact, the capabilities system allows for much broader functionality, since a capability system allows a permission-for-an-interface to be sent from one driver/service to another, duplicated, granted, revoked at will, subdivided, joined, et cetera.

In short, the physical address space can be perceived as just another interface. Linux doesn't have a built-in capabilities system (yet).
No, this is wrong. https://www.kernel.org/doc/html/v4.1...uio-howto.html Linux does have a built-in capabilities system, which has been built around /dev/mem and UIO for userspace drivers. Both have had their issues.

The next question, which you did not ask yourself: what system is the Linux kernel generally using between userspace and kernel-mode services/drivers?



The clue is in these functions. The answer is domains. The standard Linux model is that a Linux driver cannot access something from userspace until it is mapped into kernel space, and userspace cannot access kernel-space data until it is mapped into userspace; in standard mode a block of memory cannot be allocated to userspace and kernel space at the same time.
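(The functions linked above did not survive the page scrape; presumably they are the user-copy helpers, copy_from_user()/copy_to_user(). Here is a sketch of that domain crossing in a character driver's write handler, as an illustrative fragment rather than a complete module:)

```c
/* Sketch: crossing the userspace/kernel domain boundary in a
 * driver write() handler. Userspace memory is never touched in
 * place; it is copied across with copy_from_user(). Fragment
 * only, not a complete module. */
#include <linux/fs.h>
#include <linux/uaccess.h>

static char kbuf[64];   /* buffer living in the kernel domain */

static ssize_t demo_write(struct file *f, const char __user *ubuf,
                          size_t len, loff_t *off)
{
    if (len > sizeof(kbuf))
        len = sizeof(kbuf);
    /* Returns nonzero if the userspace range is not accessible. */
    if (copy_from_user(kbuf, ubuf, len))
        return -EFAULT;
    return len;   /* bytes consumed from the user domain */
}
```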

Please note the Linux kernel has its capabilities system on top of a domain system. Historic secure microkernels had 4 domains. Using domains has a performance cost.

Yes, you just made another common argument mistake: presuming that the Linux kernel does not already have something. Linux kernel developers have done a lot of work experimenting with userspace drivers, because, like you, many of them thought the idea of userspace drivers would be a good thing. Unlike you, they have really done it, and found out it is not as ideal as it first appears.

The general Linux kernel-space/userspace interface mostly operates on RCU; this is not suitable for device control, but it works quite well between the drivers-to-services layer in the kernel and userspace.

xfcemint; the historic microkernels declared 4 domains for clear reasons. The types of memory operations required change between each of those domains. Each domain can have its own domain-specific capabilities system for memory.

Monolithic kernels, with drivers, services and kernel core in a single domain, do have the problem that you have no means to make strictly individual capabilities for each of those parts, because they are in the same domain. Modern microkernels, with the kernel in ring 0 and drivers, services and user applications all in ring 3, have the problem that you cannot make strictly individual capabilities between drivers, services and user applications, because they are all in the same domain. It is 90% the same problem.

xfcemint basically what you are writing is like someone suggesting reordering the deck chairs on the sinking Titanic as a solution to stop the Titanic from sinking.



