How Google's Android Maintains A Stable Linux Kernel ABI


  • oiaohm
    replied
    Originally posted by Vistaus View Post
    But the point was that they *are* developing their own kernel. The point was not "how long will they be committed".
    https://www.businessinsider.com.au/g...-policy-2015-4

    That is not straightforward. Google has its 20-percent-time policy to keep developers happy, so at Google a stack of staff can work on a project without any major plan for it going anywhere.

    There have been some quite spectacular R&D OS projects that simply dead-ended over time due to hardware support issues. A good example is Singularity OS from Microsoft: seven years of a full-time development team, and no final product was the result.

    You can extract stats from the Fuchsia git. If you do, you will find a total of 12 individual developers, all from Google. There are no hardware vendors making drivers in the mix.

    That is 12 developers working on Fuchsia in total, while over 10,000 at Google work on Linux in different areas. Fuchsia gets a lot of press, but its development team is only the size of an expendable company R&D team. Singularity OS, which failed, had a team of over 200, so Fuchsia is not highly resourced.

    Vistaus, is Google developing their own kernel, or are they just after a kernel where they can mess around with ideas and then take them upstream into Linux? Or worse, is Fuchsia just developed by Google as a bargaining chip? Time will tell. Until we see products shipped with Fuchsia, we are guessing: right up until Google ships products with Fuchsia, it is just a mirage of a future OS that may never see real-world usage or long-term development. Fuchsia at this stage is no different from walking into a planning office and seeing a nicely planned-out future city that never comes into real-world existence, while people are already attempting to write about what life will be like in that city.

    Leave a comment:


  • Vistaus
    replied
    Originally posted by Aeder View Post

    Given Google's track record I fully expect them to abandon it and Fuchsia the millisecond it's not an instant success installed across all smartphone brands and driving billions of sales.
    But the point was that they *are* developing their own kernel. The point was not "how long will they be committed".

    Leave a comment:


  • oiaohm
    replied
    Originally posted by coder View Post
    What I'm saying is a false dichotomy is the idea that a stable ABI means never changing it. IMO, a sane middle-ground would be some ABI version numbering scheme and general plan around how often and at what points incompatible changes would be introduced. You could separately version the ABI of different subsystems, or just tie the incompatible changes to the major version of the kernel (i.e. have it actually mean something).
    This idea does not hold up against the real-world examples: Solaris, Windows...

    Originally posted by coder View Post
    Yes, any time you constrain kernel developers, there's always a downside - more hoops for them to jump through, to avoid introducing breaking changes at the wrong times, and maybe even a few cases where features have to get deferred for a few releases.
    I wish it were only features deferred for a few releases. Everywhere a stable ABI has been done, features have got trapped. Full PAE support on 32-bit Windows, allowing large memory in all cases, was basically deferred until the platform itself died first. So a feature deferred for a few releases to maintain compatibility risks turning into a feature deferred forever.

    There are examples in Solaris and other operating systems where a feature deferred to maintain the kernel driver ABI never lands as the default, because it is kept turned off out of fear it might break drivers users wish to run.

    Originally posted by coder View Post
    All I'm saying is there are also upsides, and not only for closed-source, out-of-tree drivers.
    Except that is really the only major advantage.

    Originally posted by coder View Post
    Except I never said that. What I'm describing would be a compromise between the two extremes. It should be obvious that major versions mean there are points across which old drivers can no longer work.
    This is the argument of a person who has not looked at the history of stable kernel ABIs. The reality is that once you have a stable kernel ABI and you want to break old drivers, you face massive resistance. Yes, those fighting for a stable kernel ABI today are only a small amount of resistance; if you can avoid it, you don't want to increase resistance to change.

    Originally posted by coder View Post
    The very idea of there being some kind of policy around when incompatible changes should be introduced is specifically intended to counter the classic slippery-slope argument. Basically, you're saying that if kernel developers accept any kind of constraints, then they risk ceding all control. I think that's just absurd.
    You think ceding all control is an absurd outcome, but that is what the history of every operating system that attempted a stable kernel ABI shows. Either you end up ceding all control, or you drop the idea of a stable kernel ABI and tell the driver makers to suck it up. Solaris and FreeBSD both attempted a stable kernel ABI for drivers as well. So you are suggesting something that has been tried, with known effects.

    Originally posted by coder View Post
    IMO, something so exotic is probably a lot harder to gain traction. But, good luck.
    The problem here is that for Linux kernel developers doing LTS kernels, semantic patching is not exotic: it is in fact how most patches move from the development branch back to the LTS kernels. The problem is fragmentation in how the semantic patches are made, with no policy that they work in both directions. You will find semantic patch suites to take code from Linus' branch back to the LTS branches, but you would be downright lucky to find semantic patches to take out-of-tree branches up to Linus' branch. There is also no policy that when you land a breaking change you must create a semantic patch, so creating them ends up landing on those working on the LTS branches.

    So there is a policy problem.

    https://linuxplumbersconf.org/event/...ummit_2019.pdf
    If you read the above, this is not the only policy problem.

    There are bigger policy problems. You cannot start a stable kernel ABI without a functional test suite system to confirm it and a bug system to repair the issues. Nor can you have a well-tested forward-porting and back-porting semantic patch system without that same test suite system and bug system.

    Guess what: the Linux kernel does not have a functional test suite system, or a bug system capable of repairing the currently detected bugs, let alone taking ABI or semantic patch bugs on top.

    There might be fewer out-of-tree drivers as well if the development processes of Linux were in fact properly functional.

    Coder, you called semantic patching exotic, and it is outside kernel space; it's not exotic for Linux kernel-space developers. You have presumptions about the effects of a stable kernel ABI; the problem is that history has it playing out the same way every single time it has been done.

    The road to hell is paved with good intentions. A stable kernel ABI is one of those things that always starts out with good intentions, but a bit down the road it has turned to hell every time it has been done.

    So we do need to find another way of achieving the stable-kernel-ABI result without the major downsides; everything you are suggesting has already failed in the Windows, Solaris and FreeBSD attempts. The one thing we can be fairly sure of is that a stable kernel ABI will not be the correct answer. It could be something like a stable kernel bytecode for drivers, or a semantic patch system for source...

    The correct solution is going to need some way to transform the driver code, in source or binary form.

    There are still many examples on Windows where Spectre and Meltdown exploits still work in third-party drivers; this is a side effect of a stable kernel ABI. So a stable kernel ABI lowers security. Those with third-party drivers for the Linux kernel hate the API changes, but from a security point of view a stable API is a better choice than a stable ABI in kernel space, due to the security problems the ABI brings.

    If you cannot have a stable API, something like a maintained semantic patch kit is still quite a good solution, so you can migrate from one API version to the next without manual coding.

    Leave a comment:


  • skeevy420
    replied
    Originally posted by eigenlambda View Post
    So why can't Android userland be updated while the kernel remains the same manufacturer-supplied kernel?
    Occasionally Google updates parts of the userland to require certain kernel features to be present. When that happens, the userland needs kernel updates.

    Part of what Google is working on now is setting it all up so a manufacturer only has to target a certain kernel. They could then do all of their customizations with kernel modules, which means that only the modules would have to be updated, and those can be included in a userland update -- think locked bootloader with the ability to take partial kernel updates... not necessarily a good thing for the root and ROM community. Combine that with fs-verity and dm-verity and it'll soon be a lot harder for XDA people to find exploits for devices, since the manufacturers will be able to checksum and verify every freakin' stock file a device ships with.

    Leave a comment:


  • eigenlambda
    replied
    So why can't Android userland be updated while the kernel remains the same manufacturer-supplied kernel?

    Leave a comment:


  • You-
    replied
    The only proper fix is for the drivers to be upstreamed.

    AFAIK this is happening and everyone benefits from it. The stragglers must be dragged along.

    The alternative is integrated components falling out of support a couple of years after release - which the manufacturers don't care about, as it's past their supported release cycle.

    Leave a comment:


  • coder
    replied
    Originally posted by oiaohm View Post
    It's not a false dichotomy.
    What I'm saying is a false dichotomy is the idea that a stable ABI means never changing it. IMO, a sane middle-ground would be some ABI version numbering scheme and general plan around how often and at what points incompatible changes would be introduced. You could separately version the ABI of different subsystems, or just tie the incompatible changes to the major version of the kernel (i.e. have it actually mean something).

    Originally posted by oiaohm View Post
    It's the downside of a stable kernel ABI.
    Yes, any time you constrain kernel developers, there's always a downside - more hoops for them to jump through, to avoid introducing breaking changes at the wrong times, and maybe even a few cases where features have to get deferred for a few releases. All I'm saying is there are also upsides, and not only for closed-source, out-of-tree drivers.

    Originally posted by oiaohm View Post
    The idea that major versions will dig you out of closed-source drivers using the old ABI and refusing to upgrade is wrong.
    Except I never said that. What I'm describing would be a compromise between the two extremes. It should be obvious that major versions mean there are points across which old drivers can no longer work.

    Originally posted by oiaohm View Post
    Make a stable ABI in kernel space for drivers and expect to be walled in by users with devices whose vendor is no more, or whose vendor will not make a new driver, demanding you don't break the ABI, leaving you stuck.
    The very idea of there being some kind of policy around when incompatible changes should be introduced is specifically intended to counter the classic slippery-slope argument. Basically, you're saying that if kernel developers accept any kind of constraints, then they risk ceding all control. I think that's just absurd.

    Originally posted by oiaohm View Post
    Please note that if you cannot really semantic-patch, changing major versions creates a huge stack of work, and those making drivers will skip out on doing it. So functional semantic patch policies across incompatible changes are needed with or without a stable kernel ABI, and we don't have that policy.
    IMO, something so exotic will probably have a much harder time gaining traction. But, good luck.

    Leave a comment:


  • anarki2
    replied
    Sooo, literally the one and only successful product that utilizes the Linux kernel broadly disputes and defeats stable_api_nonsense.txt. It's about time Linux developers seriously reconsider ... everything they're doing.

    Naturally, a lot of clowns here still defend the approach. It indeed worked greatly in the past 25 years, right? No.

    Leave a comment:


  • jacob
    replied
    Originally posted by Volta View Post

    Blame nvidia.

    Ps. if FreeBSD and Solaris have stable ABIs, then it seems it doesn't work for them. It's not some magical solution to success.
    Having a stable kernel ABI *would* make life easier for users and third party driver developers alike, but it would come at a cost that the stakeholders don't seem to be prepared to pay. FreeBSD and Solaris are less popular than Linux, but I believe the primary reasons for that have little to do with a stable kernel ABI (which FreeBSD doesn't really have either, by the way).

    Leave a comment:


  • oiaohm
    replied
    Originally posted by coder View Post
    IMO, it's a false dichotomy to debate whether or not to have a stable driver API. Obviously, there will need to be some incompatible API changes, over time. That's why God invented major versions, after all.
    It's not a false dichotomy; it's the downside of a stable kernel ABI. Yes, you might be able to invent a major version. Consider 32-bit Windows PAE support: the real reason 32-bit Windows locked itself to a 4GB memory maximum for desktop users is that they had a stable ABI for drivers, the drivers were written for 32 bits of memory, and they could not cope when you gave them PAE with 64GB of memory on a 32-bit machine.

    Originally posted by coder View Post
    So, the real issue seems to be that there's no policy around introducing incompatible changes. That doesn't seem like such a big ask, but it's understandable why some GPL purists would rather not have to think about such matters.
    I am not against the idea of a policy for introducing incompatible changes; it does not in fact require a stable kernel ABI, and would most likely be far better long term without one.

    Originally posted by coder View Post
    However, I think it's not only vendors with proprietary drivers that would benefit. Anyone trying to port fixes forward or backwards could potentially benefit, and not just in kernel modules, but also the core.
    http://coccinelle.lip6.fr/
    Let's say we had a policy that every incompatible change had to come with an SmPL (Semantic Patch Language) patch to transform old code to new code, mirrored by one to transform new code back to old code.
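    As a concrete illustration of what such a policy would require, here is a minimal Coccinelle-style SmPL rule. The function names and the added argument are hypothetical, invented purely to show the shape of a migration rule; the mirror rule for going the other way would simply swap the - and + lines:

```smpl
// Hypothetical migration rule: suppose an incompatible change renamed
// old_register_driver() and added a flags argument. Applying this rule
// rewrites every call site from the old API to the new one.
@@
expression dev, data;
@@
- old_register_driver(dev, data)
+ new_register_driver(dev, data, 0)
```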

    There would be no need for a stable kernel ABI if that policy existed for the open-source parts; of course, that's not going to help proprietary drivers one bit. It would also avoid the set-in-stone pile-of-drivers problem that Windows CE, Windows NT, Solaris,... basically everyone who has ever implemented a stable kernel ABI has run into.

    The idea that major versions will dig you out of closed-source drivers using the old ABI and refusing to upgrade is wrong.

    There is a lot of not looking at the history of failures of the stable-kernel-ABI idea. Make a stable ABI in kernel space for drivers and expect to be walled in by users with devices whose vendor is no more, or whose vendor will not make a new driver, demanding you don't break the ABI, leaving you stuck. This is the direct price of the stable-ABI-in-kernel-space idea, every single time it's been done. It really is about time we attempted to find another way of achieving the objective while avoiding this problem.

    Part of Microsoft's plan with Singularity, which became a failure, was that all drivers would be written in bytecode, so Microsoft could effectively semantic-patch them by updating the bytecode-to-native converter and get away from the stable kernel driver ABI problem. The stable kernel ABI idea is suggesting something Microsoft has found itself stuck with and cannot get out of, even after throwing a few billion dollars at the problem.

    Yes, you need drivers to be provided in a form that can be modified; source code is one option. We need policies so that updating or backporting code is straightforward.

    Please note that if you cannot really semantic-patch, changing major versions creates a huge stack of work, and those making drivers will skip out on doing it. So functional semantic patch policies across incompatible changes are needed with or without a stable kernel ABI, and we don't have that policy.

    Leave a comment:
