Popular USB DWC3 Linux Driver Likely To "Never Be Finished" With Continued Adaptations


  • sinepgib
    replied
    Originally posted by willmore View Post
    Microsoft has well demonstrated that code review is insufficient to keep the monkeys from throwing poop into the code base. They have been trying this method for three decades and have produced larger and larger stacks of poop.
    Most of that stack is outside kernel space; the kernel itself seems rather decent and stable AFAICT. Besides, you contradict yourself: if style is a barrier, it can only be enforced at code review time. And really, are you proposing that indentation is a harder barrier to surmount than actually understanding how a scheduler, virtual memory, memory-mapped IO and so on work? The same goes for the language: C is easy compared to those concepts, but it's also easy to make a mess with, because it's not obvious when UB crops up. As a matter of fact, even with your proposed gatekeeper in place, Linux has had plenty of UB cases in its history.
    Making a field artificially harder when code review is decent (and no, MS is no example; as with all commercial proprietary systems there's hard pressure to "just ship it") is asking for trouble, nothing else. I seriously doubt anyone who understands the low-level concepts you need in the kernel well enough to get past GKH or Dave Airlie or Jason Donenfeld or any other maintainer is bad enough at coding that you'd want them out of the kernel.

    EDIT: besides, by your argument we should rather use the full spectrum of C++. Talk about a high barrier to entry!
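
    To make the UB point concrete, here's a minimal user-space sketch (invented names, nothing from the kernel) of the kind of pattern that has bitten kernel code before: the early dereference lets an optimizing compiler assume the pointer is valid and silently drop the later check.

    struct config {
            int flags;
    };

    /* The source reads as "checked"; the generated code may not be. */
    int get_flags(const struct config *cfg)
    {
            int f = cfg->flags;     /* UB if cfg == NULL... */

            if (cfg == NULL)        /* ...so this test may be optimized away */
                    return -1;

            return f;
    }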

  • willmore
    replied
    Originally posted by sinepgib View Post

    1. The domain itself acts as the gatekeeper; you don't really need any extra blocks, and they won't make a difference. Kernel programming is inherently more difficult than anything infinite monkeys would dare approach.
    2. The whole argument only makes sense if the programming model is a mob-programming mess, rather than one based on very demanding code reviews. The bar comes from there.

    Not contesting the fact that (most, at least) OOP languages are completely inappropriate for a kernel.
    Microsoft has well demonstrated that code review is insufficient to keep the monkeys from throwing poop into the code base. They have been trying this method for three decades and have produced larger and larger stacks of poop.

  • oiaohm
    replied
    Originally posted by camel_case View Post
    That's why Linux needs to be implemented in an object-oriented programming language.

    Simply:

    public class XyzDwc3Module extends DwcModule implements usbInterface { ... }

    and DwcModule never ever needs to be changed.

    But in Linux they want '70s-like C most of the time, and most people need to keep their own codebase in shape.
    There is one huge problem with this statement.

    C is in fact a language capable of object-oriented programming; it just has no special support for it built in. And the Linux kernel is not pure C either: you find Sparse annotations as extra metadata in the kernel's C, and those end up doing most of the object-oriented checks you would expect from a language like C++.

    The C of the Linux kernel is not 1970s C. The C of the Linux kernel is even extended past what is in the C standards.

    The reality is that whatever you can do with a class in an object-oriented language, you can do with structures in C. The first versions of C++ did not have a proper compiler; they were just a preprocessor that output C code. Yes, even back then a class was nothing more than a very fancy C structure.

    Yes, there are ways to create the equivalent of classes in the Linux kernel; those are normally done as part of subsystems.

    Another thing to remember: Linux also supports one module using another module. So if the DWC interface could be made stable, you could have an xyz dwc3 module loading a common core dwc3 module. To export a class in object-oriented programming you need a somewhat stable interface; to share code you likewise need a somewhat stable interface.

    Yes, the modules of the Linux kernel can be used as individual objects in OOP terms. Linux's take on OOP extends past what most OOP languages support.
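
    As a rough sketch of that pattern (invented names, not the real dwc3 code), a kernel-style "class" is just a struct of function pointers that each driver variant fills in:

    struct widget_ops;

    /* the "object": instance data plus a pointer to its vtable */
    struct widget {
            const struct widget_ops *ops;   /* per-object vtable pointer */
            void *priv;                     /* driver-private state */
    };

    /* the "vtable": one shared, read-only table of operations per driver */
    struct widget_ops {
            int  (*probe)(struct widget *w);
            void (*remove)(struct widget *w);
    };

    static int xyz_probe(struct widget *w)
    {
            /* variant-specific setup would go here */
            return 0;
    }

    static void xyz_remove(struct widget *w)
    {
            /* variant-specific teardown would go here */
    }

    /* the xyz variant "subclasses" the generic widget by supplying its ops */
    static const struct widget_ops xyz_widget_ops = {
            .probe  = xyz_probe,
            .remove = xyz_remove,
    };

    /* callers only ever go through the vtable, never the concrete functions */
    static int widget_probe(struct widget *w)
    {
            return w->ops->probe(w);
    }

    The kernel's real file_operations and usb_driver tables are exactly this shape: the "class" never changes, each driver just plugs in its own functions.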

  • sinepgib
    replied
    Originally posted by willmore View Post
    To riff on this a little bit: there's something to be said for having a barrier to entry for code that is as critical as an operating system. While, as a general goal, it's best to have as many people as possible able to write code, so as to widen the pool of people writing code, there is also something to be said for the quality of the code written. If not, this quickly devolves into the "infinite monkeys with infinite typewriters" problem. Sure, lots of code, but lots of *bad* code.

    Having the language or the coding style (if you're challenged by indentation standards, you really should review your decision to be a coder) be a small barrier to entry, proportional to the difficulty of the system the code is to become part of, makes a lot of sense. I don't want just anyone writing kernel code. Too many people rely on it being correct, and it's way too easy for someone who doesn't understand things to mess up something subtle.
    1. The domain itself acts as the gatekeeper; you don't really need any extra blocks, and they won't make a difference. Kernel programming is inherently more difficult than anything infinite monkeys would dare approach.
    2. The whole argument only makes sense if the programming model is a mob-programming mess, rather than one based on very demanding code reviews. The bar comes from there.

    Not contesting the fact that (most, at least) OOP languages are completely inappropriate for a kernel.

  • willmore
    replied
    Originally posted by dragon321 View Post

    Using OOP to write a kernel is literally asking for trouble.

    Linux is not written in C because "they love 70's technology" but because there wasn't really a better language for writing a kernel when it was created. C is still one of the best languages for that purpose. It's simple (in terms of metaphors and abstractions), well supported, and it's relatively easy to understand how the code is going to compile. Of course writing in C is not the simplest way to write code, but who said developing a kernel is easy?
    To riff on this a little bit: there's something to be said for having a barrier to entry for code that is as critical as an operating system. While, as a general goal, it's best to have as many people as possible able to write code, so as to widen the pool of people writing code, there is also something to be said for the quality of the code written. If not, this quickly devolves into the "infinite monkeys with infinite typewriters" problem. Sure, lots of code, but lots of *bad* code.

    Having the language or the coding style (if you're challenged by indentation standards, you really should review your decision to be a coder) be a small barrier to entry, proportional to the difficulty of the system the code is to become part of, makes a lot of sense. I don't want just anyone writing kernel code. Too many people rely on it being correct, and it's way too easy for someone who doesn't understand things to mess up something subtle.

  • dragon321
    replied
    Originally posted by camel_case View Post
    That's why Linux needs to be implemented in an object-oriented programming language.
    Using OOP to write a kernel is literally asking for trouble.

    Linux is not written in C because "they love 70's technology" but because there wasn't really a better language for writing a kernel when it was created. C is still one of the best languages for that purpose. It's simple (in terms of metaphors and abstractions), well supported, and it's relatively easy to understand how the code is going to compile. Of course writing in C is not the simplest way to write code, but who said developing a kernel is easy?

  • abott
    replied
    Originally posted by camel_case View Post
    That's why Linux needs to be implemented in an object-oriented programming language.
    That simple shit requires a complete retooling of EVERYTHING in Linux, and it also sucks to program with because it's C++.

    Like others said, that functionality is already in the kernel, and it's exactly how it all works already.

    Worthless garbage.

  • stormcrow
    replied
    Originally posted by sinepgib View Post

    For huge projects where you expect external contributors to participate, it's actually important for the code not to become idiosyncratic. This discussion is both important and the fixes trivial: formatting and variable naming generally won't require more than one extra revision of a patch, with close to no extra work. The bulk of the iteration comes from handling edge cases, using the right interfaces provided by the kernel, and other architecturally and functionally relevant changes. So there's not really a way for that change to end well; even if keeping things as they are means hardware providers upstream less, it also means that what does land is maintainable in the long run.
    On stable interfaces we agree, but then again those are only useful for out-of-tree (most often closed-source) drivers, so for what the article argues it's mostly irrelevant.

    While the "easier" part is true, being able to understand what it's going to compile to stopped being feasible a long time ago. Optimizing compilers, which are what you use to build the kernel, by definition change how the code is compiled away from the "obvious" version you would intuitively think of.

    Absolute nonsense. For a start, you can't reasonably make an ABI with C++, and you shouldn't use Java-like BS for a kernel (periodic freezes and stupidly high memory overhead in your kernel are absolutely unacceptable).
    Besides, the kernel is already as object-oriented as C allows and works exactly that way, just with function pointers in an explicit vtable; and you don't need OO for keeping stable interfaces either.
    Also, it's quite funny that you complain about programmers stuck in the '70s and propose '80s tech as the replacement in 2022.

    Unless by "corporate" you mean Android, which is the most common target for ARM SoCs, that doesn't seem correct. I think it's more likely to be sheer incompetence than an intentional attack on open source. Of course, the effect is the same: an eternally moving target for open systems. As someone else mentioned, at initial bring-up (which in most cases is pretty much all the hardware provider does) it's also more expensive to go mainline than to make a blob drop.
    A reason to think it's incompetence: with Android being the main target, and this behaviour discouraging other embedded uses, it goes against the interests of those hardware providers.

    Adding to the bold section: there are any number of articles on the Internet where the 'expected' became a total 'WTH' with any given (optimizing) compiler. GCC, Intel, LLVM/Clang, Microsoft, etc. all have cases where the generated code doesn't match the logic in the human-readable code, and yet the generated code isn't necessarily a code-generation bug, but an effect of platform optimization logic. Add the two logic sets together and you end up with machine code that doesn't necessarily line up verbatim with the written code logic, but is close enough that the result is the same.
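
    A classic minimal example of that effect (plain user-space C as an illustration, nothing from the kernel): the memset below is a dead store as far as the C abstract machine is concerned, so the optimizer is allowed to delete it, and the compiled code stops matching what the source appears to say.

    #include <string.h>

    void handle_secret(void)
    {
            char key[32];

            /* ...derive and use key here... */

            /* Dead store: nothing reads key after this point, so an
             * optimizing compiler may legally remove this call, leaving
             * the secret in memory even though the source says it is wiped. */
            memset(key, 0, sizeof(key));
    }

    (Which is exactly why the kernel has memzero_explicit() for cases like this.)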

    Edit to add for the OP: let's not conflate computing & security paradigms (Unix being originally formulated in the '70s & '80s) with implementations and techniques of software coding. Despite being written in C, for the most part, much of the Linux kernel's code is considerably more modern in design than the dinosaurs you appear to be implying. I was sitting here wondering why you were saying that Linux being OO would solve X, but what you're complaining about is solved by the modular programming paradigm. Any language implementation that handles that paradigm can solve the problem you're talking about, including C.
    Last edited by stormcrow; 05 June 2022, 02:20 PM.

  • sinepgib
    replied
    Originally posted by CommunityMember View Post
    and fewer discussions within a subsystem about exactly how the code formatting should be and what the internal variable names should be (since all of those are unlikely, I don't expect things to get a lot better any time soon).
    For huge projects where you expect external contributors to participate, it's actually important for the code not to become idiosyncratic. This discussion is both important and the fixes trivial: formatting and variable naming generally won't require more than one extra revision of a patch, with close to no extra work. The bulk of the iteration comes from handling edge cases, using the right interfaces provided by the kernel, and other architecturally and functionally relevant changes. So there's not really a way for that change to end well; even if keeping things as they are means hardware providers upstream less, it also means that what does land is maintainable in the long run.
    On stable interfaces we agree, but then again those are only useful for out-of-tree (most often closed-source) drivers, so for what the article argues it's mostly irrelevant.

    Originally posted by jrdoane View Post
    Not to mention that it's a lot harder to understand how it will translate to machine code after it's compiled. Something nice about using C is that it's at least feasible to understand what it's going to compile to
    While the "easier" part is true, being able to understand what it's going to compile to stopped being feasible a long time ago. Optimizing compilers, which are what you use to build the kernel, by definition change how the code is compiled away from the "obvious" version you would intuitively think of.

    Originally posted by camel_case View Post
    That's why Linux needs to be implemented in an object-oriented programming language.

    Simply:

    public class XyzDwc3Module extends DwcModule implements usbInterface { ... }

    and DwcModule never ever needs to be changed.

    But in Linux they want '70s-like C most of the time, and most people need to keep their own codebase in shape.
    Absolute nonsense. For a start, you can't reasonably make an ABI with C++, and you shouldn't use Java-like BS for a kernel (periodic freezes and stupidly high memory overhead in your kernel are absolutely unacceptable).
    Besides, the kernel is already as object-oriented as C allows and works exactly that way, just with function pointers in an explicit vtable; and you don't need OO for keeping stable interfaces either.
    Also, it's quite funny that you complain about programmers stuck in the '70s and propose '80s tech as the replacement in 2022.

    Originally posted by varikonniemi View Post
    This is one form of embrace, extend, extinguish. Every vendor does things a bit differently, supports their differences in corporate operating systems, and leaves the open ones with endless work.
    Unless by "corporate" you mean Android, which is the most common target for ARM SoCs, that doesn't seem correct. I think it's more likely to be sheer incompetence than an intentional attack on open source. Of course, the effect is the same: an eternally moving target for open systems. As someone else mentioned, at initial bring-up (which in most cases is pretty much all the hardware provider does) it's also more expensive to go mainline than to make a blob drop.
    A reason to think it's incompetence: with Android being the main target, and this behaviour discouraging other embedded uses, it goes against the interests of those hardware providers.

  • willmore
    replied
    Originally posted by CommunityMember View Post

    At the low level of hardware design, it is completely up to the designer to choose which specific GPIO lines (and addresses, and status bits) do what, and there is no right choice, just a choice. Typically, within one company, an engineering group responsible for (say) USB designs will do the same thing the same way again and again, but a different company will choose differently. And if a company buys the USB IP from another company for their next product, things get different all over again. This mostly does not matter, as the company supplies drivers for their devices for the OSes they wish to support; but while Android may be a target OS, the company may not wish to invest in the overhead(s) of getting the driver into the Linux mainline, and just shipping a working driver to their platform partners is much less overhead. If one wants to make contribution(s) to the Linux kernel more palatable, one needs a more stable API and fewer discussions within a subsystem about exactly how the code formatting should be and what the internal variable names should be (since all of those are unlikely, I don't expect things to get a lot better any time soon).
    I would disagree, because we see many public uses of this IP which are consistent and require almost no kludges. I'm thinking of NXP specifically, who have nice documentation for their parts and lay everything out on the table. You're right when you say the designer using this IP can route address and data lines any way they want to, but that's the point: there is a standard layout for them, and if a designer chooses to route them differently, they are making a choice to do things differently. I would argue that the motivation isn't some kind of short-term routing convenience, but an intentional attempt to hide the (mostly unlicensed) IP they're using.

    WRT the driver situation: I really don't care what's easier for a vendor. I'm in the open software community, and their being a bad actor in that space is a problem. It may be harder to mainline a driver (or a variant of one) than to just ship some hacked-up thing in a BSP, but how many times do you have to do that before it would have been easier to just mainline it? Especially if it's a variant of an existing driver, all you'd need to do is add the register remapping function and you'd be done. But that would make it very clear what IP you were using, wouldn't it?
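
    Purely as an illustration, such a remap could be as small as this (hypothetical names and offsets, not the real dwc3 register map; it assumes the usual kernel MMIO helpers from <linux/io.h>):

    #include <linux/io.h>
    #include <linux/types.h>

    /* Translate the standard register offsets to wherever this vendor's
     * variant actually put them; everything else passes through untouched. */
    static unsigned int xyz_remap_offset(unsigned int offset)
    {
            switch (offset) {
            case 0x00: return 0x10;         /* control register was moved */
            case 0x04: return 0x00;         /* status register was moved */
            default:   return offset;       /* unchanged for the rest */
            }
    }

    static u32 xyz_readl(void __iomem *base, unsigned int offset)
    {
            return readl(base + xyz_remap_offset(offset));
    }

    static void xyz_writel(void __iomem *base, unsigned int offset, u32 val)
    {
            writel(val, base + xyz_remap_offset(offset));
    }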

    If your argument is the old "why take the effort to write good code once when I can write bad code dozens of times?", then I don't think we're going to agree.
