Linus Torvalds Comments On Bcachefs Prospects For Linux 6.6


  • oiaohm
    replied
    Originally posted by blackiwid View Post
    You yourself mentioned linters, and they check for "stylistic errors", which I would not even call errors. And we don't know, or at least I don't, how much of each you have; your organisation might have a high ratio of very important tests, others maybe not.
    What is important changes from project to project.


    Coccinelle is a tool for pattern matching and text transformation that has many uses in kernel development, including the application of complex, tree-wide patches and detection of problematic programming patterns.
    Sorry, but in the Linux kernel those "stylistic errors" are future nightmares. Part of Linux kernel development is using automated tools to walk the code and generate patches. Yes, note "tree-wide patches".

    Style errors can result in semantic patches/Coccinelle patches applying incorrectly in ways that may not be noticed straight away.

    Linting code for style is highly important on any software development project that uses semantic patches/Coccinelle patches. Style errors cannot be treated as minor problems; that is a side effect of using semantic patches.
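    As a concrete sketch of why this matters (illustrative only, hypothetical helper names, not bcachefs code): one classic tree-wide Coccinelle cleanup in the kernel rewrites the kmalloc()+memset() pattern into kzalloc(). Code that drifts from the expected shape of the pattern can be silently skipped by such a rule, leaving the tree half-converted.

        /*
         * Kernel-style illustration; this is the kind of pattern a tree-wide
         * semantic patch matches and rewrites.
         */
        #include <linux/slab.h>
        #include <linux/string.h>

        /* Before: the shape the semantic patch looks for. */
        static void *make_buf_before(size_t len)
        {
                void *buf = kmalloc(len, GFP_KERNEL);

                if (!buf)
                        return NULL;
                memset(buf, 0, len);
                return buf;
        }

        /* After: what the automated rewrite produces. */
        static void *make_buf_after(size_t len)
        {
                return kzalloc(len, GFP_KERNEL);
        }

        /*
         * A "stylistically creative" variant, e.g. one that hides the memset()
         * behind a local macro, may not match the rule and gets left behind,
         * which is how a minor style issue becomes a tree-wide inconsistency.
         */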

    Legal and auditing requirements are why your patches have to have correct signing; such errors are not an option with the Linux kernel. The one '-' case could only be classed as close enough because there are already others merged that are broken in the same way.

    blackiwid, getting code into the Linux kernel from the linux-next stage can take 12+ months, like it or not. There is no room in Linux kernel mainline code for minor errors, because with semantic patches minor errors have a habit of turning into major ones. The linux-next branch is not a fun process; there are going to be a lot of bots reporting a lot of code errors that you might want to call minor, but the code is not getting past them until they are fixed.

    Yes, Kent was wanting to skip the 12 months of code review by bots and the code fixing that it causes.



  • blackiwid
    replied
    Originally posted by reba View Post

    Unimportant tests? At work we follow a test-driven approach using unit tests, behaviour tests, etc.:
    You yourself mentioned linters, and they check for "stylistic errors", which I would not even call errors. And we don't know, or at least I don't, how much of each you have; your organisation might have a high ratio of very important tests, others maybe not.



  • reba
    replied
    Originally posted by blackiwid View Post

    I never said that. I just said that some of these tests check unimportant things, so you get maybe 1000 errors of which 50-200 are important and the rest are unimportant, just best-practice standards. Those have some relevance too, but I can understand that for a person who wants to make big steps and has a full working schedule, that might be problematic.
    Unimportant tests? At work we follow a test-driven approach using unit tests, behaviour tests, etc.:

    For a single one of our microservices there are literally up to 500-1000 specifically written tests, and one of our applications, for example, consists of twelve microservices, just to give you an idea. (These are the figures for an intranet-only, not internet-facing, application with an allowed downtime of 7 days and redundant applications in case of failure. Once an application is internet-facing or needs more nines of uptime, these numbers grow and you install additional measures. In this post we are only talking about one of these "not so demanding" applications.)

    On top of these come the standard tests provided by frameworks, all the findings of the IDE, all of the findings of SonarLint and other code style and code format checkers.

    Only *then* does the deployment pipeline not immediately break and hand you an error state which you *have* to fix.

    Oh, and in the pipeline there are additional tests: linters for all the other programming languages you did not specifically test locally, security checks for all the included libraries, for the upstream Docker images used in the compilation step, and for the base images your container later runs on.

    Only *then* do the actual tests start, which hopefully (and usually, because you already tested them locally, didn't you?) let you pass.
    Additionally, here the first real integration tests fire (hundreds...), which are a little finicky to run locally.

    And only *then* are you able to deploy to the throw-away dev environment, as the very first step with an actual impact.

    Writing buggy programs? Hardly possible, but we sometimes manage to. And then we add a couple more tests to seal the hole once and for all.
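    As a rough sketch of what "sealing the hole" can look like (hypothetical function and bug, written in C with plain assert() just to stay language-neutral), a regression test pins the fixed behaviour so the bug cannot quietly come back:

        /* Minimal regression-test sketch for a hypothetical, already-fixed
         * bug: safe_copy() once forgot the NUL terminator for long inputs.
         */
        #include <assert.h>
        #include <stdio.h>
        #include <string.h>

        /* Function under test: copies at most n-1 chars, always terminates. */
        static void safe_copy(char *dst, const char *src, size_t n)
        {
                strncpy(dst, src, n - 1);
                dst[n - 1] = '\0';
        }

        int main(void)
        {
                char buf[4];

                /* Regression test: a long input must still be terminated. */
                safe_copy(buf, "abcdef", sizeof(buf));
                assert(buf[3] == '\0');
                assert(strcmp(buf, "abc") == 0);

                puts("regression test passed");
                return 0;
        }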

    I know it sounds a bit elitist, but for me this is the minimum standard of operations, and once applied it is very easy to surf on it and have a good night's rest. And yeah, I love my working environment for taking things seriously and acting responsibly.

    Last edited by reba; 10 September 2023, 03:16 AM.



  • NobodyXu
    replied
    Originally posted by blackiwid View Post

    I never said that. I just said that some of these tests check unimportant things, so you get maybe 1000 errors of which 50-200 are important and the rest are unimportant, just best-practice standards. Those have some relevance too, but I can understand that for a person who wants to make big steps and has a full working schedule, that might be problematic.
    There's no such thing as an unimportant test; a test is a test, and you need to pass all tests to merge your PRs/MRs.
    Best-practice standards are important; they improve readability and reduce the chance of bugs.

    If someone refuses to fix their tests and get CI to pass, then their PR/MR will not be merged, period.

    It looks like you have no coding experience, so please do not let your fucking politics get into programming.



  • blackiwid
    replied
    Originally posted by NobodyXu View Post
    blackiwid
    And your idea that software testing tools don't do much is just not true.
    I never said that. I just said that some of these tests check unimportant things, so you get maybe 1000 errors of which 50-200 are important and the rest are unimportant, just best-practice standards. Those have some relevance too, but I can understand that for a person who wants to make big steps and has a full working schedule, that might be problematic.



  • oiaohm
    replied
    Originally posted by blackiwid View Post
    Didn't some high-up kernel maintainer push for the new NTFS code and actively help to get it added? So some get a friendly, welcoming treatment where people at least hold their hand, and some get the RTFM treatment; that is a double standard.
    That is a mistake. The NTFS3 driver made it into Linux kernel 5.15 on 31 October 2021. It was put forward for inclusion in the Linux kernel in August 2020. Yes, it took a year and 2 months to complete the process, not the longest submission process but not the shortest either, because things did not all go well.

    Paragon Software's developers, because of their legal department, had to read the code-submission parts of the Linux documentation before August 2020. So they started off on the right foot. Yet even after reading the manual, not everything went right for Paragon Software in the time it took to get to mainline.

    Paragon Software's developers did not bypass linux-next or skip the patch checklist. They did not even need to be told to do these things.

    Yes, the VFS maintainer back in 2020 was not required to watch the general fsdevel mailing list; Linus changed that back then. Also, the system for dealing with missing-in-action maintainers was added to the Linux kernel documentation back then.

    So with the new NTFS driver everyone was nice and friendly, and it helped that the Paragon developers had read the processes for submitting code and followed them basically to the letter.



  • NobodyXu
    replied
    blackiwid As a software engineer, communicating is just as important as the ability to write code, because any large enough project eventually becomes a team game and requires collaboration from at least dozens of people.

    If your pride is so high that you refuse to follow the code review practices of any team and try to bypass the rules, then you will be warned to stop, and if you keep up that bullshit behavior, you will be kicked out of the team and possibly lose your job.

    There's no such thing as the talented/senior getting a way out; even the tech leads themselves have to get approval for a feature/bugfix from someone else before being allowed to merge, unless it's just syncing different topic branches with nothing new added, or just improvements to CI.

    You will find that there are code review rules at every team that owns a relatively mature product, so your expectation that Kent can break the rules just because he is clever is just bullshit.

    Even the smartest people on Earth make mistakes, so if they refuse to accept advice from others because their ego is so high, it's time to slap them in the face to calm them down and make them realize that they are not some coding god who is beyond everyone else.



  • NobodyXu
    replied
    blackiwid I am a software engineer, and in my workplace it's the responsibility of the one who opened the PR (pull request)/MR (merge request) to make sure the code passes all CI tests before requesting review (and better to catch problems before committing), so that other devs do not waste their precious time reviewing code that doesn't work.

    And your idea that software testing tools don't do much is just not true.

    Even for a dynamically typed language like Python, there are a lot of tools to check typing ahead of time, such as mypy, as well as linters for common anti-patterns.
    There is also a lot of unit and integration testing to make sure the code is doing the right thing.

    For the Linux kernel, which is mainly written in C, the compiler itself serves as a checker, and then the code is run under sanitizers to discover memory-related errors.
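    For what that looks like in practice, here is a minimal userspace sketch (the kernel itself uses in-tree sanitizers such as KASAN; the userspace analogue is AddressSanitizer): the compiler happily accepts this use-after-free, but a sanitizer reports it at runtime.

        /* Compiles cleanly, but running a build made with a sanitizer,
         * e.g. gcc/clang -fsanitize=address, reports a heap-use-after-free.
         */
        #include <stdio.h>
        #include <stdlib.h>

        int main(void)
        {
                int *p = malloc(sizeof(*p));

                if (!p)
                        return 1;
                *p = 42;
                free(p);

                /* Bug: reading memory after free(); the sanitizer flags this line. */
                printf("%d\n", *p);
                return 0;
        }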



  • blackiwid
    replied
    Originally posted by oiaohm View Post

    Yes, human review may have been skipped, but automated review has not been.
    These tests surely do some good stuff, but as far as I know some software testing tools, for Python for example, check whether comments are formatted correctly and things like that which have no effect on the functionality of the code and only raise its standard. If a million unimportant things like that block upstreaming, I can understand that for a big mammoth project maintained by one person, who has to do other things besides working full time on getting the code into Linux (he probably gets bug reports, adds features in another tree, etc.), this becomes a problem.
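    To give a concrete (made-up) example of what such a purely stylistic finding looks like in C: the two functions below behave identically, and a style checker like the kernel's checkpatch.pl typically flags only the block-comment formatting of the first one, while the behaviour is unchanged either way.

        /* Hypothetical snippet: both functions behave identically; only the
         * block-comment style differs.
         */
        #include <stdio.h>

        /* A comment style that kernel checkers such as checkpatch.pl tend to
           flag: continuation lines do not start with " *" and the closing
           marker shares a line with text. */
        static int add_one(int x)
        {
                return x + 1;
        }

        /*
         * The documented kernel style: continuation lines start with " *" and
         * the closing marker sits on its own line.
         */
        static int add_one_preferred(int x)
        {
                return x + 1;
        }

        int main(void)
        {
                printf("%d %d\n", add_one(1), add_one_preferred(1));
                return 0;
        }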

    Didn't some high-up kernel maintainer push for the new NTFS code and actively help to get it added? So some get a friendly, welcoming treatment where people at least hold their hand, and some get the RTFM treatment; that is a double standard.

    Funnily enough, Linus himself complained about the process back then; apparently Linus is not happy with how the process works specifically for filesystems and finds it too slow. So it seems to me, from the very unclear mail he wrote, that he is on Kent's side but doesn't want to piss off his underlings, and because of politics he does not accept it; that would explain his unclear, contradictory last mail:
    Torvalds further added, "We simply don't have anybody to funnel new filesystems - the fsdevel mailing list is good for comments and get feedback, but at some point somebody just needs to actually submit it, and that's not what fsdevel ends up doing.
    And he basically complains about, and advocates for, exactly what Kent did: he complained that nobody submitted the code to him... so the exact opposite happened back then and he complained about it, sure at a later stage, but still, this system seems broken even back then. There was and is no automatic "if A, B, C happen, a merge happens"; it's social arbitrariness. And it's never "do A and B and it is guaranteed to get added", which is in my view a frustrating and horrible process.



  • blackiwid
    replied
    Originally posted by oiaohm View Post

    This is not good faith. You will find that graphics card driver code has gone through next and been passed by the audit bots before the graphics driver developers request that Linus pull it into his branch.

    Yes, human review may have been skipped, but automated review has not been.

    bcachefs has skipped the automated review step that everyone normally does, causing a lot of parties to get worried and annoyed. Remember, the automated audit bots of Linux keep getting improved; they are already at the point where they do a better code audit than more than 9 out of 10 kernel maintainers can do in a manual review in under 1 hour.
    Maybe I understand git and the GPL wrong, but usually the owner of a git tree PULLs code, and GPL code can easily be pulled legally, so what would have hindered any of the kernel devs from pulling it and adding it to linux-next? Why does it have to be Kent himself doing that? There are hundreds of no-sayers and one guy trying to make things happen, a bad ratio in my view.

