Linus Torvalds Comments On Bcachefs Prospects For Linux 6.6


  • #81
    Two quite interesting replies on the LKML...

    Originally posted by Martin Steigerwald
    Hi Kent, hi Linus, hi everyone,

    Kent Overstreet - 08.09.23, 01:40:01 CEST:
    > The biggest thing has just been the non stop hostility and accusations - everything from "fracturing the community" to "ignoring all the rules" and my favorite, "is this the hill Kent wants to die on?" - when I'm just trying to get work done.

    I have observed this for a while now without commenting, and found the following pattern on both "sides" of the story:

    Accusing the other one of wrong-doing.

    As long as those involved in the merging process continue that pattern, the story of not merging bcachefs will most likely continue. And even if it gets merged, there would be ongoing conflict about it. Because I have no control over how someone else acts. Quite the contrary: the more I expect and require someone else to change, the more resistance I am likely to meet. I can only change how I act.

    This pattern stops exactly when everyone involved looks at their own part in this repeated and frustrating "bcachefs is not merged to the mainline Linux kernel" dance. And from what I observed, the failure to merge it is not caused by a single developer; not by you, Kent, nor by anyone else. It is the combination of the individual actions of several developers, and the social interaction between them, that has prevented the merge so far. Accusing the other side is handing all the power to change the situation to someone else.

    I am sure merging it will work once everyone involved first looks at themselves and asks the question: "Have I contributed to making the merge of bcachefs difficult, and if so, how? And most importantly, how can I act more constructively about it?" And I mean that for the developers who have been skeptical about the merge as well as for the supportive developers, including Kent. There have been actions on both "sides" that contributed to delaying a merge. I am not going to make a list, but leave it to everyone involved to consider for themselves what those were.

    As for the recent requests to have it GPG-signed and to have it go through next: I think those requests are reasonable. As far as I read, bcache went through next back then as well. Would it have been nice to have been told that earlier? Yes. But neither of those requests is a show-stopper to getting bcachefs merged at a later time.

    Of course I know that in this mail I have been asking others to go within and consider their own behavior, while being perfectly aware that I cannot change how anyone else acts. However, maybe it is an inspiration for some to decide for themselves to consider a change.

    In the best hope of seeing bcachefs merged into the "official" Linux kernel soon,
    --
    Martin
    Originally posted by Joshua Ashton
    I've been holding off replying here for a while because I really hoped that this situation would just work itself out. (I apologise in advance for adding more noise.)

    I agree that it really sucks when you sometimes don't get replies to things, or don't get review from the people you need it from, or when someone didn't tell you something you needed to know.

    But I think it's really important to realize that you are talking to other people on the ML and not review machines (unless that person is Dave Airlie on Zink ;P), and very often other work can come up that blocks them from being able to spend time reviewing or guiding you through this process.

    Everyone on here is another person who has their own huuuge slog of work that is super important for security, stability, shipping a product/feature, keeping their job, etc.

    E.g. I proposed several revisions to the casefolding support for bcachefs, but right now I am busy with some other AMDGPU and Gamescope/Proton + color work, so I haven't had a chance to follow up on that since the last discussion.

    You might think that when X takes a while to respond/review, or didn't mention that you actually needed to do Y, or missed your meeting, it's because they don't care; but it's way more likely that they are just busy and going through their own personal hell.

    One of the harsh things about open source is rationalizing that nobody owes you a review or any of their time. If people are willing to review your features and changes in any capacity, then they also have an interest in your project.

    If you can understand that, then you are going to have a much better time proposing things upstream.

    I also really want to see bcachefs in mainline, and I know you can do it. :-)

    Cheers
    - Joshie 🐸✨


    Originally posted by Brian Foster
    Yeah.. IMO the main advantages of a sort of squashed down/sanitized git history is to either aid in code review or just clean up a history that is aesthetically a mess. For the former, the consensus seems to be that no one person is going to sit down and review the entire codebase, but rather folks have been digging into peripheral areas they have experience in (i.e., locking, pagecache, etc.) to call out any major concerns. I believe Kent has also offered to give pointers or just sit down with anybody who needs assistance to navigate the codebase for review purposes. For the latter, ISTM that bcachefs has pretty much followed kernel patch conventions, with it being originally derived from another upstream kernel subsystem and whatnot.

    The flipside is that losing the history makes it incrementally more annoying for developers working on bcachefs going forward. So I can see an argument for doing things either way in general just depending on context, but it looks like there's precedent for either approach.
    Looking back at btrfs in v2.6.29, that looks like a ~900 or so commit history that was pulled in. bcachefs has a larger commit log (~2500+) at this point, but if we can do whatever magic Chris referred to to try and avoid any logistical issues for the broader kernel community, I think that would be ideal.

    BTW this is just my .02 of course, but I'm also fairly certain at least one or two developers have looked at the git log and expressed the exact opposite opinion expressed here: that seeing an upstream-like history is appreciated because it reflects a sane/compatible development process.
    That again isn't to say one way or the other is the right approach for a merge, just that it seems subjective to some degree, and so inevitably there will be different opinions...

    Brian

    Originally posted by Martin Steigerwald
    To all kernel developers.

    Kent Overstreet - 03.09.23, 05:25:55 CEST:
    > Hi Linus,
    […]

    Sometimes it is all too easy to forget to say thank you!

    Thank you to all of you for your work on the Linux kernel.

    I greatly appreciate it.

    Except for some older devices, and one (almost insanely nice) newer one, that run AmigaOS or variants of that operating system, all my computing devices, including my router and phone, run a Linux kernel. And then considering the huge number of Linux servers that actually power most of what we call the internet and its services… awesome!

    That would not have been possible without your work!

    So: Thank you!

    Best,
    --
    Martin

    Last edited by timofonic; 08 September 2023, 10:05 AM.



    • #82
      Originally posted by Khrundel
      According to a later thread, there was some kind of ambiguity: someone suggested he go ask Linus, he asked Linus exactly that, and Linus just ignored the question because it looked "obvious".

      This is a problem. Some of the questions Kent asked happen to be answered in the documentation.


      Linus Torvalds is the final arbiter of all changes accepted into the Linux kernel. His e-mail address is <[email protected]>. He gets a lot of e-mail, and, at this point, very few patches go through Linus directly, so typically you should do your best to -avoid- sending him e-mail.
      Do note that the documentation warns you not to use Linus Torvalds as the communication path. Yes, get clarification on process; but talk to Linus when your patch is in next.

      Things have gone wrong here.

      Originally posted by Khrundel
      Why can't he just submit? Well, because it became "kafkaesque". When people give one excuse after another and end up with "it is better for you not to show up for another decade", you start to question whether this is a subtle hint. Maybe nobody wants your filesystem in the kernel at all. Last month the main concern was who will maintain this code in case Kent gets hit by a bus; now, after he found a co-maintainer, this linux-next problem reappeared out of nowhere.
      And I can understand the kernel team's point: they do not want two kinds of btrfs to support, and it is easier for them not to accept the new one than to deprecate the old one. Maybe I would prefer btrfs to steal bcachefs's features too, but it seems they've already bitten off more than they can chew.
      Please do take note that Linus said he should have noticed the next-branch problem over 3 months ago. A lot of the issues Linus was spotting when he was looking at the code, the bots on next would have picked up, and if the submit checklist had been obeyed, that would also have reduced the number of defects Linus was finding.

      To get into mainline Linux, all the i's have to be dotted and all the t's have to be crossed, basically. When this has not happened, it becomes very frustrating.

      This is where the problem starts getting big.

      • Builds cleanly: with applicable or modified CONFIG options =y, =m, and =n. No gcc warnings/errors, no linker warnings/errors.


      Linus' response is here.


      Notice something: Linus here should be really pissed, because when Linus Torvalds is the one finding a compiler error, you'd better hope it's a compiler bug; if it's not, the person submitting code to the LKML is not obeying the process everyone should: "No gcc warnings/errors, no linker warnings/errors".
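
      For anyone unfamiliar with that checklist item, here is a minimal, hypothetical sketch (not taken from bcachefs) of the kind of thing it rules out; the file compiles, yet gcc flags it under -Wall, which already fails the "no gcc warnings/errors" bar:

      /* Hypothetical example, not from bcachefs: "gcc -Wall example.c"
       * prints "warning: unused variable 'unused' [-Wunused-variable]",
       * so this file would already fail the submit-checklist. */
      #include <stdio.h>

      int main(void)
      {
              int unused;     /* declared but never used: gcc -Wall warns here */

              printf("hello, checklist\n");
              return 0;
      }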

      Khrundel, think for one minute: Kent has the role of bcache maintainer, so he should be well and truly aware that you don't send anything to the LKML that has compiler/linker errors.

      Kent, like it or not, has wasted Linus' time because he has not done, many times over, what he should have done with bcachefs. Submitting to the next tree before talking to Linus is something maintainers know to do, and anyone following the processes in the Linux kernel documentation, including the all-important submit-checklist, would not have put such low-quality patches in front of Linus.

      Kent has had Linus waste his time reviewing patches that were not at the minimum quality standard required by the Linux kernel documentation. Yes, the minimum quality standard is the submit-checklist; if you cannot pass that, don't submit your patch for Linus to look at.

      Khrundel, just take some time and look at the issues Linus has been spotting in the bcachefs code, then compare them against the submit-checklist, and you will notice that Kent has majorly screwed up and has been wasting Linus Torvalds' very valuable time, because Kent has not taken the steps he should have to make sure his patches were of the required quality.

      Yes, "better not to show up for another decade" is a sign of Linux developers who play by the rules getting very frustrated with Kent for wasting their time with no end in sight. Worse, if you look, Kent has been adding new compiler failures even as he goes along patching things.

      The Linux kernel has rules mostly so that developers don't end up reviewing stacks of patches that are of too poor quality to be accepted anyhow. Like it or not, the bcachefs patches have been of too poor quality. This is another thing Linus possibly should have jumped on Kent for sooner.

      Yes, had the patches been submitted to staging or next, the automated bots would have jumped on Kent for the compiler issues as well, so nobody's time would have been wasted reviewing unacceptable patches.



      • #83
        Originally posted by blackiwid
        Again, this is a practice that happens already: Linus regularly accepts huge amounts of graphics card driver code at very late points in development. He knows that a good graphics stack is important for desktop/gaming Linux, so he pays the price of taking a risk on it so that Linux can stay relevant.
        This is not arguing in good faith. You will find that graphics card driver code has gone through next and been passed by the audit bots before the graphics driver developers ask Linus to pull it into his branch.

        Yes, human review may have been skipped, but automated review has not been.

        bcachefs skipped the automated review step that everyone normally goes through, causing a lot of parties to get worried and annoyed. Remember, the automated audit bots of Linux keep being improved; they are already at the point where they do a better code audit than over 9 out of 10 kernel maintainers can do in a manual review of under an hour.

        Yes, there is a risk in having less human review, but it's not as big as you would think, given how imperfect that process is. There are massive risks if the automated bot reviewers are skipped: every large human-written code base that has not had an automated review has, so far, had a 100 percent chance of containing very stupid coding mistakes.

        This is part of the problem: those who are long-time Linux kernel people, as in doing kernel development back in the early 2000s before the automated bots, understand how poor human code review really is. Sorry to say, most code review done by humans is like eyewitness testimony: highly unreliable.

        blackiwid, think about it: you would be annoyed with a person who insisted on trusting eyewitness testimony when the crime was caught on a security camera. That is the kind of mistake Kent has just made.

        Please note that with Kent it gets worse here. The code Kent has been putting up for review has been failing the compiler's own internal automated auditing. Yes, part of the checklist for submitting code is that your code should not have any compiler errors at all. The code graphics card driver developers submit to next very late doesn't have a single compiler error/warning.

        Like it or not, there is a minimum quality bar for code submitted to the Linux kernel, and Kent's bcachefs code has not been above that bar. Yes, people did not notice sooner, but it was going to be noticed before bcachefs got into a release.



        • #84
          Originally posted by oiaohm

          This is not arguing in good faith. You will find that graphics card driver code has gone through next and been passed by the audit bots before the graphics driver developers ask Linus to pull it into his branch.

          Yes, human review may have been skipped, but automated review has not been.

          bcachefs skipped the automated review step that everyone normally goes through, causing a lot of parties to get worried and annoyed. Remember, the automated audit bots of Linux keep being improved; they are already at the point where they do a better code audit than over 9 out of 10 kernel maintainers can do in a manual review of under an hour.
          Maybe I understand git and the GPL wrong, but usually the owner of a git tree PULLs code, and GPL code can easily be pulled legally. So what would have hindered any of the kernel devs from pulling it and adding it to linux-next? Why does it have to be Kent himself doing that? There are hundreds of no-sayers and one guy trying to make things happen; a bad ratio in my view.



          • #85
            Originally posted by oiaohm

            Yes, human review may have been skipped, but automated review has not been.
            These tests surely do some good stuff, but as far as I know, some software testing tools, for example for Python, check whether comments are formatted correctly and things like that, which has no effect on the functionality of the code and just enforces a higher coding standard. If a million unimportant things like that can block an upstreaming, I can understand that for a big mammoth project maintained by one person, who has to do other things besides trying full-time to get the code into Linux (he probably gets bug reports, adds features in another tree, etc.), that might be problematic.

            For the new NTFS code, didn't some high-ranking kernel maintainer push it and actively help to get it added? So some get a friendly, welcoming treatment where people at least hold their hand, and some get the RTFM treatment; that is a double standard.

            Funnily enough, Linus himself complained about the process back then; apparently Linus is not happy with how the process works, specifically for filesystems, and finds it too slow. So it seems to me, from the very unclear mail he wrote, that he is on Kent's side but doesn't want to piss his underlings off, and because of politics he does not accept it. That would explain his unclear, contradictory last mail:
            Torvalds further added, "We simply don't have anybody to funnel new filesystems - the fsdevel mailing list is good for comments and get feedback, but at some point somebody just needs to actually submit it, and that's not what fsdevel ends up doing."
            And he basically complains about, and advocates for, exactly what Kent did: he complained that nobody submitted the code to him... so the exact opposite happened back then and he complained about it. Sure, at a later stage, but still, this system seems broken even back then. There was and is no automatic "IF A, B, C happen, THEN the merge happens"; it's social arbitrariness. And it's never "do A and B, then it's guaranteed to be added"; that is, in my view, a frustrating and horrible process.



            • #86
              blackiwid, I am a software engineer, and in my workplace it's the responsibility of whoever opened the PR (pull request)/MR (merge request) to make sure the code passes all CI tests before requesting review (and better still, to catch problems before commit), so that other devs do not waste their precious time reviewing code that doesn't work.

              And your idea that software testing tools don't do much is just not true.

              Even for a dynamically typed language like Python, there are a lot of tools to check typing before runtime, such as mypy, plus linters for common anti-patterns.
              There are also plenty of unit tests and integration tests to make sure the code is doing the right thing.

              For the Linux kernel, which is mainly written in C, the compiler itself serves as a checker, and the code is then run under sanitizers to discover memory-related errors.
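
              To make the sanitizer point concrete, here is a minimal, hypothetical user-space sketch (in-kernel the analogous tooling is KASAN, but the principle is the same): the file compiles cleanly, yet AddressSanitizer aborts it at runtime.

              /* Hypothetical example: builds without warnings, but after
               * "gcc -g -fsanitize=address oob.c", AddressSanitizer reports
               * a heap-buffer-overflow at the marked line when run. */
              #include <stdlib.h>

              int main(void)
              {
                      int *arr = malloc(4 * sizeof(int));

                      if (!arr)
                              return 1;
                      arr[4] = 42;    /* writes one element past the end of the allocation */
                      free(arr);
                      return 0;
              }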



              • #87
                blackiwid, for a software engineer, communicating is just as important as the ability to write code, because any large enough project eventually becomes a team game and requires collaboration from at least dozens of people.

                If your pride is so high that you refuse to follow the code review practices of whatever team you are on and try to go around the rules, then you will be warned to stop, and if you keep up that bullshit behavior, you will be kicked out of the team and possibly lose your job.

                There's no such thing as the talented/senior getting a pass; even the tech leads themselves have to get approval for a feature/bugfix from someone else before being allowed to merge, unless it's just syncing different topic branches with nothing new added, or just improvements to CI.

                You will find that there are code review rules on every team that owns a relatively mature product, so your expectation that Kent can break the rules just because he's clever is just bullshit.

                Even the smartest people on Earth make mistakes, so if they refuse to accept advice from others because their ego is so high, it's time to slap them in the face to get them to calm down and realize that they are not some coding god beyond everyone else.



                • #88
                  Originally posted by blackiwid
                  For the new NTFS code, didn't some high-ranking kernel maintainer push it and actively help to get it added? So some get a friendly, welcoming treatment where people at least hold their hand, and some get the RTFM treatment; that is a double standard.
                  That's a mistake. The NTFS3 driver made it into Linux kernel 5.15 on 31 October 2021. It had been put forward for inclusion in August 2020. Yes, it took a year and two months to complete the process; not the longest submission process, but not the shortest either, because things did not all go well.

                  The Paragon Software developers, thanks to their legal department, had to read the code-submission parts of the Linux documentation before August 2020. So they started off on the right foot. Yes, even after reading the manual, not everything went right for Paragon Software in the months it took to get to mainline.

                  The Paragon Software developers did not bypass linux-next or skip the patch checklist. They did not even need to be told to do these things.

                  Yes, the VFS maintainer back in 2020 was not required to watch the general fsdevel mailing list; Linus changed that back then. The system for dealing with a missing-in-action maintainer was also added to the Linux kernel documentation at that time.

                  So with the new NTFS driver everyone was nice and friendly, and it helped that the Paragon developers had read the processes for submitting code and followed them basically to the letter.



                  • #89
                    Originally posted by NobodyXu
                    blackiwid
                    And your idea that software testing tools don't do much is just not true.
                    I never said that. I just said that some of these tests check unimportant things, so you get maybe 1000 errors of which 50-200 are important and the rest are unimportant, just best-practice standards. Those have their relevance too, but I can understand that for a person who wants to make big steps and has a full working schedule, that might be problematic.



                    • #90
                      Originally posted by blackiwid

                      I never said that. I just said that some of these tests check unimportant things, so you get maybe 1000 errors of which 50-200 are important and the rest are unimportant, just best-practice standards. Those have their relevance too, but I can understand that for a person who wants to make big steps and has a full working schedule, that might be problematic.
                      There's no such thing as an unimportant test; a test is a test, and you need to pass all tests to merge your PR/MRs.
                      Best-practice standards are important; they improve readability and reduce the chances of bugs.

                      If someone refuses to fix their tests and get CI to pass, then their PR/MR will not be merged, period.

                      It looks like you have no coding experience, so please do not let your fucking politics get into programming.

