
Linux 6.12-rc2 Released With Initial Batch Of Fixes


  • fitzie
    replied
    Originally posted by mdedetrich View Post

    I am going to ignore the rest of your drivel, but you just very eloquently shot yourself in the foot and explained the core issue. You can't just submit currently-in-development code into linux-next all of the time, because you have a high chance of breaking the build for everyone else, which is why fs-next exists in the first place. The whole point of fs-next is that it acts as the current in-development work branch (typically this is the `main` branch in standard git projects), and you don't want to push that into linux-next, which by your own admission would be the only way to get the current Linux CI to build your project and find out whether it breaks things for everyone else.

    The problem is that there is no CI for fs-next, so while working on a feature you don't get proper feedback on whether it actually builds/runs on all platforms until the last moment, when the code is submitted to linux-next for a release candidate.



    By definition, having a CI that runs frequently (typically nightly) on current in-progress code (i.e. fs-next here) is better than not having one at all. Just because something is done a certain way in Linux doesn't automatically mean it's good. In fact, considering that Linux is roughly 35 years old and the majority of people who have power/sway are greybeards, it's actually highly likely that Linux's processes really are that antiquated compared to modern software development practice. This really should not be surprising, and it's also the reason why Linus is open to suggestions; he is aware of this (and unlike you, he is not dogmatic about it).

    On top of this, because we are dealing with greybeards who typically have big egos, they have no incentive to change things, because it's what they are used to, so anyone who comes along and shines a light on this and also tries to fix it is always going to ruffle a lot of feathers.

    I mean, they are obviously extremely talented when it comes to low-level kernel engineering, but that doesn't mean they are excellent in every area.
    You keep lying. 1) linux-next is the place for builds to fail; that's kinda the entire point, to flush those issues out there. No one has ever been yelled at for causing an issue in linux-next, unless it's determined they didn't test the code themselves; it's expected that issues your local testing might not have caught will surface there. 2) There is a CI for linux-next. I've sent out several links to the dashboard, and you can see all the posts from the CI on the linux-next mailing lists. linux-next is released daily and CI-tested by many builders throughout the release cycle. And fs-next was created to avoid the broken builds that exist in linux-next, not the other way around, because the fs/vfs/mm developers want to do integration testing without worrying about other subsystems. When it was formally requested, Wilcox wrote:

    We'd like to avoid that testing be blocked by a bad patch in, say, a graphics driver.

    I'd post links to the emails from the CIs, but you've committed to your story, so I won't bother. If anyone believes your stories, they should just look at the linux-next mailing list; they'll see that mdedetrich is just living in his own reality.



  • mdedetrich
    replied
    Originally posted by fitzie View Post

    But the bcachefs author blames the fact that there's not a CI that found this, when there very much was a CI that found it. The bcachefs author seems to want some sort of thorough CI that can prevent him from submitting anything bad into linux-next/Linus, and obviously breaking builds like this is an easy thing to detect and prevent. The issue the bcachefs author has is that all the existing CIs run only after the submission to linux-next or Torvalds is made. But as Linus explains, that's kinda the way it works: no one expects the kernel to work for every possible use case they don't care about, only that what you submit is available for testing by others before it makes its way to Linus.
    I am going to ignore the rest of your drivel, but you just very eloquently shot yourself in the foot and explained the core issue. You can't just submit currently-in-development code into linux-next all of the time, because you have a high chance of breaking the build for everyone else, which is why fs-next exists in the first place. The whole point of fs-next is that it acts as the current in-development work branch (typically this is the `main` branch in standard git projects), and you don't want to push that into linux-next, which by your own admission would be the only way to get the current Linux CI to build your project and find out whether it breaks things for everyone else.

    The problem is that there is no CI for fs-next, so while working on a feature you don't get proper feedback on whether it actually builds/runs on all platforms until the last moment, when the code is submitted to linux-next for a release candidate.

    Originally posted by fitzie View Post
    The bcachefs author's defenders here have gaslit this history to the nth degree and are resorting to cheap attacks and logical fallacies to "win" their argument. They can point to the fact that most developers work through a GitHub-style pull request system and argue that the Linux system is different and therefore inferior. They can poke fun at email as a record of patch introduction, or whatever they want. Linus wrote git, so I find the accusation that he is holding on to doing things the ancient way kinda laughable.
    By definition, having a CI that runs frequently (typically nightly) on current in-progress code (i.e. fs-next here) is better than not having one at all. Just because something is done a certain way in Linux doesn't automatically mean it's good. In fact, considering that Linux is roughly 35 years old and the majority of people who have power/sway are greybeards, it's actually highly likely that Linux's processes really are that antiquated compared to modern software development practice. This really should not be surprising, and it's also the reason why Linus is open to suggestions; he is aware of this (and unlike you, he is not dogmatic about it).

    On top of this, because we are dealing with greybeards who typically have big egos, they have no incentive to change things, because it's what they are used to, so anyone who comes along and shines a light on this and also tries to fix it is always going to ruffle a lot of feathers.

    I mean, they are obviously extremely talented when it comes to low-level kernel engineering, but that doesn't mean they are excellent in every area.
    Last edited by mdedetrich; 08 October 2024, 09:12 AM.



  • fitzie
    replied
    Originally posted by Radtraveller View Post
    Can someone explain the bcachefs thing? The only thing I've been able to take away from this is: there is a process for code submission, so that the code is tested against standards to make sure it doesn't break things, which may take a while, but one guy bypasses that established process?

    I've no way to determine if the guy submitting the code is so experienced and good at what he does that it shouldn't be an issue. But then again, anyone can make a mistake or typo once in a while.

    Shrug, can't some folks that are tired of having to deal with old hardware simply fork and do a version that, while mainly staying in sync with Linux, drops the whole backward compatibility for anything older than 'x' generations of hardware? Then they can manage code submission any way they want?
    The tl;dr: submissions to Linus should be made available for others to test via the linux-next tree. If you break the build or otherwise introduce issues that others would likely have detected had they seen those patches, it will cause problems. It also indicates that you're submitting things developed at the last minute, which should require a good reason and not be standard practice.

    Long explanation:

    The release cycle for a new Linux kernel is roughly 9 weeks these days. During those 9 weeks, subsystems are expected to make an initial submission when the merge window opens, consisting of work that has previously been submitted to linux-next, and for the rest of the cycle they are supposed to send in only fixes to that version. The release cycle for the next release overlaps the current one and officially kicks off early in the current release cycle. It looks like this:

    - merge window opens for release N
    - submissions merged into Linus's tree (mostly via git tags sent to Linus with an email explaining what's being sent in)
    - rc1 released
    - linux-next now opens for release N+1
    - developers submit bugfix updates to Linus for release N
    - developers send new code meant for N+1 to linux-next
    - Linus releases rc2, rc3, ... on a weekly basis
    - Linus releases the final release N kernel after around rc7
    - merge window opens for release N+1

    This means that developers have to split their work into what's for release N vs. release N+1 basically all the time, and usually beyond that if they are working on something where it isn't clear when, or if, it will become stable. What's important is that until rc1 is released, linux-next points at release N; it is an important integration checkpoint with a dedicated developer doing the merges of hundreds of git branches, plus daily automated build tests on tons of platforms, including m68k boxes and s390x machines.
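    As a minimal sketch of how you can watch this split yourself (the remote names here are arbitrary choices of mine; the tags are the public ones both trees actually publish):

      # add the two public trees
      git remote add linus https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
      git remote add next https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git
      git fetch linus --tags
      git fetch next --tags

      # everything that went into Linus's tree during the 6.12 merge window
      git log --oneline v6.11..v6.12-rc1

      # linux-next publishes one dated tag per daily release, e.g. next-20240927
      git log --oneline -5 next-20240927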

    What happened here was that the bcachefs author submitted something to Linus that wasn't in linux-next. This is the linux-next of September 27th (a 6 MB compressed patch file), which doesn't have the bad patch,

    and this is the same day, September 27th, when the patch made its way into Linus's tree for rc1.

    What was the bad patch?

    "give bversions a more distinct name, to aid in grepping"

    So this patch, which wasn't a last-minute fix or anything like that, was sent to Linus right before rc1 was released, bypassing linux-next. Now, technically, the bcachefs author could have made it available for linux-next at the same time, but it certainly wasn't submitted to linux-next even a day before.
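    Anyone can re-check this kind of claim, by the way. A sketch, assuming the remotes from above; the grep pattern is just the patch's subject line, and <sha> is a placeholder you fill in from the first command's output:

      # find the commit in Linus's rc1 by its subject line
      git log v6.12-rc1 --oneline --grep='more distinct name, to aid in grepping'

      # list the dated linux-next tags (if any) that already contained it;
      # an empty result means it never appeared in a daily linux-next release
      git tag --contains <sha> 'next-*'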

    As soon as Linus tags rc1 (20240929), this patch breaking big-endian systems is detected.

    Sun, 29 Sep 2024 15:51:43 -0700: Linus's announcement email
    Mon, 30 Sep 2024 16:53:22 +0200: announcement of build errors/warnings

    Converting both to UTC, that is Sunday 22:51 versus Monday 14:53, roughly a sixteen-hour difference.


    But the bcachefs author blames the fact that there's not a CI that found this, when there very much was a CI that found it. The bcachefs author seems to want some sort of thorough CI that can prevent him from submitting anything bad into linux-next/Linus, and obviously breaking builds like this is an easy thing to detect and prevent. The issue the bcachefs author has is that all the existing CIs run only after the submission to linux-next or Torvalds is made. But as Linus explains, that's kinda the way it works: no one expects the kernel to work for every possible use case they don't care about, only that what you submit is available for testing by others before it makes its way to Linus.

    Even if the bcachefs author had access to a CI that tests in every way known, there are still two problems: 1) the patch wasn't made available for others to see before submission, and 2) the patch wasn't tested by the developer for more than a few hours before being submitted to Linus. Having a better CI wouldn't have fixed those problems. Linus certainly is flexible when there is some last-minute robustness feature that really needs to get in, but when he investigated the details, what he saw was an attitude of cramming things into the submission at the last minute. There are always going to be issues that even the best CI will not detect, and submitting changes made at the last minute for no good reason will end up causing conflict if and when an issue is found with a patch and the history reveals that the patch wasn't made available for anyone to check before it got to Linus.

    Quite frankly, going through this, it's actually quite hard to reconstruct exactly what happened when, because there are no real standards of immutability. The details of when the bcachefs author made his submission to linux-next (which is technically done by the bcachefs author just updating a git branch that the linux-next team pulls) are lost forever, because that branch has since been reset to track 6.13 work. Linus's tree is a good record of time, and the emails are too; the bcachefs author doesn't like to have email records of these patches for some reason.

    The bcachefs author claims that he needs to get things to his users as quickly as possible, but there's just no way to do that with the Linus/stable trees. If something is not a critical bugfix, it's just not going to get back into stable no matter what; if it's an improvement or a diagnostic, it's simply never eligible for the stable trees. This is why distro kernels carry a lot of stuff that isn't in the LTS trees, even though they might be using an LTS kernel. And if something is really urgent, it can take several days to get into Linus's tree and then another week after that to make its way back to stable.

    For those users who need patches on top of a stable kernel, the bcachefs author talks about the pain of having to tell people about a different location to get those patches, instead of just permanently announcing on the bcachefs website that his tree is the proper place to check for any last-minute urgent fixes (e.g. check this git repo for a release-N-fixes branch; if it exists, use that, otherwise use the stable tree). There is always going to be some delay for the stable branch, and oddly enough it's made even more difficult because the bcachefs author chooses to submit patches to stable in a fashion that isn't preferred by the stable tree maintainers, which causes them additional overhead and means bcachefs stable patches are handled manually by the stable maintainer after the others. This is because the bcachefs author doesn't like the preferred way.
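    On that "check this branch first" idea, a minimal sketch of what users could script, where the repo URL and the branch naming scheme are purely hypothetical:

      # does the maintainer publish an urgent-fixes branch for this release?
      git ls-remote --heads https://example.org/bcachefs.git 'release-6.12-fixes'
      # if this prints a ref, pull fixes from that branch; otherwise stay on the stable tree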

    The bcachefs author's defenders here have gaslit this history to the nth degree and are resorting to cheap attacks and logical fallacies to "win" their argument. They can point to the fact that most developers work through a GitHub-style pull request system and argue that the Linux system is different and therefore inferior. They can poke fun at email as a record of patch introduction, or whatever they want. Linus wrote git, so I find the accusation that he is holding on to doing things the ancient way kinda laughable.
    Last edited by fitzie; 08 October 2024, 08:10 AM.



  • PuckPoltergeist
    replied
    Originally posted by Alexmitter View Post
    But in all seriousness, maybe Kent is insanely smart,
    Someone really smart wouldn't offend others all the time.



  • Alexmitter
    replied
    Originally posted by mdedetrich View Post

    It is genuinely the case that Kent is insanely smart; he is one of the smartest long-time kernel contributors.
    I will be honestly impressed if you don't have a picture of Kent on your nightstand.
    But in all seriousness, maybe Kent is insanely smart, or maybe he is an egocentric one-man show who has somehow convinced the zfsdiots that he is the true savior and that everything else is gonna eat your data. The more I see of Kent, the more I see a smart developer with the mind of a 10-year-old with poor socialization.

    So where does this get us? In my opinion, bcachefs should not be in mainline now or any time soon. Kent has shown that he does not care about anything but his filesystem running on his hardware, like the true one-man show he is. And something like that has no place in the kernel.



  • mdedetrich
    replied
    Originally posted by NateHubbard View Post
    Maybe we should all just let the kernel developers argue about this, instead of us all arguing about the kernel developers arguing about this?
    The kernel developers have already stopped arguing about this, and unlike here, both Linus and Kent have backed down; there is no real drama. Kent will do his best to submit critical patches by a specific date while implementing a much better CI, so that regressions can be caught without slowing the development pace to a crawl (so it takes only a single kernel release, i.e. months and not years, to get fixes/improvements through), and Linus is open to changing the process to accommodate this.
    Last edited by mdedetrich; 07 October 2024, 10:09 AM.



  • NateHubbard
    replied
    Maybe we should all just let the kernel developers argue about this, instead of us all arguing about the kernel developers arguing about this?



  • tx_rx
    replied
    Originally posted by PuckPoltergeist View Post
    That shouldn't stop him from compile-testing, should it?
    I thought we already established that it wasn't compile-time testing that was the problem, but the run-time testing.



  • PuckPoltergeist
    replied
    Originally posted by mdedetrich View Post

    Read the fuken mailing list. He's not talking about setting up QEMU; he is talking about how long the filesystem tests take to run in QEMU, which for something that has to be purely emulated is an insanely long amount of time.
    That shouldn't stop him from compile-testing, should it?



  • mdedetrich
    replied
    Originally posted by OneTimeShot View Post

    Wow - HE came up with the idea of running the regression tests before pushing the code and screwing people over? I totally see why Linus was wrong to doubt him now.
    Massive facepalm. It would help if you read what was actually going on instead of keyboard-warrioring for a moment.

