EXT4 & Btrfs Regressions In Linux 2.6.36

  • #11
    Originally posted by Smorg View Post
    Without some sandbox for all the developers of all the various systems to play together in without having to worry about breaking things, how is anybody to know how things will work when everybody gets together all at once to merge in their possibly massive and incompatible changes within the tiny allowed timeframe (only to be disallowed from making the possibly sweeping changes necessary in order to fix things properly)? This sounds like a terrible development model. And why is it called a release candidate? There is absolutely zero chance that -rc1 is bug free and will be released as-is, and thus, not a candidate for release. It's that simple.

    I need to finish school and learn C so I can help fix this shit. Totally ridiculous. The world needs more CK's. It also sounds like we need a way to make big changes which break things and then have the ability to eventually get those changes mainline only after everything gets integrated properly. Evolutionary change sometimes sucks and causes big ugly codebase.
    How old are you? High school still?
    You might want to familiarize yourself with linux-next, and linux-staging. Linux-next is like a pre-RC, i.e. BETA. Linux-staging is like a pre-pre-RC, i.e. ALPHA. You have your completely insane changes that will break all kinds of things in their current form.... they go into linux-staging, which you can pretty well assume will be so completely broken that it might eat your dog. At some point, if it cleans up reasonably well and starts to shape up into something useful, it ends up in next, which is basically what the NEXT kernel version will become.... i.e., the current RC is 2.6.36, so next has stuff that you can expect to see going into 2.6.37... i.e. 2.6.37 BETA. These are big changes, but not so big that they pose any particularly major risk -- they've already passed staging (if applicable) and most likely they will build and run.

    Now you want to make some MAJOR architectural changes that will wreak all kinds of havoc.... that goes into staging. If other stuff breaks in a major way, it may be your responsibility to fix it.

    Of course, not everything can just get dumped into staging... it still has to pass some basic tests and convince those in charge that it is worthy.... in other words, it can't just be raw insanity.

    So... if you're just interested in fixing bugs, then it will probably just go straight in as a bug fix to be incorporated in the next point release. A little more significant of a change and it goes in next. Major, but well conceived architectural change and it goes in staging. Complete insanity and it goes nowhere.

    • #12
      Originally posted by Smorg View Post
      The world needs more CK's.
      This part I can agree with 100%.

      • #13
        Originally posted by Smorg View Post
        Without some sandbox for all the developers of all the various systems to play together in without having to worry about breaking things, how is anybody to know how things will work when everybody gets together all at once to merge in their possibly massive and incompatible changes within the tiny allowed timeframe (only to be disallowed from making the possibly sweeping changes necessary in order to fix things properly)? This sounds like a terrible development model. And why is it called a release candidate? There is absolutely zero chance that -rc1 is bug free and will be released as-is, and thus, not a candidate for release. It's that simple.

        I need to finish school and learn C so I can help fix this shit. Totally ridiculous. The world needs more CK's. It also sounds like we need a way to make big changes which break things and then have the ability to eventually get those changes mainline only after everything gets integrated properly. Evolutionary change sometimes sucks and causes big ugly codebase.
        There used to be Linux alphas and betas, but nearly nobody dared to test them. Calling everything a release candidate fixed this problem.

        • #14
          Originally posted by Goderic View Post
          There used to be Linux alphas and betas, but nearly nobody dared to test them. Calling everything a release candidate fixed this problem.
          Lol. I liked this reply more than the one above about -staging and -next trees (though that is interesting too).

          • #15
            Originally posted by Xheyther View Post
            I don't know where you have seen that a release candidate equals a bug-free release.
            The name sounds pretty self-explanatory to me. What else could "release candidate" possibly mean? I'm aware it isn't formally defined in any standard, unlike alpha/beta.

            Also, it's not because rc1 is the first release of the code that no testing was done before it.
            It's the first release where everything has actually been assembled and feature-frozen for wider testing, which I would imagine makes refactoring beyond simple bug fixing hard when non-trivial performance regressions need fixing. My point is that regressions discovered during this phase are inevitable. With all the SDLC possibilities you could come up with to leverage git, I would think there would be some way to catch regressions before it's too late to fix them properly.

            I know they probably do something better than "Hey Linus plx merge my hax kkthx. Hope it doesn't break anything on your end! Guess we'll find out when Phoronix discovers it."

            Your whole post proves that you have nearly no understanding, if not absolutely none, of how the kernel is structured and organized.
            Completely irrelevant to the testing/release methodology, is it not?

            You will have to learn a little bit more than C before you can "fix this shit".
            Obviously.

            How old are you? High school still?
            Ya. All the languages I play with (currently Java in computer science) require basically no knowledge of how to implement anything in an operating system...

            You might want to familiarize yourself with linux-next, and linux-staging. Linux-next is like a pre-RC, i.e. BETA. Linux-staging is like a pre-pre-RC, i.e. ALPHA. You have your completely insane changes that will break all kinds of things in their current form.... they go into linux-staging, which you can pretty well assume will be so completely broken that it might eat your dog. At some point, if it cleans up reasonably well and starts to shape up into something useful, it ends up in next, which is basically what the NEXT kernel version will become.... i.e., the current RC is 2.6.36, so next has stuff that you can expect to see going into 2.6.37... i.e. 2.6.37 BETA. These are big changes, but not so big that they pose any particularly major risk -- they've already passed staging (if applicable) and most likely they will build and run.

            Now you want to make some MAJOR architectural changes that will wreak all kinds of havoc.... that goes into staging. If other stuff breaks in a major way, it may be your responsibility to fix it.

            Of course, not everything can just get dumped into staging... it still has to pass some basic tests and convince those in charge that it is worthy.... in other words, it can't just be raw insanity.

            So... if you're just interested in fixing bugs, then it will probably just go straight in as a bug fix to be incorporated in the next point release. A little more significant of a change and it goes in next. Major, but well conceived architectural change and it goes in staging. Complete insanity and it goes nowhere.
            The distributed development seems like it might complicate regression testing where performance is affected by complex interactions between large-scale components. I suppose that all depends on the exact implementation and how modularized things can be. If you had inefficient code paths which only arise when various branches are merged into an rc, then bug-fix workarounds could be hard, assuming everybody's code performed well when tested independently.

            If there is a cascading waterfall-like thing, then what's up with the "merge requests"? Wouldn't Linus just make a copy of the next most-tested version, rather than assembling multiple branches directly into what eventually becomes an rc?

            • #16
              Originally posted by Smorg View Post
              Ya. All the languages I play with (currently Java in computer science) require basically no knowledge of how to implement anything in an operating system...
              If you know Java then just try some C-by-example code... the first time I programmed C it was like: "I have no idea... let's do it like Java and see what happens," and it worked... until I hit the necessity for malloc, which took some time to figure out, because I was accustomed to the Java automagic way.
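
              A rough sketch of that jump, using nothing beyond the standard C library (a toy example, not kernel code): in Java the runtime allocates and collects memory for you, while in C you call malloc and free yourself.

              Code:
              /* Minimal manual memory management in C -- the part Java's
               * garbage collector normally hides from you. */
              #include <stdio.h>
              #include <stdlib.h>

              int main(void)
              {
                  size_t count = 10;

                  /* In Java you would just write `new int[count]` and forget about it;
                   * in C you ask for the bytes yourself and must check for failure. */
                  int *values = malloc(count * sizeof *values);
                  if (values == NULL) {
                      fprintf(stderr, "malloc failed\n");
                      return 1;
                  }

                  for (size_t i = 0; i < count; i++)
                      values[i] = (int)(i * i);

                  for (size_t i = 0; i < count; i++)
                      printf("%zu squared is %d\n", i, values[i]);

                  /* No garbage collector here: forget this call and the memory leaks. */
                  free(values);
                  return 0;
              }

              Compile it with any C compiler (e.g. gcc example.c) and it prints the squares of 0 through 9.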

              • #17
                Originally posted by Smorg View Post
                If there is a cascading waterfall-like thing, then what's up with the "merge requests"? Wouldn't Linus just make a copy of the next most-tested version, rather than assembling multiple branches directly into what eventually becomes an rc?
                Sometimes Linus or the developer might not want to merge a specific patch even if it's already living in testing, or there are alternatives which could also be considered. (Just my guess.)

                • #18
                  Originally posted by Smorg View Post
                  [..]
                  I cannot add a lot to what others have commented about the carefree nature of your post, other than possibly: good luck fixing the Linux kernel with Java.

                  • #19
                    And isn't it rather ironic that pretty much the only platform with which you could sensibly write a kernel in Java happens to be designed by Linus? *chuckle*

                    • #20
                      [popcorn mode on]

                      Originally posted by Smorg View Post
                      Without some sandbox for all the developers of all the various systems to play together in without having to worry about breaking things, how is anybody to know how things will work when everybody gets together all at once to merge in their possibly massive and incompatible changes within the tiny allowed timeframe (only to be disallowed from making the possibly sweeping changes necessary in order to fix things properly)? This sounds like a terrible development model. And why is it called a release candidate? There is absolutely zero chance that -rc1 is bug free and will be released as-is, and thus, not a candidate for release. It's that simple.

                      I need to finish school and learn C so I can help fix this shit. Totally ridiculous. The world needs more CK's. It also sounds like we need a way to make big changes which break things and then have the ability to eventually get those changes mainline only after everything gets integrated properly. Evolutionary change sometimes sucks and causes big ugly codebase.
                      [popcorn mode off]
