Proposed: A Monthly Ubuntu Release Cycle


  • #11
    That's a really bad idea!

    The Ubuntu development team has proven over time that it cannot cope with real development, software integration, and bug fixing.
    The Ubuntu people have one thing in mind: profit opportunities.
    That's not a bad thing, of course. It's bad only when it's the one thing you keep in mind.
    A lot of work goes into marketing, colors, icons, themes, and other eye candy.
    But as far as stability and effectiveness go, they are far behind.
    Most of the time they simply push bugs upstream, either to Debian or to the (actual) application developers.
    A few examples can help.
    The Ubuntu desktop doesn't use a desktop-grade kernel, meaning one tuned for responsiveness. Something you'd expect in ... ehm ... a desktop edition.
    They use a server-grade kernel instead.
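
    For what it's worth, you can check which preemption model a kernel was built with; desktop-oriented builds enable full preemption, while Ubuntu's generic kernel has typically shipped with voluntary preemption:

    Code:
    # CONFIG_PREEMPT=y         -> full preemption (desktop/low-latency builds)
    # CONFIG_PREEMPT_VOLUNTARY -> the middle ground Ubuntu's generic kernel uses
    # CONFIG_PREEMPT_NONE      -> classic server behavior
    grep 'CONFIG_PREEMPT' "/boot/config-$(uname -r)"
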
    Not to mention when they decided to jump on the KDE 4 bandwagon and left KDE users with no usable KDE at all.
    That was a marketing decision. They hadn't even tried the KDE 4 desktop or read the friendly documentation (which stated that KDE 4 was not intended for end users).
    Finally, when they do come up with a fix for some issue, they apply it to the future release, not the current one or the latest LTS.
    As if non-techie users were longing to do a complete distribution upgrade every six months. That is not the real use case.

    A faster release cycle would simply amplify this attitude, leading to a fast suicide: bugs would accumulate in older versions while newer ones introduced new bugs, with no time or resources to fix any of them.

    Ubuntu deserves credit for really pushing Linux onto the desktops of non-geeks and non-hackers. Granted, they have been neither the first nor the best at it.
    But nowadays, when you talk about desktop Linux, Ubuntu is among the two or three names you hear.
    A suicide here would be a big loss for the Linux community.

    Maybe a rolling distribution, already discussed in the past, would be a better idea instead. But here again, the quality assurance process would have to matter far more than eye candy.

    So please, Ubuntu devs, don't do monthly releases. Thanks.



    • #12
      IMO it's also a bad idea. Not for the MAIN Ubuntu, anyway.



      • #13
        Originally posted by allquixotic View Post
        I'd like to see a Linux distro that focuses on making a solid release of the core platform on a very short schedule, and that comes up with a standard, binary-compatible packaging format for all user applications, so that upstreams can do releases in a single format and have them work with all the distros.
        We would all like that, but it will never happen! It is more a political problem than a technical one. It would lead to the commoditization of the "distro". Do you think Mark Shuttleworth would invest his fine millions into a product that could be safely swapped for any other standardized, binary-compatible distro made by a competitor or a couple of geeks in their basement? I don't think so.

        The release policy, the package format, and ultimately the repository are what set a distro apart from the rest. Otherwise we would all have switched to Arch by now.



        • #14
          Maybe it's time for *BSD?

          Those OSes have a major pro: they are an OS *and* a distribution.
          This means that the kernel itself is shipped along with a (rather) complete operating environment.
          Everything is under one project's control, and customization can be done on top of it.
          The major con is that they have very limited resources, due to the extreme fragmentation of the open source arena:
          too many projects, each with too few resources to reach high-quality releases, all doing more or less the same thing.
          There should instead be a different organization, with little (or no) overlap among projects.
          But this is just a dream. I also have one.



          • #15
            Great idea, but it won't help Unity.

            It has a tainted image.

            New features and bugfixes each month would be more nauseating than the Unity updates during the Oneiric cycle. Like watching paint that never dries.



            • #16
              Originally posted by allquixotic View Post
              What you said is true, but the other benefit of this monthly cycle (which I personally value much more than the sum total of Canonical's own software projects) is that you won't have to wait an inane amount of time to receive a new upstream version of a package without compiling it from source.

              As it stands, something like this always happens:

              1. Ubuntu is released in $month with Rhythmbox version $ver.
              2. Rhythmbox $ver+1 is released two weeks later, but because of the strict SRU process, users don't see $ver+1 until the next version of Ubuntu, in $month+6.
              3. Two days after Ubuntu's beta freeze, Rhythmbox $ver+2 is released. Users will get $ver+1 in Ubuntu+1, but they won't see $ver+2 until Ubuntu+2, which is more than six months away (the current version of Ubuntu has to ship with $ver+1 first!).

              If this cycle is timed exactly wrong (and you know Murphy: it's going to be timed as wrong as it can be 90% of the time), users can end up waiting between nine months and a year before they see even a minor release of a software package in an Ubuntu release, let alone a major version. And these minor releases are, very often, small evolutionary feature enhancements that benefit the user, coupled with bugfixes, and well tested to ensure there are no regressions.

              For the many packages that do have high-quality testing teams outside of Ubuntu making sure that each release is solid, the six-month release schedule of Ubuntu is absolutely stupid. Users get stale packages, and upstream maintainers get long-delayed user feedback. A user might come along a year after a feature release and complain about some problem. The maintainer's response: "Well, that's nice, but I haven't touched that code for nearly a year, and I can barely remember how I wrote it. Maybe I'll work on it when I get around to it." Or: "I've long since moved on from that project; maybe someone else can help you."

              If we didn't have such an insanely long release cycle, things like this wouldn't happen.


              And, you know, other OSes don't hold back applications between releases. Imagine if you were stuck with whatever version of Winamp was available when Microsoft released Windows 7, and you couldn't update it again until Windows 8. Or if you were stuck with the version of Adobe Photoshop that shipped alongside Mac OS X 10.6, unable to update to the next version of CS until OS X 10.7. The wait would be absolutely unjustifiable, and there'd be a user revolt.

              That's because Linux distros have the unique quality of trying to make every software application under the sun a "part of the operating system". While that certainly has its advantages (each application gets free integration, testing, and QA from the distro), it also has major disadvantages, as I've laid out. Personally, I'm not convinced that the advantages outweigh the disadvantages, and I'd like to see a Linux distro that focuses on making a solid release of the core platform on a very short schedule, and that comes up with a standard, binary-compatible packaging format for all user applications, so that upstreams can do releases in a single format and have them work with all the distros (provided that the package contains binaries for the architecture your running kernel supports). The QA staff of the distro itself would focus only on integrating and testing the core platform, which includes:
              • The kernel
              • Drivers (both in-kernel and FUSE, mainline and any desired third-party drivers)
              • Hardware compatibility and device functionality
              • The boot sequence
              • The X server
              • The most standard programs for the default, "flagship" desktop
              • Any packages included on their 700MB Live CD


              The problem I see is that distros' development resources don't scale with the number of third-party Free Software packages that they agree to integrate into the distro. The testing, debugging, and upstream I/O resources simply aren't present in most distro communities to satisfactorily hunt down every last issue.

              What I think should ideally happen is that developers looking to integrate their software into distros should do their own QA prior to making a release. They can work with distros and end-users to do that, sure, but when the packaged version goes into the distro, it had better have received good testing. This makes it very easy for the distro to integrate the package, and the chances are much higher that users will report a positive experience with the package rather than filing bugs.

              Positive examples: Firefox, OpenOffice, MySQL. These projects have significant corporate and volunteer communities that extensively test and QA the software before making an official release. When distros pick up an official release, the quantity and severity of defects they have to address themselves is exceedingly low, even keeping the extreme complexity of these large projects in mind.

              Neutral examples: binutils, the kernel, GNOME. These projects do have significant QA and corporate and volunteer testing behind them. They have large user communities, and a lot of distros pick them up. But for some reason, these packages are either inherently bug-ridden due to the nature of the software (e.g. the kernel and the unpredictable ways it interacts with whatever random hardware people run it on), or they go through phases of stability and instability depending on whether a major transition is in progress (e.g. GNOME, which was rock-solid stable and predictable, Firefox-like, from GNOME 2.0 through 2.30, but became absolutely wild and zany once 3.0 hit and APIs started to break).

              Negative examples: MPD, KDE (to some extent), Wine, and my own software. These projects have a much smaller community, a smaller testing base, and an open development methodology where they just release whatever they have in a more-or-less functional state as a "stable release" and hope that everyone likes it. Of course, this kind of hopeful engineering is really bad for distros, because more often than not, the distros have to field reports from users when the software doesn't work. In the absence of cathedral-like QA on the upstream side, the distro ends up paying the QA tax itself, often even after its "stable" release has shipped.

              What do users want? They want every useful Free Software package to be integrated into a distro in such a way that the software is *stable* and current.

              How is this accomplished in the proprietary world? We can at least look at the upsides of the proprietary software development methodology, try to learn from it, and overcome the weaknesses of the bazaar model to the extent possible. That is why I think the proprietary methodology is interesting.

              What happens with proprietary software? Well, Microsoft doesn't give two shits about your third-party program. They won't test it for you. They won't ship it in the next release of Windows (unless your company happens to be named "Diskeeper"). What Microsoft says is, "Here are our APIs. They are going to be stable. You can rely on them. Our platform should work reasonably well. Feel free to write applications for it!" And then third-party vendors (who want to release stable, well-working software) each have to do their own independent testing to ensure that their software works well, and sometimes they even have to care what other software is installed on the system. For instance, it's possible for two VPN clients from two different vendors to conflict on the same system, but if the vendors are aware of each other and can work out a way to keep their software from clashing in an ugly way for the user, they will go and do that. Or sometimes Microsoft will help them work it out by coming up with an API that manages multiple implementations of a single piece of functionality (this is what the Wireless Zero Configuration service's enable/disable functionality falls under: vendors can, at their choice, override Microsoft's implementation of the userspace wireless stack and write their own. But it's the vendor's choice, and sometimes even the user's choice).

              So when that shiny new version of your favorite software is released, you don't have to wait. You can just grab it, install it, and it works.

              I think that level of flexibility is what Linux misses the most. PPAs try to cover for it, but it's really our decision to keep breaking library APIs over and over and over to "innovate" that is ultimately killing flexibility for our users.
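
              For reference, the PPA workaround usually looks like this (the PPA name below is made up for illustration):

              Code:
              # Pull a newer build of a stale package from a PPA, then install it.
              # "ppa:somedev/rhythmbox" is a hypothetical archive, not a real one.
              sudo add-apt-repository ppa:somedev/rhythmbox
              sudo apt-get update
              sudo apt-get install rhythmbox

              Every such PPA is one more third-party source to trust, and, as noted further down, release upgrades disable them.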

              So our options are: keep the status quo; do monthly releases and try to QA the entire set of supported software; do monthly releases and try to QA only the core packages; or create a stable ABI and a standardized packaging format to encourage third-party developers to create binaries that work across distros and continue to work on newer distros for quite some time.

              Which of those options sounds best to you?

              I hate the status quo. It makes me rage. I have to compile so much stuff from source, and it's a huge waste of time, especially now that I've mastered compiling stuff from source and no longer really learn anything from the experience (particularly when I don't encounter any problems during the build; I learn from problems, but not from trouble-free success).

              I haven't ever seen a distro do monthly releases, so the jury's still out on that one. I'd like to see it done so I can decide if it makes any sense. I'd want to see it actually in practice.

              Creating a stable ABI and a standardized packaging format seems like the best idea to me. We need to draw a dotted line around the application-level API that the Linux platform offers to userland applications. All the packages that fall within the dotted line can change any interfaces that are designed to be used by other components within the dotted line, but they cannot change their interfaces to the packages outside it. So the kernel can change internals as much as it wants, and libdrm (being a core package) would be expected to adapt to changes in the kernel DRM, and so forth. But once you start talking about real applications, like web browsers and desktop environments, the APIs those call need to be nailed down and printed in indelible ink for years at a time. Then I could build a standard installable package file today for my third-party software, and it would continue to work on future Linux distros for three or four years, if not longer, despite significant evolution within the internals of the platform. That's my dream.
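
              One rough sketch of what nailing an interface down could mean mechanically: GNU ld's symbol versioning lets a library publish a fixed set of public symbols and hide everything else. Here libfoo, its two symbols, libfoo.map, and foo.c are all hypothetical names:

              Code:
              # Sketch: libfoo.map (hypothetical) declares the public ABI:
              #   LIBFOO_1.0 { global: foo_init; foo_process; local: *; };
              # Build the library so only those versioned symbols are exported:
              gcc -shared -fPIC -Wl,--version-script=libfoo.map \
                  -Wl,-soname,libfoo.so.1 -o libfoo.so.1.0 foo.c
              # Verify the exported, versioned interface:
              objdump -T libfoo.so.1.0

              A binary linked against libfoo.so.1 would then keep running on newer releases for as long as the LIBFOO_1.0 symbols are preserved, however much the internals change.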

              Interesting thought, and I for one would love to see Ubuntu move to a more core-oriented development model.
              Too often the packages are too old to be useful in the current Ubuntu release, and one has to rely on PPAs, which is not very practical since these get disabled with the next major release, etc.
              So yes, a change in the development cycle/model is needed.



              • #17
                Originally posted by LLStarks View Post
                Great idea
                Which one?



                • #18
                  Originally posted by cl333r View Post
                  Exactly.
                  I'm surprised some people like this idea.
                  The PPAs are good enough for those who want a newer version of some app.
                  Except that instead of maintaining both main and various PPAs, Ubuntu staff would then have to deal with main only.
                  I'm all for the "release it when it's ready" concept.



                  • #19
                    Originally posted by bug77 View Post
                    Except that instead of maintaining both main and various PPAs, Ubuntu staff would then have to deal with main only.
                    I'm all for the "release it when it's ready" concept.
                    +1. But what about fixes? Would you prefer to wait until the next release?
                    IMHO, "release it when it's ready" is OK as long as bug fixes go to the current release first and only later to the next one.
                    Ubuntu does it the other way around, with a twist: some fixes never reach the current release at all.



                    • #20
                      Originally posted by CTown View Post
                      I'm glad Ubuntu is finally going to become more like a rolling distribution. This way new features can wait; there's no need to put unfinished features into a release when the next release is only a few weeks away. This will probably benefit Unity users the most, since Canonical won't have to wait months to ship nice-sized improvements.
                      I wonder where you draw that conclusion from; this is a proposal made by someone outside Canonical. That said, I would like it if either Debian or Ubuntu officially adopted a rolling-release channel. I believe Debian is already discussing this.

                      PPAs are fine and I very much appreciate them, but I'm curious why Debian hasn't implemented them.

