Proposed: A Monthly Ubuntu Release Cycle

  • Proposed: A Monthly Ubuntu Release Cycle

    Phoronix: Proposed: A Monthly Ubuntu Release Cycle

    A proposal for a new Ubuntu release process was written today. Under it, Ubuntu would abandon its traditional six-month release cycle in favor of monthly releases. Yep, once a month. The benefit is that new Ubuntu features would no longer be forced to land every six months but would land when a given feature is actually mature and ready. This is quite different from Ubuntu's current release process, but the proposal comes from Scott James Remnant, the former Canonical employee and Ubuntu Developer Manager...


  • #2
    I like this idea a lot.

    I'm glad Ubuntu is finally going to become more like a rolling distribution. This way new features can wait; there's no need to put unfinished features into a release when the next release is only a few weeks away. That will probably benefit Unity users the most, since Canonical wouldn't have to wait months to deliver sizable improvements.



    • #3
      Yes, I am trolling.

      Yes, I hate Unity. I hate Unity. I hate Unity. I hate GNOME Shell too.
      (And I do use Compiz, and I like it very much.)



      • #4
        If this takes off, they might as well do what openSUSE did and add an optional rolling-release repository. It would bring in more people wanting to test upcoming features.



        • #5
          Originally posted by CTown View Post
          I'm glad Ubuntu is finally going to become more like a rolling distribution. This way new features can wait; there's no need to put unfinished features into a release when the next release is only a few weeks away. That will probably benefit Unity users the most, since Canonical wouldn't have to wait months to deliver sizable improvements.
          What you said is true, but the other benefit of this monthly cycle (which I personally value much more than the sum total of Canonical's own software projects) is that you won't have to wait an unreasonably long time to get a new upstream version of a package without compiling it from source.

          As it stands, something like this always happens:

          1. Ubuntu is released in $month with Rhythmbox version $ver.
          2. Rhythmbox version $ver+1 is released two weeks later, but because of the strict SRU process, users don't see $ver+1 until the next version of Ubuntu, $month+6 later.
          3. Two days after Ubuntu's beta freeze, Rhythmbox $ver+2 is released. Users of Ubuntu+1 will get $ver+1 in Ubuntu+1, but they won't see $ver+2 until Ubuntu+2, which is more than six months away (the current version of Ubuntu has to ship with $ver+1 first!).

          If this cycle is timed exactly wrong (and you know Murphy: it's going to be timed as wrong as it can be 90% of the time), users can end up waiting between nine months and a year before even a minor release of a package shows up in an Ubuntu release, let alone a major version. And these minor releases are very often small, evolutionary feature enhancements that benefit the user, coupled with bugfixes, and are well tested to ensure there are no regressions.
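          To put rough numbers on that, here's a minimal sketch. The 45-day gap between beta freeze and release and the exact dates are illustrative assumptions, not Ubuntu's actual schedule:

          ```python
          from datetime import date, timedelta

          # Assumption: a fixed gap between beta freeze and the release date.
          FREEZE_BEFORE_RELEASE = timedelta(days=45)

          def first_ubuntu_that_ships(upstream_release, ubuntu_releases):
              """Return the first Ubuntu release whose freeze this upstream version made."""
              for ubuntu in sorted(ubuntu_releases):
                  if upstream_release <= ubuntu - FREEZE_BEFORE_RELEASE:
                      return ubuntu
              raise ValueError("no Ubuntu release late enough in the list")

          # An illustrative six-month cadence.
          releases = [date(2011, 10, 13), date(2012, 4, 26), date(2012, 10, 18)]

          # $ver+2 lands two days after the October beta freeze...
          ver2 = releases[0] - FREEZE_BEFORE_RELEASE + timedelta(days=2)
          ships = first_ubuntu_that_ships(ver2, releases)
          print(f"released {ver2}, first shipped in an Ubuntu release on {ships} "
                f"({(ships - ver2).days} days later)")  # ~8 months; users of the
                                                        # just-released Ubuntu wait longer still
          ```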

          For the many packages that do have high-quality testing teams outside of Ubuntu making sure that what they release is solid, Ubuntu's six-month release schedule is absolutely stupid. Users get stale packages, and upstream maintainers get long-delayed user feedback. A user might come along a year after a feature release and complain about some problem; from the maintainer's point of view: well, that's nice, but I haven't touched that code for nearly a year, and I can barely remember how I wrote it. Maybe I'll work on it when I get around to it. Or: I've long since moved on from that project; maybe someone else can help you.

          If we didn't have such an insanely long release cycle, things like this wouldn't happen.


          And, you know, other OSes don't hold back applications between releases. Imagine if you were stuck with whatever version of Winamp was available when Microsoft released Windows 7, and you couldn't update it again until Windows 8. Or if you were stuck with the version of Adobe Photoshop that shipped alongside Mac OS X 10.6 and couldn't update to the next version of CS until OS X 10.7. The wait would be absolutely unjustifiable, and there'd be a user revolt.

          That's because Linux distros have the unique quality of trying to make every software application under the sun a "part of the operating system". While that certainly has its advantages (each application gets free integration, testing, and QA from the distro), it also has major disadvantages, as I've laid out. Personally, I'm not convinced that the advantages outweigh the disadvantages, and I'd like to see a Linux distro that focuses on making a solid release of the core platform on a very short schedule, and that comes up with a standard, binary-compatible packaging format for user applications, so that upstreams can do a release in a single format and have it work with all the distros (provided the package contains binaries for an architecture your running kernel supports). The QA staff of the distro itself would focus only on integrating and testing the core platform (a rough sketch of the idea follows the list below), which includes:
          • The kernel
          • Drivers (both in-kernel and FUSE, mainline and any desired third-party drivers)
          • Hardware compatibility and device functionality
          • The boot sequence
          • The X server
          • The most standard programs for the default, "flagship" desktop
          • Any packages included on their 700MB Live CD
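          To make that split concrete, here's a sketch of what a cross-distro binary package could declare against such a core platform. Everything below is hypothetical; no distro or packaging tool actually uses these names or fields:

          ```python
          # Hypothetical description of the frozen "core platform" that the distro's
          # QA team would actually test and guarantee.
          CORE_PLATFORM = {
              "abi_level": 3,  # bumped only when the platform breaks compatibility
              "stable_libs": {"libc", "libX11", "gtk+-3.0", "glib-2.0", "alsa"},
          }

          # A third-party application, built once by upstream, declaring what it
          # needs from the platform rather than from a specific distro release.
          package = {
              "name": "example-media-player",  # hypothetical application
              "arch": "x86_64",
              "min_abi_level": 2,              # oldest platform ABI it was built against
              "needs_libs": {"libc", "gtk+-3.0", "alsa"},
          }

          def installable(pkg, platform):
              """True if the platform is new enough and every library the package
              links against is one the platform promises to keep stable."""
              return (pkg["min_abi_level"] <= platform["abi_level"]
                      and pkg["needs_libs"] <= platform["stable_libs"])

          print(installable(package, CORE_PLATFORM))  # True: ships today, keeps working later
          ```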


          The problem I see is that distros' development resources don't scale with the number of third-party Free Software packages they agree to integrate. The testing, debugging, and upstream-communication resources simply aren't present in most distro communities to satisfactorily hunt down every last issue.

          What I think should ideally happen is that developers looking to integrate their software into distros should do their own QA prior to making a release. They can work with distros and end-users to do that, sure, but when the packaged version goes into the distro, it had better have received good testing. This makes it very easy for the distro to integrate the package, and the chances are much higher that users will report a positive experience with the package rather than filing bugs.

          Positive examples: Firefox, OpenOffice, MySQL. These projects have significant corporate and volunteer communities that extensively test and QA the software before making an official release. When distros pick up an official release, the quantity and severity of defects they have to address themselves is exceedingly low, even keeping the extreme complexity of these large projects in mind.

          Neutral examples: binutils, kernel, GNOME. These projects do have significant QA and corporate and volunteer testing behind them. They have large user communities and a lot of distros pick them up. But for some reason, these packages are either inherently bug-ridden due to the nature of the software (e.g. the kernel and the unpredictable ways it interacts with whatever random hardware people try to run it on), or they go through phases of stability and instability depending on whether a major transition is in progress (e.g. GNOME, which was rock-solid stable and predictable, like Firefox, from GNOME 2.0 through 2.30, but became absolutely wild and zany once 3.0 hit and APIs started to break).

          Negative examples: MPD, KDE (to some extent), Wine, and my own software. These projects have a much smaller community, a smaller testing base, and an open development methodology where they release whatever they have in a more-or-less functional state as a "stable release" and hope that everyone likes it. Of course, this kind of hopeful engineering is really bad for distros, because more often than not, the distros have to field reports from users when the software doesn't work. In the absence of cathedral-like QA on the upstream side, the distro ends up paying the QA tax itself, often even after its "stable" release has shipped.

          What do users want? They want every useful Free Software package to be integrated into a distro in such a way that the software is stable and current.

          How does the proprietary software world accomplish this? We can at least look at the upsides of the proprietary development methodology, try to learn from them, and overcome the weaknesses of the bazaar model where possible. That's why I find the proprietary methodology interesting.

          What happens with proprietary software? Well, Microsoft doesn't give two shits about your third-party program. They won't test it for you. They won't ship it in the next release of Windows (unless your company happens to be named "Diskeeper"). What Microsoft says is, "Here are our APIs. They are going to be stable. You can rely on them. Our platform should work reasonably well. Feel free to write applications for it!"

          And then third-party vendors (who want to release stable, well-working software) each have to do their own independent testing to ensure that their software works well, and sometimes they even have to care what other software is installed on the system. For instance, it's possible for two VPN clients from two different vendors to interact and conflict on the system, but if the vendors are aware of each other and can work out a way to keep their software from conflicting in an ugly way for the user, they will go and do that. Or sometimes Microsoft will help them work it out by coming up with an API that manages multiple implementations of a single piece of functionality (this is what the Windows Wireless Zero-Config Service enable/disable functionality falls under: vendors can, at their choice, override Microsoft's implementation of the userspace wireless stack and write their own. But it's the vendor's choice, and sometimes even the user's choice.)

          So when that shiny new version of your favorite software is released, you don't have to wait. You can just grab it, install it, and it works.

          I think that level of flexibility is what Linux misses the most. PPAs try to cover for it, but it's really our decision to keep breaking library APIs over and over and over to "innovate" that is ultimately killing flexibility for our users.

          So our options are: keep the status quo; do monthly releases and try to QA the entire set of supported software; do monthly releases and QA only the core packages; or create a stable ABI and a standardized packaging format so that third-party developers can build binaries that work across distros and keep working on newer distros for quite some time.

          Which of those options sounds best to you?

          I hate the status quo. It makes me rage. I have to compile so much stuff from source, and it's a huge waste of time, especially now that I've mastered the process and no longer really learn anything from it (I learn from build problems, not from trouble-free successes).

          I've never seen a distro do monthly releases, so the jury's still out on that one. I'd like to see it tried in practice before deciding whether it makes any sense.

          Creating a stable ABI and a standardized packaging format seems like the best idea to me. We need to draw a dotted line around the application-level API of the Linux platform. Packages that fall within the dotted line can change any interfaces that are designed to be used by other components within the dotted line, but they cannot change their interfaces to the packages outside it. So the kernel can change its internals as much as it wants, and libdrm (being a core package) would be expected to adapt to changes in the kernel DRM, and so forth. But once you start talking about real applications, like web browsers and desktop environments, the APIs they call need to be nailed down and printed in indelible ink for years at a time. Then I could build a standard installable package file today for my third-party software, and it would continue to work on future Linux distros for three or four years, if not longer, despite significant evolution within the internals of the platform. That's my dream.
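          A toy way to express that "dotted line" rule; the package names and their classification below are illustrative assumptions, not an actual policy:

          ```python
          # Packages inside the dotted line: their interfaces to each other may change freely.
          CORE = {"kernel", "libdrm", "mesa", "xorg-server", "gtk"}
          # Anything not in CORE (web browsers, media players, desktop shells) sits outside
          # the line, and the interfaces it consumes must stay frozen for years.

          def change_allowed(provider, consumer):
              """An interface change is acceptable only when both ends sit inside the core;
              anything an outside application depends on is part of the frozen platform API."""
              return provider in CORE and consumer in CORE

          print(change_allowed("kernel", "libdrm"))  # True: libdrm is expected to adapt
          print(change_allowed("gtk", "firefox"))    # False: the app-facing API stays frozen
          ```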



          • #6
            This has got to be the dumbest thing they could possibly do. Instead of spending a week every six months fixing things that were broken, it's one week a month, making the Ubuntu desktop even more of a Tamagotchi than it already is.



            • #7
              Originally posted by yogi_berra View Post
              This has got to be the dumbest thing they could possibly do. Instead of spending a week every six months fixing things that were broken, it's one week a month, making the Ubuntu desktop even more of a Tamagotchi than it already is.
              Exactly.
              I'm surprised some people like this idea.
              PPAs are good enough for those who want a newer version of some app.



              • #8
                I think the best way to go is to release a core system once a year and then update the apps in a rolling-release fashion. This rolling release should lag two or three months behind upstream releases, so it picks up the .1, .2, or .3 point releases, but because it's always updating, it never feels very outdated. As opposed to having new, unstable software half the time and old, stable software the other half, you get moderately new and moderately stable all the time (a rough sketch of how such a lagging channel could pick versions follows below).
                Also, if the hardware is working properly, there should not even be a need to update the core system; the xorg/kernel APIs should become stable enough for that to happen.
                PPAs try to fix the current state a bit, but sadly they mess a bit too much with the system. For example, you install OpenShot from a PPA and pull in video-related libs that are too new, so mplayer/VLC/Phonon breaks, forcing you to either update the video player or downgrade OpenShot again, each with its own consequences.
                There should also be a "testing" Ubuntu which ships the latest app releases, the ones that later come to stable Ubuntu.
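                A minimal sketch of how such a lagging channel could pick versions; the two-month lag and the version history are made up for illustration:

                ```python
                from datetime import date, timedelta

                LAG = timedelta(days=60)  # assumed stabilisation lag of roughly two months

                # Hypothetical upstream release history for one application.
                upstream = [
                    ("2.0",   date(2011, 3, 1)),
                    ("2.0.1", date(2011, 3, 20)),
                    ("2.0.2", date(2011, 4, 15)),
                    ("2.1",   date(2011, 6, 1)),
                ]

                def channel_version(today):
                    """Newest upstream release that has already aged past the lag window."""
                    ready = [ver for ver, released in upstream if released + LAG <= today]
                    return ready[-1] if ready else None

                print(channel_version(date(2011, 5, 1)))  # 2.0   (2.0.1 is still too fresh)
                print(channel_version(date(2011, 7, 1)))  # 2.0.2 (point releases keep flowing in)
                ```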



                • #9
                  I want to say that I really appreciate allquixotic's post.

                  When I first started using Linux, I was surprised to learn that packages would only be updated at some indefinite point in the future, once Debian's team felt testing was adequately stable, and that I was stuck with really old software for 1.5-2 years until that happened. That is a really long time in software development.

                  A big reason many Linuxheads use Arch is that it respects its users by providing software updates in a timely manner. The long-cycle Red Hat and Debian releases work OK (not really "well"...) for server users, but they are absolutely ridiculous for desktop users; allquixotic explains the problems well.

                  Something needs to happen to deliver application updates in a tighter fashion. The feedback loop needs serious tightening, and I think that would greatly improve the software that ultimately gets released; right now, the changes arrive in far too much bulk to really adapt the software to requirements. By the time a developer finds out that something just doesn't work in many real deployments, he may already have three months' worth of code that depends on that behavior, making it very difficult to refactor or react in a reasonable way.

                  It is ridiculous and irresponsible to leave users exposed without bugfixes for six months to a year, depending on where in the release cycle they land. I really find it silly that people consider it acceptable to distribute software that way. Another big reason Arch attracted me was that its maintainers do not make a habit of going into codebases they barely know and attempting to "backport" fixes that work perfectly well in newer versions, as Debian et al. often do. We saw with the Debian SSL fiasco where that leads, and something like that has always been my fear. If you want bugfixes and updates, you should use the real, tested, upstream software.

                  While I appreciate the value of a server platform that lets you run your code with some reasonable assurance of stability, I think it is a shame that that paradigm has been applied to desktop operating systems. While Ubuntu offers a server flavor, they ultimately target the desktop, and their contributions have been excellent for Linux as a whole; I think those contributions would be greatly improved by tightening the release cycle and feedback loop and by actually distributing bugfixes and new versions.

                  In fact, the six-month release cycle was originally quite a progressive thing in a space dominated by Debian, Slackware, and other "release whenever" (almost always 1 yr+) distributions, and I think it's now time to recognize that it wasn't progressive enough. Ubuntu should offer a "desktop" release channel that receives code once it has been tested for interoperability and hasn't appeared to damage anything for a short time (a few weeks or less if possible), and a "server" release channel that basically comprises their LTS offerings. They'd do really well to take a page from Arch; Arch targets power users, but its release methodology should work adequately well on a system like Ubuntu. A kernel update is usually the thing that takes the longest to hit Arch, typically 1.5-2 months, until the .1 or .2 release comes out; at the very least, all desktop software (not necessarily the kernel) should see distribution times no longer than that.



                  • #10
                    They should just make it a semi-rolling release with core releases every one or two years. That way people can keep their apps up to date without having to clutter their systems with tons of potentially unstable PPAs, and the developers can spend more time polishing their releases.
