RandR 1.3 Arrives With Panning Support

  • #11
    It's certainly fair to say that most changes have happened because a developer thought it was better for the users, which may or may not turn out to be true.

    What makes it complicated is that a lot of the changes you see are made not because *that* change is better for the users, but because the change is a pre-requisite for a *later* change which will be better for users.

    DRI2 and memory management are a good example. They're a big pain in the butt for users and for distros, 'cause they just change a bunch of APIs and muck up builds without making anything visibly better.

    But -- what both changes do is enable some other features (which have not been implemented yet) which users *do* definitely want -- flicker-free 3D under Compiz, higher levels of GL support, better performance, more reliable operation etc...

    It would be nice if there were some kind of ongoing changelog for Xorg written from a user perspective. At a minimum, that would separate out the changes which are supposed to be better (so one can speak up if they don't seem to be an improvement) from the changes which add pain in the short term but enable a brighter future.

    Or you could have a single dictator make all the decisions about what should be changed, which would probably give you a much cleaner roadmap and a more consistent vision of where X is going, but the open source community tends to be pretty intolerant of dictators unless they do most of the coding themselves.


    • #12
      Originally posted by bridgman
      But -- what both changes do is enable some other features (which have not been implemented yet) which users *do* definitely want -- flicker-free 3D under Compiz, higher levels of GL support, better performance, more reliable operation etc...

      That's what is most frustrating about X.org: changes made for non-existent features. Sure, it's great that you'll be able to provide flicker-free wobbly windows sometime in the future; meanwhile, those of us who just want to get some work done and need a stable display are stuck with something that doesn't seem to be an improvement over the past, and in many cases has regressed, as legacy nvidia blob users found out in the last Ubuntu release.

      Sure, progress is important, but changing things in head (and therefore a stable release) for non-existent features is basically just being a dick to those people whose desktops were broken by those changes.



      • #13
        I guess I don't understand why anyone would pick up changes from head unless they were participating in development either as a developer or as a tester. Releases are for users; head is for ongoing development and for verifying specific bug fixes.

        The most invasive changes (e.g. the current KMS and MM work) are typically done in a separate branch until the APIs stabilize. As far as I know, anyone complaining about instability there is picking up the code from developers' private branches, or from F10, which also includes the very latest development code.

        One thing I don't understand is your comment about "changing things in head (and therefore a stable release)". The tips (head) and stable releases are usually two different things.


        • #14
          Originally posted by bridgman
          One thing I don't understand is your comment about "changing things in head (and therefore a stable release)". The tips (head) and stable releases are usually two different things.
          Stable releases come from head; what's difficult to understand about that? And what's difficult to understand about the idea that changes for non-existent features shouldn't make it into a release under any circumstances?



          • #15
            AFAIK stable releases either come from head at specific *times* (usually after a bug-fixing frenzy) or from branches created to stabilize the code while work proceeds in master. The ddx drivers tend to take the first approach; mesa tends to take the second. I understand what you are saying; I just don't fully agree with it.

            Really invasive changes get pushed off to temporary branches (e.g. the KMS/GEM work being done now), but in cases where a few things need to come together in order, it is not uncommon to put the individual pieces in as they become ready, to allow broader testing.

            I suspect we are talking about two different things here, which is why we are disagreeing. If you are saying "half-finished work should not be dropped into master where it can break things" I think everyone would agree. What I'm talking about, however, is a multi-stage project where base functionality needs to be added into one component and made broadly available so that other components can be modified to make use of the new functionality and tested offline before *those* changes are pushed to master.
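
            To make that concrete, here is a minimal C sketch of the staging pattern (the names and the HAVE_NEW_BASE_API flag are invented for illustration, not actual X.org interfaces): the base component ships its new functionality first, and a dependent component only switches over once its build detects that functionality, so the old path keeps working in the meantime.

            /* Hypothetical sketch -- invented names, not real X.org code.
             * HAVE_NEW_BASE_API would normally be set by the build system
             * (e.g. a configure check) once the base component's new API
             * has landed and is installed. */
            #include <stdio.h>

            #define HAVE_NEW_BASE_API 1

            #if HAVE_NEW_BASE_API
            static void present_frame(void)
            {
                printf("presenting via the new base API\n");     /* new path */
            }
            #else
            static void present_frame(void)
            {
                printf("presenting via the old, stable path\n"); /* old path */
            }
            #endif

            int main(void)
            {
                present_frame();
                return 0;
            }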

            It all depends on whether or not the new code can be kept cleanly isolated from existing functionality. Partial support for a new GPU is usually pretty safe to add, since the new code only runs when the new GPU is plugged in. The same goes for new APIs, as long as you aren't taking out the old API - you can test the existing functionality pretty well before pushing any modifications to existing code that are needed to support the new API.
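
            As a rough illustration (again hypothetical -- made-up names, not actual X.org driver code), here is what gating new code on the detected hardware looks like, so the legacy path is untouched unless the new GPU is present:

            /* Hypothetical sketch, not actual X.org driver code: the new
             * code is reached only when the probed chip ID says the new
             * hardware is present, so existing users keep the legacy path. */
            #include <stdio.h>

            typedef enum { CHIP_LEGACY, CHIP_NEWGEN } chip_id;

            /* Existing, well-tested path: behaves exactly as before. */
            static void legacy_init(void)
            {
                printf("initializing via the legacy code path\n");
            }

            /* New, partially complete code: only runs on new hardware. */
            static void newgen_init(void)
            {
                printf("initializing via the new code path\n");
            }

            static void driver_init(chip_id chip)
            {
                if (chip == CHIP_NEWGEN)
                    newgen_init();   /* isolated new feature work */
                else
                    legacy_init();   /* everyone else is unaffected */
            }

            int main(void)
            {
                driver_init(CHIP_LEGACY);  /* existing users see no change */
                driver_init(CHIP_NEWGEN);  /* testers exercise the new path */
                return 0;
            }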

            Other projects require making high-risk changes to existing functionality, and in those cases working in a temporary branch to protect master is the right thing to do. Usually the right decisions are made up front, but I'm sure there are occasional bad calls.
            Last edited by bridgman; 03 December 2008, 02:14 PM.