Ubuntu 9.10 Off To A Great Performance Start

  • #41
    Originally posted by bridgman View Post
    I guess the over-arching question is "what is the business model if there's only one Linux rather than a bunch of distros" ? Right now a good chunk of the common Linux work is done by developers working for major distros; how do those companies continue to succeed and continue funding Linux development ? Selling support contracts can be part of the solution but I don't think they're a complete solution.

    I think most of the participants understand the drawbacks of the current model, but AFAICS maintaining a bit of proprietary differentiation is an important part of funding the ongoing work, even if the source code for that proprietary work is freely available.
    So the logic goes like this:

    1) Without proprietary packages, distros wouldn't be very unique, since the packaging system is a big part of how they differentiate themselves from one another. What remains is their support, the software projects they direct, and their unique "starter bundle"/ISO of Linux software, which can be a good starting point for users who want to get up and running quickly with a basic set of software.

    2) Getting rid of proprietary software packaging leaves you with everything except that first differentiator.

    3) So they're quite possibly clinging to proprietary package management as a way to stay unique.

    Which comes at the cost of: fragmenting the Linux community, making it difficult for developers to supply Linux software for others to use, making it difficult for device manufacturers to supply easily installable drivers on CDs shipped with their products, and heavily reducing the sharing of software between Linux users.

    Yahoo IM User: "Here, try out this cool program!"
    AIM User: "OK! Wait, it's not in my repository, they haven't packaged it yet..."
    Yahoo IM User: "OK, here's the package I have!"
    AIM User: "Oh...my system can't read these kinds of packages." OR "It's not installing correctly...says that the dependencies can't be met."
    MSN User: "Haha, I'm on Windows, basically all software packages can be shared easily."
    Other two: *grumble*

    If what you say, and what I also hypothesize, is true, then I'm their enemy. Creating proprietary packages is selfish, and I and everyone else should only support packaging solutions that grant free and open access to anyone using a package manager that adopts the open format(s). Companies pushing things to be proprietary in order to force, what do they call it... their "stream", to force everyone to have to come to them for everything? Yeah, that. That goes against everything that made the Free Software Movement great. Lock-in is exactly what Linux users are trying to escape, so aiding an environment of lock-in and proprietary packaging on Linux is just flat-out wrong.



    • #42
      Honestly I don't know enough about the pros and cons of various package managers to know if the package manager itself is an important part of distro differentiation. I suspect that it is not. Distros are mostly concerned with "delivering a complete solution" and choose the components & tools which they feel best meet their needs.

      The primary reason for each distro staying with its current packaging system is probably nothing more than the high cost of changing and the fact that the groups which would have to fund the work would probably be hurt more than helped. Nobody likes it when companies put their own priorities ahead of the greater good, but (a) there is usually a lot of overlap between company priorities and user priorities, and (b) we do still expect companies to fund most of the work, so we either need solutions that allow those companies to continue to prosper or we need to find another way to get the work funded.

      My point was more that with our current understanding of business models there is probably a need to maintain some differentiation between distros rather than throwing everything completely into a common "Linux" pot. The current model is inefficient but it's the best anyone has come up with so far; there *are* barriers but they are soft and permeable; source code is there for anyone willing to pick it up and make use of it, but that requires enough time and effort that there can be some commercial payback to justify funding development in the first place.

      I'm not trying to discourage you here; quite the opposite. In the grand scheme of things, both the free software movement and what we call modern civilization are relatively young, and everyone is still learning. The fact that we have a workable system today doesn't mean things can't change or improve, but unless there is some reasonably clear benefit to the folks who will have to invest in a new model, it's going to be tough to sell the change.
      Last edited by bridgman; 18 May 2009, 03:13 PM.



      • #43
        The buildpkg option is nice, but the current kernel support is crap. If I had to decide which I would prefer, it would be current kernel support.



        • #44
          Originally posted by Yfrwlf View Post
          Might respond to the rest later, but that was just an asinine response and I have to comment...

          No, I expect Linux developers to want users to download and use the software they write. If they didn't want users using it, they shouldn't have released it to begin with. Since they apparently do, of course they want to release it in a universally accessible format which is also easy to install. They should only have to release it in this format, and nothing else.

          If I were a developer, I'd be releasing the source code, as my license requires anyway, as well as a binary inside a universally supported package, which actually integrates into the OS better than any straight binaries or installers ever could or will. Those packages are what everyone should be supporting, you know, actual important Linux standards for everybody, instead of proprietary garbage that only helps a select few.
          Cool. Except that the happy scenario won't happen unless everyone switches to this universal format at once. And since that won't happen, it's more work; not something that everyone can justify even if it, as you say, brings more users. That's because at that point, your dream format is only "another format", and there might even be more users to gain by packaging for Ubuntu.



          • #45
            Source code is definitely the best way for Linux; maybe a beginner does not understand this. Some projects also provide deb or rpm packages, but without diff.gz/dsc files for deb, or spec files for rpm, they do not help much. checkinstall can create those too, but the result is not reusable. So the best thing would certainly be for somebody from the project to be the distro maintainer of that package. If a package is not binary compatible, you can just recompile it, provided the build dependencies are met.
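            A minimal sketch of that rebuild flow on a Debian-style system, in Python just for illustration (the package name is a placeholder; apt-get build-dep and apt-get source --compile are real commands, but a deb-src entry in sources.list and error handling are assumed):

                #!/usr/bin/env python3
                # Rough sketch: rebuild a distro package from source on a
                # Debian-style system. "somepkg" is a placeholder name.
                import subprocess

                pkg = "somepkg"

                # Install the build dependencies the package declares.
                subprocess.run(["sudo", "apt-get", "build-dep", pkg], check=True)

                # Fetch the source (needs deb-src lines in sources.list) and
                # compile it into .deb packages in the current directory.
                subprocess.run(["apt-get", "source", "--compile", pkg], check=True)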

            For commercial apps, which are usually not provided as source, generating a binary that runs on most systems is of course more problematic, but not impossible; maybe Svartalf could provide you with more info about that. Basically, you have to compile your app against a pretty old glibc and ship all the libraries it uses along with it. Setting LD_LIBRARY_PATH in a startup file will do the rest.
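            Such a startup file usually boils down to a wrapper along these lines (sketched in Python for consistency with the example above; the app name and directory layout are made up):

                #!/usr/bin/env python3
                # Hypothetical launcher for a bundled app: point the dynamic
                # linker at the libraries shipped next to the binary, then
                # replace this process with the real executable.
                import os
                import sys

                appdir = os.path.dirname(os.path.abspath(__file__))
                libdir = os.path.join(appdir, "lib")

                env = os.environ.copy()
                env["LD_LIBRARY_PATH"] = libdir + ":" + env.get("LD_LIBRARY_PATH", "")

                binary = os.path.join(appdir, "bin", "myapp")  # made-up name
                os.execve(binary, [binary] + sys.argv[1:], env)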



            • #46
              Originally posted by deanjo View Post
              Did you even bother actually reading my initial comment? Seems you are arguing the exact same thing now.
              You're arguing this:
              Originally posted by deanjo View Post
              And be limited to crappy performance and accept 50% 3d performance as good enough? I don't think so. Now that would really put linux gaming another 10 years behind.
              Which is exactly the same as this:
              Originally posted by deanjo View Post
              "We can't fix their crap, screw them."
              Get it?

              Originally posted by deanjo View Post
              As far as "binary blobs which are impossible to properly support", that, my friend, is a load of horseshit. If the X layer didn't change on a weekly basis and they actually settled on some set of standards, it would become very easy to maintain blob compatibility. It's because of this volatile type of development that you do see blobs break. In other OSes you are fully capable of upgrading kernels and video subsystems without breaking the previously installed driver.
              Hmm, seems that over a hundred kernel developers and Linus disagree with you. I guess they just don't understand? Maybe you should enlighten them.
              In fact, just last year some guy explained it like this:
              Well, there’s – the lack of an ABI is two-fold: one is we really, really, really don’t want one. Every single time people ask for a stable ABI, the main reason for wanting a stable ABI is they want to have their binary drivers and they don’t want to give out source and they don’t – certainly don’t want to merge that source into the stable kernel or the standard kernel.

              And that, in turn, means that all the people who actually do all the kernel work and maintain the kernel are basically unable to work with that piece of hardware and that vendor because if there’s any bugs whatsoever, we can’t fix them.

              So, all the commercial vendors—even the ones who used to accept binary drivers—have moved or are moving away from wanting to have anything at all to do with binary drivers because they’re completely unmaintainable.

              So, there’s one of the reasons. Some people see it as political. There is probably a political aspect to it, but a lot of it is very pragmatic; we just can’t maintain it.

              (...)

              And when you have that kind of distributed support system when – where everybody ends up being involved at some point, you really can’t afford to have the situation where only a very small subset actually has access to the code that may be causing the problem. You need to have the code out there, not because of any social issues, but simply because you don’t know who’s going to be the one who has to fix it.

              So, there’s a real reason why we need to be able to have source code which means that to all kernel developers, a binary interface is basically – it is only a bad thing. There is – there are no upsides whatsoever.

              But there’s another reason which is that we actually do end up changing things in radical ways inside the kernel and that has led to the fact that even if we wanted to have a binary interface, we simply couldn’t or we could but it would then stop us from fixing stuff and changing how we do things internally.

              And this is something you do see in other projects where, yes, they have binary interfaces for one reason or another—quite often because of commercial reasons—and that just means that they cannot fix their fundamental design. They sign up not just the binary interfaces, they also sign up to the exact design they had when they came up with that interface.

              So, there’s – that’s the second major reason why a stable ABI is not going to make – in fact, that means that we don’t even guarantee a stable API; so, even on a source level we say, “Okay, this is the API and we’ll – if you have external drivers that use this, we’ll help you fix them up when we have to change the API. But we don’t guarantee that the same API will work across versions.” And it doesn’t.



              • #47
                Here is a very enlightening article about kernel ABIs and APIs!



                • #48
                  Originally posted by bridgman View Post
                  unless there is some reasonably clear benefit to the folks who will have to invest in a new model, it's going to be tough to sell the change.
                  You're right, and because companies see money in locking users into their systems and software, Linux users will suffer. That's why I will continue to help projects like Zero Install, which break down those barriers to easy Linux software installation.

                  Originally posted by curaga View Post
                  Cool. Except that the happy scenario won't happen unless everyone switches to this universal format at once. And since that won't happen, it's more work; not something that everyone can justify even if it, as you say, brings more users. That's because at that point, your dream format is only "another format", and there might even be more users to gain by packaging for Ubuntu.
                  No, it's called a technological challenge, and just because you haven't thought of a solution after thinking about it for maybe a fraction of a second doesn't mean there isn't one. One way of doing it is simply defining a set of metadata about a package and then letting the manager do what it wants with it, and making the software flexible, not using stupid static links, so that the program can be manipulated in any way the package manager wants. Such a solution means you can make one package manager compatible with multiple formats and still install them all; you don't have to completely switch from RPM or DEB to NEWFORMAT. Hell, you may not even have to switch at all if DEB or RPM were updated to be flexible enough to serve as universal formats. Seriously, it's not that hard, it's not that complicated; it's just that these companies aren't interested in playing nice with the rest of the Linux community.
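                  Purely as a sketch of that dispatch idea (the metadata keys and backend table below are invented for illustration; this is not an existing standard):

                      #!/usr/bin/env python3
                      # Toy illustration of "abstract metadata plus native backends".
                      # Nothing here is a real cross-distro format; it only shows dispatch.
                      import shutil

                      package = {
                          "name": "coolprogram",          # hypothetical package
                          "version": "1.0",
                          "depends": ["libpng", "zlib"],  # abstract dependency names
                          "payload": "coolprogram-1.0.tar.gz",
                      }

                      # Each native package manager supplies a thin adapter that maps
                      # the abstract metadata onto its own dependency names and
                      # archive format.
                      backends = {"dpkg": "dpkg", "rpm": "rpm"}

                      def install(pkg):
                          for name, tool in backends.items():
                              if shutil.which(tool):
                                  # A real adapter would translate pkg["depends"] and
                                  # build a native archive from the payload; we only
                                  # print the dispatch step here.
                                  print(f"would hand {pkg['payload']} to the {name} backend")
                                  return
                          raise RuntimeError("no supported package backend found")

                      install(package)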

                  I'm talking about using format*S*: you don't have to force a single one on users, but you SHOULD use formats that any manager can easily adopt and that are good enough to be worthy of the term Standard. Such formats should be promoted by the ISO, Freedesktop.org, and any other standards bodies as being of pivotal importance to Linux interoperability.

                  (that is, if the ISO wasn't already possibly sold out to M$)

                  Originally posted by Kano View Post
                  Source code is definitely the best way for Linux; maybe a beginner does not understand this.
                  Yes, because if Linux users could actually install programs easily from across the Internet, instead of being controlled and limited by their distro company, that might increase the rate of adoption for Linux as a whole, and it'd be horrible.

                  Clearly we have different opinions of what is best.

                  Originally posted by krazy View Post
                  Hmm, seems that over a hundred kernel developers and Linus disagree with you. I guess they just don't understand? Maybe you should enlighten them.
                  You can do whatever you want to. Those developers don't want to do it because then the distro companies can stay in control and have the focus on them instead of sharing with the Linux ecosystem. There is no technical reason for not having an API. The entire point and definition of an API is so that you can have progress while still having compatibility. Maybe you should read the definition sometime.

                  This immature motive is revealed in the very opening paragraph, which maybe you should actually read: "Every single time people ask for a stable ABI, the main reason for wanting a stable ABI is they want to have their binary drivers and they don’t want to give out source and they don’t – certainly don’t want to merge that source into the stable kernel or the standard kernel."

                  Ooooh, so they don't WANT it just because they don't want anyone, random developers or companies, to actually be able to make drivers for their kernel without them knowing about it, controlling it, and accepting it. They want to be in control and take that control away from those random individuals. So do the distro companies; they have the exact same goal. They want all the focus to be on them instead of on the general Linux community itself, so *they* can be recognised and become the household name, and force everyone to have to come to them for everything.

                  There are a billion parallels I could point to in the business world, like how car companies do the same thing by trying to make it so that every part has to come from them: for example, using proprietary computer systems in their cars that require either finding and copying the program, if you're lucky enough to find it online, or going to them for simple things like "resetting it" so they can charge you $$$. They don't want competition; they don't want an open ecosystem of sharing, interoperability, and standards; and it's the exact same thing with these companies that fund the kernel development. Hmm, maybe who writes their paychecks has something to do with it??? Naaaaaaaah.

                  So, because of these assholes, because of that reason, I can't easily install drivers, and am pushed to get all my software from Canonical instead of from the REAL developers. Yay! Linux sure is all about freedom and choice, isn't it?!
                  Last edited by Yfrwlf; 20 May 2009, 06:24 PM.



                  • #49
                    Originally posted by Yfrwlf View Post
                    There is no technical reason for not having an API.
                    Yes there is. It's called Windows.

                    Windows has a 'stable API' which will still run binaries from twenty years ago; as a consequence, it includes about a bazillion lines of crud that no-one should be using anymore, riddled with bugs and security holes, and it takes vast amounts of developer time to maintain.

                    The entire point and definition of an API is so that you can have progress while still having compatibility.
                    You only need binary compatibility if you refuse to hand out your source code. Linux kernel developers don't want to be tied down to some fossilised API which prevents them from making major design improvements, because they've seen what that's done to other operating systems.

                    'Stable APIs' are a really, really bad idea from a technical standpoint, because they tie you into bad design choices made years before. I don't know why this is so hard for some people to understand.



                    • #50
                      Yup, stable APIs may be comfortable, but they can easily fill your code with a lot of garbage. This guy explains it much better (it's part of the link I posted above)...
                      If you have any questions please go read this file. It explains why Linux doesn't have a stable in-kernel api, and why it never will. It all goes back to the evolution thing. If we were to freeze how the kernel works internally, we would not be able to evolve in ways that we need to do so.

                      Here's an example that shows how this all works. The Linux USB code has been rewritten at least three times. We've done this over time in order to handle things that we didn't originally need to handle, like high speed devices, and just because we learned the problems of our first design, and to fix bugs and security issues. Each time we made changes in our api, we updated all of the kernel drivers that used the apis, so nothing would break. And we deleted the old functions as they were no longer needed, and did things wrong. Because of this, Linux now has the fastest USB bus speeds when you test out all of the different operating systems. We max out the hardware as fast as it can go, and you can do this from simple userspace programs, no fancy kernel driver work is needed.

                      Now Windows has also rewritten their USB stack at least 3 times, with Vista, it might be 4 times, I haven't taken a look at it yet. But each time they did a rework, and added new functions and fixed up older ones, they had to keep the old api functions around, as they have taken the stance that they can not break backward compatibility due to their stable API viewpoint. They also don't have access to the code in all of the different drivers, so they can't fix them up. So now the Windows core has all 3 sets of API functions in it, as they can't delete things. That means they maintain the old functions, and have to keep them in memory all the time, and it takes up engineering time to handle all of this extra complexity. That's their business decision to do this, and that's fine, but with Linux, we didn't make that decision, and it helps us remain a lot smaller, more stable, and more secure.

                      And by secure, I really mean it. A lot of times a security problem will be found in one driver, or in one core part of the kernel, and the kernel developers fix it, and then go and fix it up in all other drivers that have the same problem. Then, when the fix is released, all users of all drivers are now secure. When other operating systems don't have all of the drivers in their tree, if they fix a security problem, it's up to the individual companies to update their drivers and fix the problem too. And that rarely happens. So people buy the device and then use the older driver that comes in the box with it, which is insecure. This has happened a lot recently, and it really shows how having a stable API can actually hurt end users, when the original goal was to help developers.

