Red Hat Enterprise Linux 6.5 Preps New Capabilities


  • #41
Argh, I feel like I'm writing a freaking book here :P



    • #42
      Originally posted by Serge View Post
I tend to think of OS X as the gold standard for a good graphical experience. I think the Metro style has great potential and will replace the desktop metaphor in graphical shell design once it has matured a little, but at the moment, I think it is so incomplete that it is actually a regression from Aero and whatever they used to call it before that. I have never been a fan of the desktop metaphor and am always curious to see projects try to break the mold of what a traditional, desktop-themed graphical shell should look like. I think the desktop metaphor was a clever attempt to make the then-nascent graphical shells more naturally intuitive to office workers, and to some extent it did have success there, but overall I think people's comfort with such "traditional" approaches is due more to familiarity with the desktop metaphor than to a natural inclination towards interacting with their computer in this way.

      I think that somewhere out there is a much better way to graphically represent and facilitate interaction with the functionality of a computer to human beings than what has been done with the desktop metaphor. I don't know what that way is. I really liked where Maemo was going with their graphical experience. (note: I am referring to Maemo before Maemo 5; I have never tried Maemo 5 and have no idea what it looks like, so I can't comment on whether or not I think it's an improvement on pre-5 or if it is worse) In fact, when I first saw Maemo, my immediate reaction was, "Finally! This is the one! This is the way graphical interfaces should have always been!" But that's a dead end now. OLPC's Sugar sounds great in concept but in practice I'm always disappointed when I check it out. So that's why I am curious to see where Microsoft takes the "Metro style". It's horrible right now, though. So for the time being the only graphical environments that I find truly impressive are those that still build on the old desktop metaphor, and of those the best I think belongs to OS X. As for Linux-based GUIs vs Windows pre-Metro GUIs, I think that the later, more mature offerings in the GNOME 3 and KDE 4 series are now roughly up to par with the one found in Windows 7, which I feel remains the best Windows GUI to date.

      TL;DR: Ok, sorry, let me get back on topic. What I mean to say is, I think that GNOME and KDE are already comparable to the Windows graphical experience, but the real king of graphical experiences for the time being is OS X, not Windows.
First, I TOTALLY agree with you about there being a better graphical paradigm than the desktop. I was really hoping it would be GNOME Shell. As I've said many times previously, the original GNOME 3 design doc had some really interesting ideas. The problem came down to implementation and a lack of really creative types, along with no UX experts. The UX situation has been getting better as they've started bringing in more folks and accepting criticism better, but the paradigm they've built isn't enough of an improvement, and it's too late for them to do anything about it.
My only experience with Maemo comes from my old N800. I have to say I wasn't a big fan. The hardware had some nice features but the software interface was not well suited to touch.
OLPC's Sugar is an interesting one. That is genuinely different (if perhaps not unique) and is highly targeted at their realistic audience (unlike GS's absurd personas). I tried Sugar a few years ago and had some issues with it but, overall, that seemed a genuine advance in thinking. I also liked the old Moblin interface (back when it was based on Clutter and GTK). They were using categories at the top and interesting symbolic icons, along with developing their own toolkit (Mx). If you haven't tried it you might be surprised (it was always buggy, though).
The problem I had in mind with GNOME/KDE/Enlightenment/Unity/etc. is their completeness. You HAVE to have graphical sysadmin tools (even if not intended for enterprise) b/c problems will occur and CLIs, as they are generally made now, are just not discoverable (a few exceptions are fish and Final Term, along with an older project from Colin Walters that built a shell that made heavy use of Python instead of Bash and had significant graphical capabilities; those projects are all pointing the way to the future of graphical CLIs and the ideas really need to be brought to fruition).
I haven't played with Metro as much. My biggest concern is the "root-less" aspect of moving around. I believe you need a root to move from. A "homescreen" is a great way to act as a launchpad to activities. Eventually I'd love to move away from even that, but, for now, the rootless aspect of Metro bugs me. To be clear, when I say rootless I am referring to the homescreen being something that you freely move side to side (I'm not a big fan of side-to-side screen movement b/c it leaves you disoriented and without a fast way to return to a specific place) without a SINGLE frame that acts as HOME. Rather, home is extended across a navigable strip of uncertain length.
That nit aside, Metro is incredibly interesting and I think it could be pointing a way forward.
OS X works well, but they've really stagnated over the last five or so years. Exposé was a really nice idea, as was Quicksilver (not Apple's invention, but still developed FOR OS X), but other than those I struggle to think of very useful, and unique, UI elements.



      The best technology doesn't always win. I'd like to see wider systemd adoption, but for the time being there are compelling reasons for Debian to not make the switch, and for the time being Ubuntu continues to stand behind Upstart, and as I've already mentioned in an earlier post, over 60% of the distributions listed on distrowatch.com are either Debian or Ubuntu based. In the future, all of this can of course change. The Debian project might decide that Debian misses out on too much functionality when not using systemd and this can lead to an upsurge in interest in switching to systemd, or Canonical might decide that continued Upstart development does not provide a sufficient return on their investment and that continued development of systemd compatibility layers is just not worth the effort when a switch to systemd would eliminate the need for these layers. But those are just two speculative scenarios. Right now, it does not appear that either project is going to be switching to systemd.
Looking at the number of distros is less useful than the percentage of users per distro, imho. I don't think Debian itself has a massive user base (smaller than Ubuntu, SUSE, Fedora, Mint, maybe even Mageia, afaict), so we need to look more towards Ubuntu. I'm not too worried about Ubuntu since I don't think they'll be a force for much longer. Their move to Unity, and worse, Mir, has brought about serious problems with the spins. I think those distros will increasingly ask the question "is using Ubuntu as our upstream the best choice?" I think you'll see some movement away from Ubuntu/Debian and towards the, hopefully accepted, new Fedora ring scheme. That is basically designed to act as a platform for builders. Along with them you have the excellent SUSE Studio service and OBS (the latter of which Fedora MIGHT be moving to as well).
That was mostly speculation backed by hope, but I do think it is a very possible, and reasonable, path forward that would also go a long way towards making the Linux ecosystem both more robust (by having more standard, flexible, well-designed components) and a better target for proprietary development.


Not too long ago I got it into my head that I wanted a CLI-only OS in a 512MB volume to handle boot management and do system recovery without a live USB. I started with Debian, as that's what I was most familiar with, but the standard installation image failed to create an install that fit. Next, I tried Arch Linux and ended up having to delete the localization files and man pages, and mount the package cache in tmpfs, in order to get a usable system without sacrificing useful tools like procps. Then I tried Slackware and got a fully usable system with all of the nice desired tools, with no hacking, in about half the space. That, to me, is a sign of fundamental philosophical differences that have real, practical consequences*. There are many projects that I feel do not bring anything significantly new or different to the table, but I feel that the major meta-distros all have something unique and worthwhile about them. As for specialty projects like ClearOS and BackTrack / Kali: sure, any meta-distro can do what they do, but sometimes it's nice to just get something that can perform such a specialized purpose with minimal hacking.

*The philosophical difference that I learned about from that experiment is that Arch Linux's philosophical focus on the bleeding edge has caused the project to make sacrifices with other core philosophies like simplicity and minimalism, whereas Slackware, which values practicality over rapid technology adoption, has not had to sacrifice its own simplicity and minimalism philosophies.
You could have started with something like Puppy. That's been designed to load completely into memory (with a GUI, but you could always strip out the GUI). I would be willing to bet you could work from Debian to Puppy without a great deal of trouble, but I haven't actually tried it.
The Fedora ring system I was talking about would've been hugely helpful to you. Ring 0 provides the BARE minimum for a bootable system (https://lists.fedoraproject.org/pipe...ly/186323.html).
So, the issue doesn't seem insoluble, but it needs a robust enough design that it can accommodate the VAST majority (as I recall, Fedora.next was even intended to be a base for embedded systems, eventually).
I'm mentioning Fedora b/c that's what I'm most familiar with, and they have the most money behind them to actually enact these projects, along with people who are hugely passionate about open source and not hindered by someone like Shuttleworth.
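Just to put rough numbers on the trimming Serge described, a throwaway script along these lines would do it. The paths are the conventional locale/man/package-cache locations on an Arch-style install and are my assumptions; it only measures, it doesn't delete anything:

# Rough sketch (my own, assumptions noted above): report how much space the
# usual trim candidates occupy on this machine.
import os

CANDIDATES = [
    "/usr/share/locale",      # localization files
    "/usr/share/man",         # man pages
    "/var/cache/pacman/pkg",  # package cache (the part you could keep in tmpfs)
]

def usage_bytes(path):
    """Total size in bytes of all regular files under path (0 if it doesn't exist)."""
    total = 0
    for root, _dirs, files in os.walk(path, onerror=lambda err: None):
        for name in files:
            try:
                total += os.lstat(os.path.join(root, name)).st_size
            except OSError:
                pass
    return total

if __name__ == "__main__":
    for path in CANDIDATES:
        print("%-25s %8.1f MiB" % (path, usage_bytes(path) / 2.0**20))

Nothing clever, but it makes it obvious which of those candidates is actually worth the hassle on a given install.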


      Yes, you are right about consensus reached through deliberation. The XDG/Portland/freedesktop.org standards, for example, required both GNOME and KDE developers to make compromises in the name of interoperability. But deliberations like that are kinda an exception. Normally, it is really hard to "force" consensus. You need various stakeholders to compromise, and the standard will fail if too many of them do not make the necessary compromises. I think it is more likely for Debian to switch to systemd, for example, out of necessity than as a voluntary compromise in the name of interoperability.
FDO is a great example. It is hard, but it can be done. Moving to systemd (or at least supporting their API) ALONE goes so far towards helping matters that I wonder if that is at least in the back of their minds.


Well, packaging is traditionally the responsibility of each distro and its packagers, not of upstream projects. The reason things have turned out that way is that for the longest time, the proprietary software vendors shunned free operating systems, so as a result the GNU project, the *BSD projects, and the Linux ecosystem developed a very rich library of free and open software that provided reasonable alternatives to the proprietary software. So packaging for a specific distro was a task best handled by the distro in question.

But then proprietary software vendors start expressing an interest, and the established way of having the distros handle their own packaging is no longer an option because the source code is not available, and in that model it's the upstream vendors themselves who end up having to package for the various distros. See, it's a manageable problem as long as the distros have access to the source code. Then the distros can just say, "Well, 'foo' works on Ubuntu, but it takes us extra effort to package 'foo' for our distro because we're not completely compatible with Ubuntu so we have to solve some problems in order to get 'foo' to run on our system. Is it really worth the extra effort, or should we instead focus on removing the differences between our distro and Ubuntu so that it's easier to get programs like 'foo' running on our distro with less effort?" And they can then decide if their differences are really that important or not, because they're the ones who are having to pay the price of incompatibility. But when it's upstream that has to do the packaging, then it's not the distros' problem anymore, so the distros are not as motivated to remove the differences anymore, either.

But I'm a supporter of free and open source software, and I try to avoid using proprietary software as much as possible, so to me the only interest in making it easier for proprietary software vendors to make software for Linux-based OS'es is that it might bring greater attention to the ecosystem at large and hence lead to the improvement of the free and open software as well. So I'm in favor of seeing more proprietary software running on free OS'es, but it's not as important for me given my priorities, and I don't like the idea of compromises to free OS'es that mainly benefit proprietary vendors.
Packaging is the area with, perhaps, the most duplication of effort. Each distro shouldn't have to repackage every damn thing. That's a massive waste of resources (if you can package apps, you can/do program), and this problem, as you note, isn't just for proprietary software. The fact that there are at least two (I'd guess many more) glibcs being packaged for every release is insane. Distros could be so much more reliable, and able to focus on what they really want, if they relied on some Common Base.

The main reason I am still kinda hung up on marketing is that I think Linux-based and *BSD OS'es already have so much to offer to the world's computer users. I think that there is already so much here that there is definitely enough material for marketing experts to sink their teeth into if there were a financial incentive for them to do so.

I do think that Valve's attention to a Linux base, and the upcoming SteamOS, are doing a heck of a lot of the marketing that I'm hoping for. Google Chromebooks benefit from marketing. Presumably Intel's Tizen laptops will be backed with marketing efforts from Intel as well. All of this kind of marketing raises user awareness of Linux-based OS'es in general, and projects like Debian, which rely on no budget and a tiny team of volunteers for their marketing, will end up benefiting from this as well.

      So part of the reason why I keep coming back to marketing is because I think it's important, part of the reason is because I think we already have so much that is worth marketing, and finally part of the reason is because I see that marketing coming our way already. (And that's good news!) But it's as you say, otherwise we largely agree. There's plenty of room for improvement across the board, and I don't think that efforts in one area, like marketing, are mutually exclusive with efforts in another area, like continued efforts at standardization.
Yes, I agree that there is currently enough for marketing people to get excited about...and they do. There are plenty of companies that cater to enterprise that spend big marketing dollars (don't forget that IBM has just pledged to spend $1,000,000,000 over the next decade on Linux). Then you have something like TiVo, or Roku, which are Linux based, and, of course, Android, but there are many others. If we're talking about DEs, however, I am simply not convinced any are quite ready yet.



      • #43
        Originally posted by liam View Post
        ...
I also liked the old Moblin interface (back when it was based on Clutter and GTK). They were using categories at the top and interesting symbolic icons, along with developing their own toolkit (Mx). If you haven't tried it you might be surprised (it was always buggy, though).
        ...
        Ah, damn it! I actually meant Moblin, not Maemo, when I was talking about the interface that I really liked. I always mix up Maemo and Moblin. Conceptually, the Moblin graphical interface was brilliant.


The problem I had in mind with GNOME/KDE/Enlightenment/Unity/etc. is their completeness. You HAVE to have graphical sysadmin tools (even if not intended for enterprise) b/c problems will occur and CLIs, as they are generally made now, are just not discoverable (a few exceptions are fish and Final Term, along with an older project from Colin Walters that built a shell that made heavy use of Python instead of Bash and had significant graphical capabilities; those projects are all pointing the way to the future of graphical CLIs and the ideas really need to be brought to fruition).
        Do you feel that something has actually changed in CLI design itself that has made CLIs less discoverable than they used to be?

        I have always considered discoverability to be the main natural disadvantage of CLIs when compared to GUIs. CLIs simply rely more on rote memorization than GUIs do. What I think has changed is that modern software systems have become so complicated that intuitive discoverability is more important than before, so the CLI disadvantage in that regard has become more acute. More simply put: our systems are so complicated, consisting of so many layers on top of layers on top of layers all modifying each other in some way, we don't have time to memorize every CLI program's command syntax and arguments list. When systems were simpler, memorizing everything was easier.

I do think there are actual design trends against the CLI. Take dconf and GSettings, for example. Sure, you can still parse it, cut it, etc., but it's just a little more difficult (a few more keystrokes on the command line, or a few more lines in your script) to do the same things with GSettings that you would do with a config system that stores everything in plain text. But I'm not sure that this movement is all that widespread. GSettings aside, I don't feel like CLIs are getting worse, but rather it's the complexity of the systems that is making CLIs less appealing.
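To make that concrete, here is a minimal sketch of reading one value each way. Python 3 is assumed, as are the gsettings CLI and the org.gnome.desktop.interface schema being installed; the plain-text file is entirely made up:

# Minimal sketch: one setting read from a made-up plain-text config file versus
# from GSettings via the gsettings CLI (assumes a GNOME system with the
# org.gnome.desktop.interface schema available).
import configparser
import subprocess

# Plain-text case: grep/cut/sed would do, as would a few lines of stdlib parsing.
ini = configparser.ConfigParser()
ini.read("example-settings.ini")  # hypothetical file with an [interface] section
theme_from_file = ini.get("interface", "gtk-theme", fallback="Adwaita")

# GSettings case: the values live in dconf's binary database, so you go through
# the gsettings tool (or the Gio bindings) rather than a text tool.
try:
    result = subprocess.run(
        ["gsettings", "get", "org.gnome.desktop.interface", "gtk-theme"],
        capture_output=True, text=True, check=False,
    )
    theme_from_gsettings = result.stdout.strip().strip("'")
except FileNotFoundError:  # gsettings not installed at all
    theme_from_gsettings = ""

print("from file:     ", theme_from_file)
print("from gsettings:", theme_from_gsettings)

Run it on a non-GNOME box and the second lookup just comes back empty, which is sort of the point: the plain-text path degrades to ordinary text tools, the GSettings path doesn't.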


I haven't played with Metro as much. My biggest concern is the "root-less" aspect of moving around. I believe you need a root to move from. A "homescreen" is a great way to act as a launchpad to activities. Eventually I'd love to move away from even that, but, for now, the rootless aspect of Metro bugs me. To be clear, when I say rootless I am referring to the homescreen being something that you freely move side to side (I'm not a big fan of side-to-side screen movement b/c it leaves you disoriented and without a fast way to return to a specific place) without a SINGLE frame that acts as HOME. Rather, home is extended across a navigable strip of uncertain length.
That nit aside, Metro is incredibly interesting and I think it could be pointing a way forward.
        Yeah, "potential" was the first word that popped to mind when I first tried out Metro. As it stands right now, it is terrible.


OS X works well, but they've really stagnated over the last five or so years. Exposé was a really nice idea, as was Quicksilver (not Apple's invention, but still developed FOR OS X), but other than those I struggle to think of very useful, and unique, UI elements.
        I agree that OS X is stagnating. I kinda feel that way about Apple in general.


Looking at the number of distros is less useful than the percentage of users per distro, imho. I don't think Debian itself has a massive user base (smaller than Ubuntu, SUSE, Fedora, Mint, maybe even Mageia, afaict), so we need to look more towards Ubuntu.
Oh, I totally agree with you, but "number of distros" is the only reliable statistic we have. Some distros (Fedora and OpenSUSE, for example) put a decent amount of effort into trying to figure out how many users they have, and do so transparently enough that we can have some amount of confidence in the numbers they report, but for the most part distros are terribly inconsistent about this - Debian doesn't even try, while Ubuntu keeps their methodology secret (so how do we know that downstream users of distros such as Linux Mint and Bodhi aren't being counted as Ubuntu users?). That's why I resort to looking at the number of derivatives. And while I agree that Debian is not as widely used as its name recognition would suggest, Debian is nonetheless a very popular base for niche distros: CrunchBang, siduction, etc. - no doubt their usage share is very small, but there are so many of them that surely their users add up.


I'm not too worried about Ubuntu since I don't think they'll be a force for much longer. Their move to Unity, and worse, Mir, has brought about serious problems with the spins. I think those distros will increasingly ask the question "is using Ubuntu as our upstream the best choice?" I think you'll see some movement away from Ubuntu/Debian and towards the, hopefully accepted, new Fedora ring scheme. That is basically designed to act as a platform for builders. Along with them you have the excellent SUSE Studio service and OBS (the latter of which Fedora MIGHT be moving to as well).
That was mostly speculation backed by hope, but I do think it is a very possible, and reasonable, path forward that would also go a long way towards making the Linux ecosystem both more robust (by having more standard, flexible, well-designed components) and a better target for proprietary development.
I agree with everything except the first sentence. I feel like the momentum among derivatives is against Ubuntu, but those derivatives that do decide to rebase will more likely look to Debian first, as that's what's most similar to Ubuntu right now. Down the road, I do see more Fedora- and OpenSUSE-based distros, but I think it will be several years before the tide really shifts.


        ...
Packaging is the area with, perhaps, the most duplication of effort. Each distro shouldn't have to repackage every damn thing. That's a massive waste of resources (if you can package apps, you can/do program), and this problem, as you note, isn't just for proprietary software. The fact that there are at least two (I'd guess many more) glibcs being packaged for every release is insane. Distros could be so much more reliable, and able to focus on what they really want, if they relied on some Common Base.
        I don't think we necessarily need a single "Common Base". In a way, that's already what Debian does for its direct derivatives and Ubuntu derivatives. Ubuntu makes changes where Ubuntu developers disagree with the course taken by Debian developers (such as packaging newer versions of major software like Apache and Firefox), but for the most part the vast majority of Ubuntu packages are directly merged from Debian. I haven't seen any recent stats, but given that it was like 92% three years ago, I wouldn't be surprised if it's still around 85-90%. Hopefully developments in Fedora and OpenSUSE will lead to even more convergence on common shared bases.


Yes, I agree that there is currently enough for marketing people to get excited about...and they do. There are plenty of companies that cater to enterprise that spend big marketing dollars (don't forget that IBM has just pledged to spend $1,000,000,000 over the next decade on Linux). Then you have something like TiVo, or Roku, which are Linux based, and, of course, Android, but there are many others. If we're talking about DEs, however, I am simply not convinced any are quite ready yet.
Well, I guess I have to agree, because although I do feel like our DEs are competitive with Windows when it comes to integrated features, I haven't had good experiences showing them to novice-level users who are more comfortable with Windows. But the various distros all have original features that, if combined in a single distro, would make a great entry point. Basically, like what Ubuntu used to be before Unity and adware. So I think we're pretty close.



        • #44
          Originally posted by MartinN View Post
          RHEL7 comes bundled with Gnome on Wayland!
You must really hate RHEL. GNOME would kill it.



          • #45
            Originally posted by Serge View Post
            Ah, damn it! I actually meant Moblin, not Maemo, when I was talking about the interface that I really liked. I always mix up Maemo and Moblin. Conceptually, the Moblin graphical interface was brilliant.
Yeah, it was. I'd really like to see someone pick it up and work with it. I know it was designed for very screen-constrained environments but I'd be curious to see if that could be extended, in some sense, to accommodate greater use cases. I wonder how that hub might work as a homescreen?



            Do you feel that something has actually changed in CLI design itself that has made CLIs less discoverable than they used to be?

            I have always considered discoverability to be the main natural disadvantage of CLIs when compared to GUIs. CLIs simply rely more on rote memorization than GUIs do. What I think has changed is that modern software systems have become so complicated that intuitive discoverability is more important than before, so the CLI disadvantage in that regard has become more acute. More simply put: our systems are so complicated, consisting of so many layers on top of layers on top of layers all modifying each other in some way, we don't have time to memorize every CLI program's command syntax and arguments list. When systems were simpler, memorizing everything was easier.

I do think there are actual design trends against the CLI. Take dconf and GSettings, for example. Sure, you can still parse it, cut it, etc., but it's just a little more difficult (a few more keystrokes on the command line, or a few more lines in your script) to do the same things with GSettings that you would do with a config system that stores everything in plain text. But I'm not sure that this movement is all that widespread. GSettings aside, I don't feel like CLIs are getting worse, but rather it's the complexity of the systems that is making CLIs less appealing.
I don't think CLIs have really changed much at all (maybe the main difference is tab-completion, along with deep tab-completion). As you say, CLIs are inherently less discoverable as compared to GUIs, but there are some CLIs, like the ones I mentioned, which attempt to make some serious progress towards rectifying that to some extent (again, Colin Walters's was especially ahead of its time, but Final Term looks very promising as it's being built with Vala and intended to make full use of gtk-clutter, along with including the easy extensibility of fish).
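To show the kind of discoverability I mean baked into the shell itself, here's a toy sketch in plain Python (stdlib only; this is just an illustration of the idea, not how fish or Final Term are actually built): every do_* method becomes a command, 'help' lists them from their docstrings, and tab-completion of command names comes for free when readline is available.

import cmd
import shutil

class RecoveryShell(cmd.Cmd):
    """Toy 'discoverable' command loop; commands advertise themselves."""
    intro = "Type 'help' or press TAB to see what you can do."
    prompt = "(recovery) "

    def do_diskfree(self, arg):
        "diskfree [PATH] -- show free space for PATH (default /)."
        usage = shutil.disk_usage(arg or "/")
        print("%.1f GiB free of %.1f GiB" % (usage.free / 2.0**30, usage.total / 2.0**30))

    def do_quit(self, arg):
        "quit -- leave the shell."
        return True

if __name__ == "__main__":
    RecoveryShell().cmdloop()

Run it and hit TAB at the prompt; 'diskfree' and 'quit' show up without you having memorized anything, which is the whole point.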
Regarding dconf/GSettings, I think you're right that they (and something like dconf-editor for the GUI) greatly help modernize/simplify system management. It's very much in the spirit of Poettering, whose vision for Linux I think very highly of.


Oh, I totally agree with you, but "number of distros" is the only reliable statistic we have. Some distros (Fedora and OpenSUSE, for example) put a decent amount of effort into trying to figure out how many users they have, and do so transparently enough that we can have some amount of confidence in the numbers they report, but for the most part distros are terribly inconsistent about this - Debian doesn't even try, while Ubuntu keeps their methodology secret (so how do we know that downstream users of distros such as Linux Mint and Bodhi aren't being counted as Ubuntu users?). That's why I resort to looking at the number of derivatives. And while I agree that Debian is not as widely used as its name recognition would suggest, Debian is nonetheless a very popular base for niche distros: CrunchBang, siduction, etc. - no doubt their usage share is very small, but there are so many of them that surely their users add up.
            Fair enough


I agree with everything except the first sentence. I feel like the momentum among derivatives is against Ubuntu, but those derivatives that do decide to rebase will more likely look to Debian first, as that's what's most similar to Ubuntu right now. Down the road, I do see more Fedora- and OpenSUSE-based distros, but I think it will be several years before the tide really shifts.
I realise it's a bit of a long shot, but I just keep hoping that Debian will either "pick a side" (I've nothing against the BSDs, but Linux does have features that don't exist on the BSDs, and that fact is holding Debian GNU/Linux back, and thus many of the derivatives), or re-modularize (so as to allow different display servers, or init systems, between kernels).
Re-basing on SUSE would be fine as well since I think their external tooling is superior to, well, everyone else's. The reason I would hope they'd choose Fedora, however, is that Fedora has the most money behind it, so there is no danger of it going away. Along with that you have a very, very strong commitment to upstream and to maintaining a pure open source experience. Those things, along with their re-prioritization of where QA resources should go and careful definition of deployment units, really make them an excellent basis for solid, and versatile, respins.


            I don't think we necessarily need a single "Common Base". In a way, that's already what Debian does for its direct derivatives and Ubuntu derivatives. Ubuntu makes changes where Ubuntu developers disagree with the course taken by Debian developers (such as packaging newer versions of major software like Apache and Firefox), but for the most part the vast majority of Ubuntu packages are directly merged from Debian. I haven't seen any recent stats, but given that it was like 92% three years ago, I wouldn't be surprised if it's still around 85-90%. Hopefully developments in Fedora and OpenSUSE will lead to even more convergence on common shared bases.
By common base, I mean low-level expectations that developers can rely on (so, an API at a minimum, and, for LTS-type releases, a stable ABI would be great). Of course, for something like embedded systems, these types of requirements could be more flexible.
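As a toy illustration of what those "low-level expectations" could look like from an application's side (the version numbers and the systemd check are made-up examples, not anything a real base promises today):

# Toy sketch: an app checking the "common base" it was built against.
# The minimum versions here are illustrative assumptions, not a real spec.
import os
import platform
import sys

MIN_GLIBC = (2, 17)   # hypothetical glibc baseline an LTS base might promise
MIN_PYTHON = (3, 6)   # ditto for the interpreter shipped with the base

def base_looks_ok():
    libc_name, libc_version = platform.libc_ver()
    glibc_ok = (libc_name == "glibc" and
                tuple(int(p) for p in libc_version.split(".")[:2]) >= MIN_GLIBC)
    python_ok = sys.version_info[:2] >= MIN_PYTHON
    # /run/systemd/system existing is the documented way to detect a systemd boot.
    systemd_ok = os.path.isdir("/run/systemd/system")
    return glibc_ok and python_ok and systemd_ok

if __name__ == "__main__":
    print("base matches expectations" if base_looks_ok() else "base differs from expectations")

That's obviously not a spec, just the shape of the thing: an application states a handful of expectations and the base either meets them or it doesn't.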


I think Mint has done a pretty good job in the past (I haven't used them recently). They make many, many things very easy. Something recent that Fedora/GNOME has added is built-in access to Ask Fedora. It's a very friendly website established by Rahul, IIRC, that, well, lets people have their questions answered. The built-in aspect is that there is a client that will let you, I think, query, submit and receive responses in a more friendly way than IRC (I say "think" b/c I'm still on GS 3.6, which only has very basic Ask Fedora functionality).



            • #46
              Originally posted by liam View Post
              ...
I don't think CLIs have really changed much at all (maybe the main difference is tab-completion, along with deep tab-completion). As you say, CLIs are inherently less discoverable as compared to GUIs, but there are some CLIs, like the ones I mentioned, which attempt to make some serious progress towards rectifying that to some extent (again, Colin Walters's was especially ahead of its time, but Final Term looks very promising as it's being built with Vala and intended to make full use of gtk-clutter, along with including the easy extensibility of fish).
              ...
Actually, I'm not familiar with fish or Final Term. They sound interesting, so I'm going to have to take a look. Thanks for pointing them out.


              ...
By common base, I mean low-level expectations that developers can rely on (so, an API at a minimum, and, for LTS-type releases, a stable ABI would be great). Of course, for something like embedded systems, these types of requirements could be more flexible.
              ...
I agree that developers need to be able to have these kinds of expectations about the underlying systems their software will run on, but I am just not seeing a way to get there. Thinking back on our discussion in this thread, I see three possibilities that we've talked about:
• Formal standardization (example: XDG/fd.o, LSB, FHS)
• Natural gravitation towards standardization (example: distros adopting systemd for its features or to better support software that benefits from systemd)
• Proliferation of a common base (example: Debian & Ubuntu popularity as a base distro for niche distros to build on)

              All of those have some kind of problem or drawback that we have already talked about. Formal standardization's main drawback is that it requires acceptance from distro developers. Natural gravitation's main drawback is that there is no natural gravitation when multiple solutions providing the functionality in question serve some use cases better than others and no one solution addresses all major use cases in an optimal way. Finally, the main drawback of major distro proliferation is that the distros exist to address different philosophical desires, many of which are often contradictory and mutually exclusive (such as speed of technology adoption vs stability, minimalism and simplicity vs functionality and automation, etc.), and as long as these mutually exclusive project goals continue to have a following, there will continue to be distros developing their own incompatible solutions.

Ultimately, most of the drawbacks can be traced back to the decentralized nature of the development of Linux-based OS'es. If a single project was responsible for the upstream development of the kernel, the core userland, the horde of various shared libraries, the graphics stack and the package management, then there wouldn't be a problem with compatibility. But is that really worth it? Then we'd all be Ubuntu users and have no choice but to be happy about it. Is there some other solution that hasn't been thought of before?

              A little off-topic, but your suggestion that the requirements for embedded systems could be different reminds me of something I read about called Yocto Project. Yocto's work strikes me as being similar in some ways to some of the things that you have suggested in this thread. It's an attempt to build a common base and development environment, but opposite of your suggestion, Yocto is specifically for embedded systems. Check it out, you might like what they're doing.



              • #47
                Originally posted by Serge View Post
I agree that developers need to be able to have these kinds of expectations about the underlying systems their software will run on, but I am just not seeing a way to get there. Thinking back on our discussion in this thread, I see three possibilities that we've talked about:
• Formal standardization (example: XDG/fd.o, LSB, FHS)
• Natural gravitation towards standardization (example: distros adopting systemd for its features or to better support software that benefits from systemd)
• Proliferation of a common base (example: Debian & Ubuntu popularity as a base distro for niche distros to build on)

                All of those have some kind of problem or drawback that we have already talked about. Formal standardization's main drawback is that it requires acceptance from distro developers. Natural gravitation's main drawback is that there is no natural gravitation when multiple solutions providing the functionality in question serve some use cases better than others and no one solution addresses all major use cases in an optimal way. Finally, the main drawback of major distro proliferation is that the distros exist to address different philosophical desires, many of which are often contradictory and mutually exclusive (such as speed of technology adoption vs stability, minimalism and simplicity vs functionality and automation, etc.), and as long as these mutually exclusive project goals continue to have a following, there will continue to be distros developing their own incompatible solutions.

Ultimately, most of the drawbacks can be traced back to the decentralized nature of the development of Linux-based OS'es. If a single project was responsible for the upstream development of the kernel, the core userland, the horde of various shared libraries, the graphics stack and the package management, then there wouldn't be a problem with compatibility. But is that really worth it? Then we'd all be Ubuntu users and have no choice but to be happy about it. Is there some other solution that hasn't been thought of before?

                A little off-topic, but your suggestion that the requirements for embedded systems could be different reminds me of something I read about called Yocto Project. Yocto's work strikes me as being similar in some ways to some of the things that you have suggested in this thread. It's an attempt to build a common base and development environment, but opposite of your suggestion, Yocto is specifically for embedded systems. Check it out, you might like what they're doing.
I mentioned a way forward with the Fedora ring system (far-fetched as it might be).
Pushing fd.o further would be a good idea (like supporting the systemd API). Perhaps another attempt at something like the LSB? I don't know.
This isn't really a technical problem. Your case of wanting a small, memory-resident system that you could load for recovery (IIRC) is a pretty specialized example, and might be outside the scope of what this would need to be, but even it should be doable with a kickstart file (yes, they are intended for mass installs, but they give you incredible flexibility).
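Something along these lines is roughly what I mean; a minimal, untested kickstart sketch, where the partition size, password and package set are purely illustrative:

# Minimal kickstart sketch (untested; values are illustrative only)
lang en_US.UTF-8
keyboard us
timezone UTC
rootpw --plaintext changeme
bootloader --location=mbr
clearpart --all --initlabel
part / --fstype=ext4 --size=512
text
reboot
%packages --nobase --excludedocs
@core
%end

I haven't run this particular file, so treat it as the shape of the thing rather than something to paste straight into anaconda.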
This isn't worth much, but I don't know of any of the, say, top ten distros that would need a different Common Base than what we're talking about. The exception that most readily springs to mind is people with low-latency requirements. They tend to run systems with lots of patches since they're trying to get rid of as much latency as possible (perhaps a BIT like the old-school overclockers, but with at least a real point to the work). The low-latency folks already have a number of options, assuming they just don't want to build it all themselves, and those options might be condensable to a single Base for low latency. That wouldn't be unprecedented, as RH, and I think SUSE as well, keep their low-latency kernels separate from their desktop offerings.

Regarding Yocto, I looked at them a few years ago b/c, as you say, their goals were incredibly exciting. IIRC, they were something of an offshoot of LiMo (or open mobile, or one of those...they run together a bit) but have been, effectively, replaced by Linaro (which seems very focused on just Android whereas Yocto seemed interested in more general embedded scenarios).

                Interesting conversation, Serge.



                • #48
                  Originally posted by liam View Post
I mentioned a way forward with the Fedora ring system (far-fetched as it might be).
Pushing fd.o further would be a good idea (like supporting the systemd API). Perhaps another attempt at something like the LSB? I don't know.
This isn't really a technical problem. Your case of wanting a small, memory-resident system that you could load for recovery (IIRC) is a pretty specialized example, and might be outside the scope of what this would need to be, but even it should be doable with a kickstart file (yes, they are intended for mass installs, but they give you incredible flexibility).
This isn't worth much, but I don't know of any of the, say, top ten distros that would need a different Common Base than what we're talking about. The exception that most readily springs to mind is people with low-latency requirements. They tend to run systems with lots of patches since they're trying to get rid of as much latency as possible (perhaps a BIT like the old-school overclockers, but with at least a real point to the work). The low-latency folks already have a number of options, assuming they just don't want to build it all themselves, and those options might be condensable to a single Base for low latency. That wouldn't be unprecedented, as RH, and I think SUSE as well, keep their low-latency kernels separate from their desktop offerings.

Regarding Yocto, I looked at them a few years ago b/c, as you say, their goals were incredibly exciting. IIRC, they were something of an offshoot of LiMo (or open mobile, or one of those...they run together a bit) but have been, effectively, replaced by Linaro (which seems very focused on just Android whereas Yocto seemed interested in more general embedded scenarios).

                  Interesting conversation, Serge.
                  Interesting conversation indeed.

I think that there are certain distros, such as Debian and Slackware, that for various reasons are very well suited to providing a common "upstream" base for other projects. I agree that the Fedora rings scheme has a lot of potential for reducing fragmentation and incompatibilities. I see the Fedora rings scheme as making Fedora more desirable as that kind of common base upstream distro, and given how many independent distros are out there right now that are not based on anything, I think one more major distro becoming more desirable as a base for derivatives will reduce more fragmentation than it will cause.

The problem that remains unaddressed, however, is that in many cases the various fragmentations, incompatibilities, and replications of effort exist for tangible reasons. I think that a major explanation of why natural adoption and more formal initiatives (e.g., LSB) have failed is that these efforts were not sufficiently sensitive to these reasons. An initiative does not need to get everyone on board. For example, XDG (fd.o) succeeded because they had the support of GNOME and KDE. To use a real example, let's translate this to systemd adoption. With systemd, the XDG/fd.o example means that upstream developers will never be able to safely assume that the OS'es they are writing software for will have systemd unless Ubuntu makes the switch.

My assumption is that when most of a given domain is using a certain methodology, the advantages of using that methodology increase, and so smaller groups in that domain that previously were opposed to it find that, with the methodology becoming common, either the advantages of using it are just too good to ignore or the disadvantages of not using it are just too expensive to sustain. So not every DE and WM project participated in the XDG/fd.o standards, but many of those that did not initially support fd.o's standards ended up supporting them over time because the standards were useful and the advantage in supporting them was worth whatever needed to be sacrificed to support them. That advantage would not have been nearly as great if both GNOME and KDE were not already backing it.

So the real question to ask in the example of systemd adoption would be, "What will it take to get Canonical to switch to systemd?" The solution would need to address the reasons for Canonical preferring Upstart. It would almost certainly involve changes in systemd to accommodate Ubuntu. Unfortunately, leaving behind the specific example of systemd and looking at fragmentation / incompatibility in general, this approach would need to be repeated several times across several problem domains. I think that pretty much every incompatibility - packaging formats, init systems, display servers, multimedia decoders, and so on - would need to be addressed piecemeal. Part of addressing each incompatibility would need to involve determining why past initiatives (because, to get back to reality for a second, we need to admit that pretty much everything that we've been writing about so far has been tried before, in some cases more than once) have failed and figuring out what will be done differently to make sure that *next time* the efforts at commonality in that given space do succeed.

We can't just say that this isn't a technical problem, because for those on the other side of the argument, it looks very much like a technical problem. That is, we gain little from arguing that the fragmentation is caused by NIH or dual-licensing opportunities or whatever other ideological, political or business-motivated causes. Ultimately we need to acknowledge that the people on the other side of fragmentation do see these as technical problems, even if they don't convince the rest of the world to agree with them (so, for example, Canonical engineers feel that Upstart addresses Ubuntu's needs better than systemd). I guess the TL;DR summary of my argument would go something like, "standardization: easier said than done".

