Will The Free Software Desktop Ever Make It?


  • This thread is a loaded question. It's already "made" it for me and most other people on this forum.



    • Originally posted by aussiebear
      I might as well spend that time talking to my dying grandmother. At least that has some worth.
      So your grandmother was dying and you were out posting in a flame war instead of making her feel loved and appreciated in her last minutes of life? You have no soul.



      • Originally posted by glxextxexlg
        Companies don't feel hatred. They act on logic and statistics (like a Mentat). The reason nvidia doesn't release open source drivers, or just hardware specs like AMD did, is that their hardware and drivers are by far superior to AMD/ATI's hardware and software. Their hardware and quality OpenGL drivers that work with Linux workstations fuel the effects you see when you watch movies like Avatar and Pirates of the Caribbean. It's perfectly understandable from a business point of view why they don't want to share this know-how with an inferior competitor.

        Once I get done laughing, I might consider addressing this point. Nvidia is only about promoting Nvidia, and they want the workstation market; AMD has largely let them have it, rightfully so. While it's high-profit, it's not large enough to create the cash flow to be self-sustaining.

        BTW, most movies are rendered on CPU farms; only now are we starting to see GPU farms pop into existence.



        • Originally posted by glxextxexlg
          Yes I do, because AMD doesn't help Brian Paul and assist Mesa developers in implementing the GL 3.x/4.x stuff. Why don't they donate code to Mesa like Intel does? They have far more resources than Intel has in this field.
          Stop the FUD, please. If you knew the market situation you would know that Intel's money is >>> AMD's money; Intel could afford way more. AMD is relatively small on that scale. It's also not just a matter of how many GPU guys you have, but also of whether they have free capacity.
          Stop TCPA, stupid software patents and corrupt politicians!



          • Originally posted by Tudhalyas
            I think we're missing the point here. In order to become a good desktop operating system, Linux needs 2 things:
            1. A stable API for its kernel and its main subsystems, namely the graphics and audio stacks. If you want people outside the open source movement to support Linux, you'll have to give them stable APIs so they can write their modules without worrying that they won't work with the next release of the package. This would also ensure some degree of backward compatibility with old, non-open binaries that are no longer supported by their developers. Yes, we would all like 100% free software on our machines, but this is not a perfect world.
            As for the kernel, there are good reasons, as already discussed, not to have a stable kernel API for Linux. However, Linus has talked about "stabilizing" some parts of the kernel API that deal with different subsystems, mainly the parts facing user space (IIRC Michael posted an article about it here on Phoronix some time ago). Reading a bit more on the subject at the time, I remember that in the very same LKML thread where this was discussed there were mentions of other subsystems as well. Still, given the rapid development of new graphics support and capabilities in recent months, stabilizing (for instance) the DRM subsystem in the kernel would be all too harmful. For the longest time (ever since I was building ALSA for my EMU10K1 Live and Audigy cards back in the 2.4 kernel days and into the first incarnations of 2.6) I've thought that sound drivers should be moved out of kernel space, with the kernel API used only to load and link them; that said, the situation with audio drivers is MUCH better than that of graphics drivers.

            Originally posted by Tudhalyas
            • One (and only one) thing for every critical task of the OS, so that people can find a similar environment on any distro they may try: one graphical server, one audio framework, one package management system, one GUI toolkit, etc. I know, Linux should be all about users' choice, but when it comes to critical parts of the OS we shouldn't have the luxury of choosing which part fits us best. With a uniform system framework across distros, people (users and developers alike) would be more attracted to Linux, IMHO.
            Please allow me to dissect this point, as you touch on many areas I've spent way too much time thinking about.

            I hate to bring the case of Mac OS X into the mix; many call it the "successful Unix on the desktop", and maybe it is, but it has achieved that only because Apple has tamed the Big Cat (pun intended, and note the irony) and caged it so that there is no derailing. To put this into context, look at the situation with Android: even though it is governed by Google, it is still plagued by many of the problems pretty much all other Linux distributions suffer from: fragmentation. And I don't mean as in "many versions", rather as in "many roads to Rome" and "many ways to achieve one thing", mainly due to the lack of cohesion.

            Strangely enough, most of the tools that make up the "bare-bones" (CLI-only) system do have said cohesion; even though there are several tools to achieve the same thing, there is far less fragmentation among them. In the list that follows, you touch on so many aspects that exhibit just that degree of fragmentation in Linux that it is hard to pick one with which to start...
            1. There is only one graphics server in Linux (deployed in the major distributions, that is; there are other X implementations that can work on Linux, but they are not routinely deployed), and that is the Xorg X server. The real problem is the many concurrent versions of it floating around at any given time, especially with long-term-supported distributions where you can find pretty old ones. The trouble is the immensely varying set of features each supports... and, being an "infrastructural" or "core" component, in many of these distributions it doesn't get upgraded, even though the whole X stack could be upgraded relatively painlessly without wreaking havoc on the other software (yes, I'm looking at you, Ubuntu LTS, CentOS and Red Hat!). (A quick sketch of asking the running server for its version appears after this list.)
            2. I couldn't agree more about a single media framework. The way I see it, there are at least three: GStreamer, Xine and MPlayer. All of them can and do make use of plugins, which is extremely convenient, as it makes them extensible and extremely flexible; so much so that one of the original reasons for GStreamer's existence no longer holds against Xine and MPlayer, namely that they used to have the restricted formats embedded into their core components. Both started out as media players and in time evolved into complete media frameworks, to the point that both can (even interchangeably) be used as backends for many actual players/applications; the difference is that GStreamer was designed as a framework from the ground up rather than being an application first. If I had to choose one of the three I'd take GStreamer, though Xine and MPlayer are more mature and robust and better support some media types. Still, GStreamer's support for restricted media types has come a LONG way, and it is also possible to get legitimate third-party support for more codecs (e.g. Fluendo), even if they are restricted or distributed by another company. (See the playback sketch after this list.)
            3. One piece of the Linux puzzle that is less talked about, but no less important for modern systems, is sound support. Although almost trivial nowadays, there are a few dark corners (X-Fi, anyone?). With the advent of ALSA, a LOT of the sound problems of the past (the kernel 2.4 and Open Sound System era) are simply no more. However, there are varying degrees of supported features, and some chips downright do not support some features (like hardware mixing) on Linux or any other platform, which adds to the enormous mix of hardware out in the wild where Linux is deployed (much like Windows, and very much in contrast to Mac OS). Unlike Windows, hardware support comes from a single project (which is actually a good thing in Linux): the ALSA project. The problem? The varying feature support across chip makers and the varying maturity of the drivers mean that the drivers cannot (and, as some argue, should not) be used on their own. Add to the already complex pyramid of components the sound server, despised by many, loved by others, and understood only by its creators; Linux has a history of love-hate relationships with these, and a few have attempted to tackle the problem in the past. What do they do? Simple: they (try to) ensure that no one application can lock the sound device and that multiple sounds can play at the same time. Simple, right? Think again. Without hardware mixing that is plain impossible to do with the drivers alone (in the understanding that the driver delivers streams to the device in FIFO fashion); the driver would have to become complex enough to provide said support itself... Meet ALSA, which attempted to attain this through a common (and very complex and robust) API and plugins such as dmix, and to a degree it succeeded, but again the varying degrees of hardware support played an important role. Unlike past efforts, PulseAudio has accomplished this task much better, extending the capabilities and even adding a few interesting features. Alas, it is not perfect (yet), but it is a very important step in the right direction and a piece that is already in place. I would dare to call the sound system (in this case PulseAudio) the X of the sound realm: the high-level API through which the user interacts with the hardware and through which the drivers do their magic, much as is the case for graphics hardware. (A minimal playback sketch against the sound server follows this list.)
            4. I don't want to beat a dead horse more than necessary, but talking about hardware support, especially here at Phoronix, we cannot elude the topic of graphics hardware. This is a hot topic, and a hot potato many choose not to deal with. Sure, there are the ideological implications of using "closed" core components in the kernel to drive some brands; I'm one of those who care. Sure, there is the much better approach of having the drivers implemented by those who in the end shape the system, the community of developers, with a LOT of users caught in the middle. I won't get into the discussion of which is better and yadda, yadda. Fortunately the recent work on all aspects of the graphics stack is a stride in the right direction; despite starting a bit late and making real fast progress, it never seems fast enough. Still, I applaud the efforts of all involved, companies, community and media (such as Phoronix), to keep us all interested and hoping for that next release that will let us do so much more with our current-generation hardware, or improve things on previous generations. In an ideal world, just as in the mid 90s when consumer 3D accelerators first came to be, the chip manufacturers would release their specs to the different OEMs so THEY could make THEIR drivers (and also release the same specs to software companies so they could provide better support). The way I see it, what AMD and Intel do for their hardware is precisely that: they release the information so that Xorg and Linux developers can implement the drivers. I have to reckon, though, that modern-day graphics hardware is riddled with third-party IP to which the manufacturing company (be it AMD, nVidia, Intel, VIA, etc.) doesn't hold all the rights, and as such it cannot release those specs; only the parts it fully owns (the 2D/3D engine, memory managers, etc.) can be opened. Ideally one of these big companies would step up and "mass license" some components so they could finally be implemented, even though support might have been in the hardware for several generations and workarounds exist... (Yes, I'm looking at you, S3TC!)* Anyway, speaking of the graphics stack, we invariably have to talk about Mesa, and the current state of Mesa leaves much to be desired. Many complain about the lack of support for advanced stuff like OpenGL 3 and 4 and OpenGL ES 2; the reason is that Mesa does not support the bulk of it (yet). Efforts are under way, but they are not quite there. So, to ensure the best Linux desktop experience, support should be brought level, and catering for Mesa is paramount! (A tiny sketch for checking what your GL stack exposes follows this list.)
            5. I don't want to start yet another flamework, but it is true: we need a common, stabilized desktop API. I'm not talking about whether GTK+ or Qt or FOX or wxWidgets or Tcl/Tk is "better"; I'm talking about one cohesive, common API for the Linux desktop, whatever may be used underneath it. Now, I don't have anything against Qt, but given that there are at least three reasonably widespread desktop environments built using GTK+ (GNOME, Xfce, LXDE), to me it is clear which should be the "de facto" Linux desktop API (and even though to some extent it already is, it could still be polished further and given more of a common feel [we all know the look can always differ, but the behavior should always be consistent]). Then each of the three major desktop OSes would have its own distinctive desktop API: Windows has WinAPI, Mac OS X has Cocoa, and Linux (and probably other Unices) GTK+. However, there is one "problem", if it can be called that: GTK+ is largely C code made to work in an object-oriented way through endless "tricks" and "hacks" which, although internal to the API, place a burden on the programmer in that it is not fully object-oriented (a sketch of what this looks like follows this list). The way I see it, the best course of action would be to migrate the whole of GTK+ (by the next major release, which would be 4) to at least C++/C# (yes, I know we can use those today via gtkmm or Gtk# in versions 2 and 3, but they are NOT the default languages of the API), yet that could easily be 10 years or more off (remember, GTK+ 2 has been around since 2001-2002). I'm no longer all that familiar with GTK+ and largely ignore what has been done under the hood (other than GNOME 3 and GNOME Shell); nowadays GTK+ seems to stand for GNOME Tool Kit rather than GIMP Tool Kit. At any rate, this only reflects the amount of fragmentation and the lack of coherence and cohesion there is regarding Linux on the desktop.
            6. "Central governance" is something that in "Linux" is not going to happen other than the kernel and a few other core components. It falls into the hands of the different distributors to become their own "central governance" for the "products" they ship with their distribution. This has lead to the endless amounts of duplicate work made pretty much in every distribution in existence. Was what initially caused the ALSA debacle and Open Sound System demise (I won't use the acronym OSS in this context, as it is nowadays too vague) a few years back lead by SuSE, is why we have or rather do not have a common and standard package management system and why we see a LOT of unpolished software all over the place. What shoud happen is that various "vendors" (Google, Red Hat, Mandriva, Ubuntu, Novell, etc) form a kind of consortium or foundation which would approve and polish the different components and agree on Common Core components and pieces for all their distributions (package management system, updates system, application market place, support, APIs, toolkits, Mesa, etc) and define what would then become "The Linux Desktop". Homogenization, though would come at a price: How, then would these different vendors distinguish their own distributions from one another? How, then woud they compete in a commercial setting? In the end is their competing commercially that most of the Open Source developers are able to: a) work on Open Source projects; and b) make a living out of it, too; taking Open Source out of University Campi. And no, that is not "prostitution of Open Source" (as I've read in the past call the companies that commercially support Linux and fund Linux' development).


            I wouldn't call this a list of what should happen for Linux to become a relevant desktop OS, as much of what I just said is already being worked on, with the very goal of enabling better work on the desktop and better integration of the system.

            If one thing has been clear to me in the almost 15 years I have been using Linux, it is that desktop relevance seems not to be the goal. And if it is, pretty much each desktop distribution has its own idea of what a Linux desktop should be (which contributes to the whole fragmentation problem). I'd love to see all the players involved move in the same direction, cohesively and with integration; freedesktop.org is a start, but a "steering committee" doesn't guarantee things are done as "recommended".

            *This might be material for another discussion: given that we bought the hardware, and the manufacturers already licensed the technology, in a way we have already bought the right to use that technology, and as such it should be independent of the platform on which said technology is found; we own the right... But that is a bit convoluted.



            • Stable APIs are evil; only those who haven't learned from the Windows debacle would even consider imposing them on Linux. The funny part is that people keep demanding stable APIs in Linux right at the time when the stable APIs in Windows have changed from a selling point to a curse.

              That doesn't mean that you should change the API every day for the heck of it, but continuing to support ancient, insecure, poorly-designed APIs or refusing to change them just so some ancient program will continue to run is insane. The only people who need stable APIs are developers who want to ship closed source binary software and never support it afterwards.



              • There are no hard and fast rules about what you should and should not do. You have to be flexible.

                The entire point of having a desktop at all is to be able to run applications on it. It exists for the sole purpose of facilitating the development, and streamlining the use, of other applications.


                The hierarchy of importance is:

                1. Applications
                2. Desktop environment
                3. OS
                4. Kernel
                5. Hardware

                Applications dictate OS and hardware. Not the other way around.

                With Linux there are, and always have been, a shitload of stable APIs and ABIs. The Linux kernel has stable APIs. GNOME has stable APIs. Everything. That is the only thing that makes the system usable.

