Bringing Up Hardware First In Linux, Then Windows


  • Bringing Up Hardware First In Linux, Then Windows

    Phoronix: Bringing Up Hardware First In Linux, Then Windows

    After reading the Linux 2.6.37-rc3 release announcement on the Linux kernel mailing list, another interesting thread turned up, this one about getting hardware vendors to do their initial hardware bring-up under Linux prior to any Microsoft Windows or Apple Mac OS X support. A number of reasons are given for why hardware vendors should support their hardware first under Linux and why they should foster open-source drivers, along with the challenges involved...

    http://www.phoronix.com/vr.php?view=ODgxMQ

  • #2
    Yeah, as if that were easier.

    This is the main reason why hardware developers don't make Linux drivers:
    http://people.gnome.org/~federico/news-2009-01.html

    It's as simple as that: the Linux kernel doesn't have a stable API/ABI.
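
    To illustrate the pain, here is a minimal sketch. LINUX_VERSION_CODE and KERNEL_VERSION are real macros from <linux/version.h>, and proc_create()/create_proc_entry() are the real newer/older proc interfaces, but the exact cutoff release is from memory and the driver itself is hypothetical. Without a stable in-kernel API, an out-of-tree module ends up littered with guards like this:

        #include <linux/version.h>
        #include <linux/fs.h>
        #include <linux/proc_fs.h>

        extern const struct file_operations mydrv_proc_fops;  /* hypothetical driver fops */

        /* Each in-kernel API change has to be guarded by kernel version,
         * and the vendor has to chase every change in every release. */
        static struct proc_dir_entry *mydrv_proc_init(void)
        {
        #if LINUX_VERSION_CODE >= KERNEL_VERSION(2, 6, 26)
                return proc_create("mydrv", 0444, NULL, &mydrv_proc_fops);
        #else
                return create_proc_entry("mydrv", 0444, NULL);
        #endif
        }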



    • #3
      No, it's because hardware makers have no incentive to make quality software in general and quality drivers in particular.
      Their only goal is to _ship it fast_ so that it utilises all the features of the production hardware release _somehow_.

      But they don't take into account the fact that this software (which is necessary to use their hardware at all) also MUST be free, free "as in beer", unless they are selling a "software-hardware complex", as they say. It would be beneficial to use a free-as-in-freedom-and-as-in-beer OS for that in the first place (for many reasons which have been discussed a thousand times already).
      But managers prefer to just throw a pile of money at a few code monkeys in one corner and at a "quality certification committee" in the other. "Problem solved."



      • #4
        It has nothing to do with code quality in most cases; it mostly comes down to resources. It's not a good use of resources to write drivers more than once, so you want to leverage as much shared code as possible across OSes if you can. If you have a huge driver stack or are a small IHV with limited resources, it does not make sense to write a new version of your stack for every OS.

        HW today has gotten very complex. There are HW documents out there for countless chips, many of which have no Linux driver yet. Some HW vendors try to do the right thing and support Linux, so they often release code that uses an OS-agnostic abstraction layer, sharing that code with Linux. Then the Linux community shits on them for not releasing a "proper" Linux driver. I admire the Linux driver standards, but it would really help if the community were a little less caustic to those vendors who are trying to provide support.
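
        As a rough sketch of what that shared-code approach looks like (all names below are hypothetical; real vendor abstraction layers differ in the details): the chip logic is written once against a thin OS-services layer, and only that layer is reimplemented per OS.

            /* osal.h -- hypothetical OS abstraction layer; the chip code
             * below never touches an OS API directly. */
            #include <stddef.h>

            void *osal_alloc(size_t n);   /* kmalloc() on Linux, ExAllocatePool() on NT */
            void  osal_free(void *p);
            void  osal_msleep(unsigned int ms);
            unsigned int osal_read32(volatile void *mmio, unsigned int off);
            void  osal_write32(volatile void *mmio, unsigned int off, unsigned int val);

            /* chip.c -- OS-independent bring-up logic, written once for all
             * OSes.  Register offsets are invented for illustration. */
            enum { CHIP_RESET_REG = 0x00, CHIP_STATUS_REG = 0x04, CHIP_READY = 1 };

            int chip_init(volatile void *mmio)
            {
                    osal_write32(mmio, CHIP_RESET_REG, 1);   /* assert reset  */
                    osal_msleep(10);                         /* settle time   */
                    osal_write32(mmio, CHIP_RESET_REG, 0);   /* release reset */
                    return osal_read32(mmio, CHIP_STATUS_REG) == CHIP_READY ? 0 : -1;
            }

        The "proper driver" objection is that this layer duplicates facilities the kernel already provides (allocators, delays, MMIO accessors), which is exactly the friction described above.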



        • #5
          What evidence do we have that proprietary drivers are bad / low-quality though?

          I mean yeah, everyone can point at the typical examples of fglrx and the Poulsbo crap, but if you look at Windows and Mac OS X drivers with 50-man development teams churning out drivers to exacting specs such as WDDM 1.1, it's definitely not quite as simple as "proprietary drivers are crap".

          In fact, I think the open source "churn" leads to an at least as bad (oftentimes worse) user experience for sufficiently complicated devices (such as GPUs and wireless chipsets). No matter whether you're writing closed source or open source, free software or proprietary, bazaar or cathedral, the simple fact is that newly written software is buggy. It takes time and extensive testing to empirically detect and eliminate defects. But the way that distros integrate drivers into their releases involves only incidental testing. That is, the only things that are actually tested are those that volunteers for the project decide they want to test in their own free time. Even assuming 100% of these actual test scenarios work (when in fact more like 10% of them work in reality), there are billions of other possible interleavings of function calls, ioctls, data structure states, etc. that go completely untested.

          Improving software quality is not about development methodology as much as it is about testing methodology. As an example, it is possible (though inefficient) to write a driver in pure x86 assembly without macros that is completely stable and contains an infinitesimal number of defects. To accomplish this, you just would have to test the hell out of it for several years with a team of several dozen people, restarting the testing process each time you modify the source assembly at all. Even though in terms of software engineering design quality, a procedural assembly program is terrible (no OOP, no layering, poor code reuse/readability/maintenance), the released, executing binary would have better "quality" than a driver that has excellent OOP / architecture practices but is full of bugs and untested codepaths.

          I detect that the folks on the LKML may be aware of this situation, and if so, it sounds like they are just advocating good software architecture / design for its own sake. I think this is insanely stupid and pointless. Having beautiful software with ideal layering, coupling and object orientation is a waste of everyone's time if the software is untested crap. The purpose of writing software is not to produce code that's aesthetically pleasing to look at / contemplate; the purpose of writing software (especially device driver software) is to produce a resulting executable that does what the user (or applications acting on behalf of the user) expects.

          So if you are going to try and tell me that Gallium3d is a better driver than the proprietary alternatives (even fglrx at this point), you're out of your mind. Gallium3d fails at its core mission, assuming that the core mission is to produce a high-performance, hardware-accelerated 3d graphics device driver that is able to stably and reliably support the state-of-the-art graphics APIs. It's slow, almost completely untested (if you take into account all the countless possible code paths that have never been executed before), and it doesn't support APIs that have been around for several years.

          On the other hand, any WHQL certified driver on Windows at least does what it claims to do, in full form. There may be some bugs, but the rate of their exposure to users is incredibly low on a modern system such as Windows 7.

          Face it: the cathedral model of device driver development is kicking the bazaar's ass right now. We are too obsessed with getting a common infrastructure that "looks pretty" and generalizes concepts / facilities that support all conceivable hardware now and for the next 10 years. Instead, we need to borrow a little wisdom from the proprietary folks, and get things to just work. Otherwise the device driver development goals for the free desktop boil down to basically an academic exercise, rather than something actually useful to people who purchase real products trying to accomplish real tasks.

          What we need is more explicit, deliberate testing for open source drivers, especially graphics and wireless. People (probably, in reality, people paid a salary to do this) will have to sit down with a long list of tests to exercise the driver, and run a program testing each one in turn, eventually exercising every possible interleaving of every branch point in the entire codebase. With the results in hand, we then need a (much) larger workforce of device driver developers available to analyze the tests and determine the best course to fix the broken test scenarios. Things get interesting when you have vaguely-written or open-ended standards that make it unclear what the correct functionality should actually be, but in those cases, it's probably better (unfortunately) to support the two most popular ways of interpreting that part of the standard, such that both alternatives can work, at least without causing a catastrophic failure of the driver.
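
          As a toy sketch of the kind of deliberate interleaving test meant here (the device node and ioctl request numbers below are made up, and a real harness would enumerate far more state):

              /* Toy harness: run every ordering of a small set of ioctls
               * against a device and report which interleavings fail.
               * /dev/fakedev0 and the request numbers are hypothetical. */
              #include <fcntl.h>
              #include <stdio.h>
              #include <sys/ioctl.h>
              #include <unistd.h>

              static const unsigned long req[3] = { 0x4d00, 0x4d01, 0x4d02 };

              static void try_order(const int o[3])
              {
                      int fd = open("/dev/fakedev0", O_RDWR);
                      if (fd < 0)
                              return;
                      for (int i = 0; i < 3; i++)
                              if (ioctl(fd, req[o[i]], 0) < 0)
                                      printf("FAIL %d-%d-%d at step %d\n",
                                             o[0], o[1], o[2], i);
                      close(fd);
              }

              int main(void)
              {
                      /* all 3! = 6 interleavings of the three calls */
                      for (int a = 0; a < 3; a++)
                              for (int b = 0; b < 3; b++)
                                      for (int c = 0; c < 3; c++)
                                              if (a != b && a != c && b != c) {
                                                      int o[3] = { a, b, c };
                                                      try_order(o);
                                              }
                      return 0;
              }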

          I see too many open source developers obsessed with purity of the codebase: trying to strictly interpret the standard in exactly one way, and if a caller doesn't conform 100% to that interpretation, the driver does something unfriendly like aborting the program or terminating the connection. Completely unacceptable. Implementing a spec written in a natural language like English, which is necessarily vaguely worded, is itself an act of compromise, much like politics. It may be easier to shrug your shoulders at those who disagree with your interpretation, but ultimately you screw over the users.
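
          In code terms, the difference between the two attitudes is roughly this (a made-up entry point, purely for illustration):

              #include <stdio.h>

              /* Hypothetical driver entry point; 'entries' outside [1,4096]
               * is forbidden by one strict reading of an imaginary spec. */
              static unsigned int set_ring_size(unsigned int entries)
              {
                      if (entries == 0 || entries > 4096) {
                              /* strict school: abort the caller here */
                              /* tolerant school: clamp, warn, and keep the
                               * user's application working */
                              fprintf(stderr, "ring size %u out of range, clamping\n",
                                      entries);
                              entries = entries ? 4096u : 1u;
                      }
                      return entries;
              }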

          Sorry for this long rant, but I just don't see how trying to further generalize our driver architectures to be universal enough to support multiple operating systems is going to improve the actual situation with the lack of features/performance in drivers. It's basically an academic exercise with no practical utility.



          • #6
            Interesting point but...

            I disagree. Mostly because IMHO the closed-source drivers and software that I use on Windows work just fine. From Synaptics to Catalyst, my hardware works and I can configure how it will work.

            But on Linux distributions, there are three problems. First, the lack of software to configure how my hardware will work; simple things like a touchpad configuration interface are just missing. Second, obviously, drivers that don't work correctly. Third, software that doesn't work well with the hardware, like Ubuntu Unity with ATi graphics.

            Two of these problems would be solved if hardware vendors made drivers for Linux distributions. And they don't need to be open-source drivers, just good drivers. Sadly, hardware vendors have two reasons why this can't be done: the Linux distribution market isn't profitable, and Linux doesn't have a stable API/ABI (and Linux distributions lack one too), so technically they would have to create a driver for every kernel version, for every distro.



            • #7
              Originally posted by JairJy View Post
              Linux doesn't have a stable API/ABI (and Linux distributions lack one too), so technically they would have to create a driver for every kernel version, for every distro.
              Uh, no they don't. They just need to release the source so it goes into the kernel and is updated as required for API changes.

              Stable APIs are the reason why Windows is such a colossal pile of security holes; they're the last thing Linux needs.
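
              A concrete, current example of exactly that (the driver names below are hypothetical; the API change is real): the 2.6.36 series removed the old BKL-era .ioctl hook from struct file_operations, and in-tree drivers were converted to .unlocked_ioctl as part of the same change, while out-of-tree drivers were left to patch themselves:

                  #include <linux/fs.h>

                  static long mydrv_unlocked_ioctl(struct file *f, unsigned int cmd,
                                                   unsigned long arg)
                  {
                          return 0;   /* stub for illustration */
                  }

                  static const struct file_operations mydrv_fops = {
                          .unlocked_ioctl = mydrv_unlocked_ioctl,
                          /* before 2.6.36 this was the now-removed hook,
                           * called under the Big Kernel Lock:
                           *   .ioctl = mydrv_ioctl,
                           */
                  };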



              • #8
                Originally posted by movieman View Post
                Uh, no they don't. They just need to release the source so it goes into the kernel and is updated as required for API changes.
                For the upstream kernel, yes; except that no distro ships the upstream kernel, so tons of stuff has to get backported for every distro. Either the distro or the hw vendor has to deal with it if they want the hw to work on a particular distro version.



                • #9
                  LinuxIsPerfect(TM): it doesn't need to change, but others have to.

                  Originally posted by movieman View Post
                  Uh, no they don't. They just need to release the source so it goes into the kernel and is updated as required for API changes.

                  Stable APIs are the reason why Windows is such a colossal pile of security holes; they're the last thing Linux needs.
                  LinuxIsPerfect(TM): it doesn't need to change, but others have to. Is it any wonder that Linux distros are the last OSes to get drivers? That's because they ship the only kernel that forces hardware vendors to release the source of their drivers so they can be updated with kernel releases.

                  And of course, Linux doesn't need a stable API/ABI to have security holes. Like this one, most of them don't require it.



                  • #10
                    Originally posted by movieman View Post
                    Uh, no they don't. They just need to release the source so it goes into the kernel and is updated as required for API changes.

                    Stable APIs are the reason why Windows is such a colossal pile of security holes; they're the last thing Linux needs.
                    I don't think so.
                    Isn't it interesting that there is talk here that proprietary stuff has worse code quality, while at the same time the kernel folks can't manage to provide a stable API?

                    Is _their_ code quality so bad that they fear a stable API? Maybe they should learn to make a promise themselves instead of asking others to do just that.

                    And for a start they could make an API promise for X kernel versions and see how that turns out.



                    • #11
                      Now this is starting to get pathetic: one guy here argues that good design and planning are nothing compared to brute-forcing random code through all the corner cases, and another says that a "stable API" is somehow, by itself, a sign of quality.
                      No, not carefully designed code that changes rapidly to fit the present situation, with some fallbacks, but a frozen API so that unmaintained, written-once third-party code can keep functioning. Riiiight...

                      Here's my example of how 'that' stable-API/ABI code you adore so much actually performs:
                      I have a GA-MA78G-DS3H, Athlon 64 X2 6000+, 2GB DDR2, Radeon HD 4730 computer with 3 multimedia devices:
                      1) a bt8xx-based PCI analog receiver
                      2) a b2c2 "SkyStar2" DVB-S receiver
                      3) an AVerMedia AVerTV USB2 analog receiver
                      It runs Windows(tm) 7 64-bit and Gentoo with a 2.6.36 kernel, with r600g and xf86-video-ati installed.

                      Gentoo works flawlessly with #1 and #2, but #3 unfortunately was never reverse-engineered, and there is zero feedback from the manufacturer, so it's just dead weight.
                      But Windows(tm) is so cool that it has no drivers for #1 and #3 whatsoever: official support for #1 was dropped long ago, and the unofficial support consists only of some hacky thing from around 2005 that works on 32-bit Vista; #3 isn't even mentioned on the official website anymore (it's not an A8xx device). But all that is not the interesting part.
                      The interesting part is that this Windows(tm) is unable to shut down, reboot, or otherwise properly end its existence in the system's memory. It just sits there for 15-30 minutes and then gruesomely dies. Googling revealed that this is a very widespread issue and that the cause is driver inconsistencies (one guy analyzed a core dump from another guy's machine). In my case it's the b2c2 device drivers (from the official page, with official support) trying to do some shit with the device and bus power states, retrying until the OS just shoots itself (only removing them from the device list _and_ manually deleting their files from the "Windows" directory helps).
                      The potential problem is not limited to DVB or to any device in particular; it can happen to anyone with anything.

                      Great fucking code practices right there. I say: you love the stable API/ABI way so much, go and "marry" it; code for an OS that has one and keep this shit far away from Linux.

                      And for a start they could make an API promise for X kernel versions and see how that turns out.
                      And the next time some developer writes, "Hey guys, I think <this> can be better, just change it like this and like that. Here, I've got a patch for you!",
                      they should answer with: "Are you nuts? You can't do that, we're keeping API fidelity for dudes on the Internets! They say that's the only good way to do kernel development and the only way we can get enough vendor support to run on those pesky x86 machines we have almost 0 drivers for. And we also think we should remake the kernel as a microkernel - that will show them that we mean business."

                      Humor us: just write up your API/ABI proposal on the LKML, and don't forget to add that the entire kernel API should be like this because it shows quality and stuff. I would really like to read some of the replies.

                      PS: And don't start with the "officially supported", "discontinued", "this is a third party's fault" stuff on me. You're talking about different development models; these are the real-world results of those models in action.
                      But they probably _just didn't test my case_, riiight?



                      • #12
                        Originally posted by dfx. View Post
                        [...]
                        And the next time some developer writes, "Hey guys, I think <this> can be better, just change it like this and like that. Here, I've got a patch for you!",
                        they should answer with: "Are you nuts? You can't do that, we're keeping API fidelity for dudes on the Internets! They say that's the only good way to do kernel development and the only way we can get enough vendor support to run on those pesky x86 machines we have almost 0 drivers for. And we also think we should remake the kernel as a microkernel - that will show them that we mean business."

                        Humor us: just write up your API/ABI proposal on the LKML, and don't forget to add that the entire kernel API should be like this because it shows quality and stuff. I would really like to read some of the replies.

                        PS: And don't start with the "officially supported", "discontinued", "this is a third party's fault" stuff on me. You're talking about different development models; these are the real-world results of those models in action.
                        But they probably _just didn't test my case_, riiight?
                        Great, really: a few examples to suit every case, right? As if the FOSS stuff were tested thoroughly; just go back and read that piece by Torvalds on DRM. Or look at the oh-so-great Intel driver mess, crippling their already crippled hardware even more, and then you see guys arguing "just use an old kernel then" or "those features aren't needed anyway" - easy to say for people with different hardware. And a reality check for you: this driver is all FOSS, and years ago everyone hailed Intel for their great work because it "just works"; now it doesn't anymore.

                        And in case you didn't know, there are merge windows now, so if you had such a great change you could well get a "no merge window right now, so screw you".

                        It is funny that toolkits provide a stable API, and there it is good and accepted, but please don't do anything like that anywhere else. And at the same time -- as was mentioned -- hardly any distribution uses the vanilla kernel, and when there are bugs it's always downstream's fault.

                        Btw, I am no kernel dev, so I would be the last one to make such a proposal. That doesn't mean it wouldn't have a lot of good aspects as well, but you are talking only in black and white here, as if things were that easy.



                        • #13
                          Originally posted by mat69 View Post

                          Btw, I am no kernel dev
                          Thank you for admitting you are incompetent.

                          Now try to learn to shut up when you are incompetent!



                          • #14
                            Originally posted by DebianAroundParis View Post
                            Thank you for admitting you are incompetent.

                            Now try to learn to shut up when you are incompetent!
                            Thank you for admitting that you are an idiot.



                            • #15
                              Originally posted by allquixotic View Post
                              I detect that the folks on the LKML may be aware of this situation, and if so, it sounds like they are just advocating good software architecture / design for its own sake. I think this is insanely stupid and pointless. Having beautiful software with ideal layering, coupling and object orientation is a waste of everyone's time if the software is untested crap. The purpose of writing software is not to produce code that's aesthetically pleasing to look at / contemplate; the purpose of writing software (especially device driver software) is to produce a resulting executable that does what the user (or applications acting on behalf of the user) expects.
                              Actually, you're missing maintenance and porting. The hardware will still have to be used for many years after the code was written and tested, and many changes will happen during that time.

                              Properly written code is easier to maintain, and easier for new people to pick up after the original author disappears.

                              So if you are going to try and tell me that Gallium3d is a better driver than the proprietary alternatives (even fglrx at this point), you're out of your mind. Gallium3d fails at its core mission, assuming that the core mission is to produce a high-performance, hardware-accelerated 3d graphics device driver that is able to stably and reliably support the state-of-the-art graphics APIs. It's slow, almost completely untested (if you take into account all the countless possible code paths that have never been executed before), and it doesn't support APIs that have been around for several years.
                              Sorry, but Gallium3d is extremely new. It hasn't even had the chance to fail yet. In the short time since it was introduced, it has shown itself to be FAR easier to develop for than the traditional Mesa way, and progress on Gallium drivers has been incredibly fast given the small number of developers.

                              Its core mission was to improve code re-use and to help new developers get involved. It has succeeded at both of these, and most of the exciting new developments (video decoding, computing, new features, etc.) have only started appearing in the free drivers after the switch to Gallium3d.
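
                              A simplified sketch of that re-use (names loosely modeled on the real pipe driver interface, but the details here are illustrative, not the actual headers): every state tracker drives the same small hardware interface, so one new GPU backend serves GL, video, and compute at once.

                                  struct pipe_draw_info;   /* opaque for this sketch */

                                  /* one hardware backend implements this vtable once... */
                                  struct pipe_context {
                                          void (*bind_fs_state)(struct pipe_context *ctx, void *fs);
                                          void (*draw_vbo)(struct pipe_context *ctx,
                                                           const struct pipe_draw_info *info);
                                  };

                                  /* ...and every API state tracker (GL, video decode,
                                   * compute) reuses it instead of talking to the GPU
                                   * directly: */
                                  static void gl_draw(struct pipe_context *ctx,
                                                      const struct pipe_draw_info *info)
                                  {
                                          ctx->draw_vbo(ctx, info);
                                  }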

                              Face it: the cathedral model of device driver development is kicking the bazaar's ass right now. We are too obsessed with getting a common infrastructure that "looks pretty" and generalizes concepts / facilities that support all conceivable hardware now and for the next 10 years. Instead, we need to borrow a little wisdom from the proprietary folks, and get things to just work.
                              It's easier to get things to "just work" when you have 500 full-time programmers on your payroll writing a driver with no worries about any legal issues.

                              You're comparing the work of a few hobbyists and a couple of paid programmers over 3 years to the work of 300+ full-time programmers over 15 years.

                              This is clearly a bogus comparison. Pay 300 developers to develop for Gallium3d full time and then we'll compare.

                              Sorry for this long rant, but I just don't see how trying to further generalize our driver architectures to be universal enough to support multiple operating systems is going to improve the actual situation with the lack of features/performance in drivers. It's basically an academic exercise with no practical utility.
                              Many of the new features were simply not possible without a major redesign of the 3d driver model. You couldn't do OpenGL 2 with the old driver model of Mesa, period.

