
Bringing Up Hardware First In Linux, Then Windows


  • Bringing Up Hardware First In Linux, Then Windows

    Phoronix: Bringing Up Hardware First In Linux, Then Windows

    After reading the Linux 2.6.37-rc3 release announcement on the Linux kernel mailing list, another interesting thread turned up about getting hardware vendors to do their initial hardware bring-up under Linux prior to any Microsoft Windows or Apple Mac OS X support. A number of reasons are given for why hardware vendors should support their hardware first under Linux, why they should foster open-source drivers, and what the challenges of doing so are...


  • #2
    Yeah, as if it were that easy.

    This is the main reason why hardware developers don't write Linux drivers:

    It's as simple as that: the Linux kernel doesn't have a stable API/ABI.
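
    To make that concrete: without a stable in-kernel API, an out-of-tree driver ends up littered with version checks just to keep compiling across kernel releases. Here's a rough, minimal sketch of the pattern (the mydrv_* names are invented, and the 2.6.36 cutoff for the old BKL-protected .ioctl member is approximate):

    Code:
    /*
     * Hypothetical out-of-tree module coping with in-kernel API churn.
     * The mydrv_* names are made up; the version cutoff is approximate.
     */
    #include <linux/module.h>
    #include <linux/version.h>
    #include <linux/fs.h>

    #define MYDRV_MAJOR 240  /* arbitrary "experimental" major number */

    #if LINUX_VERSION_CODE >= KERNEL_VERSION(2, 6, 36)
    static long mydrv_ioctl(struct file *f, unsigned int cmd, unsigned long arg)
    {
            return 0;  /* device-specific commands would be handled here */
    }
    #else
    static int mydrv_ioctl(struct inode *inode, struct file *f,
                           unsigned int cmd, unsigned long arg)
    {
            return 0;  /* same handler, older prototype */
    }
    #endif

    static const struct file_operations mydrv_fops = {
            .owner = THIS_MODULE,
    #if LINUX_VERSION_CODE >= KERNEL_VERSION(2, 6, 36)
            .unlocked_ioctl = mydrv_ioctl,
    #else
            .ioctl          = mydrv_ioctl,
    #endif
    };

    static int __init mydrv_init(void)
    {
            return register_chrdev(MYDRV_MAJOR, "mydrv", &mydrv_fops);
    }

    static void __exit mydrv_exit(void)
    {
            unregister_chrdev(MYDRV_MAJOR, "mydrv");
    }

    module_init(mydrv_init);
    module_exit(mydrv_exit);
    MODULE_LICENSE("GPL");

    Multiply that by every structure and callback the driver touches, across a few dozen kernel releases, and you can see why vendors balk at maintaining drivers out of tree.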



    • #3
      no, that's because hardware makers have no incentive to make quality software in general and quality drivers in particular.
      their only goal is to _make it fast_, so that all the features of the production hardware release get used _somehow_.

      but they don't take into account the fact that this software (which is necessary for using their hardware at all) also MUST be free, free "as in beer", unless they are selling a "software-hardware complex", as they say. it would be beneficial to use a free-as-in-freedom-and-as-in-beer OS for that (for many reasons which have been discussed a thousand times already) in the first place.
      but managers prefer to just throw a pile of money at a few code monkeys in one corner and at a "quality certification committee" in the other. "problem solved"



      • #4
        It has nothing to do with code quality in most cases; it mostly comes down to resources. It's not a good use of resources to write drivers more than once, so you want to leverage as much shared code as possible across OSes if you can. If you have a huge driver stack or are a small IHV with limited resources, it does not make sense to write a new version of your stack for every OS.

        HW today has gotten very complex. There are HW documents out there for countless chips, many of which have no Linux driver as of yet. Some HW vendors try to do the right thing and support Linux, so they often release code that uses an OS-agnostic abstraction layer, and they release that shared code for Linux. Then the Linux community shits on them for not releasing a "proper" Linux driver. I admire the Linux driver standards, but it would really help if the community were a little less caustic toward those vendors who are trying to provide support.
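
        To make the "OS-agnostic abstraction layer" idea concrete: the shared core typically codes against a thin shim like the sketch below, and each OS gets its own small backing file. All of the names here (osal_*, vendorhw) are invented for illustration, not any particular vendor's interface:

        Code:
        /* osal.h -- hypothetical OS abstraction layer the shared core codes against */
        #ifndef VENDOR_OSAL_H
        #define VENDOR_OSAL_H

        void *osal_alloc(unsigned long size);   /* allocate driver state                */
        void  osal_free(void *ptr);
        void  osal_log(const char *msg);        /* dmesg on Linux, DbgPrint on Windows  */
        void  osal_sleep_ms(unsigned int ms);

        #endif

        /* osal_linux.c -- the thin Linux backing for the same interface; a Windows
         * build would map these onto ExAllocatePoolWithTag, DbgPrint, and so on. */
        #include <linux/slab.h>
        #include <linux/kernel.h>
        #include <linux/delay.h>
        #include "osal.h"

        void *osal_alloc(unsigned long size)  { return kmalloc(size, GFP_KERNEL); }
        void  osal_free(void *ptr)            { kfree(ptr); }
        void  osal_log(const char *msg)       { printk(KERN_INFO "vendorhw: %s\n", msg); }
        void  osal_sleep_ms(unsigned int ms)  { msleep(ms); }

        The upstream complaint is that the real driver logic then lives behind that veneer instead of using the kernel's native infrastructure directly, which is exactly the friction described above.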



        • #5
          What evidence do we have that proprietary drivers are bad / low-quality though?

          I mean yeah, everyone can point at the typical examples of fglrx and the Poulsbo crap, but if you look at Windows and Mac OS X drivers with 50-man development teams churning out drivers to exacting specs such as WDDM 1.1, it's definitely not quite as simple as "proprietary drivers are crap".

          In fact I think the open source "churn" leads to at least as bad (oftentimes worse) user experience for sufficiently complicated devices (such as GPUs and wireless chipsets). Because no matter whether you're writing closed source or open source, free software or proprietary, bazaar or cathedral, the simple fact is that newly written software is buggy. It takes time and extensive testing to empirically detect and eliminate defects. But the way that distros integrate drivers into their releases involves only incidental testing. That is, the only things that are actually tested are those that volunteers for the project decide that they want to test on their own free time. Even assuming 100% of these actual test scenarios work (when in fact more like 10% of these things work in reality), there are billions of other possible interleavings of function calls, ioctls, data structure states, etc. that go completely untested.

          Improving software quality is not about development methodology as much as it is about testing methodology. As an example, it is possible (though inefficient) to write a driver in pure x86 assembly without macros that is completely stable and contains an infinitesimal number of defects. To accomplish this, you just would have to test the hell out of it for several years with a team of several dozen people, restarting the testing process each time you modify the source assembly at all. Even though in terms of software engineering design quality, a procedural assembly program is terrible (no OOP, no layering, poor code reuse/readability/maintenance), the released, executing binary would have better "quality" than a driver that has excellent OOP / architecture practices but is full of bugs and untested codepaths.

          I detect that the folks on the LKML may be aware of this situation, and if so, it sounds like they are just advocating good software architecture / design for its own sake. I think this is insanely stupid and pointless. Having beautiful software with ideal layering, coupling and object orientation is a waste of everyone's time if the software is untested crap. The purpose of writing software is not to produce code that's aesthetically pleasing to look at / contemplate; the purpose of writing software (especially device driver software) is to produce a resulting executable that does what the user (or applications acting on behalf of the user) expects.

          So if you are going to try and tell me that Gallium3d is a better driver than the proprietary alternatives (even fglrx at this point), you're out of your mind. Gallium3d fails at its core mission, assuming that the core mission is to produce a high-performance, hardware-accelerated 3d graphics device driver that is able to stably and reliably support the state-of-the-art graphics APIs. It's slow, almost completely untested (if you take into account all the countless possible code paths that have never been executed before), and it doesn't support APIs that have been around for several years.

          On the other hand, any WHQL certified driver on Windows at least does what it claims to do, in full form. There may be some bugs, but the rate of their exposure to users is incredibly low on a modern system such as Windows 7.

          Face it: the cathedral model of device driver development is kicking the bazaar's ass right now. We are too obsessed with getting a common infrastructure that "looks pretty" and generalizes concepts / facilities that support all conceivable hardware now and for the next 10 years. Instead, we need to borrow a little wisdom from the proprietary folks, and get things to just work. Otherwise the device driver development goals for the free desktop boil down to basically an academic exercise, rather than something actually useful to people who purchase real products trying to accomplish real tasks.

          What we need is more explicit, deliberate testing for open source drivers, especially graphics and wireless. People (probably, in reality, people paid a salary to do this) will have to sit down with a long list of tests to exercise the driver, and run a program testing each one in turn, eventually exercising every possible interleaving of every branch point in the entire codebase. With the results in hand, we then need a (much) larger workforce of device driver developers available to analyze the tests and determine the best course to fix the broken test scenarios. Things get interesting when you have vaguely-written or open-ended standards that make it unclear what the correct functionality should actually be, but in those cases, it's probably better (unfortunately) to support the two most popular ways of interpreting that part of the standard, such that both alternatives can work, at least without causing a catastrophic failure of the driver.
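
          As a rough illustration of what I mean by "a long list of tests, run each one in turn": a user-space harness that just walks a table of ioctl scenarios against the device node and flags anything that deviates from the expected result. The device path, request codes, and case names below are all placeholders, not any real driver's ABI:

          Code:
          /*
           * Hypothetical test-matrix runner for a character device.
           * Everything here (/dev/exampledev0, the request codes) is a placeholder.
           */
          #include <stdio.h>
          #include <string.h>
          #include <errno.h>
          #include <fcntl.h>
          #include <unistd.h>
          #include <sys/ioctl.h>

          struct testcase {
                  const char   *name;
                  unsigned long request;    /* ioctl request code under test  */
                  unsigned long arg;        /* argument to pass, 0 if unused  */
                  int           expect_ok;  /* 1 if the ioctl should succeed  */
          };

          static const struct testcase cases[] = {
                  { "reset-while-idle",      0x4001, 0,          1 },
                  { "query-caps",            0x4002, 0,          1 },
                  { "set-mode-invalid-arg",  0x4003, 0xdeadbeef, 0 },  /* must be rejected cleanly */
                  { "query-caps-after-mode", 0x4002, 0,          1 },  /* ordering-dependent path  */
          };

          int main(void)
          {
                  int fd = open("/dev/exampledev0", O_RDWR);
                  if (fd < 0) {
                          perror("open /dev/exampledev0");
                          return 1;
                  }

                  int failed = 0;
                  for (unsigned i = 0; i < sizeof(cases) / sizeof(cases[0]); i++) {
                          errno = 0;
                          int ok = (ioctl(fd, cases[i].request, cases[i].arg) == 0);
                          if (ok != cases[i].expect_ok) {
                                  printf("FAIL %-24s (errno: %s)\n",
                                         cases[i].name, ok ? "none" : strerror(errno));
                                  failed++;
                          } else {
                                  printf("pass %s\n", cases[i].name);
                          }
                  }

                  close(fd);
                  printf("%d scenario(s) need investigation\n", failed);
                  return failed ? 1 : 0;
          }

          The hard part isn't writing the harness; it's funding people to enumerate and maintain thousands of rows in that table, and then fixing whatever falls out.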

          I see too many open source developers obsessed with purity of the codebase: trying to strictly interpret the standard in exactly one way, and if a caller doesn't conform 100% to that interpretation, the driver does something unfriendly like abort the program or terminate the connection. Completely unacceptable. Implementing a necessarily vaguely-worded spec in a natural language like English is necessarily an act of compromise, much like politics. It may be easier to shrug your shoulders at those who disagree with your interpretation, but ultimately you screw over the users.

          Sorry for this long rant, but I just don't see how trying to further generalize our driver architectures to be universal enough to support multiple operating systems is going to improve the actual situation with the lack of features/performance in drivers. It's basically an academic exercise with no practical utility.



          • #6
            Interesting point but...

            I disagree, mostly because IMHO the closed-source drivers and software that I use on Windows work just fine. From Synaptics to Catalyst, my hardware works and I can configure how it will work.

            But on Linux distributions, there are three problems. First, the lack of software to configure how my hardware will work. Simple things like a touchpad configuration interface are just missing. Second, obviously, when the driver doesn't work correctly. The third problem is when some software doesn't work well with the hardware, like Ubuntu Unity with ATi graphics.

            Two of these problems would be solved if hardware vendors made drivers for Linux distributions. And they don't need to be open-source drivers, just good drivers. Sadly, hardware vendors have two reasons why this can't happen: the Linux distribution market isn't profitable, and Linux doesn't have a stable API/ABI and Linux distributions lack ..., so technically they have to create a driver for every kernel version, for every distro.



            • #7
              Originally posted by JairJy View Post
              Linux doesn't have a stable API/ABI and Linux distributions lack ..., so technically they have to create a driver for every kernel version, for every distro.
              Uh, no they don't. They just need to release the source so it goes into the kernel and is updated as required for API changes.

              Stable APIs are the reason why Windows is such a colossal pile of security holes; they're the last thing Linux needs.



              • #8
                Originally posted by movieman View Post
                Uh, no they don't. They just need to release the source so it goes into the kernel and is updated as required for API changes.
                For the upstream kernel, yes; except that no distro ships the upstream kernel, so tons of stuff has to get backported for every distro. Either the distro or the hw vendor has to deal with it if they want the hw to work on a particular distro version.



                • #9
                  LinuxIsPerfect(TM): it doesn't need to change, but everyone else has to.

                  Originally posted by movieman View Post
                  Uh, no they don't. They just need to release the source so it goes into the kernel and is updated as required for API changes.

                  Stable APIs are the reason why Windows is such a colossal pile of security holes; they're the last thing Linux needs.
                  LinuxIsPerfect(TM): it doesn't need to change, but everyone else has to. Is it any wonder that Linux distros are the last OSes to get drivers? That's because these OSes have the only kernel that forces hardware vendors to release the source of their drivers so they can be updated with each kernel release.

                  And of course, Linux doesn't need a stable API/ABI to have security holes. Like this one, most of them don't require it.



                  • #10
                    Originally posted by movieman View Post
                    Uh, no they don't. They just need to release the source so it goes into the kernel and is updated as required for API changes.

                    Stable APIs are the reason why Windows is such a colossal pile of security holes; they're the last thing Linux needs.
                    I don't think so.
                    Isn't it interesting that there's talk here about proprietary code having worse quality, while at the same time the kernel folks can't manage to provide a stable API?

                    Is _their_ code quality so bad that they fear a stable API? Maybe they should learn to promise something themselves instead of asking others to do just that.

                    And for a start they could make an API promise for X kernel versions and see how that turns out.

