
Bringing Up Hardware First In Linux, Then Windows


  • dfx.
    replied
now this is starting to be pathetic: one guy here claims that good design and planning are nothing compared to brute-forcing random code through all the corner cases, the other says that a "stable API" somehow is, by itself, a sign of quality.
no, apparently quality isn't carefully designed code that changes rapidly to fit the present situation, with some fallbacks where needed, but a frozen API that exists so unmaintained, written-once third-party code keeps functioning. riiiight....

here's my example of the work of 'that' stable-API/ABI code you adore so much:
here i have a GA-MA78G-DS3H, Athlon X2 6000+, 2 GB DDR2, Radeon HD 4730 computer with 3 multimedia devices:
1) bt8xx-based PCI analog receiver
2) b2c2 "SkyStar2" DVB-S receiver
3) AVerMedia AverTV USB2 analog receiver
it runs Windows(tm) 7 64-bit and Gentoo with a 2.6.36 kernel, with r600g and xf86-video-ati installed.

Gentoo works flawlessly with #1 and #2. #3 unfortunately was never reverse-engineered, and there is zero feedback from the manufacturer, so it's just dead weight.
But Windows(tm) is so cool that it has no drivers for #1 and #3 whatsoever: official support for #1 was dropped long ago, and the unofficial option is some hacky Vista 32-bit-capable thing from 2005 or so; #3 isn't even mentioned on the official website anymore (it's not an A8xx device). but that's not the interesting part.
the interesting part is that this Windows(tm) is unable to shut down, reboot, or otherwise properly end its existence in the system's memory. it just sits there for 15-30 minutes and then gruesomely dies. googling revealed that this is a very widespread issue and that the cause is driver inconsistencies (one guy analyzed a crash dump from another guy's machine). in my case, the b2c2 device drivers (from the official page, with official support) keep trying to do something with device and bus power states, retrying until the OS just shoots itself (only removing them from the device list _and_ manually deleting their files from the "Windows" directory helps).
the potential problem is not limited to DVB or any device in particular; it can happen to anyone with anything.

good fucking code practices, indeed. i say, you love the stable API/ABI ways so much - go and "marry" them: code for an OS that has them, and keep this mess far away from Linux.

Originally posted by mat69 View Post
And for a start they could make an API promise for X kernel versions and see how that turns out.
and the next time some developer writes: "hey guys, i think <this> can be better, just change it like this and like that - here, i've got a patch for you!",
they should answer: "are you nuts? you can't do that, we are keeping API fidelity for dudes on the Internets! they say that is the only good way of kernel development, and the only way we can find enough vendor support to run on those pesky x86 machines we have almost 0 drivers for. and we also think we should remake the kernel to be micro - that will show them we mean business."

humor us: just write your API/ABI proposal on the LKML, and don't forget to add that the entire kernel API should be like this, because it shows quality and stuff. i would really like to read some of the replies.

PS: and don't start with "officially supported", "discontinued", "this is a third-party fault" or whatnot on me. you are talking about different development models; these are the performance results of those models in action.
but they, probably, _just didn't test my case_, riiight?



  • mat69
    replied
    Originally posted by movieman View Post
    Uh, no they don't. They just need to release the source so it goes into the kernel and is updated as required for API changes.

    Stable APIs are the reason why Windows is such a colossal pile of security holes; they're the last thing Linux needs.
    I don't think so.
Isn't it interesting that there is talk here that proprietary stuff has worse code quality, while at the same time the kernel folks don't manage to provide a stable API?

    Is _their_ code quality so bad that they fear a stable API? Maybe they should learn to promise something themselves instead of asking others to do just that.

    And for a start they could make an API promise for X kernel versions and see how that turns out.



  • JairJy
    replied

    Originally posted by movieman View Post
    Uh, no they don't. They just need to release the source so it goes into the kernel and is updated as required for API changes.

    Stable APIs are the reason why Windows is such a colossal pile of security holes; they're the last thing Linux needs.
LinuxIsPerfect(TM): it doesn't need to change, but everyone else has to. Is it any wonder that Linux distros are the last OSes to get drivers? That's because these OSes use the only kernel that forces hardware vendors to release the source of their drivers so they can be updated with kernel releases.

And of course, Linux doesn't need a stable API/ABI to have security holes. Like this one - most of them don't require it.



  • agd5f
    replied
    Originally posted by movieman View Post
    Uh, no they don't. They just need to release the source so it goes into the kernel and is updated as required for API changes.
    For the upstream kernel, yes; except that no distro ships the upstream kernel, so tons of stuff has to get backported for every distro. Either the distro or the hw vendor has to deal with it if they want the hw to work on a particular distro version.



  • movieman
    replied
    Originally posted by JairJy View Post
Linux has no stable API/ABI, and neither do the distributions, so technically they have to create a driver for every kernel version, for every distro.
    Uh, no they don't. They just need to release the source so it goes into the kernel and is updated as required for API changes.

    Stable APIs are the reason why Windows is such a colossal pile of security holes; they're the last thing Linux needs.
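In practice, "updated as required for API changes" is exactly what out-of-tree vendors can't rely on, so they end up gating their code on kernel versions. Here's a minimal sketch of that pattern: the `KERNEL_VERSION`/`LINUX_VERSION_CODE` macros mirror the real ones from `<linux/version.h>` but are stubbed so the example builds in userspace, and `register_widget()` is a made-up in-kernel API whose signature "changed" between releases.

```c
#include <stddef.h>

/* stubs standing in for <linux/version.h>, so this compiles standalone */
#define KERNEL_VERSION(a, b, c) (((a) << 16) + ((b) << 8) + (c))
#define LINUX_VERSION_CODE KERNEL_VERSION(2, 6, 36) /* pretend build target */

#if LINUX_VERSION_CODE >= KERNEL_VERSION(2, 6, 30)
/* newer "kernels": registration grew a flags argument */
static int register_widget(const char *name, unsigned int flags)
{
    return (name != NULL && flags == 0) ? 0 : -1;
}
#else
/* older "kernels": no flags argument */
static int register_widget(const char *name)
{
    return (name != NULL) ? 0 : -1;
}
#endif

/* the driver hides the churn behind one wrapper per supported API era */
static int my_driver_register(const char *name)
{
#if LINUX_VERSION_CODE >= KERNEL_VERSION(2, 6, 30)
    return register_widget(name, 0);
#else
    return register_widget(name);
#endif
}
```

Every new kernel release potentially adds another `#if` branch, which is exactly the maintenance burden the in-tree model avoids.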



  • JairJy
    replied
    Interesting point but...

I disagree, mostly because IMHO the closed-source drivers and software I use on Windows work just fine. From Synaptics to Catalyst, my hardware works and I can configure how it will work.

But on Linux distributions, there are three problems. First, the lack of software to configure how my hardware will work: simple things like a touchpad configuration interface are just missing. Second, obviously, when the driver doesn't work correctly. The third is when some software doesn't work well with the hardware, like Ubuntu Unity with ATI graphics.

Two of these problems would be solved if hardware vendors made drivers for Linux distributions. And they don't need to be open source drivers, just good drivers. Sadly, hardware vendors have two reasons why this can't be done: the Linux distributions' market isn't profitable, and Linux has no stable API/ABI (and neither do the distributions), so technically they have to create a driver for every kernel version, for every distro.



  • allquixotic
    replied
    What evidence do we have that proprietary drivers are bad / low-quality though?

    I mean yeah, everyone can point at the typical examples of fglrx and the Poulsbo crap, but if you look at Windows and Mac OS X drivers with 50-man development teams churning out drivers to exacting specs such as WDDM 1.1, it's definitely not quite as simple as "proprietary drivers are crap".

    In fact I think the open source "churn" leads to at least as bad (oftentimes worse) user experience for sufficiently complicated devices (such as GPUs and wireless chipsets). Because no matter whether you're writing closed source or open source, free software or proprietary, bazaar or cathedral, the simple fact is that newly written software is buggy. It takes time and extensive testing to empirically detect and eliminate defects. But the way that distros integrate drivers into their releases involves only incidental testing. That is, the only things that are actually tested are those that volunteers for the project decide that they want to test on their own free time. Even assuming 100% of these actual test scenarios work (when in fact more like 10% of these things work in reality), there are billions of other possible interleavings of function calls, ioctls, data structure states, etc. that go completely untested.

    Improving software quality is not about development methodology as much as it is about testing methodology. As an example, it is possible (though inefficient) to write a driver in pure x86 assembly without macros that is completely stable and contains an infinitesimal number of defects. To accomplish this, you just would have to test the hell out of it for several years with a team of several dozen people, restarting the testing process each time you modify the source assembly at all. Even though in terms of software engineering design quality, a procedural assembly program is terrible (no OOP, no layering, poor code reuse/readability/maintenance), the released, executing binary would have better "quality" than a driver that has excellent OOP / architecture practices but is full of bugs and untested codepaths.

    I detect that the folks on the LKML may be aware of this situation, and if so, it sounds like they are just advocating good software architecture / design for its own sake. I think this is insanely stupid and pointless. Having beautiful software with ideal layering, coupling and object orientation is a waste of everyone's time if the software is untested crap. The purpose of writing software is not to produce code that's aesthetically pleasing to look at / contemplate; the purpose of writing software (especially device driver software) is to produce a resulting executable that does what the user (or applications acting on behalf of the user) expects.

    So if you are going to try and tell me that Gallium3d is a better driver than the proprietary alternatives (even fglrx at this point), you're out of your mind. Gallium3d fails at its core mission, assuming that the core mission is to produce a high-performance, hardware-accelerated 3d graphics device driver that is able to stably and reliably support the state-of-the-art graphics APIs. It's slow, almost completely untested (if you take into account all the countless possible code paths that have never been executed before), and it doesn't support APIs that have been around for several years.

    On the other hand, any WHQL certified driver on Windows at least does what it claims to do, in full form. There may be some bugs, but the rate of their exposure to users is incredibly low on a modern system such as Windows 7.

    Face it: the cathedral model of device driver development is kicking the bazaar's ass right now. We are too obsessed with getting a common infrastructure that "looks pretty" and generalizes concepts / facilities that support all conceivable hardware now and for the next 10 years. Instead, we need to borrow a little wisdom from the proprietary folks, and get things to just work. Otherwise the device driver development goals for the free desktop boil down to basically an academic exercise, rather than something actually useful to people who purchase real products trying to accomplish real tasks.

    What we need is more explicit, deliberate testing for open source drivers, especially graphics and wireless. People (probably, in reality, people paid a salary to do this) will have to sit down with a long list of tests to exercise the driver, and run a program testing each one in turn, eventually exercising every possible interleaving of every branch point in the entire codebase. With the results in hand, we then need a (much) larger workforce of device driver developers available to analyze the tests and determine the best course to fix the broken test scenarios. Things get interesting when you have vaguely-written or open-ended standards that make it unclear what the correct functionality should actually be, but in those cases, it's probably better (unfortunately) to support the two most popular ways of interpreting that part of the standard, such that both alternatives can work, at least without causing a catastrophic failure of the driver.
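The kind of deliberate, exhaustive exercise described above can be shown in miniature: drive a toy device through every possible ordering of its operations and check for errors after each run. The device, its three ops, and the "only one clean ordering" property are all invented for illustration; a real driver's matrix of interleavings is astronomically larger, which is the point.

```c
#include <stddef.h>

struct dev { int open, streaming, errors; };

/* a deliberately fragile toy device: ops only succeed in one order */
static void op_open(struct dev *d)  { d->open = 1; }
static void op_start(struct dev *d) { if (d->open) d->streaming = 1; else d->errors++; }
static void op_close(struct dev *d) { if (!d->open) d->errors++; d->open = 0; d->streaming = 0; }

typedef void (*op_fn)(struct dev *);

/* run every ordering of the three ops and count how many leave the
 * device in an error state; only open -> start -> close is clean */
static int sweep_orderings(void)
{
    op_fn ops[3] = { op_open, op_start, op_close };
    static const int perm[6][3] = {
        {0,1,2}, {0,2,1}, {1,0,2}, {1,2,0}, {2,0,1}, {2,1,0}
    };
    int bad = 0;
    for (int p = 0; p < 6; p++) {
        struct dev d = { 0, 0, 0 };
        for (int i = 0; i < 3; i++)
            ops[perm[p][i]](&d);
        if (d.errors > 0)
            bad++;
    }
    return bad;
}
```

Incidental testing only ever exercises the orderings a volunteer happens to hit; a sweep like this finds the other five.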

    I see too many open source developers obsessed with purity of the codebase: trying to strictly interpret the standard in exactly one way, and if a caller doesn't conform 100% to that interpretation, the driver does something unfriendly like abort the program or terminate the connection. Completely unacceptable. Implementing a necessarily vaguely-worded spec in a natural language like English is necessarily an act of compromise, much like politics. It may be easier to shrug your shoulders at those who disagree with your interpretation, but ultimately you screw over the users.
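That "accept both popular interpretations" stance looks like this in miniature: a parser that recognises the common readings of an ambiguously specified boolean field and degrades gracefully on garbage instead of aborting. The field values and the function itself are invented for illustration.

```c
#include <string.h>

/* returns 1/0 for any commonly-seen spelling of a boolean value,
 * and -1 (rather than aborting) on anything unrecognised, so the
 * caller can fall back to a default */
static int parse_bool(const char *s)
{
    if (!strcmp(s, "1") || !strcmp(s, "true")  || !strcmp(s, "yes")) return 1;
    if (!strcmp(s, "0") || !strcmp(s, "false") || !strcmp(s, "no"))  return 0;
    return -1;
}
```

A strict implementation that accepted only one spelling and killed the connection on the rest would be "pure", and would screw over every user whose other-vendor software picked the other spelling.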

    Sorry for this long rant, but I just don't see how trying to further generalize our driver architectures to be universal enough to support multiple operating systems is going to improve the actual situation with the lack of features/performance in drivers. It's basically an academic exercise with no practical utility.



  • agd5f
    replied
    It has nothing to do with code quality in most cases, it mostly comes down to resources. It's not a good use of resources writing drivers more than once, so you want to leverage as much shared code as possible across OSes if you can. If you have a huge driver stack or are a small IHV with limited resources it does not make sense to write a new version of your stack for every OS.

    HW today has gotten very complex. There are HW documents out there for countless chips many of which have no Linux driver as of yet. Some HW vendors try and do the right thing and support Linux, so they often release code that uses an OS-agnostic abstraction layer and they release shared code for Linux. Then the Linux community shits on them for not releasing a "proper" Linux driver. I admire the Linux driver standards, but it would really help if the community was a little less caustic to those vendors who are trying to provide support.
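The OS-agnostic abstraction layer described above usually amounts to an ops table: the shared driver core only ever calls through a small set of function pointers, and each OS port fills the table in once. This is a minimal userspace sketch with invented names (`os_ops`, `core_init`), not any particular vendor's layer.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* the per-OS services the shared core is allowed to depend on */
struct os_ops {
    void *(*mem_alloc)(size_t n);
    void  (*mem_free)(void *p);
    void  (*log)(const char *msg);
};

/* ---- POSIX-userspace port of the layer (a kernel port would map
 *      these to kmalloc/kfree/printk instead) ---- */
static void posix_log(const char *msg) { fprintf(stderr, "drv: %s\n", msg); }

static const struct os_ops posix_ops = { malloc, free, posix_log };

/* ---- shared, OS-independent driver core ---- */
static int core_init(const struct os_ops *os, char **state_out)
{
    char *state = os->mem_alloc(16);
    if (state == NULL)
        return -1;
    strcpy(state, "ready");
    os->log("core initialised");
    *state_out = state;
    return 0;
}
```

Written this way, the core is the same source on every OS; only the small ops table is rewritten per platform, which is exactly the leverage a small IHV needs.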



  • dfx.
    replied
no, it's because hardware makers have no incentive to make quality software in general and quality drivers in particular.
their only goal is to _ship fast_, so that all features of the production hardware release get utilised _somehow_.

but they don't take into account the fact that this software (which is necessary for utilising their hardware at all) also MUST be free, free "as in beer" - unless they are selling a "software-hardware complex", as they say. it would be beneficial to use a free-as-in-freedom-and-as-in-beer OS for that in the first place (for many reasons that have been discussed a thousand times already).
but managers prefer to just throw a pack of money at a few code monkeys in one corner and at a "quality certification committee" in the other. "problem solved"



  • JairJy
    replied
Yeah, as if it were that easy.

This is the main reason why hardware developers don't make Linux drivers:
    http://people.gnome.org/~federico/news-2009-01.html

It's as simple as that: the Linux kernel doesn't have a stable API/ABI.

