If developers actually owned your hardware, they would have fixed it, since they don't want their own stuff to be broken. Only Lusers were so tolerant of broken drivers that they got used to editing a text file just to be able to use a webcam microphone or whatever. It's easier to edit your .asoundrc file to 'fix' a broken audio setup than it is to file a bug and have it fixed, especially when you have a lot of people documenting the workaround and telling you that's how you should be fixing your stuff.
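For example, the sort of .asoundrc workaround that got passed around looked like this. It's only a sketch; the card index is a placeholder you'd look up yourself with `aplay -l` or `arecord -l`:

```
# ~/.asoundrc -- pin ALSA's default device to a specific card.
# The card index (0) is a placeholder; list your cards with `aplay -l`.
defaults.pcm.card 0
defaults.ctl.card 0
```

Harmless-looking, but it papers over whatever the driver got wrong instead of getting it fixed.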
So for years and years Linux had a slew of broken drivers that were never fixed, until PulseAudio (PA) came along and exposed them.
Of course there are still bugs, and problems still need to be reported to the developers. Many of these drivers were developed without anybody actually having physical access to the hardware, so there is a lot of guesswork.
It added plenty of new and interesting issues of its own. Even recently, on Ubuntu, the mic still doesn't work properly (why is it showing my one-channel mic as two channels, locked together by default, with one channel canceling out the other?
If that is not the issue, then you may just need to configure your I/O correctly. In the applet you can select what type of audio connections you want to use for your audio device. Many cards have multiplexed audio jacks, meaning the same physical plug can be used for digital out/digital in, or digital out/microphone in, and all sorts of other combinations.
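On a PulseAudio system, the same jack/profile selection the applet does can be sketched from the command line. The card name and profile string below are illustrative only; list your own first:

```shell
# List cards and the profiles they expose (names are driver-specific).
pactl list cards short

# Switch a multiplexed card between profiles, e.g. analog duplex.
# The card name and profile string here are examples, not your values.
pactl set-card-profile alsa_card.pci-0000_00_1b.0 output:analog-stereo+input:analog-stereo
```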
A lot of chat programs auto-adjust the mic volume, which causes the locked channels to re-sync with each other, muting the mic.
Audio software for the desktop should NEVER:
1. Probe for hardware
2. Configure audio levels for anything but themselves
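Under PulseAudio the second rule is easy to follow, because each application gets its own stream volume instead of poking the hardware mixer. A rough sketch; the stream index is whatever `pactl` actually prints for your app:

```shell
# List running playback streams and their indexes.
pactl list sink-inputs short
# Set one stream's volume; '42' stands in for an index printed above.
pactl set-sink-input-volume 42 75%
```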
Things like Audacity and Ekiga are nightmares. It's understandable, considering that they were designed for old interfaces, but they are still effectively usability minefields.
Who the heck decided that turning up the volume on a mic channel should mute it!? It took me several days to figure that one out.)
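If you hit that locked-channel problem, one workaround is to set the capture "channels" explicitly with amixer instead of the applet. This is a sketch: the control name 'Capture' is common but driver-dependent, so check `amixer scontrols` on your own machine:

```shell
# Show the capture control; a mono mic may be exposed as two channels.
amixer sget Capture
# Set both channels to the same level and make sure capture is enabled.
amixer sset Capture 80%,80% cap
```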
Thus hardware makers design their audio mixing interfaces for driver developers... not users.
Previously, Linux forced users to handle audio levels by interacting with the hardware mixing devices directly. Tools like 'alsamixer' or 'gnome-alsamixer' controlled the hardware mixers themselves. Each piece of hardware had its own setup, and no two Linux systems would have the same set of mixing controls unless they were using exactly the same hardware devices, hardware drivers, and ALSA userspace versions. So it was impossible to document, and a usability nightmare: lots of confusing terms, mislabeled controls, weird bugs, incompatible or illogical mixer settings, etc. etc.
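You can see how driver-specific this is for yourself: the set of simple controls ALSA exposes is whatever the driver happens to define, so two machines rarely print the same list:

```shell
# Print the mixer controls the driver exposes; names like 'Master',
# 'PCM', or 'Capture' vary from card to card.
amixer scontrols
```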
Other operating systems, like OS X or Windows, abstracted all of this away, so turning on your mic, enabling digital out, or setting maximum audio levels worked mostly the same way everywhere and was well documented, at least as far as the OS was concerned (although on Windows each audio device shipped its own shitty proprietary settings software that tended to confuse the hell out of everybody).