Funny, considering that, according to a post by the KLANG author, ALSA is currently broken from an engineering point of view.
Originally Posted by dimko
Read it if you can (it's German; maybe a translation tool will help): http://www.heise.de/open/news/foren/...22197656/read/
I am an ALSA user, and while I don't see anything wrong with it, I see what this project is trying to address - putting hardware management back into the kernel. The same thing happened with video (KMS), and people were initially complaining, but now everyone is happier. Furthermore, if it is going to be transparent to OSS-supporting apps, it will be a drop-in replacement. I mean, ALSA supports OSS-compatible apps, Pulse supports ALSA and OSS... This is more of a "reworking the plumbing" than "reworking the dashboard".
ALSA and PA
Why not just port ALSA or PA into the kernel?
ALSA and PA are working really well, at least for me.
And it seems OSS4 isn't really popular here.
So what are the technical implications, and why isn't he doing it?
Originally Posted by PaulDavis
Paul, could you also answer why Mac OS X does a better job with a single "stack", why we can't have the same on Linux, and also why there is no effort to merge JACK and PA (lack of manpower was the reason the last time I heard about it)?
Wow, Paul. I didn't even realize you visited/posted at Phoronix.
Originally Posted by PaulDavis
Thanks for posting this information. I was aware of the floating-point problem, as well as the driver-rewriting problem, but others I had not really thought about in depth. It's also nice to hear the perspective of someone who is in the know and as knowledgeable about this stuff as you are. Thanks.
But regardless of your post at ardour.org, I think KLANG is generally a stupid idea. We don't need yet another sound system for Linux. If there are problems with ALSA, PA and JACK, it seems far more reasonable and realistic to work on those instead.
I also found Ben Loftis's (from Harrison Consoles) comments interesting;
I thought I'd post his comments for those who are lazy or who missed Paul's original post.
Originally Posted by ben loftis
Last edited by ninez; 07-31-2012 at 12:40 PM.
Given the fact that the audio subsystem runs in userspace,
Originally Posted by RealNC
if you need latency around 5 ms when the kernel has to process tons of events (interrupts), then you need real-time responses for the audio thread, and the kernel needs to be preemptible.
Standard Linux can't do that right now, even by tweaking granularity settings:
...and/or giving a process a real-time priority (RR/FIFO).
So if the video driver at the kernel level is interrupt-driven and blocks hard during an effect, it's perfectly normal for the audio to skip.
Building a preemptible kernel via the rt patchsets, or having the audio subsystem at the kernel level, would work.
Normally I'd be interested, but now that we have all these different Linux audio standards, this generally feels pointless. The guy does have good points about wanting to put all the audio stuff into the kernel, but is it really that necessary?
I'm no audio engineer or musician, but isn't PulseAudio doing a fairly good job as far as low latency goes? It had problems when it was first introduced, but not so much anymore from what I understand, and generally many people just seem satisfied with it. It should be good enough for games at least, right?
And audio engineers, like many others mentioned here, can simply use JACK and a low latency kernel if they need that low of a latency for their stuff.
You know... with PulseAudio, I despise the extra daemon running in the background. Oh, idea: let's hide the whole shebang in the kernel! That way they can't see it right off... It really would be better if the parts that have to be running all the time got merged into the kernel and the parts that don't stayed in userspace. The problem is that audio bugs then become harder to fix, since it increases the likelihood of requiring a kernel recompile.
Clearly you can use FP in the kernel: http://www.linuxsmiths.com/blog/?p=253. I can see how there could be drawbacks, however, if you tried doing a lot of FP in the kernel.
Shouldn't a Pentium 133 MHz be able to play an MP3 smoothly?
Originally Posted by peppepz
It's complicated. OS X does a better job with a single stack because it has a very clever design that we unfortunately missed (read: didn't know about) back in the early days of ALSA. I don't feel so bad about this - Windows didn't get anything remotely resembling this idea until years later either.

The crux of the matter is that CoreAudio is NOT dependent on the flow of interrupts from the device to know, with fairly high precision, where the current audio interface read and write locations are in memory. This means that applications with entirely different latency requirements can write into a shared memory buffer without all having to do so synchronously (the way that JACK requires). An application that wants to write 8 samples at a time can do so, and at the same time another application that wants to deliver 4096 or even 64k samples at a time can also do so - CoreAudio can ensure that both of them end up putting audio in the "right" place in the mix buffer so as to match their latency requirements.

It does this using an engineering design called a delay-locked loop (DLL), which actually makes the latency slightly worse than on Linux - CoreAudio drivers have a "safety buffer" to account for possible errors in the DLL's estimates of the current read/write pointer locations, and this adds to the overall latency. It's fairly small, though - typically 8-32 samples.

Now, there is nothing fundamental stopping ALSA from adopting this kind of design. But nothing stopping it isn't the same as making it happen - that would require a significant engineering effort.
Originally Posted by 89c51
In addition, even without this, you still have to decide where sample rate conversion and sample format conversion will take place - in user space, or in the kernel. OS X does not have a prohibition on floating point in kernel space (which contributes a teeny bit to why their kernel is slower than Linux for many things). So historically, they have done some of this in the kernel. They won't talk much about the details, but it appears that in recent OS X releases (Lion and Mountain Lion) they have moved away from this and now have a user space daemon, conceptually similar to Pulse and/or JACK, through which all audio flows. This is slight speculation - you can see the server running with ps(1), but Apple has never said anything about it. It's also not clear whether the shared memory buffer into which applications ultimately write their audio data is in user space or kernel space, and this may also have changed recently. The key point is that even with the DLL-driven design in the kernel, there are still tricky, fundamental aspects of API design that you have to tackle, and even on OS X the answers are not fixed in stone.
Interestingly, note that PulseAudio has (or was going to have) this DLL-driven design too - Lennart calls it "glitch-free" - but adding it to PulseAudio (or JACK) doesn't do anything about what goes on at the ALSA layer.
As for merging JACK and PulseAudio, manpower remains an issue, but more importantly, the goals of the two projects are not really that similar, even though to the many idiot-savants who post to Reddit and Slashdot they sound as if they should be. There are ways it could happen, but it would require a huge level of desire on the part of all involved, and given the difficulties we have internally with two different JACK implementations, it just seems unlikely.