"PulseAudio Is Still Awesome"
-
Originally posted by TeamBlackFox
When I say blocking I/O, I mean there's no provision for multiple input streams to be handled by ALSA; it tends to listen to one process at a time, which leads to very unpredictable behavior.
Originally posted by TeamBlackFox
Where I am now, in BSD and commercial UNIX land, multiple sound systems are neither acceptable nor the norm. You can't simply tell someone that when, as it turns out, JACK and PA don't have anywhere near the same API, which forces a user to find a new program if the one they want to use relies on PulseAudio. So no, try again.
Developers have been targeting multiple APIs and platforms for years. If your problem is choosing between JACK and PA, your app might need an architectural redesign.
Originally posted by TeamBlackFox
If I were to design a sound stack, it would consist of a kernel-space device driver that communicates with a master set of userspace processes through STREAMS. It would have a centralised error log, would autodetect device configuration with a set of sysctls to control it, and would be designed from the start to be multithreaded and capable of handling multiple sound inputs and mixing them through an optional curses-based, realtime mixer module. STREAMS, in my opinion, is the only way to design this so that it doesn't suck, and it uses a well-established, zero-copy protocol for IPC.
-
I always have a hard time purging and removing PulseAudio. If you have a decent card, things work better without it: a decent card allows concurrent access by multiple applications, since mixing is done on the card. Most PCs come with low-quality cards (a single PCM device). That in itself can also be OK. I use jackd on an Exynos 5422 based desktop and that works quite nicely: replugging inputs and outputs, routing some outputs to the headset and some to the amplifier. For a digital audio workstation you either use bare ALSA, or jackd on ALSA.
Now, there are times you don't want to mess around and need a quick and dirty fix. That's about the only time I use PulseAudio. It comes with added audio latency though, so it is a temporary fix.
What might be useful is to run PulseAudio on top of jackd for those applications you don't want to bother with, like Chrome. For now I have an ALSA->jackd bridge for the browsers.
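For reference, an ALSA->JACK bridge like the one described can be set up with the `jack` PCM plugin from alsa-plugins. A minimal `~/.asoundrc` sketch (the port names assume JACK's default system ports; adjust to your setup):

```
# Route the default ALSA device into JACK
# (requires alsa-plugins built with JACK support, and a running jackd)
pcm.!default {
    type plug
    slave { pcm "jackbridge" }
}

pcm.jackbridge {
    type jack
    playback_ports {
        0 system:playback_1
        1 system:playback_2
    }
    capture_ports {
        0 system:capture_1
        1 system:capture_2
    }
}
```

The `plug` wrapper handles sample-format and rate conversion, so plain ALSA clients such as browsers end up as JACK clients without knowing it.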
-
Originally posted by matt_g
Anyone talking about OSS4 or doing sound mixing in kernel space (including the BSD guy above me) is either a) ignorant or b) a troll; I assume both. It's clearly spelled out in a link in the very article for this thread why it's not done: advanced mixing requires floating-point ops, and the Linux kernel guys will not allow FP ops in the kernel. Forget about it; mixing on Linux has to be userspace code, end of story. I'm happy BSD lets you use floating-point ops in your kernel; it's not applicable to Linux.
Originally posted by mdias
Yeah, there's something called abstraction layers...
Developers have been targeting multiple APIs and platforms for years. If your problem is choosing between JACK and PA, your app might need an architectural redesign.
Originally posted by mdias
Sounds a lot like ALSA (kernel) + PA (user). How would you solve the problem of having zero copy when multiple clients with varying sample rates and formats are playing at the same time?
Anyway, this would likely be handled by a STREAMS stack of processes, like this: Sound Dev <-> Kernel Driver <-> Master Sound Service <-> Slave Sound Service <-> Client Program <-> User
The master sound service would act as the interface between kernel land and userland and would be the mixing source. The slave sound service would be either the main multithreaded or multiprocess part: it would take the feed from the program, use the appropriate codec to turn it into a standard PCM stream, then send it along to the master. It should be relatively quick, honestly, and able to guarantee latency, but I'd need to do some serious research into it and refine the process stack used to make it as quick as possible.
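Whatever the process topology, the master service's mixing step itself is simple saturating integer math once every stream is a common-format PCM feed. A minimal sketch in C (hypothetical helper names, not taken from any existing sound server):

```c
#include <stddef.h>
#include <stdint.h>

/* Mix two signed 16-bit PCM samples with saturation --
 * the core operation of any software mixer. */
static int16_t mix_sample(int16_t a, int16_t b)
{
    int32_t sum = (int32_t)a + (int32_t)b;   /* widen to avoid overflow */
    if (sum > INT16_MAX) sum = INT16_MAX;    /* clip positive peaks */
    if (sum < INT16_MIN) sum = INT16_MIN;    /* clip negative peaks */
    return (int16_t)sum;
}

/* Mix n samples from two client buffers into one output buffer. */
static void mix_buffers(const int16_t *a, const int16_t *b,
                        int16_t *out, size_t n)
{
    for (size_t i = 0; i < n; i++)
        out[i] = mix_sample(a[i], b[i]);
}
```

Note this is integer-only on purpose: the earlier point about FP ops being banned in the Linux kernel is exactly why this loop lives in userspace in PA (which mixes in float or wider integer formats internally).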
-
Originally posted by computerquip
He quit due to an overwhelmingly negative response from Linux developers. It's hard to tell if he was capable of such a task, but some of the responses he got showed some serious hatred. Some of the early comments were removed, but here's a taste: http://www.reddit.com/r/linux/commen...ext_generation
EDIT: Oh, and he got massively trolled on the Phoronix forums and some other forums as well.
Of course, this is coming from me, a young guy with little to no knowledge of how Linux audio works.
Last edited by profoundWHALE; 05 June 2015, 12:10 PM.
-
Originally posted by wagaf
DTS-MA and TrueHD are just proprietary lossless codecs (like FLAC, which is free). They don't improve/add anything over PCM as far as audio quality is concerned (PCM is just plain uncompressed audio).
This means you will get the exact same audio if the DTS-MA track is decoded by Kodi then sent as PCM to your amp.
Well, when I tried DTS (without MA) on Kodi with PulseAudio, it sent stereo PCM over S/PDIF to the amp, and I thought the same applied to DTS-MA. Maybe the configuration was the problem. After removing PulseAudio, DTS and Dolby Digital just work.
Originally posted by wagaf
But calling PulseAudio "useless" because of this is not reasonable.
-
Originally posted by TeamBlackFox
Well, my applications now don't even officially support Linux because of this. I support FreeBSD and commercial UNIX in the products I am spending time on, and it serves me well. I don't think they're the end-all, be-all, but they work better in my case; go figure. Linux is not where people like me are welcome, it seems.
Originally posted by TeamBlackFox
You know that PA isn't the only userspace audio system, right? :P
Originally posted by TeamBlackFox
Anyway, this would likely be handled by a STREAMS stack of processes, like this: Sound Dev <-> Kernel Driver <-> Master Sound Service <-> Slave Sound Service <-> Client Program <-> User
The master sound service would act as the interface between kernel land and userland and would be the mixing source. The slave sound service would be either the main multithreaded or multiprocess part: it would take the feed from the program, use the appropriate codec to turn it into a standard PCM stream, then send it along to the master. It should be relatively quick, honestly, and able to guarantee latency, but I'd need to do some serious research into it and refine the process stack used to make it as quick as possible.
No matter what you come up with, you will always have to implement a way to resample client streams to whatever your HW supports and is set to output.
Most people use onboard audio these days, and you don't get multiple voices/channels on those. And even if you did, the fact is that a software mixing+resampling solution is, most of the time, vastly superior to whatever resampling capabilities your multi-voice, multi-sample-rate sound card has. The only advantage of HW is speed, and thus low latency. That said, my ALC889 can reach low latency with JACK and rtirq enabled.
There really is no escaping the separation of normal users from pro users when it comes to audio.
Normal users want it to just work, and for things to just work you need at least a system in place ready to resample audio coming from multiple clients; they don't care about buffer sizes, periods, sample rates/formats, or who's mixing the audio. If you're a pro-audio user you'll just have to live with the current situation and use JACK; your software should work great with it too!
ASIO also has limitations on Windows, and it seems to be good enough for professionals...
Last edited by mdias; 05 June 2015, 04:36 PM.
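The resampling step described above can be illustrated with a naive linear-interpolation resampler. This is a sketch only; real servers use far better filters (PulseAudio, for instance, supports configurable resamplers such as speex), since linear interpolation audibly aliases:

```c
#include <stddef.h>

/* Resample a mono float stream from in_rate to out_rate by linear
 * interpolation (illustrative; hypothetical helper, not a real API).
 * Returns the number of output samples written. */
static size_t resample_linear(const float *in, size_t in_len, int in_rate,
                              float *out, size_t out_cap, int out_rate)
{
    size_t out_len = 0;
    double step = (double)in_rate / (double)out_rate; /* input samples per output sample */
    for (double pos = 0.0; out_len < out_cap; pos += step) {
        size_t i = (size_t)pos;
        if (i + 1 >= in_len)            /* need two input points to interpolate */
            break;
        double frac = pos - (double)i;  /* position between in[i] and in[i+1] */
        out[out_len++] = (float)((1.0 - frac) * in[i] + frac * in[i + 1]);
    }
    return out_len;
}
```

Upsampling 8 kHz to 16 kHz, for example, gives a step of 0.5, so every input sample yields two output samples. The quality/CPU trade-off of this filter is exactly where software resampling earns its keep over the fixed hardware resamplers mentioned above.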
-
It's funny how people get so up in arms about PA grabbing the ALSA device when that's exactly what Xorg does with the DRI device =P Nobody seems to complain that only one process can claim and own a graphics card device at a time.
-
Originally posted by LightBit
Of course I prefer PCM over proprietary crap; however, S/PDIF only supports stereo PCM and my amp doesn't have HDMI.
Well, when I tried DTS (without MA) on Kodi with PulseAudio, it sent stereo PCM over S/PDIF to the amp, and I thought the same applied to DTS-MA. Maybe the configuration was the problem. After removing PulseAudio, DTS and Dolby Digital just work.
-
Originally posted by mdias
I don't really think it's a question of you being welcome or not. You are just complaining about something that looks to me like a problem somewhat easily solved by using an abstraction interface. Maybe PortAudio would suit you well; it's hard to tell without knowing the specifics of the problem you're solving in your apps.
Originally posted by mdias
I'll be honest and say I don't really see the purpose of the "Slave Sound Service" you describe, or how multithreading would help here, since it seems like all it does is pass the buffer to the "Master Sound Service". It seems like you have more layers in the system you thought up than what we currently have with App -> PA -> ALSA -> HW.
No matter what you come up with, you will always have to implement a way to resample client streams to whatever your HW supports and is set to output.
Most people use onboard audio these days, and you don't get multiple voices/channels on those. And even if you did, the fact is that a software mixing+resampling solution is, most of the time, vastly superior to whatever resampling capabilities your multi-voice, multi-sample-rate sound card has. The only advantage of HW is speed, and thus low latency. That said, my ALC889 can reach low latency with JACK and rtirq enabled.
There really is no escaping the separation of normal users from pro users when it comes to audio.
Normal users want it to just work, and for things to just work you need at least a system in place ready to resample audio coming from multiple clients; they don't care about buffer sizes, periods, sample rates/formats, or who's mixing the audio. If you're a pro-audio user you'll just have to live with the current situation and use JACK; your software should work great with it too!
ASIO also has limitations on Windows, and it seems to be good enough for professionals...
Redesigning a sound system to work quickly in userland with a fast, well-documented protocol like STREAMS would be my ideal, including a real-time latency guarantee from the system. In any case, as imperfect as OSS is, it is a much, much simpler solution, and somewhat closer to that ideal than PulseAudio/ALSA.
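Whatever stack provides it, the latency-guarantee part ultimately reduces to buffer arithmetic: a server's worst-case added latency is its buffer depth divided by the sample rate. A trivial sketch of that relationship (hypothetical helper, just to make the numbers concrete):

```c
/* Worst-case playback latency, in milliseconds, implied by a ring
 * buffer of buffer_frames frames running at sample_rate Hz. */
static double latency_ms(unsigned buffer_frames, unsigned sample_rate)
{
    return 1000.0 * (double)buffer_frames / (double)sample_rate;
}
```

For instance, a typical low-latency JACK configuration of 256 frames at 48 kHz comes out to about 5.3 ms, while a desktop-oriented server buffering tens of thousands of frames lands in the tens or hundreds of milliseconds, which is the latency complaint about PulseAudio earlier in the thread.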