"PulseAudio Is Still Awesome"


  • mdias
    replied
    Originally posted by TeamBlackFox View Post
    Well, my applications now don't even officially support Linux, because of this. I support FreeBSD and commercial UNIX in the products I am spending time on - and it serves me well. I don't think they're the end-all, be-all, but it works better in my case - go figure. Linux is not where people like me are welcome, it seems.
    I don't really think it's a question of you being welcome or not. You're just complaining about something that looks to me like a problem fairly easily solved with an abstraction interface. Maybe PortAudio would suit you well; it's hard to tell without knowing the specifics of the problem you're solving in your apps.
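
    Just to illustrate the kind of abstraction I mean, here's a minimal sketch against PortAudio's callback API (the 48 kHz stereo float parameters and the sine generator are only placeholders for whatever your app actually produces; error handling omitted):

    /* Minimal PortAudio playback sketch: one callback, default output device.
       The sine generator stands in for real application audio. */
    #include <portaudio.h>
    #include <math.h>

    static int cb(const void *in, void *out, unsigned long frames,
                  const PaStreamCallbackTimeInfo *t,
                  PaStreamCallbackFlags flags, void *user)
    {
        float *buf = (float *)out;
        double *phase = (double *)user;
        for (unsigned long i = 0; i < frames; i++) {
            float s = (float)sin(*phase);              /* placeholder signal */
            *phase += 2.0 * 3.14159265358979 * 440.0 / 48000.0;
            buf[2 * i]     = s;                        /* left  */
            buf[2 * i + 1] = s;                        /* right */
        }
        return paContinue;
    }

    int main(void)
    {
        double phase = 0.0;
        PaStream *stream;

        Pa_Initialize();
        /* 0 inputs, 2 outputs, 32-bit float, 48 kHz, backend-chosen buffer size */
        Pa_OpenDefaultStream(&stream, 0, 2, paFloat32, 48000.0,
                             paFramesPerBufferUnspecified, cb, &phase);
        Pa_StartStream(stream);
        Pa_Sleep(2000);                                /* play ~2 seconds */
        Pa_StopStream(stream);
        Pa_CloseStream(stream);
        return Pa_Terminate();
    }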

    Originally posted by TeamBlackFox View Post
    You know that PA isn't the only userspace audio system, right? :P
    Sure, but since this thread is about PA and we're discussing its pros/cons, I think it's fair that we talk about that.

    Originally posted by TeamBlackFox View Post
    Anyways, the way this would likely be handled is by a STREAMS stack of processes, like this: Sound Dev <-> Kernel Driver <-> Master Sound Service <-> Slave Sound Service <-> Client Program <-> User

    The Master sound service would act as the interface between kernel land and user land and would be the mixing source. The slave sound service would be the main multithreaded or multiprocess part; it would take the feed from the program, use the appropriate codec to turn it into a standard PCM stream, then send it along to the master. It should be relatively quick, honestly, and able to guarantee latency, but I'd need to do some serious research into it and refine the process stack used to make it as quick as possible.
    I'll be honest and say I don't really see the purpose of the "Slave Sound Service" you describe, or how multithreading would help things here, since it seems like all it does is pass the buffer to the "Master Sound Service". It seems like you have more layers in the system you thought up than we currently have with App->PA->ALSA->HW.

    No matter what you come up with, you will always have to implement a way to resample client streams to whatever your HW supports and is set to output.
    Most people also use onboard audio these days, and you don't get multiple voices/channels on those. And even if you did, the fact is that a software mixing+resampling solution is, most of the time, vastly superior to whatever resampling capabilities your multi-voice, multi-sample-rate sound card has. The only advantage of HW mixing is speed, and thus low latency. That said, my ALC889 can reach low latency with JACK and rtirq enabled.
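
    To make the resampling point concrete, here's a naive sketch of the conversion step a software mixer has to perform somewhere; real servers use far better filters than linear interpolation (speex, soxr, ...), this is only meant to show the per-sample work involved:

    /* Naive linear-interpolation resampler for one mono float stream. */
    #include <stddef.h>

    size_t resample_linear(const float *src, size_t n_in, double rate_in,
                           float *dst, size_t dst_cap, double rate_out)
    {
        double step = rate_in / rate_out;  /* input samples per output sample */
        double pos = 0.0;
        size_t n_out = 0;

        while (n_out < dst_cap && pos + 1.0 < (double)n_in) {
            size_t i = (size_t)pos;
            double frac = pos - (double)i;
            dst[n_out++] = (float)((1.0 - frac) * src[i] + frac * src[i + 1]);
            pos += step;
        }
        return n_out;  /* output samples actually written */
    }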

    There really is no escape from separating normal users from pro users when it comes to audio.
    Normal users want it to just work, and for things to just work you need at least a system in place ready to resample audio coming from multiple clients; they don't care about buffer sizes, periods, sample rates/formats, or who's mixing the audio. If you're a pro-audio user you'll just have to live with the current situation and use JACK; your software should work great with it too!

    ASIO also has limitations on Windows, and it seems to be good enough for professionals...
    Last edited by mdias; 05 June 2015, 04:36 PM.

  • LightBit
    replied
    Originally posted by wagaf View Post
    DTS-MA and TrueHD are just proprietary lossless codecs (like FLAC, which is free). They don't improve/add anything over PCM as far as audio quality is concerned (PCM is just plain uncompressed audio).

    This means you will get the exact same audio if the DTS-MA track is decoded by Kodi then sent as PCM to your amp.
    Of course I prefer PCM over proprietary crap; however, S/PDIF only supports stereo PCM and my amp doesn't have HDMI.
    Well, when I tried DTS (without MA) in Kodi with PulseAudio, it sent stereo PCM over S/PDIF to the amp, and I assumed the same applied to DTS-MA. Maybe the configuration was the problem. After removing PulseAudio, DTS and Dolby Digital just work.

    Originally posted by wagaf View Post
    But calling PulseAudio "useless" because of this is not reasonable.
    Calling it "awesome" is also not reasonable. It has many advantages compared to pure ALSA, but it still has too many disadvantages.

  • profoundWHALE
    replied
    Originally posted by computerquip View Post

    He quit due to an overwhelming negative response from Linux developers. It's hard to tell if he was capable of such a task, but some of the responses he got were serious hatred. Some of the early comments were removed, but here's a taste: http://www.reddit.com/r/linux/commen...ext_generation

    EDIT: Oh, and he got massively trolled on the Phoronix forums and some other forums as well.
    It makes me a little sad to hear that. He seemed to know quite a lot about audio and it's a great read seeing people go back and forth. What people don't seem to understand is that this is one guy, and he seems fairly young (like around 30). You can't expect one person to know about all of the new features of all the different audio things.

    Of course, this is coming from me, a young guy with little to no knowledge of how Linux audio works.
    Last edited by profoundWHALE; 05 June 2015, 12:10 PM.

  • TeamBlackFox
    replied
    Originally posted by matt_g View Post
    Anyone talking about OSS4 or doing sound mixing in kernel space (including the BSD guy above me) is either a) ignorant or b) a troll; I assume both. It's clearly spelled out in a link in the very article for this thread why it's not done. I.e. advanced mixing requires floating-point ops, and the Linux kernel guys will not allow FP ops in the kernel. Forget about it: mixing on Linux has to be userspace code, end of story. I'm happy BSD lets you use floating-point ops in your kernel; it's not applicable to Linux.
    :sigh: You didn't even read my entire post to see that I'm not opposed to a userspace audio system - in fact, if I were responsible for doing one I would do it with a somewhat similar setup to ALSA/PA, but I'd do several things differently to facilitate a better product that could handle the needs of all users without compromising too much in any area. ALSA, PA, and the two together have various compromises that, as a developer, I would see as unacceptable. I have no opposition to userspace processes; for crying out loud, IBM i, a system I rather admire for some of its unique design principles, follows a rather interesting kernel design similar to a microkernel.

    Originally posted by mdias View Post
    Yeah, there's something called abstraction layers...
    Developers have been targeting multiple APIs and platforms for years. If your problem is choosing between JACK and PA, you might need an architecture redesign of your app.
    Well, my applications now don't even officially support Linux, because of this. I support FreeBSD and commercial UNIX in the products I am spending time on - and it serves me well. I don't think they're the end-all, be-all, but it works better in my case - go figure. Linux is not where people like me are welcome, it seems.

    Originally posted by mdias View Post
    Sounds a lot like ALSA (kernel) + PA (user). How would you solve the problem of having zero copy when multiple clients with varying sample frequencies and formats are playing at the same time?
    You know that PA isn't the only userspace audio system, right? :P

    Anyways, the way this would likely be handled is by a STREAMS stack of processes, like this: Sound Dev <-> Kernel Driver <-> Master Sound Service <-> Slave Sound Service <-> Client Program <-> User

    The Master sound service would act as the interface between kernel land and user land and would be the mixing source. The slave sound service would be the main multithreaded or multiprocess part; it would take the feed from the program, use the appropriate codec to turn it into a standard PCM stream, then send it along to the master. It should be relatively quick, honestly, and able to guarantee latency, but I'd need to do some serious research into it and refine the process stack used to make it as quick as possible.
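
    Roughly, the slave's side of that hand-off could look like the sketch below; I'm approximating the STREAMS message passing with a plain pipe/socket descriptor since that's what's portable today, and decode_frame() is just a hypothetical hook for whatever codec the client stream needs:

    /* Slave-side sketch: decode the client stream into a canonical PCM format
       and push fixed-size chunks to the master service, which mixes all slaves. */
    #include <unistd.h>

    #define CHUNK_FRAMES 256
    #define CHANNELS     2

    int decode_frame(float pcm[CHUNK_FRAMES * CHANNELS]); /* hypothetical codec hook */

    void slave_loop(int master_fd)
    {
        float pcm[CHUNK_FRAMES * CHANNELS];

        while (decode_frame(pcm) > 0) {
            /* hand the canonical PCM chunk to the master for mixing */
            if (write(master_fd, pcm, sizeof pcm) != (ssize_t)sizeof pcm)
                break;  /* master gone or stream torn down */
        }
    }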

  • Ardje
    replied
    I always have a hard time purging and removing PulseAudio. If you have a decent card, things work better without it. A decent card allows concurrent access by multiple applications, since mixing is done on the card. Most PCs come with low-quality cards (a single PCM). That in itself can also be OK. I use jackd on an Exynos 5422-based desktop and it works quite nicely: replugging inputs and outputs, routing some outputs to the headset and some to the amplifier. For a digital audio workstation you use either bare ALSA or jackd on ALSA.
    Now, there are times you don't want to mess around and need a quick and dirty fix. That's about the only time I use PulseAudio. It comes with added audio latency though, so it is a temporary fix.
    What might be useful is to run PulseAudio on top of jackd for those applications that you don't want to bother with, like Chrome. For now I have an ALSA->jackd bridge for the browsers.
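
    For reference, those two glue pieces are just a couple of loaded modules on the PulseAudio side and an ~/.asoundrc entry for the ALSA->jackd bridge; a sketch of how I'd set it up (the port names assume the usual system:playback_1/2 and may need adjusting for your setup):

    # PulseAudio on top of a running jackd:
    pactl load-module module-jack-sink
    pactl load-module module-jack-source

    # ~/.asoundrc: send plain ALSA clients (browsers etc.) into JACK
    # via the alsa-plugins "jack" PCM:
    pcm.!default {
        type plug
        slave.pcm "jack_bridge"
    }
    pcm.jack_bridge {
        type jack
        playback_ports {
            0 system:playback_1
            1 system:playback_2
        }
        capture_ports {
            0 system:capture_1
            1 system:capture_2
        }
    }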

  • mdias
    replied
    Originally posted by TeamBlackFox View Post
    When I say blocking I/O, I mean there's no provision for multiple inputs to be handled by ALSA; it has a tendency to listen to one process at a time, which leads to very unpredictable behavior.
    Exactly. It behaves more like a driver and less like a user-land audio server; PA is here to do the user-land audio server part of the job. Sometimes ALSA problems are revealed with this setup, but since they're not exposed without PA, people tend to think it's PA's fault.


    Originally posted by TeamBlackFox View Post
    Where I am now, BSD and commercial UNIX land, multiple sound systems are neither acceptable nor the norm. You don't simply tell someone that when, as it turns out, JACK and PA don't have anywhere near the same API, which forces a user to find a new program if the one they want to use relies on PulseAudio. So no, try again.
    Yeah, there's something called abstraction layers...
    Developers have been targeting multiple APIs and platforms for years. If your problem is choosing between JACK and PA, you might need an architecture redesign of your app.


    Originally posted by TeamBlackFox View Post
    If I were to design a sound stack, it would consist of a kernel-space device driver which could communicate with a master set of userspace processes through STREAMS; it would have a centralised error log, autodetect device configuration with a set of sysctls to control it, and would be designed from the start to be multithreaded and capable of handling multiple sound inputs and mixing them through an optional curses-based, realtime mixer module. STREAMS, in my opinion, is the only way to design this so that it doesn't suck, and it uses a well-established, zero-copy protocol for IPC.
    Sounds a lot like ALSA (kernel) + PA (user). How would you solve the problem of having zero copy when multiple clients with varying sample frequencies and formats are playing at the same time?
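
    Whatever the transport, as soon as one client hands you S16 at a different rate than the device, mixing means writing new samples somewhere. A minimal sketch of that unavoidable conversion step (the function names are just illustrative):

    /* Accumulate one S16 client buffer into a shared float mix bus, then clamp
       when rendering to the device format. The conversion already forces a
       write into new memory, which is why true zero copy is off the table once
       sample formats or rates differ between clients. */
    #include <stdint.h>
    #include <stddef.h>

    void mix_s16_client(const int16_t *client, size_t n, float gain, float *bus)
    {
        for (size_t i = 0; i < n; i++)
            bus[i] += gain * (float)client[i] / 32768.0f;
    }

    void render_s16(const float *bus, size_t n, int16_t *out)
    {
        for (size_t i = 0; i < n; i++) {
            float s = bus[i];
            if (s >  1.0f) s =  1.0f;   /* hard clip; real mixers do better */
            if (s < -1.0f) s = -1.0f;
            out[i] = (int16_t)(s * 32767.0f);
        }
    }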

  • Passso
    replied
    Originally posted by Rallos Zek View Post
    Every time I try a Linux distro and have sound problems, nine times out of ten it's PulseAudio, and removing it makes my problems magically disappear. Pulse is pure shit; it makes me miss the days of ESD and aRts.
    And ninety-nine times out of a hundred your sound system just works fine when you try a Linux distro, so your argument is "pure shit", to use your wording.

  • Chewi
    replied
    Originally posted by Rallos Zek View Post
    LOL. If ALSA is shit then PulseAudio is shit, piss and bile all rolled into one.
    You don't have to use PulseAudio on top of ALSA; use OSS as a backend if you're mad enough.

    Haters are gonna hate. I've posted about this before, so I'll spare you the details, but I'll just stick a +1 for PA here to offset the negativity.

  • 89c51
    replied
    Originally posted by Luke View Post
    "Most people doing audio are on Macs?" Not necessarity! I got my start on audio reporting and never had nor wanted a Mac. I always considered Macs to be hard to use clunkers, sometimes with no mic jacks, always with an unfamiliar interface. I had been used to first old versions of Windows on public computers, then to Linux machines with GNOME 2, in both cases using Audacity to edit sound. It was the natural free and open source replacement for Cooledit, with no paid version needed to output to mp3. Outputting from Cooledit to .wav back in 2004 made files too big for websites, outputting to .ogg made files nobody would open from a default Windows machine because their players didn't handle it by default. That was on a Pentium laptop with a 2GB HDD and Windows 95 on it. It was replaced by a far superior Athon 500MHZ/10GB HDD machines with what was probably Debian Woody on it later that year. Sure as hell XMMS was more reliable than Winamp for playback when running as an unattended sound server!
    Never claimed that you cant do it on linux. Its the convenience and the ease of use of Plug and play in macs whether it is a headset or the latest coolest console. On linux you have to use JACK for low latency. Its the wrong way. We need one thing.

  • matt_g
    replied
    Anyone talking about OSS4 or doing sound mixing in kernel space (including the BSD guy above me) is either a) ignorant or b) a troll; I assume both. It's clearly spelled out in a link in the very article for this thread why it's not done. http://0pointer.de/blog/projects/jeffrey-stedfast.html
    Jeffrey thinks that audio mixing is nothing for userspace. Which is basically what OSS4 tries to do: mixing in kernel space. However, the future of PCM audio is floating points. Mixing them in kernel space is problematic because (at least on Linux) FP in kernel space is a no-no. Also, the kernel people made clear more than once that maths/decoding/encoding like this should happen in userspace. Quite honestly, doing the mixing in kernel space is probably one of the primary reasons why I think that OSS4 is a bad idea. The fancier your mixing gets (i.e. including resampling, upmixing, downmixing, DRC, ...) the more difficulties you will have to move such a complex, time-intensive code into the kernel.
    I.e. advanced mixing requires floating-point ops, and the Linux kernel guys will not allow FP ops in the kernel. Forget about it: mixing on Linux has to be userspace code, end of story. I'm happy BSD lets you use floating-point ops in your kernel; it's not applicable to Linux.

    PA and JACK also have different use cases:

    JACK has been designed for a very different purpose. It is optimized for low latency inter-application communication. It requires floating point samples, it knows nothing about channel mappings, it depends on every client to behave correctly. And so on, and so on. It is a sound server for audio production. For desktop applications it is however not well suited. For a desktop saving power is very important, one application misbehaving shouldn't have an effect on other application's playback; converting from/to FP all the time is not going to help battery life either. Please understand that for the purpose of pro audio you can make completely different compromises than you can do on the desktop. For example, while having 'glitch-free' is great for embedded and desktop use, it makes no sense at all for pro audio, and would only have a drawback on performance. So, please stop bringing up JACK again and again. It's just not the right tool for desktop audio, and this opinion is shared by the JACK developers themselves.
    My takeaway from this is that PA is a good system for general-purpose "desktop" audio use, and if you need to start caring about latency then JACK is what you want to work with. For my generic desktop use PulseAudio has served me well; if you're having problems with
    Last edited by matt_g; 05 June 2015, 01:19 AM.
